- **SpatialDETR: Robust Scalable Transformer-Based 3D Object Detection From Multi-view Camera Images With Global Cross-Sensor Attention** (Chapter and Conference Paper)
  Based on the key idea of DETR, this paper introduces an object-centric 3D object detection framework that operates on a limited number of 3D object queries instead of dense bounding box proposals followed by no...

- **Improved Semantic Stixels via Multimodal Sensor Fusion** (Chapter and Conference Paper)
  This paper presents a compact and accurate representation of 3D scenes that are observed by a LiDAR sensor and a monocular camera. The proposed method is based on the well-established Stixel model originally d...

- **Boosting LiDAR-Based Semantic Labeling by Cross-modal Training Data Generation** (Chapter and Conference Paper)
  Mobile robots and autonomous vehicles rely on multi-modal sensor setups to perceive and understand their surroundings. Aside from cameras, LiDAR sensors represent a central component of state-of-the-art percep...

- **What Is in Front? Multiple-Object Detection and Tracking with Dynamic Occlusion Handling** (Chapter and Conference Paper)
  This paper proposes a multiple-object detection and tracking method that explicitly handles dynamic occlusions. A context-based multiple-cue detector is proposed to detect occluded vehicles (occludees). First,...

- **Object-Level Priors for Stixel Generation** (Chapter and Conference Paper)
  This paper presents a stereo vision-based scene model for traffic scenarios. Our approach effectively couples bottom-up image segmentation with object-level knowledge in a sound probabilistic fashion. The rele...

- **Stixmantics: A Medium-Level Model for Real-Time Semantic Scene Understanding** (Chapter and Conference Paper)
  In this paper we present Stixmantics, a novel medium-level scene representation for real-time visual semantic scene understanding. Relevant scene structure, motion and object class information is encoded using so...

- **Efficient Multi-cue Scene Segmentation** (Chapter and Conference Paper)
  This paper presents a novel multi-cue framework for scene segmentation, involving a combination of appearance (grayscale images) and depth cues (dense stereo vision). An efficient 3D environment model is utili...

- **High-Level Fusion of Depth and Intensity for Pedestrian Classification** (Chapter and Conference Paper)
  This paper presents a novel approach to pedestrian classification which involves a high-level fusion of depth and intensity cues. Instead of utilizing depth information only in a pre-processing step, we propos...

- **Pedestrian Recognition from a Moving Catadioptric Camera** (Chapter and Conference Paper)
  This paper presents a real-time system for vision-based pedestrian recognition from a moving vehicle-mounted catadioptric camera. For efficiency, a rectification of the catadioptric image using a virtual cylin...