  1. Chapter and Conference Paper

    SpatialDETR: Robust Scalable Transformer-Based 3D Object Detection From Multi-view Camera Images With Global Cross-Sensor Attention

    Based on the key idea of DETR, this paper introduces an object-centric 3D object detection framework that operates on a limited number of 3D object queries instead of dense bounding box proposals followed by no...

    Simon Doll, Richard Schulz, Lukas Schneider, Viviane Benzin in Computer Vision – ECCV 2022 (2022)

  2. Chapter and Conference Paper

    Improved Semantic Stixels via Multimodal Sensor Fusion

    This paper presents a compact and accurate representation of 3D scenes that are observed by a LiDAR sensor and a monocular camera. The proposed method is based on the well-established Stixel model originally d...

    Florian Piewak, Peter Pinggera, Markus Enzweiler, David Pfeiffer in Pattern Recognition (2019)

  3. Chapter and Conference Paper

    Boosting LiDAR-Based Semantic Labeling by Cross-modal Training Data Generation

    Mobile robots and autonomous vehicles rely on multi-modal sensor setups to perceive and understand their surroundings. Aside from cameras, LiDAR sensors represent a central component of state-of-the-art percep...

    Florian Piewak, Peter Pinggera, Manuel Schäfer in Computer Vision – ECCV 2018 Workshops (2019)

  4. Chapter and Conference Paper

    What Is in Front? Multiple-Object Detection and Tracking with Dynamic Occlusion Handling

    This paper proposes a multiple-object detection and tracking method that explicitly handles dynamic occlusions. A context-based multiple-cue detector is proposed to detect occluded vehicles (occludees). First,...

    Junli Tao, Markus Enzweiler, Uwe Franke in Computer Analysis of Images and Patterns (2015)

  5. Chapter and Conference Paper

    Object-Level Priors for Stixel Generation

    This paper presents a stereo vision-based scene model for traffic scenarios. Our approach effectively couples bottom-up image segmentation with object-level knowledge in a sound probabilistic fashion. The rele...

    Marius Cordts, Lukas Schneider, Markus Enzweiler, Uwe Franke in Pattern Recognition (2014)

  6. Chapter and Conference Paper

    Stixmantics: A Medium-Level Model for Real-Time Semantic Scene Understanding

    In this paper we present Stixmantics, a novel medium-level scene representation for real-time visual semantic scene understanding. Relevant scene structure, motion and object class information is encoded using so...

    Timo Scharwächter, Markus Enzweiler, Uwe Franke, Stefan Roth in Computer Vision – ECCV 2014 (2014)

  7. Chapter and Conference Paper

    Efficient Multi-cue Scene Segmentation

    This paper presents a novel multi-cue framework for scene segmentation, involving a combination of appearance (grayscale images) and depth cues (dense stereo vision). An efficient 3D environment model is utili...

    Timo Scharwächter, Markus Enzweiler, Uwe Franke, Stefan Roth in Pattern Recognition (2013)

  8. Chapter and Conference Paper

    High-Level Fusion of Depth and Intensity for Pedestrian Classification

    This paper presents a novel approach to pedestrian classification which involves a high-level fusion of depth and intensity cues. Instead of utilizing depth information only in a pre-processing step, we propos...

    Marcus Rohrbach, Markus Enzweiler, Dariu M. Gavrila in Pattern Recognition (2009)

  9. Chapter and Conference Paper

    Pedestrian Recognition from a Moving Catadioptric Camera

    This paper presents a real-time system for vision-based pedestrian recognition from a moving vehicle-mounted catadioptric camera. For efficiency, a rectification of the catadioptric image using a virtual cylin...

    Wolfgang Schulz, Markus Enzweiler, Tobias Ehlgen in Pattern Recognition (2007)