Recognizing Anomalies in Urban Road Scenes Through Analysing Single Images Captured by Cameras on Vehicles

  • Original Paper
  • Published in: Sensing and Imaging

Abstract

In this paper, we propose to recognize anomalies in urban road scenes by analysing single images captured by cameras on driving vehicles. Anomaly detection is one of the most important functions of visual driver-assistance systems and autonomous vehicles: it provides drivers and autonomous vehicles with important information about the driving environment and helps them drive more safely. In this work, we define as an anomaly anything on the road that is within a certain distance of a driving vehicle and poses a potential danger to it, such as traffic accidents, recklessly driven vehicles and pedestrians. The proposed approach recognizes anomalies in urban road scenes by analysing the appearance of single images captured by cameras on driving vehicles. To do so, first, we collect a large number of urban road scene images that do not contain any anomalies. Second, we segment the road regions from these images and represent the obtained road regions using the bag-of-visual-words method. We then apply k-means clustering to these region representations to acquire a small set of reference images. Third, we establish dense correspondence between input images and the reference images to create representations for the input images. The representations of normal images are then used to train a one-class Support Vector Machine classifier. Finally, we use the classifier to recognize images containing anomalies. Experiments conducted on urban road scene images demonstrate that the proposed approach can recognize urban road scene images containing anomalies.
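The pipeline described in the abstract can be sketched in a few stages. The following is a minimal illustrative sketch, not the authors' implementation: it omits the road-segmentation and dense-correspondence (SIFT-flow) stages, uses random arrays as stand-ins for local image descriptors, and all parameter values (vocabulary size, number of reference images, `nu`, `gamma`) are assumptions chosen for illustration.

```python
# Sketch of three stages from the abstract: (1) bag-of-visual-words
# representations of road regions, (2) k-means over those representations
# to pick a small set of reference images, (3) a one-class SVM trained on
# normal images only. Descriptors are random stand-ins for real SIFT-like
# features; all hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in for local descriptors extracted from each normal road-region
# image: 200 images, each with 50 descriptors of dimension 128.
descriptors = rng.standard_normal((200, 50, 128))

# Build a visual vocabulary by clustering all local descriptors.
vocab_size = 32
codebook = KMeans(n_clusters=vocab_size, n_init=4, random_state=0)
codebook.fit(descriptors.reshape(-1, 128))

def bow_histogram(desc):
    """Quantize descriptors against the codebook; return a normalized histogram."""
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

features = np.array([bow_histogram(d) for d in descriptors])

# Cluster the image-level histograms and pick one reference image per
# cluster (the image whose histogram is closest to the cluster center).
ref_kmeans = KMeans(n_clusters=5, n_init=4, random_state=0).fit(features)
ref_ids = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
           for c in ref_kmeans.cluster_centers_]

# Train a one-class SVM on normal-image representations; at test time,
# predict() labels an image -1 (anomalous) or +1 (normal).
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(features)
labels = ocsvm.predict(features)
print(len(ref_ids), sorted(set(labels.tolist())))
```

In the paper's actual method, the features fed to the one-class SVM come from dense correspondence between each input image and the selected reference images, rather than from the raw histograms used in this sketch.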



Author information

Corresponding author

Correspondence to Shuang Bai.

Additional information

This work was supported by the National Natural Science Foundation of China (Grant 61602027).


About this article


Cite this article

Bai, S., Han, C. & An, S. Recognizing Anomalies in Urban Road Scenes Through Analysing Single Images Captured by Cameras on Vehicles. Sens Imaging 19, 34 (2018). https://doi.org/10.1007/s11220-018-0218-7
