
Article information

  • Title: Image processing for autonomous parking procedures.
  • Authors: Luca, Razvan; Simion, Carmen; Troester, Fritz
  • Journal: Annals of DAAAM & Proceedings
  • Print ISSN: 1726-9679
  • Year: 2011
  • Issue: January
  • Language: English
  • Publisher: DAAAM International Vienna

Image processing for autonomous parking procedures.


Luca, Razvan; Simion, Carmen; Troester, Fritz


Abstract: This paper presents a floor-marking detection approach used to guide intelligent autonomous vehicles based on a vision system. Floor markings delimiting a parking area are used for the parking process of the vehicle. The captured perspective video data is transformed into a bird view and filtered so that only relevant data is kept for further processing and representation. The guiding approach relies on a Hough-transformation algorithm in which lines are the extracted key features.

Key words: line features, bird view, autonomous parking, video processing

1. INTRODUCTION

To initiate an autonomous parking procedure, various criteria must be met. First, the parking lot must be identified so that the vehicle is aware of the available space. Parameters and data about the parking lot have to be communicated to the autonomous system so that it can decide whether a transversal or a lateral parking maneuver is possible. The identification of objects or persons inside the parking lot is another important factor that is evaluated as a decisive criterion for parking. For the floor-marking identification, a Matlab/Simulink program was written to read a live capture and to run the filtering and calculation algorithms. The presented method was tested on the second development layer, a 1:8 scaled vehicle.

2. RELATED WORK

For the semiautomatic parking of vehicles, Toyota developed a vision- and ultrasonic-based parking assistant in 2003, in which the detected parking place could optionally be selected by the driver. The steering control was processed by an ECU that calculated a defined track for driving into the parking lot, while acceleration and braking remained the driver's task. The company Valeo introduced the similar Park4you system on the market in 2008. As a successive development, our task consists of creating a system that drives a vehicle autonomously into a parking lot, without any human intervention, by using a video processing unit and additional proximity sensors. A vision-based algorithm for path determination of autonomous vehicles is described by (Hoover & Olsen, 1999). Robot vision systems for specific tasks, such as golf-ball collecting and soccer robots for the RoboCup challenge, are designed and described by (Siefert & Woerner, 2005) and (Wu et al., 2005). In his book, (Schreer, 2005) describes several video processing approaches related to feature extraction. In this project, object tracking refers to the specific floor markings; a similar approach is described in (Jean et al., 2005), where an application for mobile robots based on shape features is presented. Our system is characterised by the identification of the floor markings and the transformation of the image into a "bird view" perspective using line features. The path planning refers to the extracted features, which are used as inputs in a separate module implementing a path planning algorithm based on a potential field method.

3. HARDWARE CONCEPT

The vehicle platform is a non-holonomic model equipped with incremental, ultrasonic and laser sensors. A Microsoft video camera (LifeCam HD-6000) capable of HD recording is mounted at the front of the vehicle. The processing unit is a PC-104 system capable of running embedded C code. The camera can be adjusted to obtain a good picture of the road. The scheme below illustrates the working principle of the camera used in this application by defining its most important elements.

[FIGURE 1 OMITTED]

The recording settings are limited to 15 frames/second at a resolution of 640x480 pixels. The approach assumes that the vehicle moves on a perfect plane, so no perspective transformations are required other than the one used to obtain the bird view. The maximum cruising speed is limited to 5 m/s by the requirement of a clear video capture.

[FIGURE 2 OMITTED]

4. FLOOR MARKING DETECTION

The floor markings representing parking areas were simulated in a laboratory environment on a printed support as shown in the figure below.

[FIGURE 3 OMITTED]

For the extraction of the line features, the following steps are performed:

* first, the colour video capture is transformed into an intensity image

* a Sobel edge-detection filter is applied

* a Hough transformation detects candidate lines from the edges

* the number of candidates is limited by finding local maxima in the Hough accumulator

* the detected features are represented

* the pin-hole camera model transformation is applied to obtain the bird view

[FIGURE 4 OMITTED]

The camera position is established by defining the coordinates of the focal point. The scale is important here: because of the measurements of the floor markings, we used metres as the reference unit. The focal length of the camera is a very important parameter for the transformation. The pixel values of the detected line endpoints are multiplied by the pixel size and combined with the coordinates of the focal point and the focal length. The pixel size follows from the resolution and the size of the sensor matrix: a 1/3" CMOS sensor measures 4.8 mm x 3.6 mm, which at a resolution of 640 x 480 pixels results in a pixel size of 7.5 microns (0.0000075 m). To calculate the distance of the points to the vehicle (or the camera), a line direction is defined between the focal point and the point on the sensor matrix. To reach the point in the road plane, the z-component, defined here as the height above ground level, is set to zero. The resulting vector gives the coordinates of the point where the line intersects the road plane. This point is now available in world coordinates (x, y) and corresponds to the point shown in the image representing the bird-view extraction.
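This back-projection can be written down compactly. The paper does not give the camera's focal length, mounting height or tilt, so the geometry below is a plausible reconstruction under a simplified pin-hole model (camera looking along the world +y axis, image x-axis parallel to the ground), not the authors' code; only the sensor dimensions and resolution come from the text.

```python
import math

# 1/3" CMOS sensor geometry from the paper: 4.8 mm x 3.6 mm at 640 x 480
SENSOR_W = 4.8e-3               # sensor width in metres
RES_W, RES_H = 640, 480
PIXEL_SIZE = SENSOR_W / RES_W   # 7.5e-6 m (7.5 microns), as in the text

def pixel_to_ground(u, v, cam_xyz, focal_length, tilt):
    """Back-project pixel (u, v) onto the road plane z = 0.

    cam_xyz is the focal point (x, y, height) in metres; the camera
    looks along +y and is tilted downwards by `tilt` radians.
    """
    cx, cy, h = cam_xyz
    # sensor-plane offset of the pixel from the principal point (metres)
    sx = (u - RES_W / 2) * PIXEL_SIZE
    sy = (v - RES_H / 2) * PIXEL_SIZE
    ct, st = math.cos(tilt), math.sin(tilt)
    # viewing-ray direction in world coordinates
    dx = sx
    dy = focal_length * ct - sy * st
    dz = -focal_length * st - sy * ct
    if dz >= 0:
        return None       # ray never reaches the ground plane
    t = -h / dz           # scale factor at which the ray hits z = 0
    return cx + t * dx, cy + t * dy
```

Adapting the setup to another vehicle then only means changing `tilt` and the camera height inside `cam_xyz`, matching the two adjustable parameters of the real system.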

The adjustable tilt angle also allows the camera to be adapted to different vehicle environments. Only two parameters need to be changed: the inclination angle and the camera position, in other words the height above ground level. Due to the inclination, elements that do not belong to the road marking, such as trees or houses, are not recorded by the camera and thus do not interfere with the line detection. The validity of the images is thereby increased.

[FIGURE 5 OMITTED]

The results of the simulated algorithm were tested in a real scenario with the floor markings shown in the picture below.

[FIGURE 6 OMITTED]

5. CONCLUSION AND FURTHER RESEARCH

Using a camera oriented parallel to the road has the major advantage of producing a meaningful representation of the floor markings. Redundant elements in the image are omitted thanks to the adjustable camera angle. Lane markings are detected closer to the vehicle and the whole sensor matrix is exploited. As the next step of the research, a dynamic object-recognition system is to be implemented for collision-free driving into the identified parking lots. The research will lead to fully autonomous intelligent vehicles being able to park in specific parking areas by identifying floor markings using a video guidance system and proximity sensors. A sensor data fusion concept also has to be considered as a future step.

6. ACKNOWLEDGEMENTS

This work was supported by Heilbronn University (Heilbronn, Germany) and the Lucian Blaga University of Sibiu (Sibiu, Romania) within the project POSDRU/6/1.5/S/26 of the European Social Fund Operational Programme for Human Resources Development 2007-2013, and by the car components manufacturer Valeo (Bietigheim-Bissingen, Germany).

7. REFERENCES

Hoover, A. & Olsen, B. D. (1999). Path Planning for Mobile Robots Using a Video Camera Network, International Conference on Advanced Intelligent Mechatronics, 19-23 September 1999, Atlanta, USA, ISBN 0-7803-5038-3, pp. 890-895

Jean, J. H.; Wu J. L. & Huang, Y. C. (2005). A Visual Servo System for Object Tracking Applications of Mobile Robots Based on Shape Features, CACS Automatic Control Conference, 18-19 November 2005, Tainan, Taiwan

Schreer, O. (2005). Stereoanalyse und Bildsynthese, Springer Verlag, Berlin-Heidelberg, ISBN 978-3-540-23439-5

Siefert, R. & Woerner, S. (2010). Bildverarbeitung für einen autonomen Parkvorgang, Heilbronn University, Germany

Wu, S. L.; Cheng, M. Y. & Hsu, W. C. (2005). Design and Implementation of a Prototype Vision-Guided Golf-Ball Collecting Mobile Robot, IEEE International Conference on Mechatronics ICM '05, 10-12 July 2005, Taipei, Taiwan, pp. 611-615