
1 Introduction

Image processing is a method of performing operations on an image in order to obtain an enhanced image or to extract useful information from it. Image processing basically comprises three steps [1]: the first is importing the image via acquisition tools, the second is analyzing and manipulating the image, and the third is generating output, which can be an image or a report based on the image analysis [2]. Processing images to increase their usefulness is called image enhancement; this process improves image quality, intelligibility or visual appearance on the display. Image enhancement depends on the application context, and its criteria are subjective, depending on the human viewer's perception [3]. The algorithms used for enhancement are simple, qualitative and ad hoc. In addition, algorithm performance differs from one class to another and depends on the application used. The objective of image enhancement is to make the processed image better than the unprocessed image. For example, an image display can be enhanced by modifying its contrast or dynamic range; increasing either can significantly improve the quality of such an image [4]. A good transformation in typical image applications can be identified by computing the histogram of the output image and studying its characteristics [5]. For a low-contrast image, the pixel values are clustered in a small range of intensity, while for a high-contrast image the pixel values are distributed uniformly over the full range.
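As a rough illustration of the histogram check just described, the following Python sketch computes a gray-level histogram and stretches a clustered pixel range onto the full scale. The pixel values and the 256-level depth are made-up assumptions for illustration, not data from this study.

```python
# Sketch: judging contrast from a grayscale histogram and stretching it.
# Pixel values and the 256-level depth are illustrative assumptions.

def histogram(pixels, levels=256):
    """Count how many pixels fall at each gray level."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    return hist

def contrast_stretch(pixels, levels=256):
    """Linearly remap the occupied range [lo, hi] onto the full scale."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:                      # flat image: nothing to stretch
        return list(pixels)
    scale = (levels - 1) / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

# A low-contrast image: values clustered in a narrow band around mid-gray.
low_contrast = [100, 105, 110, 115, 120]
print(contrast_stretch(low_contrast))  # [0, 64, 128, 191, 255]
```

A histogram of the stretched output would show the pixels spread over the whole range, which is the signature of a good transformation described in the text.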

2 Design Scheme for Image Processing

See Fig. 1.

Fig. 1

Design application system

2.1 Selection of Image Sample

All images used in this study were captured and recorded in AVI file format. The images have different resolutions, namely 640 × 480, 720 × 480, 800 × 600 and 1024 × 768, and were taken from distances of 5 m, 10 m and 15 m during daylight, rainy daylight and night conditions. The distances were measured using an infrared meter. The motorcycle recognition process identifies the plate number and the lamp at the back of the motorcycle. The workflow can be summarized as:

  1. Find out the plate number: Each plate has a square shape, is black in colour, and contains the number at the center of the square.

  2. Find out the white characters: The white characters comprise letters and digits at the center of the square plate.

  3. Find out the back lamp: The lamp is located above the plate number at the back of the motorcycle. Its lighting changes while the motorcycle brake is pressed.

  4. Locate the origin: The origin of the detected motorcycle image lies inside a rectangular bounding box. Its location is defined by four zones based on its position relative to the bounding box. The four zones surrounding the bounding box are defined as TOP, BOTTOM, RIGHT and LEFT, as exhibited in Figs. 2 and 3.

    Fig. 2

    Four zones surrounding a rectangle bounding box

    Fig. 3

Bounding box defined by eight zones
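The zone test in step 4 can be sketched as a small function that classifies a point by its position relative to a rectangular bounding box. The zone names follow the text (TOP, BOTTOM, LEFT, RIGHT); the tie-breaking order and image-style coordinates (y grows downward) are assumptions.

```python
# Sketch of the zone test: classify where an (x, y) point lies relative
# to a bounding box (x0, y0, x1, y1). Tie-breaking order is an assumption.

def zone_of(point, box):
    """Return which zone a point occupies around the bounding box."""
    x, y = point
    x0, y0, x1, y1 = box
    if y < y0:
        return "TOP"       # above the box (image y grows downward)
    if y > y1:
        return "BOTTOM"    # below the box
    if x < x0:
        return "LEFT"
    if x > x1:
        return "RIGHT"
    return "INSIDE"        # origin inside the box, as in step 4

box = (10, 10, 50, 50)
print(zone_of((30, 5), box))   # TOP
print(zone_of((60, 30), box))  # RIGHT
```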

Motorcycle image detection was conducted in extreme conditions such as rain and night; the experiment for the normal condition was carried out in daylight. Two cases were taken into account: detection at a static location and detection at a relative (moving) location.

2.1.1 Static Location

The camera captures the image head-on, at a 90° angle. This means that during simulation the video is paused and set to t = 0. The capture distance is set from 5 m up to 15 m. This distance, denoted D, is influenced by the camera tilting effect, and its measurement depends on the motorcycle footprint, i.e. the y coordinate where the tyre touches the ground. A stabilization process to correct the tilting effect is applied while the image is detected in MATLAB Simulink.

2.1.2 Relative Location

Unlike the static case, here the image is captured while the motorcycle is moving. During simulation, a real-time video of the moving motorcycle was recorded. As detection proceeds, the bounding box keeps following the motorcycle, with the two movements synchronized for pattern matching. Detection of motorcycle images is set at 45° for the left and right boundaries of the camera mounted on the car dashboard. The situation is shown in Fig. 4.

Fig. 4

Camera mounted in car

2.1.3 Distance Measurement Model

The distance between the motorcycle and the camera can be measured using the distance measurement model shown in Figs. 5, 6 and 7 [6]. The distance is computed as:

$$ d_{\mathrm{cam}} = \frac{kFh}{a} $$
(1)
$$ \frac{kFh}{a} = d + \Delta d $$
(2)
Fig. 5

Object position

Fig. 6

Distance of driver to motorcycle on side view

Fig. 7

Distance of camera to motorcycle on side view

where,

dcam:

distance from camera to the motorcycle

d:

distance from the front end of the driver's vehicle to the motorcycle

Δd:

distance from the camera to the front end of the driving vehicle

h:

height of the camera from the ground

F:

focal length of the camera

a:

pixel difference from the y coordinate of the vehicle bottom in the image frame to the y coordinate of the image frame center.
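Equations (1) and (2) can be sketched numerically as follows. All parameter values in this example (k, F, h, a, Δd) are invented for illustration only and do not come from this study.

```python
# Hedged numeric sketch of Eqs. (1)-(2). All values are illustrative.

def camera_distance(k, F, h, a):
    """Eq. (1): dcam = k*F*h / a, distance from camera to motorcycle."""
    return k * F * h / a

def driver_distance(k, F, h, a, delta_d):
    """Eq. (2): kFh/a = d + delta_d, so d = dcam - delta_d."""
    return camera_distance(k, F, h, a) - delta_d

# Hypothetical values: scaling factor 1.0, focal length 8.0, camera
# height 1.5, pixel offset a = 2.0, camera 0.5 behind the front end.
print(camera_distance(1.0, 8.0, 1.5, 2.0))        # 6.0
print(driver_distance(1.0, 8.0, 1.5, 2.0, 0.5))   # 5.5
```

The split between dcam and d simply accounts for the camera sitting Δd behind the front end of the driver's vehicle.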

The distance d is influenced by the camera tilting effect, and the distance measurement depends on the y coordinate of the vehicle footprint. Therefore, the distance formula modified after calibration using the camera tilting angle is shown in Eq. 3. While the motorcycle images were being captured, vibration occurred because the road plane is not flat and the car and the motorcycle were moving at the same time. As a result, the tilting angle of the camera relative to the detected object is not zero. Figure 5 shows the diagram of object detection from the rear. Initially, the detected object sits at the center of the bounding box, and its targeted location is defined by the yc coordinate. As the object moves, vibration creates a new vanishing point, defined as yc′. In this case, yc′ = yc ± |Fθx|.

Hence the distance measurement model after calibration is as follows:

$$ d = \frac{kFh}{y_{vb} - y_{c}'} - \Delta d $$
(3)
$$ F\theta_{x} = y_{c}' - y_{c} $$
(4)

where

d:

the distance from the front end of the driver's vehicle to the motorcycle

Δd:

the distance from the camera to the front end of the driving vehicle

F:

the focal length of the camera

h:

the height of the camera from the ground

yvb:

y coordinate of the vehicle lower bound in image frame

yc:

y coordinate of centroid of the image frame

yc′:

new vanishing point

k:

scaling factor, which is determined by the image resolution used.
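The calibrated model can be sketched in Python, assuming the form d = kFh/(yvb − yc′) − Δd implied by Eqs. (1)–(2) together with the shifted vanishing point of Eq. (4). All numeric values below are illustrative assumptions, not measurements from this study.

```python
# Hedged sketch of the calibrated distance model. The closed form
# d = k*F*h/(y_vb - y_c') - delta_d is inferred from Eqs. (1)-(2) and (4);
# every numeric value here is illustrative.

def shifted_vanishing_point(y_c, F, theta_x):
    """Eq. (4): tilt by theta_x moves the vanishing point by F*theta_x."""
    return y_c + F * theta_x

def calibrated_distance(k, F, h, y_vb, y_c, theta_x, delta_d):
    y_c_new = shifted_vanishing_point(y_c, F, theta_x)
    a = y_vb - y_c_new     # pixel offset recomputed against the new point
    return k * F * h / a - delta_d

# With zero tilt this reduces to the uncalibrated model of Eqs. (1)-(2):
print(calibrated_distance(1.0, 8.0, 1.5, 242.0, 240.0, 0.0, 0.5))   # 5.5
# With tilt, the vanishing point shifts and the same footprint row
# yields a different pixel offset, hence a corrected distance:
print(calibrated_distance(1.0, 8.0, 1.5, 244.0, 240.0, 0.25, 0.5))  # 5.5
```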

The block diagram was constructed from the MATLAB Simulink block library, with the blocks integrated with one another. Details of each block are provided in Table 1. Referring to Fig. 1, the overall MATLAB Simulink block diagram used to develop the detection system comprised the multimedia file, image from file, image-to-file conversion, 2D correlation, maximum, data type conversion, constant, mux, draw shape and video viewer blocks. All of these blocks are explained one by one according to their functions in the detection system that was created. In the first stage, the multimedia file block of the tracker system was set up. This block can receive data directly in the form of video and audio; accordingly, the motorcycle video was supplied to it in AVI file format. The multimedia files block works on the Windows operating system, where it can read compressed or uncompressed video and audio frames. On other operating systems, it can only read uncompressed video or audio frames in AVI file format.

Table 1 Detection system blocks

In the second stage, the target motorcycle image was extracted from the recorded video and supplied to the image from file block in Portable Network Graphics file format. The target motorcycle image was identified beforehand. This block can also receive motorcycle images produced with MATLAB software, and images in RGB and grayscale formats can likewise be adapted to it. In the third stage, data from the target motorcycle video were converted to single-precision floating point; this conversion allows a new prototype of the idea to be implemented quickly. In the fourth stage, the 2D correlation block was used to identify the parts of the video frame that best matched the target motorcycle image. This step is known as pattern matching: support-region collation occurs when the template image and target image are matched and overlap each other. In the fifth stage, the maximum block was used to find the maximum index value for each target motorcycle image matrix input. During parameter development, the detection tracker mode in this block was set to index; indexed images consist of a data matrix, X, and a colormap matrix. In the sixth stage, the conversion block was used to change the index value sent by the maximum block, converting 32-bit unsigned integers to single-precision floating point.

In the seventh stage, the size of the target motorcycle template image was specified by setting the desired size value in the constant block; increasing this value enlarges the target image, while decreasing it shrinks the image. In the eighth stage, the mux block was used to combine the maximum value and the template image size into a single vector. This vector defines a rectangular region over the target portion of the motorcycle, also known as the region of interest (ROI). In the ninth stage, a rectangle is drawn, using the draw shape block, around the part of each video frame that best matches the target motorcycle image; this rectangle is referred to as the bounding box in this study. In the tenth and final stage, the video viewer block is used. It displays the motorcycle video with the ROI overlaid on it, and the bounding box continues to follow the motorcycle's motion in the video. The block automatically opens the video viewer window during simulation. The output image is displayed in black and white because the image is represented in single-precision floating point, where the value 0 corresponds to black and the value 1 corresponds to white.
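The pattern-matching core of this pipeline (stages four and five) can be sketched without Simulink: slide the template over the frame, score each position with a 2D correlation, and keep the index of the maximum. The tiny frame and template below are illustrative, not real image data.

```python
# Sketch of 2D-correlation template matching: score every position of the
# template over the frame and return the best one. Toy data, not real frames.

def correlate_at(frame, template, top, left):
    """Sum of elementwise products of the template with a frame region."""
    return sum(
        frame[top + i][left + j] * template[i][j]
        for i in range(len(template))
        for j in range(len(template[0]))
    )

def best_match(frame, template):
    """Return (top, left) of the highest-correlation position."""
    th, tw = len(template), len(template[0])
    positions = [
        (top, left)
        for top in range(len(frame) - th + 1)
        for left in range(len(frame[0]) - tw + 1)
    ]
    return max(positions, key=lambda p: correlate_at(frame, template, *p))

frame = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 9], [9, 9]]
print(best_match(frame, template))  # (1, 1)
```

The returned position plays the role of the maximum block's index; combined with the template size (the constant block), it yields the ROI vector that the draw shape block turns into a bounding box.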

3 Method

The visibility performance of motorcycle plate image recognition for FPGA was recorded at various threshold level values up to stage 10, with threshold values set from level 2 to level 10. The quality of the output image derived from Fig. 8 was observed at each threshold level. Image quality was evaluated with the histogram level and histogram equalization via MATLAB programming. The histogram level and histogram equalization in Figs. 9 and 10 show that each image's tonal range divides into three zones: a dark area, a mid-tone area and a light area.

Fig. 8

Original image

Fig. 9

Histogram level

Fig. 10

Histogram equalization

3.1 MATLAB Programming

This project adopted MATLAB Simulink to simulate the motorcycle image under different conditions of lighting, distance and resolution. The block diagram is arranged and connected for the tracking and detection process. The next step, analyzing the motorcycle output image for evaluation, is depicted below.

  • RGB = imread('C:\…');         % read the source image (path truncated in the original)

  • I = rgb2gray(RGB);            % convert to grayscale

  • figure, imshow(I)             % display the grayscale image

  • figure, imhist(I)             % histogram level

  • [J, T] = histeq(I);           % histogram equalization; T is the transformation

  • figure, imshow(J)             % display the equalized image

  • figure, imhist(J, 64)         % equalized histogram with 64 bins

  • figure, plot((0:255)/255, T)  % plot the gray-level transformation curve

4 Results and Discussion

Motorcycle image edge detection focusing on plate number recognition was carried out in this project. Traffic offenders can be recognized through their motorcycle plate number registered with the Department of Road Transport Malaysia (JPJ). Motorcycle image edge detection was evaluated for different threshold level settings, resolutions, lighting and weather conditions. The parameters involved in the evaluation are the histogram level, histogram equalization and the image contrast stretching method. MATLAB Simulink software was used throughout the experiments in this module for benchmark evaluation.

The visibility performance of motorcycle image edge detection in daylight for this platform was recorded at threshold level settings from level 2 to level 10, and the quality of the output image was observed at each level. The effect on each output image is shown in Figs. 11 and 12. The image output at threshold level 2 is clear compared to threshold level 10; these results show that output image quality degrades as the threshold value increases. This factor can be validated through the histogram level and by manipulating each image with the contrast stretching method.
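One way to picture the degradation just described is binary thresholding: as the cutoff rises, fewer pixels survive and plate detail is lost. The mapping of a "threshold level" to a concrete gray-value cutoff below is an assumption made purely for illustration.

```python
# Hedged sketch of the thresholding effect: a higher cutoff keeps fewer
# pixels, so detail disappears. The cutoff values are illustrative only.

def binarize(pixels, cutoff):
    """Map pixels at or above the cutoff to white (1), the rest to black (0)."""
    return [1 if p >= cutoff else 0 for p in pixels]

plate_row = [40, 80, 120, 160, 200, 240]   # one made-up scanline of a plate
print(binarize(plate_row, 50))    # [0, 1, 1, 1, 1, 1]  low cutoff keeps detail
print(binarize(plate_row, 250))   # [0, 0, 0, 0, 0, 0]  high cutoff loses it all
```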

Fig. 11

Output image at threshold level 2 with 640 × 480 resolution

Fig. 12

Output image at threshold level 2 with 1024 × 768 resolution

The motorcycle output image in Fig. 16 at threshold level 2 is not recognized using the software platform; the plate number characters do not appear clearly at any resolution. The histogram level for each image in Fig. 11 is represented in Fig. 13a. The pixels tabulated in the histogram level in Fig. 13 do not differ much across resolutions; meanwhile, the tabulated pixel values are lower than those of the hardware platform from other research. Histogram equalization in Fig. 13b expands the local contrast of the image for enhancement. However, the pixels are tabulated only in the light area of the histogram.

Fig. 13

Histogram a level, b equalization at threshold level 2 with 640 × 480 resolution

4.1 Performance Evaluations

Figures 13 and 14 show the histograms for the images displayed in Figs. 11 and 12. An image histogram is a graph showing how many pixels lie at each gray level, or at each index for a color image. The histogram contains the information needed for image equalization, where the image pixels are stretched to give a reasonable contrast; it can be obtained by plotting the pixel value distribution over the full grayscale range [7]. The histogram equalization in Figs. 13b and 14b serves to enhance the overall contrast of the image [8]. The basic idea of histogram equalization is to remap the original gray values of the image pixels so that their distribution widens, converting the image histogram toward a uniform distribution; it is thus a distribution-function transformation based on histogram modification. Histogram equalization is a useful technique for expanding local contrast in an image: it selects, computes and equalizes the histogram of a local neighborhood, after which the center pixel is mapped through the new equalized local histogram. The output image has better contrast than the original. The plate characters appear clearer after the contrast stretching that results from histogram equalization, while the background color remains the same [9].
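The CDF-remapping idea described above can be sketched for a tiny 8-level grayscale image. The 8-level depth and the sample pixels are illustrative assumptions, not data from the figures.

```python
# Minimal histogram-equalization sketch: build the histogram, accumulate
# it into a CDF, and map each gray level through the normalized CDF.
# The 8-level depth and sample pixels are illustrative assumptions.

def equalize(pixels, levels=8):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function of the histogram
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(pixels)
    # look-up table: normalized CDF stretched onto the full gray range
    lut = [round((c / n) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]

clustered = [3, 3, 4, 4, 5, 5]    # pixels bunched in the mid-tones
print(equalize(clustered))        # [2, 2, 5, 5, 7, 7]
```

The equalized pixels span a wider range (2 to 7) than the original cluster (3 to 5), which is exactly the contrast gain the text attributes to histogram equalization.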

Fig. 14

Histogram a level, b equalization at threshold level 2 with 1024 × 768 resolution

The motorcycle plate number output in Fig. 16 at threshold level 10 is not recognized using the software platform, and the output image quality decreases compared to Fig. 12 at threshold level 2; the plate number characters do not appear at any resolution. The histogram level for each image in Figs. 15 and 16 is represented in Figs. 17a and 18a. The pixels tabulated in the histogram level in Figs. 17a and 18a do not differ much across resolutions; meanwhile, the tabulated pixel values are lower than those of the hardware platform from other research. Histogram equalization in Figs. 17b and 18b expands the local contrast of the image. However, the pixels are tabulated only in the light area of the histogram, in contrast to the hardware platform, where the pixels are tabulated uniformly.

Fig. 15

Output image at threshold level 10 with 640 × 480 resolution

Fig. 16

Output image at threshold level 10 with 1024 × 768 resolution

Fig. 17

Histogram a level, b equalization at threshold level 10 with 640 × 480 resolution

Fig. 18

Histogram a level, b equalization at threshold level 10 with 1024 × 768 resolution

Table 2 shows the detection percentage under different conditions from a distance of 5 m for all resolutions. In daylight, motorcycle image detection reached 99%. In rainy daylight, the detection percentage was 75% at 800 × 600 resolution and 99% at 1024 × 768 resolution, while the night condition recorded 0%; at 1024 × 768 resolution the detection percentage increased to 99% at all threshold levels (T) from T = 2 to T = 10. The processing speed during detection at all levels and conditions was 33.33 ms. For distances of 10 and 15 m, the detection percentage was neglected for all conditions due to the occurrence of many errors.

Table 2 Percentage image detection

5 Conclusion

The developed framework showed great accuracy in segmenting the plate number from the motorcycle image in daylight, compared with rainy daylight and night conditions. A motorcycle traffic offender can be identified from the recognized plate number. Image quality decreased as the threshold value increased from level 2 to level 10. The resolution used and the lighting conditions also affect motorcycle image detection. These data can be delivered to Department of Road Transport officers for analysis. The advantage of the developed framework prototype is its flexibility compared with the previous fixed system. This in turn contributes significantly to the accuracy, reliability and simplicity of any motorcycle detection system used for road safety. This approach can reduce illegal racing as much as possible, enabling the department of road transportation to use the developed framework to trace traffic offenders.