Finding Lane Lines - Self-Driving
I joined the Udacity "Self-Driving Nanodegree" program. Here are my notes on the first project - detecting lane lines.
Detect Lane Lines On Still Image
The first step is detecting lane lines on a still image. Here is an example image that we use to detect the lane lines.

Canny Edge Detection
First read in an image and convert to grayscale.
```python
import matplotlib.pyplot as plt
```
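A minimal sketch of that step (the test-image path is just an example, not necessarily the file used in the project):

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2

# Read in the image (path is illustrative) and convert it to grayscale
image = mpimg.imread('test_images/solidWhiteRight.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

plt.imshow(gray, cmap='gray')
plt.show()
```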
Now let's try the Canny edge detector on the grayscale image. The algorithm first detects strong edge (strong gradient) pixels above high_threshold and rejects pixels below low_threshold; pixels between the two thresholds are kept only if they are connected to strong edges. The ratio of low_threshold to high_threshold is recommended to be 1:2 or 1:3.
The course recommends including Gaussian smoothing before running Canny. Gaussian smoothing is essentially a way of suppressing noise and spurious gradients by averaging (see the OpenCV documentation). The kernel_size for Gaussian smoothing has to be an odd number; a larger kernel_size implies averaging, or smoothing, over a larger area.
```python
# Do all the relevant imports
```
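Continuing from the grayscale image gray above, a minimal sketch of this step might look like the following (the kernel size and thresholds are illustrative values):

```python
import cv2

# Gaussian smoothing: kernel_size must be an odd number
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)

# Canny edge detection: low_threshold to high_threshold ratio around 1:2 or 1:3
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
```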
For more details, the Introduction to Computer Vision course on Udacity helps.
Hough Transform
At this point, we have applied Canny edge detection to the image. In order to detect lines, we run the Hough Transform on top of the Canny output. To do this, we use an OpenCV function called HoughLinesP, which takes several parameters.
If you want to know how the Hough Transform is implemented in the first place, take a look at this blog.
Here is the complete source.
```python
import matplotlib.pyplot as plt
```
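A sketch of what the complete still-image pipeline might look like; the file path and the Canny/Hough parameter values below are illustrative, not necessarily the exact values used in the project:

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2

# Read in the image (path is illustrative) and convert to grayscale
image = mpimg.imread('test_images/solidWhiteRight.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Gaussian smoothing followed by Canny edge detection
blur_gray = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur_gray, 50, 150)

# Hough Transform parameters (illustrative values)
rho = 2                # distance resolution of the grid in pixels
theta = np.pi / 180    # angular resolution of the grid in radians
threshold = 15         # minimum number of votes
min_line_length = 40   # minimum length of a line in pixels
max_line_gap = 20      # maximum gap between connectable segments in pixels

lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)

# Draw every detected segment on a blank image and overlay it on the original
line_image = np.copy(image) * 0
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 5)

combo = cv2.addWeighted(image, 0.8, line_image, 1, 0)
plt.imshow(combo)
plt.show()
```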
For more details on the HoughLinesP API:
```python
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]), min_line_length, max_line_gap)
```
- edges - the output image from Canny
- rho and theta - the distance and angular resolution of our grid in Hough space. Remember that in Hough space we have a grid laid out along the ($\theta$, $\rho$) axes
- threshold - specifies the minimum number of votes (intersections in a given grid cell)
- np.array([]) - just a placeholder, no need to change
- min_line_length - the minimum length of a line that you will accept in the output
- max_line_gap - the maximum distance between segments that you will allow to be connected into a single line
Detect Lane Lines On Video (project)
Processing video is similar to processing still images. The idea is to write a pipeline that processes a still image, treat the video as a list of images, and apply the pipeline to each frame. We have already learned how to detect lane lines on a still image. Here is the part that goes beyond what we learned: previously we used the Hough transform to detect many line segments, but now we need to draw only one solid line for the left lane and one for the right lane. Each solid line should connect to the bottom edge of the image so we can see where the lane starts while driving.

The output should look something like above after detecting line segments, and the goal is to connect/average/extrapolate line segments to get output like below.

Pipeline on Still Image
For details on how to detect lines, see the Canny Edge Detection and Hough Transform sections above. Previously, we drew the detected lines on the image using the following method.
```python
for line in lines:
```
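A sketch of how that loop typically continues, assuming lines is the output of HoughLinesP and image is the original image (colour and thickness are illustrative):

```python
import numpy as np
import cv2

line_image = np.copy(image) * 0   # blank image to draw the lines on

for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 5)

# Overlay the drawn lines on the original image
result = cv2.addWeighted(image, 0.8, line_image, 1, 0)
```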
That method draws all the lines we can find, but now we want only two lines: one on the left and one on the right. How am I going to tweak this method and make it draw just those two lines?
Here is how I did it. It may not be the best plan, but it is the one I used in the project. First, we can see that the slope of the left line is negative and the slope of the right line is positive. (Remember that image coordinates have y increasing downward, so when x increases and y increases the slope is positive.)
So I loop through all the lines and find one with a positive slope and one with a negative slope.
Now there are two ways I could do this. One is to simply loop through all the lines, compute all the slopes, make one array of left slopes and one of right slopes, put the negative and positive numbers into the correct array, and calculate the average left and right slopes. But this is not what I did. Why?
When I calculated the slopes, I found that some of the lines I detected belong to neither the left nor the right lane. So once I have one sample slope for the left and one for the right, I check whether each new line's slope is within 0.1 of the left sample or the right sample. If it is close to neither, I ignore that line. The rest is similar: I put the slopes of the lines I keep into the left array or the right array and calculate the averages.
Note: while looping through the lines and calculating the slopes, I also need to calculate the y-intercepts. Using the formula below, I compute the y-intercept for each line and calculate its average as well.
```python
m = (y_2 - y_1) / (x_2 - x_1)   # slope of a segment
b = y_1 - m * x_1               # y-intercept, from y = m*x + b
```
From this point, I have both the slope and the y-intercept for the left line and the right line. There is one more thing we need to do: the line we draw must start at the bottom edge of the image. However, the detected points may not reach all the way down to the edge, so we need to calculate that point ourselves. How? From $y = mx + b$: we have m and we have b, y is the image height, so we can solve for x.
The other endpoint is the point with the minimum y-value: filter all the points for both the left line and the right line and find the point with the smallest y.
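In code, assuming m and b are the averaged slope and y-intercept and min_y is the smallest y-value found among the kept segments, the two endpoints could be computed roughly like this:

```python
# Bottom endpoint: y is the image height, so solve x from y = m*x + b
y_bottom = image.shape[0]
x_bottom = int((y_bottom - b) / m)

# Top endpoint: the smallest y seen among the kept segments
x_top = int((min_y - b) / m)
```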
Below is all the code I use to draw the lines. Since I did not have much time for this project, the code is a bit messy; it is only for my own reference.
```python
leftM = 0
```
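A rough sketch of a draw function that follows the description in this section is given below; the function name draw_lane_lines, the slope_tolerance value, and the drawing colour are illustrative choices, not necessarily what the project used:

```python
import numpy as np
import cv2

def draw_lane_lines(image, lines, slope_tolerance=0.1):
    """Average the Hough segments into one left and one right lane line."""
    left_slopes, left_intercepts, left_min_y = [], [], image.shape[0]
    right_slopes, right_intercepts, right_min_y = [], [], image.shape[0]
    left_sample, right_sample = None, None

    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments
            m = (y2 - y1) / (x2 - x1)
            b = y1 - m * x1
            if m < 0:  # left lane (y grows downward in image coordinates)
                if left_sample is None:
                    left_sample = m
                if abs(m - left_sample) > slope_tolerance:
                    continue  # slope too far from the left sample, ignore
                left_slopes.append(m)
                left_intercepts.append(b)
                left_min_y = min(left_min_y, y1, y2)
            else:      # right lane
                if right_sample is None:
                    right_sample = m
                if abs(m - right_sample) > slope_tolerance:
                    continue  # slope too far from the right sample, ignore
                right_slopes.append(m)
                right_intercepts.append(b)
                right_min_y = min(right_min_y, y1, y2)

    line_image = np.copy(image) * 0
    for slopes, intercepts, min_y in ((left_slopes, left_intercepts, left_min_y),
                                      (right_slopes, right_intercepts, right_min_y)):
        if not slopes:
            continue
        m = np.mean(slopes)
        b = np.mean(intercepts)
        # Bottom point: y is the image height, so x = (y - b) / m
        y_bottom = image.shape[0]
        x_bottom = int((y_bottom - b) / m)
        # Top point: the smallest y seen among the kept segments
        x_top = int((min_y - b) / m)
        cv2.line(line_image, (x_bottom, y_bottom), (x_top, int(min_y)), (255, 0, 0), 10)

    return cv2.addWeighted(image, 0.8, line_image, 1, 0)
```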
Use Pipeline On Video
To use it on a video, sample code is already provided. All we have to do is wrap our pipeline in a def process_image(image) function and apply this function to every frame of the video.
```python
def process_image(image):
```
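A sketch of how the wrapper and the video processing might look, assuming the moviepy-based sample code from the course and illustrative file names; draw_lane_lines refers to the draw function sketched above:

```python
from moviepy.editor import VideoFileClip
import numpy as np
import cv2

def process_image(image):
    # Compressed sketch of the still-image pipeline:
    # grayscale -> Gaussian blur -> Canny -> HoughLinesP -> draw averaged lane lines
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    blur_gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 2, np.pi / 180, 15, np.array([]), 40, 20)
    return draw_lane_lines(image, lines)   # the draw function sketched earlier

# File names are illustrative; the course provides similar sample clips
clip = VideoFileClip("solidWhiteRight.mp4")
annotated = clip.fl_image(process_image)              # run the pipeline on every frame
annotated.write_videofile("solidWhiteRight_out.mp4", audio=False)
```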
Improvement
How can we make the algorithm more robust? Currently the algorithm only detects straight lines because I am using a linear equation, so in situations where the road is not straight this algorithm may fail. To make it better, instead of calculating and drawing a straight line, we could draw a curve. I think drawing a curve is not as easy as drawing a line, so another workaround may be calculating multiple slopes and drawing many short lines that together form a curve.
Above are just my thoughts on how to make improvements.
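As an illustration of that idea only, one could fit a second-order polynomial to the lane points and draw it as a series of short segments. Everything in the sketch below (the function name, the polynomial degree, the number of segments) is an assumption, not something from the project:

```python
import numpy as np
import cv2

def draw_curved_lane(image, points, color=(255, 0, 0), thickness=10):
    # Fit a second-order polynomial x = f(y) to the lane points, then draw it
    # as a series of short segments that approximate a curve.
    points = np.asarray(points)                          # shape (N, 2): columns are x, y
    coeffs = np.polyfit(points[:, 1], points[:, 0], 2)   # fit x as a function of y
    ys = np.linspace(points[:, 1].min(), image.shape[0] - 1, 20)
    xs = np.polyval(coeffs, ys)
    pts = np.int32(np.column_stack([xs, ys])).reshape(-1, 1, 2)
    cv2.polylines(image, [pts], isClosed=False, color=color, thickness=thickness)
    return image
```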
