Setting up the Camera

The aim of the last meeting was to mount the camera of the Raspberry Pi on the car and to adjust its angle of view to get usable pictures. After trying different approaches, we decided to let the camera stick out of the hood of the car. So it was necessary to glue the camera onto a small mount and to glue this mount onto the frame inside the car.

To keep things economical, some leftover plastic parts were used to build the mount for the camera. After cutting them to size, the board with the mounted camera was glued onto this mount first; later, the mount was glued onto the frame of the car. A hole was cut into the hood of the car so the camera mount could stick through it.

To adjust the camera, the Raspberry Pi was connected to a laptop via LAN to transfer the camera's pictures to a screen. Since the glue was not dry yet, the camera mount could still be pressed into the right angle quite easily.

At this point the transmission (video stream) was fluent and free of interference. But after turning on the remote control and the motor of the car, the stream became very choppy.

After some tests, the remote control could be identified as the source of the disturbance. It transmits at a frequency of 2.4 GHz. This frequency band is very commonly used for RC car remote controls, because it can be used without a licence. Such a high frequency is not necessary for the data rate we want to transmit, so from a technical point of view it is an unfavorable choice. If we want to connect the Raspberry Pi to a computer via WLAN, further problems can arise, because WLAN operates in the same 2.4 GHz band as the remote. Apart from this, higher carrier frequencies also bring higher Doppler shifts with them, which is yet another source of interference. It would be advantageous to use lower frequencies (below one kHz).
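To get a feeling for the magnitude of these Doppler shifts, here is a quick back-of-the-envelope calculation (the car's speed of 10 m/s is an assumed example value, not a measurement):

```python
# Back-of-the-envelope Doppler shift: f_d = f * v / c
# (the car speed below is an assumption for illustration only)

C = 299_792_458.0  # speed of light in m/s

def doppler_shift(carrier_hz: float, speed_ms: float) -> float:
    """Approximate Doppler shift for a transmitter moving at speed_ms."""
    return carrier_hz * speed_ms / C

# 2.4 GHz remote, car moving at an assumed 10 m/s (36 km/h):
print(doppler_shift(2.4e9, 10.0))  # roughly 80 Hz
```

At 2.4 GHz even a slow car already produces a shift of tens of hertz, while a sub-kHz carrier would shift by far less than a millihertz at the same speed.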

So the high-frequency signal of the remote control interferes with the frequency generator on the Raspberry Pi that is responsible for the picture frequency used to display the video. The superposition of these two frequencies (transmit frequency and picture frequency) results in a delayed picture frequency. This is why the video is no longer fluent once the remote control is turned on.

To fix this interference we have to build a shield around all affected parts (the circuit board of the Raspberry Pi and the ribbon cable to the camera). The first idea is to wrap these parts in aluminum foil. If necessary, the foil can also be connected to the negative terminal of the battery in the car.

Further information will follow after testing the shielding.

Stay tuned!

Posted in Electronics, RaspberryPi

Detect a street – First experiments #2

In the last experiment we found that street lines can be detected using Gaussian blur and edge detection algorithms such as Canny, Sobel or even Scharr. The following experiment is meant to support these edge detectors: finding the street lines is certainly important, but a classifier that decides whether the car is currently located on a street or not is also a necessary part of such a system. So we used a Random Forest based on HoG (Histogram of Oriented Gradients) features to tackle this problem. The source of this Random Forest approach and the training/test sets will be introduced in another post; this post presents (just) a baseline experiment.
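Our actual HoG/Random-Forest code will be covered in the follow-up post. As a rough illustration of the HoG idea only, here is a minimal numpy sketch that computes a single magnitude-weighted orientation histogram over a whole image (real HoG additionally divides the image into cells and normalizes over blocks):

```python
import numpy as np

def orientation_histogram(img: np.ndarray, bins: int = 9) -> np.ndarray:
    """Histogram of gradient orientations over the whole image,
    weighted by gradient magnitude (the core idea behind HoG)."""
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    # unsigned orientation in [0, 180) degrees
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, 180.0),
                           weights=magnitude)
    # L2-normalize so the descriptor is contrast-invariant
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

An image containing only vertical edges, for example, puts all of its weight into the first orientation bin, which is exactly the kind of structure the Random Forest can learn from.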


To train the Random Forest, the dataset is divided into positive and negative samples. The pictures were resized to 100px in their largest dimension to keep the processing step fast.
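Resizing so that the largest dimension becomes 100px while keeping the aspect ratio boils down to a single scale factor; a minimal sketch (the helper name is ours, and the actual resizing tool is not shown here):

```python
def fit_largest_dim(width: int, height: int, target: int = 100) -> tuple:
    """New (width, height) with the largest dimension scaled to `target`,
    preserving the aspect ratio (rounded to whole pixels)."""
    scale = target / max(width, height)
    return (round(width * scale), round(height * scale))

print(fit_largest_dim(640, 480))  # (100, 75)
```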


The decision trees maximize the information gain (i.e., minimize the entropy) to find the best split in each node and thus reach the best classification results in the leaves. The experiment used 10 decision trees, each trained on a different (random) subset of the pictures (which gave the classifier the name "Random" Forest). The goal of this randomness is to improve the generalization of the classifier (to avoid overfitting), because each tree casts its own vote; the votes of all trees then yield the overall classification result. In this experiment, the classifier produces a response map that shines bright for a positive result or stays dark for a negative one. This short explanation is not scientifically rigorous, but it provides enough background to interpret the resulting pictures correctly.
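The split criterion and the voting can be made concrete in a few lines. This is a generic sketch of entropy, information gain and the majority vote, not the code of our classifier (the per-tree votes below are made-up values):

```python
import numpy as np

def entropy(labels) -> float:
    """Shannon entropy (in bits) of a label distribution."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(parent, left, right) -> float:
    """Entropy reduction achieved by splitting `parent` into `left`/`right`."""
    n = len(parent)
    child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - child

# A perfect split of a balanced node gains one full bit:
print(information_gain([0, 0, 1, 1], [0, 0], [1, 1]))  # 1.0

# Majority vote over (hypothetical) per-tree votes of 10 trees:
votes = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
decision = int(sum(votes) > len(votes) / 2)  # 1 -> "street"
```

At each node, the tree simply picks the feature/threshold pair with the highest information gain; the forest then averages out the individual trees' mistakes through the vote.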

Short note on performance: an average estimation step took about 0.1 s.


Negative Example

Negative Example - First Response

Negative Example - Second Response

Positive Example

Positive Example - First Response

Positive Example - Second Response


Comparing the output of the positive and the negative example, we can see the brighter response maps for the positive one. This leads us to the conclusion that the classifier has successfully learned its task. For further experiments we have to optimize some parameters and collect more data to build a better classifier. With some improvements, this classifier can be used to detect an initial 'street' or 'non-street' state for the car.

Posted in Artificial Intelligence, Computer Vision

Detect a street – First experiments

The first experiments are done. We collected some pictures from Google and processed them with various filters. We used edge detection filters and Gaussian blur to find out how they affect the quality of the input.


The images from Google were resized to a specific size and converted to grayscale (because grayscale is faster to process). The Canny algorithm is a well-known edge detector, but for this experiment we used the standard edge detector of ImageMagick's convert. This worked fine! For the next experiment, a Gaussian blur (with various parameters) was applied before the edge detection. The resulting pictures can be seen below.

Sample Images


The Gaussian blur brought more smoothness, which reduced the noise caused by surface irregularities in the pictures. Admittedly, the pictures used here were not taken with our RaspiCam, but the angle of view is close to that of the car's camera.


To batch-resize the images (largest dimension to 600px) we used:

sips -Z 600

Grayscaling (black/white) was done with:

convert -colorspace GRAY

And last but not least we used ImageMagick's convert with

convert -gaussian-blur 3


convert -edge 1

for this experiment.

Posted in Computer Vision