Inspired by Andrew Ng’s example in his class “Structuring Machine Learning Projects”, I will try to lay out the starting points of a project describing how to build a self-driving car.
I won’t address the hardware side – I assume we have data from a few video cameras, possibly from a few Lidar detectors installed around the car, and maybe a radar.
Building an end-to-end Deep Learning system is not recommended here – the problem is too complex and the available data is still sparse (even Google’s 1-million-mile database is rather small for this problem).
The more reasonable approach is to combine some DL systems with an expert system; the complete plan is presented below:
From the video and Lidar/radar data, extract the most important features (the cars, the pedestrians, the road signs and the shape of the road) using a multi-task Deep Learning algorithm.
We use this information as input features for an expert system (a rule-based system) that builds a driving strategy – a route path, together with the speed at which each section of the path should be driven.
Finally, this information is translated into engine and steering commands by a simple system, usually already found on current cars (if a car is able to self-park, it already has electronic engine and steering control).
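To make the middle, rule-based layer concrete, here is a toy sketch of how such a strategy module could map perception outputs to a target speed. All names (`Detection`, `plan_speed`, the thresholds) are illustrative assumptions, not part of any real system:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical record produced by the perception stage."""
    label: str        # "car", "pedestrian", "road_sign", ...
    distance_m: float

def plan_speed(detections, cruise_kmh=50.0):
    """Toy rule-based strategy: pick a target speed from the detections."""
    speed = cruise_kmh
    for d in detections:
        if d.label == "pedestrian" and d.distance_m < 15:
            return 0.0                    # always stop for a close pedestrian
        if d.label == "car" and d.distance_m < 10:
            speed = min(speed, 20.0)      # slow down behind a close car
        if d.label == "road_sign":
            speed = min(speed, 30.0)      # be cautious near signs
    return speed
```

A real expert system would of course have many more rules and would also output the path, not just the speed, but the principle is the same: hand-written rules on top of learned features.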
Of course, this project is complex. For the moment I will focus on the first part of the system: using video and Lidar images to identify the important features on the road (cars, people, road signs).
To implement this NN I will use the same approach described in my previous post on Image Classification, making sure the NN has 4 neurons on the output layer (one for each class) and using a sliding window over the initial image.
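The sliding-window idea itself is simple: enumerate fixed-size crops across the image and run the classifier on each one. A minimal sketch (window and stride sizes are arbitrary assumptions for illustration):

```python
def sliding_windows(img_w, img_h, win=64, stride=32):
    """Return (x, y, w, h) crops covering the image left-to-right, top-to-bottom.

    Each crop would then be fed to the 4-output classifier
    (car / pedestrian / road sign / road shape).
    """
    boxes = []
    for y in range(0, img_h - win + 1, stride):
        for x in range(0, img_w - win + 1, stride):
            boxes.append((x, y, win, win))
    return boxes
```

In practice one also scans at several window scales, since cars near the camera appear much larger than cars far away.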
I’ll keep you posted in a new article!
Part 2 – Small-scale experiment
Being the normal geek that I am, I started playing with an RC car, which I will use as the base for building the simplest possible autonomous car.
I bought a 1/18 scale car with nice front and back bumpers so I can install the cameras and sensors (2 frontal, 2 back, 2 on the sides).
The controller will be a Raspberry Pi, graciously provided by Caroline (thanks!).
We just remove the top cover to reveal the nice chassis, where the sensors and motherboard will fit perfectly.
The next step is to acquire an Arduino kit (many thanks, Alecu!) for controlling the motors, so I can concentrate more on the vision and sensor parts of the project.
More components are arriving from friends:
For the Lidar module (Garmin LIDAR-Lite v3), the specs are:
- Accuracy: +/- 2.5cm
- Range: 0-40 meters
- Power: 4.75–5 VDC; 6 V max
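The LIDAR-Lite v3 talks I2C, so reading it from the Raspberry Pi is straightforward. A sketch, assuming the default address and distance registers from the Garmin datasheet (0x62; trigger via register 0x00, distance in centimeters in registers 0x0F/0x10) – treat the register values as something to double-check against your own unit’s manual:

```python
LIDAR_ADDR = 0x62  # default I2C address per the LIDAR-Lite v3 datasheet

def to_centimeters(high_byte, low_byte):
    """Combine the two distance registers into a distance in centimeters."""
    return (high_byte << 8) | low_byte

def read_distance(bus):
    """Read one distance measurement over I2C (bus: an smbus.SMBus instance).

    Simplification: a fixed 20 ms wait instead of polling the busy flag.
    """
    import time
    bus.write_byte_data(LIDAR_ADDR, 0x00, 0x04)  # trigger a measurement
    time.sleep(0.02)
    high = bus.read_byte_data(LIDAR_ADDR, 0x0F)
    low = bus.read_byte_data(LIDAR_ADDR, 0x10)
    return to_centimeters(high, low)
```

On the Pi you would call it as `read_distance(smbus.SMBus(1))` in a loop; a production version should poll the status register instead of sleeping.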
Soon I will also acquire a rotating pan-and-tilt turret (“tourelle”): http://www.robotshop.com/eu/fr/kit-aluminium-pan-and-tilt-lynxmotion2.html?gclid=EAIaIQobChMI1_jc3aOl1wIVBKFRCh2kvAkREAEYASAFEgIgXvD_BwE
More hardware arrives (a robotics kit from SunFounder, which takes care of all the connections between the motors and the computer).
Meanwhile, we work on the obstacle detection algorithm, and we chose YOLO because:
- it is very fast
- its implementation is described very well by Andrew Ng on Coursera
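One part of YOLO that is easy to show in isolation is the post-processing: filtering low-confidence boxes and applying non-max suppression so overlapping duplicates collapse into one detection. A self-contained sketch (thresholds are typical values, not taken from any particular YOLO implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(boxes, scores, score_thresh=0.5, iou_thresh=0.4):
    """Return indices of kept boxes: highest scores first, overlaps dropped."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return kept
```

The network itself predicts the boxes and scores; this step just cleans up its raw output before the boxes are drawn on the image.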
Here is our first result of object detection on general images provided by drive.ai:
November 17, 2017 – the SunFounder PiCar-V kit arrived – lots of nuts and bolts to be put together -> I managed to assemble it while watching Discovery 🙂 . Then I realized that it uses an unusual type of battery (18650), so I thought about using some AA batteries I had in the house.
The best way to start is to install Raspbian on the Raspberry Pi 3, then install the car software and PyQt5.
Then mount the frame of the car (rear motors, rear wheels, Raspberry Pi + the 3 red boards: 1 I/O board, 1 servo controller and 1 motor controller).
Next, and quite important before mounting the front wheels: connect the servos and run the provided utility (servo reset) to set the servos to the “neutral” position.
Then you can mount the front wheels and the top camera. As my project counts on using 3 (eventually 6) fixed cameras, I just taped the camera I had to the front of the car (although the kit comes with a 2-servo platform to rotate it).
Here are some images taken during assembly:
With the batteries in place, start your car and check which network you are on (note: as the HDMI output of the Raspberry Pi is blocked by the front servo, be sure to do this – find your IP and activate SSH – BEFORE you install the front servo and front wheels).
Then you can connect remotely from your laptop, start the server on the car and open http://<car_ip>:8000 to control the car.
Here is a video captured from the car as I was trying to avoid some mean Maruska pedestrians, a red Royal Bus and an Arduino fellow…
And roughly the same sequence, filmed from above.
Tomorrow I will try to make YOLO detect each obstacle in the car’s way in real time. (I use the Raspberry Pi just to capture and relay the images and to control the motors; YOLO runs on my laptop (Core i7 + GTX 1070) with TensorFlow.) As I am planning to combine streams from 3–6 cameras, the Raspberry Pi will be seriously underpowered, but it is enough to stream the video to the laptop.
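Streaming several camera feeds to the laptop needs some framing so the receiver knows which camera a frame came from and where it ends. A minimal sketch of such a wire format (the function names and the 1-byte-id + 4-byte-length layout are my own assumptions, not the PiCar software’s protocol):

```python
import struct

def pack_frame(jpeg_bytes, camera_id):
    """Prefix a JPEG frame with a camera id and payload length (network byte order)."""
    return struct.pack("!BI", camera_id, len(jpeg_bytes)) + jpeg_bytes

def unpack_frame(packet):
    """Inverse of pack_frame: returns (camera_id, jpeg_bytes)."""
    camera_id, length = struct.unpack("!BI", packet[:5])
    return camera_id, packet[5:5 + length]
```

The Pi would call `pack_frame` on each captured JPEG and push the result over a TCP socket; the laptop reads the 5-byte header, then exactly `length` payload bytes, and hands the frame to YOLO.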
Well, the laptop will probably soon become underpowered too for real-time object recognition from 6 cameras… but we will deal with that when the time comes (maybe I’ll get a big server from Santa Claus).
11/20/2017 – I applied YOLO to the first image from the video below… and surprise… one Maruska detected. The bus and the Android cart were not detected yet; maybe I need to use some more realistic toy cars? My thanks again to Andrew Ng, who really puts AI in the hands of the people (through his excellent courses on Coursera)!
I’ll keep you posted when I have news!