How I Built a Self Driving Model Car
Part 2 — The Software and Learning the First Model
Previous posts in this series can be found here.
Installing the Software
I like to SSH onto the Jetson and run byobu (with the tmux-resurrect plugin installed), which lets me work in multiple windows within the PuTTY console and restore my session from where I left off before I last shut the car down.
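For reference, that setup looks roughly like this. This is a sketch rather than a recipe: byobu's config paths vary, so treat the tmux-resurrect wiring (and the ~/.byobu/.tmux.conf location) as an assumption to check on your system.
sudo apt-get install byobu               # tmux-based terminal multiplexer
byobu-enable                             # start byobu automatically on login
# tmux-resurrect is a tmux plugin: clone it and source it from byobu's tmux config
git clone https://github.com/tmux-plugins/tmux-resurrect ~/tmux-resurrect
echo "run-shell ~/tmux-resurrect/resurrect.tmux" >> ~/.byobu/.tmux.conf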
VirtualEnv
On the Jetson, first install VirtualEnv and create a DonkeyCar environment. Activate it and do everything inside this environment. I use virtualenvwrapper from here.
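A minimal sketch of that setup; the environment name is my choice, and the virtualenvwrapper.sh path depends on how pip installed it.
pip3 install virtualenvwrapper
export WORKON_HOME=~/.virtualenvs
source ~/.local/bin/virtualenvwrapper.sh   # may be /usr/local/bin on your system
mkvirtualenv donkeycar -p python3          # create the DonkeyCar environment
workon donkeycar                           # activate it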
DonkeyCar
The first and main piece of software to install is DonkeyCar itself. For the Nano there is a small config change depending on your version of JetPack: the DonkeyCar distribution comes with an install script for JetPack 4.4, but I had 4.5. Go to the install/nano folder, create a new install-jp45.sh (copied from the 4.4 script) and change the relevant line:
sudo -H pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v45 tensorflow==2.2.0+nv20.6
You can then run pip install -e .[nano]
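The myconfig.py referred to below lives in the car application, which DonkeyCar generates with its createcar command (the path here is my choice):
donkey createcar --path ~/mycar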
Calibrating
DonkeyCar comes with a calibrate utility that lets you determine the maximum, minimum and centre (zero) positions for your car's steering and throttle. First of all, it's worth checking that the PCA9685 is configured on the correct I2C bus.
sudo i2cdetect -r -y 1
which will output something like this. It shows the PCA9685 can be addressed through 0x40 and 0x70. The first is the default and should already be configured. Look for the following in myconfig.py, which says we're using bus number 1 at address 0x40:
PCA9685_I2C_ADDR = 0x40
PCA9685_I2C_BUSNUM = 1
You then run the calibration tool, enter values manually into the console and observe the steering and throttle (make sure the car is off the ground!). By iterating and gradually increasing/decreasing the values you find where the thresholds are. These values go into myconfig.py in mycar.
donkey calibrate --channel 0 --bus=1
which, for example, may result in:
STEERING_CHANNEL = 0
STEERING_LEFT_PWM = 480
STEERING_RIGHT_PWM = 280
THROTTLE_CHANNEL = 1
THROTTLE_FORWARD_PWM = 450
THROTTLE_STOPPED_PWM = 370
THROTTLE_REVERSE_PWM = 330
Joystick
I had an existing Steam Controller and wanted to use it to get me going. Unfortunately this isn't one of the standard configs in DonkeyCar, so I had to create my own using the custom joystick utility that comes with DonkeyCar. However, the Steam Controller requires the pygame-based controller (PyGameJoystick) and some other bits and pieces to work.
sudo apt-get install python-dev libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev libsdl1.2-dev libsmpeg-dev python-numpy subversion libportmidi-dev ffmpeg libswscale-dev libavformat-dev libavcodec-dev libfreetype6-dev
sudo apt-get install libsdl2-dev
sudo apt-get install python-pygame
Then get the steamcontroller code from here. I cloned it, ran the setup and then added the steamcontroller rules in /etc/udev/rules.d (a sketch of these steps follows below). You'll need the user that runs it to be in the games group. Now we can run:
sc-xbox.py start
and you'll see js0 appear in /dev/input. This is the default location where DonkeyCar looks for a controller.
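For reference, the clone-and-setup steps look roughly like this. I believe the commonly used standalone driver is ynsta's steamcontroller project, but check the repository's README for the exact udev rules filename; the placeholder below is deliberately not a real path.
git clone https://github.com/ynsta/steamcontroller.git
cd steamcontroller
sudo python3 setup.py install
# copy the udev rules shipped in the repo (filename varies between versions)
sudo cp <rules-file-from-repo> /etc/udev/rules.d/
sudo udevadm control --reload-rules
# the driver needs uinput access, hence the games group
sudo usermod -a -G games $USER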
Now run the joystick wizard:
donkey createjs
This will write out a Python file, but the class is configured to extend JoystickController. We need to edit it so that it extends PyGameJoystick:
from donkeycar.parts.controller import PyGameJoystick, JoystickController
class MyJoystick(PyGameJoystick):
Next we need to get DonkeyCar to use the custom controller, which means editing myconfig so that the custom controller is created:
# add controller
if cfg.USE_JOYSTICK_AS_DEFAULT:
    from my_joystick import MyJoystickController
    ctr = MyJoystickController(throttle_dir=cfg.JOYSTICK_THROTTLE_DIR,
                               throttle_scale=cfg.JOYSTICK_MAX_THROTTLE,
                               steering_scale=cfg.JOYSTICK_STEERING_SCALE,
                               auto_record_on_throttle=cfg.AUTO_RECORD_ON_THROTTLE)
    ctr.set_deadzone(cfg.JOYSTICK_DEADZONE)
    print('created myjoystick controller')
    print(ctr)
And in myconfig.py set the controller type to custom:
CONTROLLER_TYPE = "custom"
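The controller code above also reads a handful of joystick settings from myconfig.py. The values below are illustrative (roughly the DonkeyCar defaults), not necessarily what I ended up with:
USE_JOYSTICK_AS_DEFAULT = True
JOYSTICK_MAX_THROTTLE = 0.5
JOYSTICK_STEERING_SCALE = 1.0
JOYSTICK_THROTTLE_DIR = -1.0
AUTO_RECORD_ON_THROTTLE = True
JOYSTICK_DEADZONE = 0.01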
I had problems with this controller though: it was difficult to control the car easily, there was sometimes a lot of lag or the controls were unresponsive, and the Bluetooth range wasn't great. I also tried one of the 'standard' controllers, the Logitech F710, but the range was even worse (a few metres at best).
These controllers were OK for the first indoor prototype, although I had to follow the car around so I wasn't out of range, but they weren't going to be good enough for the next version.
OpenCV
I wanted to do some image preprocessing, so I installed OpenCV, which means building it from source. This requires the additional swap space from the first article, and I then used this. Note this builds a GPU-enabled version of OpenCV, although you have to change your code to use CUDA; it also works on the CPU without change, and that is what I started with.
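As an illustration of that CPU/GPU difference (this is not my actual preprocessing code), here is the same blur on the CPU and, if the build has CUDA support, on the GPU:
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame

# CPU path: works unchanged with the self-built OpenCV
blurred_cpu = cv2.GaussianBlur(frame, (5, 5), 0)

# GPU path: requires code changes and an OpenCV build with CUDA enabled
gpu_frame = cv2.cuda_GpuMat()
gpu_frame.upload(frame)
gauss = cv2.cuda.createGaussianFilter(cv2.CV_8UC3, cv2.CV_8UC3, (5, 5), 0)
blurred_gpu = gauss.apply(gpu_frame).download()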
Learning the First Model
After driving the car around inside and collecting around 8000 images I was ready to learn a model. I had access to an Nvidia 1080Ti so I decided to learn the models on that machine and then copy the model back to the car.
I started with 480x640 images and wanted to use the Nvidia architecture from here. Of course this meant adding a custom neural architecture to the DonkeyCar keras.py file and updating utils.py to select that architecture when the passed-in model type is nvidia.
In keras.py:
class NvidiaModel(KerasPilot):
    def __init__(self, num_outputs=2, input_shape=(240, 320, 3), *args, **kwargs):
        super(NvidiaModel, self).__init__(*args, **kwargs)
        self.model = customArchitecture(num_outputs, input_shape)
    ...
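customArchitecture isn't shown above; here is a sketch of what it can look like, using the conv/dense layer sizes from the Nvidia paper (the exact function I used may differ in detail):
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense
from tensorflow.keras.models import Model

def customArchitecture(num_outputs, input_shape):
    img_in = Input(shape=input_shape, name='img_in')
    x = Conv2D(24, (5, 5), strides=(2, 2), activation='relu')(img_in)
    x = Conv2D(36, (5, 5), strides=(2, 2), activation='relu')(x)
    x = Conv2D(48, (5, 5), strides=(2, 2), activation='relu')(x)
    x = Conv2D(64, (3, 3), strides=(1, 1), activation='relu')(x)
    x = Conv2D(64, (3, 3), strides=(1, 1), activation='relu')(x)
    x = Flatten()(x)
    x = Dense(100, activation='relu')(x)
    x = Dense(50, activation='relu')(x)
    x = Dense(10, activation='relu')(x)
    # one linear output per control (e.g. steering and throttle)
    outputs = [Dense(1, activation='linear', name='n_out_' + str(i))(x)
               for i in range(num_outputs)]
    return Model(inputs=[img_in], outputs=outputs)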
In utils.py:
elif model_type == "nvidia":
    from donkeycar.parts.keras import NvidiaModel
    kl = NvidiaModel(input_shape=(480, 640, 3))
And the train command :
python train.py --model models/my_nvidia.h5 --type nvidia --tubs data
This resulted in the following loss curve:
which was a relatively high loss. In any case I downloaded the model onto the car and tried it. The car occasionally appeared to recognise a road edge, but most of the time it simply drove over it.
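For completeness, getting a trained model onto the car and driving with it looks roughly like this (the hostname and paths are illustrative):
scp models/my_nvidia.h5 jetson@car.local:~/mycar/models/
# then, on the car, from the mycar directory
python manage.py drive --model models/my_nvidia.h5 --type nvidia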
What’s Next?
In the next article I'll look at how OpenCV can be used for image preprocessing to help the training, along with finding bugs in the inference pipeline.