Create self-driving trucks inside Euro Truck Simulator 2

Gyuri Im
Sep 5, 2017 · 4 min read

In the field of autonomous driving research, the game TORCS (The Open Racing Car Simulator) is a popular testbed: it has been cited over 20 times, and more than 300 papers have used the game to develop artificial intelligence algorithms.

While TORCS is fine, we wanted a more modern simulation environment with realistic graphics and physics. After a few weeks of coding, we came up with europilot: an open source project that enables you to create self-driving trucks inside Euro Truck Simulator 2 (ETS2).


We got started on the project out of frustration, when we wanted to create a self-driving program with neural networks. Everything we found on the web was either partly closed source, hard to build, limited in features, stuck with unrealistic graphics, or compatible only with Windows (we don’t have Windows).

So we created a tool that lets you control ETS2 with Python, and it runs on OS X and Linux.

Quick tip: Steam periodically puts ETS2 on sale for $4.99, and a quick Google search will turn up other sellers offering the game at a discount.


At its core, europilot is a bridge between ETS2 and your Python program. Its usage falls into two main cases.

I. Creating driving datasets used for training neural networks.

  • You specify the area of the screen to capture, and europilot records those frames along with the steering wheel data, neatly formatted into a csv file (a minimal sketch of this follows the list).

II. Testing self-driving programs.

  • Europilot can create a virtual joystick driver that ETS2 recognizes as a controller. A real-time inference network can then use this joystick to send steering commands that control the truck inside the game (see the second sketch below).
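
For a feel of what use case I produces, here is a minimal, self-contained capture loop in the same spirit. It uses the mss library for screen grabs; the monitor box, the wheel-axis reader, and the csv column names are hypothetical stand-ins, since europilot handles all of this for you.

```python
import csv
import time

import mss
import mss.tools

# Hypothetical game window region; in europilot you pick this yourself.
MONITOR = {"top": 0, "left": 0, "width": 1024, "height": 768}

def read_wheel_axis():
    # Stand-in: europilot reads this value from the steering wheel device.
    return 0.0

with mss.mss() as sct, open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["img", "wheel_axis"])  # illustrative column names
    for i in range(1000):
        shot = sct.grab(MONITOR)                     # grab one game frame
        filename = "%06d.png" % i
        mss.tools.to_png(shot.rgb, shot.size, output=filename)
        writer.writerow([filename, read_wheel_axis()])
        time.sleep(0.1)                              # ~10 captures per second
```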
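
And for use case II, this is roughly the shape of the real-time control loop. `grab_frame` and `VirtualJoystick` are hypothetical stand-ins for europilot’s screen capture and virtual joystick driver; the example notebooks show the actual API.

```python
import numpy as np

def grab_frame():
    """Stand-in for europilot's screen capture; returns one RGB frame."""
    return np.zeros((120, 320, 3), dtype=np.float32)

class VirtualJoystick:
    """Stand-in for the virtual joystick device europilot creates."""
    def emit(self, wheel_axis):
        print("steer: %.3f" % wheel_axis)

def drive(model, joystick, steps=100):
    # Closed loop: capture a frame, predict steering, feed it back to ETS2.
    for _ in range(steps):
        frame = grab_frame()
        wheel_axis = float(model.predict(frame[np.newaxis])[0, 0])
        joystick.emit(wheel_axis)
```

In practice you would pass a trained network, for example the PilotNet variant described below.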

We tried to make the project easy to use. Example notebooks take you through each step, and all in all the codebase is fairly simple, since it relies heavily on other open source projects. Feel free to dive into the source code.


There are several approaches to creating self-driving programs. In the paper DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving, the authors describe three major paradigms.

Today, there are [three] major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor … [and] a third paradigm: a direct perception based approach to estimate the affordance for driving.

We implemented a “behavior reflex”, or “end-to-end” approach, inspired by the paper End to End Learning for Self-Driving Cars. The paper presents PilotNet, a CNN architecture that maps front-facing camera frames to steering commands.

Our implementation is similar to the model presented in the paper. However, it only takes frames from a single front-facing camera and does no data augmentation. It also adds batch normalization after every layer and uses a larger input size than the original model. We found that even without data augmentation, and with only 5 hours of driving data for training, the model worked surprisingly well.
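
To make the architecture concrete, here is a minimal Keras sketch of such a PilotNet variant. The layer sizes follow the original paper; the 120×320 input resolution, the optimizer, and the loss are illustrative assumptions rather than our exact training setup.

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(120, 320, 3)):
    """PilotNet-style CNN mapping one front-camera frame to a steering value."""
    model = models.Sequential()
    model.add(layers.Lambda(lambda x: x / 127.5 - 1.0,
                            input_shape=input_shape))  # scale pixels to [-1, 1]
    # Five conv layers as in the NVIDIA paper, plus batch norm after each.
    for filters, kernel, stride in [(24, 5, 2), (36, 5, 2), (48, 5, 2),
                                    (64, 3, 1), (64, 3, 1)]:
        model.add(layers.Conv2D(filters, kernel, strides=stride, activation="relu"))
        model.add(layers.BatchNormalization())
    model.add(layers.Flatten())
    # Fully connected head, again with batch norm after every layer.
    for units in (100, 50, 10):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.BatchNormalization())
    model.add(layers.Dense(1))  # regression output: steering wheel value
    model.compile(optimizer="adam", loss="mse")
    return model
```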

While there is more work to do, we wanted early feedback from the community to help shape the future of the project. Hope you find the project useful, or at least amusing.

You can visit the project at https://github.com/marshq/europilot.

Mars Auto

Join us on our journey of developing driverless trucks. Visit us at https://marsauto.io/.
