Coding the Road Ahead: Self-Driving Cars with JavaScript and AI

Krish Bhoopati
6 min read · Nov 25, 2023

Ever thought a car could drive itself? Well, it’s not a wild dream anymore. Imagine cruising down the road with no hands on the wheel, just lines of code leading the way. Sounds crazy, right? What’s even crazier is that this code makes the car not only move but also dodge obstacles, and if things get tricky, the car hits the brakes all by itself. Join me as we unravel the magic where simple lines of code transform a car into a smart, self-driving wonder.

JavaScript’s Role

In the world of self-driving cars, JavaScript takes on a crucial role as the guide, telling the car how to move and what decisions to make. Imagine it as the brain behind the wheel: just as we follow a map while driving, JavaScript guides the car through roads and traffic, turning code into actions. In this project, everything runs in the browser. JavaScript reads the simulated sensors, feeds their readings through a neural network, and applies the resulting steering, acceleration, and braking every frame. (In a real vehicle, this layer of logic would talk to the electronic control unit, or ECU, to drive the actuators, steering mechanism, and throttle.) Through carefully crafted algorithms, it transforms abstract instructions into tangible movements, empowering the car to make split-second decisions and navigate the road autonomously.
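Concretely, “turning code into actions” here means a browser animation loop: every frame, JavaScript updates the car and redraws it on a canvas. Below is a minimal sketch of that loop; `makeLoop` and `raf` are illustrative names (in a browser, `raf` would be `requestAnimationFrame`), not the project’s exact code.

```javascript
// Sketch of a simulation main loop: each frame the car senses,
// decides, and moves, then gets redrawn at its new position.
// (Assumed structure: the real project spreads this over files.)
function makeLoop(car, road, traffic, ctx, raf) {
  function animate() {
    car.update(road.borders, traffic); // physics + sensing + NN decision
    car.draw(ctx);                     // render the car's polygon
    raf(animate);                      // schedule the next frame
  }
  return animate;
}

// In a browser you would kick it off with:
//   makeLoop(car, road, traffic,
//            canvas.getContext("2d"), requestAnimationFrame)();
```

Because `requestAnimationFrame` fires roughly 60 times per second, per-frame constants like an acceleration of 0.2 and friction of 0.05 translate into smooth on-screen motion.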

Sensors and Cameras

For the self-driving car to ‘see’ the road, a sophisticated array of sensors and cameras acts as its eyes and ears. Cameras capture visual data, while sensors, such as LiDAR and radar, provide a 360-degree awareness of the surroundings. This fusion of visual and depth perception creates a comprehensive understanding of the environment, allowing the car to perceive lane markings, identify obstacles, and anticipate potential hazards. The data collected by these sensors is fed into the system, creating a real-time map of the car’s surroundings. This map becomes the foundation for decision-making, enabling the car to navigate complex scenarios with precision. Through this sensory input, the self-driving car mirrors and, in some cases, surpasses human perception, ensuring a heightened level of awareness on the road.

If we examine the blue car in the illustration, we notice five yellow lines extending from it. These represent the directions in which the car’s cameras are focused. In real-world scenarios, actual cars equipped with cameras also have these conceptual yellow lines extending in all directions. Take a closer look at the yellow lines on the far left and far right. You’ll notice that a section of the line is black. This specific feature is a crucial communication method for the car. It essentially informs the car that it’s safe to go as far as the yellow line reaches, but beyond that lies a potential collision or obstacle.

In simpler terms, these yellow lines act like the car’s eyes, telling it where it can safely navigate and where it should avoid to prevent any unwanted bumps. It’s a bit like a protective force field, ensuring the car stays within safe bounds and steers clear of potential dangers.
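Under the hood, each yellow line is a ray cast out from the car and tested for intersection against road borders and traffic. The sketch below is simplified to one ray against one wall segment (the project’s real `Sensor` class casts a fan of rays), but it shows the core calculation: where the ray first touches an obstacle, and how far along the ray (the `offset`, from 0 to 1) that touch happens. That offset is exactly where the line would turn black.

```javascript
// Linear interpolation helper, used throughout this kind of project.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Segment-segment intersection: returns the touch point and its
// "offset" (0 = at the ray's start, 1 = at the ray's tip), or null
// when the two segments don't cross.
function getIntersection(A, B, C, D) {
  const tTop = (D.x - C.x) * (A.y - C.y) - (D.y - C.y) * (A.x - C.x);
  const uTop = (C.y - A.y) * (A.x - B.x) - (C.x - A.x) * (A.y - B.y);
  const bottom = (D.y - C.y) * (B.x - A.x) - (D.x - C.x) * (B.y - A.y);
  if (bottom !== 0) {
    const t = tTop / bottom;
    const u = uTop / bottom;
    if (t >= 0 && t <= 1 && u >= 0 && u <= 1) {
      return { x: lerp(A.x, B.x, t), y: lerp(A.y, B.y, t), offset: t };
    }
  }
  return null;
}

// A ray from the car at (0, 0) pointing straight up for 100 units,
// against a horizontal wall at y = -50: the hit lands halfway along
// the ray, so offset is 0.5 and the "black part" starts there.
const hit = getIntersection(
  { x: 0, y: 0 }, { x: 0, y: -100 },    // the ray
  { x: -10, y: -50 }, { x: 10, y: -50 } // the wall
);
```

An `offset` of 0.5 means “safe for half the ray’s length”; the car’s update code later converts such readings into neural-network inputs.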

Now let's get to the code

There’s a lot of code, so we’ll cover just the most important parts.

class Car{
    constructor(x,y,width,height,controlType,maxSpeed=3){
        this.x=x;
        this.y=y;
        this.width=width;
        this.height=height;

        this.speed=0;
        this.acceleration=0.2;
        this.maxSpeed=maxSpeed;
        this.friction=0.05;
        this.angle=0;

        this.damaged=false;

        this.useBrain=controlType=="AI";

        // Dummy traffic cars get no sensor and no brain
        if(controlType!="DUMMY"){
            this.sensor=new Sensor();
            this.brain=new NeuralNetwork(
                [this.sensor.rayCount,4]
            );
        }
        this.controls=new Controls(controlType);
    }

    update(roadBorders,traffic){
        if(!this.damaged){
            this.#move();
            this.polygon=this.#createPolygon();
            this.damaged=this.#assessDamage(roadBorders,traffic);
        }
        if(this.sensor){
            this.sensor.update(this.x,this.y,this.angle,roadBorders,traffic);
            // Closer obstacles produce values nearer 1; no reading gives 0
            const offsets=this.sensor.readings.map(
                s=>s==null?0:1-s.offset
            );
            const outputs=NeuralNetwork.feedForward(offsets,this.brain);
            if(this.useBrain){
                this.controls.forward=outputs[0];
                this.controls.left=outputs[1];
                this.controls.right=outputs[2];
                this.controls.reverse=outputs[3];
            }
        }
    }

    #assessDamage(roadBorders,traffic){
        // Collision with a road border?
        for(let i=0;i<roadBorders.length;i++){
            if(polysIntersect(
                [...this.polygon,this.polygon[0]],
                roadBorders[i])
            ){
                return true;
            }
        }
        // Collision with another car?
        for(let i=0;i<traffic.length;i++){
            const poly=traffic[i].polygon;
            if(polysIntersect(
                [...this.polygon,this.polygon[0]],
                [...poly,poly[0]])
            ){
                return true;
            }
        }
        return false;
    }

    #createPolygon(){
        // The four corners of the car, rotated by its current angle:
        // rad is half the diagonal, alpha the angle to a corner
        const points=[];
        const rad=Math.hypot(this.width,this.height)/2;
        const alpha=Math.atan2(this.width,this.height);
        points.push({
            x:this.x-Math.sin(this.angle-alpha)*rad,
            y:this.y-Math.cos(this.angle-alpha)*rad
        });
        points.push({
            x:this.x-Math.sin(this.angle+alpha)*rad,
            y:this.y-Math.cos(this.angle+alpha)*rad
        });
        points.push({
            x:this.x-Math.sin(Math.PI+this.angle-alpha)*rad,
            y:this.y-Math.cos(Math.PI+this.angle-alpha)*rad
        });
        points.push({
            x:this.x-Math.sin(Math.PI+this.angle+alpha)*rad,
            y:this.y-Math.cos(Math.PI+this.angle+alpha)*rad
        });
        return points;
    }

    #move(){
        if(this.controls.forward){
            this.speed+=this.acceleration;
        }
        if(this.controls.reverse){
            this.speed-=this.acceleration;
        }

        if(this.speed!=0){
            // Flip the steering direction when reversing, like a real car
            const flip=this.speed>0?1:-1;
            if(this.controls.left){
                this.angle+=0.03*flip;
            }
            if(this.controls.right){
                this.angle-=0.03*flip;
            }
        }

        // Cap forward speed; reverse is limited to half of it
        if(this.speed>this.maxSpeed){
            this.speed=this.maxSpeed;
        }
        if(this.speed<-this.maxSpeed/2){
            this.speed=-this.maxSpeed/2;
        }

        // Friction gradually slows the car to a complete stop
        if(this.speed>0){
            this.speed-=this.friction;
        }
        if(this.speed<0){
            this.speed+=this.friction;
        }
        if(Math.abs(this.speed)<this.friction){
            this.speed=0;
        }

        this.x-=Math.sin(this.angle)*this.speed;
        this.y-=Math.cos(this.angle)*this.speed;
    }

    draw(ctx,drawSensor=false){
        // Damaged cars turn gray
        if(this.damaged){
            ctx.fillStyle="gray";
        }else{
            ctx.fillStyle="black";
        }
        ctx.beginPath();
        ctx.moveTo(this.polygon[0].x,this.polygon[0].y);
        for(let i=1;i<this.polygon.length;i++){
            ctx.lineTo(this.polygon[i].x,this.polygon[i].y);
        }
        ctx.fill();
        if(this.sensor && drawSensor){
            this.sensor.draw(ctx);
        }
    }
}

Constructor:

  • The Car class is like a blueprint for creating cars in our self-driving project.
  • It takes initial parameters like position (x, y), dimensions (width, height), control type (human, AI, or dummy), and maximum speed.

Initialization:

  • It initializes properties like speed, acceleration, friction, and the car’s orientation (angle).
  • damaged keeps track of whether the car has collided with something.
  • useBrain is a flag indicating if the car uses AI for control.

Sensor and Neural Network Setup:

  • If the car isn’t a dummy, it sets up a sensor and a neural network.
  • The sensor simulates the car’s ability to “see” the road, and the neural network acts as the car’s brain for decision-making.

Controls:

  • It initializes controls based on the control type (keyboard, AI, or dummy).
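The `Controls` class itself isn’t in the listing above. The sketch below is reconstructed from how `Car` uses it (four boolean flags and the control-type names it passes in), not copied from the project’s file.

```javascript
// Sketch of the Controls class assumed by Car: four flags that the
// #move method reads each frame. (Reconstructed from usage.)
class Controls {
  constructor(type) {
    this.forward = false;
    this.left = false;
    this.right = false;
    this.reverse = false;
    switch (type) {
      case "KEYS":
        this.#addKeyboardListeners();
        break;
      case "DUMMY":
        this.forward = true; // traffic cars just cruise ahead
        break;
      // "AI" cars leave all flags false; the neural network sets them
    }
  }

  #addKeyboardListeners() {
    document.onkeydown = (event) => {
      if (event.key === "ArrowUp") this.forward = true;
      if (event.key === "ArrowLeft") this.left = true;
      if (event.key === "ArrowRight") this.right = true;
      if (event.key === "ArrowDown") this.reverse = true;
    };
    document.onkeyup = (event) => {
      if (event.key === "ArrowUp") this.forward = false;
      if (event.key === "ArrowLeft") this.left = false;
      if (event.key === "ArrowRight") this.right = false;
      if (event.key === "ArrowDown") this.reverse = false;
    };
  }
}
```

For keyboard control, holding an arrow key sets a flag and releasing it clears it, which is all the `#move` method needs to steer.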

Update Method:

  • The update method is like the car's brain and muscles combined.
  • It moves the car, updates its polygonal shape, and checks for damage.
  • If there’s a sensor, it updates the sensor readings and processes them through the neural network.

Assess Damage:

  • It checks if the car has collided with road borders or other traffic by comparing polygons.
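`polysIntersect` lives in a utility file that isn’t shown here. The idea, sketched below under the assumption that both shapes arrive as point lists closed by repeating the first point (which is what `[...this.polygon, this.polygon[0]]` does), is that two polygons collide when any edge of one crosses any edge of the other.

```javascript
// Do two segments A-B and C-D cross? Standard parametric test.
function segmentsCross(A, B, C, D) {
  const bottom = (D.y - C.y) * (B.x - A.x) - (D.x - C.x) * (B.y - A.y);
  if (bottom === 0) return false; // parallel segments never cross
  const t = ((D.x - C.x) * (A.y - C.y) - (D.y - C.y) * (A.x - C.x)) / bottom;
  const u = ((C.y - A.y) * (A.x - B.x) - (C.x - A.x) * (A.y - B.y)) / bottom;
  return t >= 0 && t <= 1 && u >= 0 && u <= 1;
}

// Sketch of the polysIntersect helper used by #assessDamage:
// check every edge of one polygon against every edge of the other.
// (Reconstructed from usage, not the project's exact file.)
function polysIntersect(poly1, poly2) {
  for (let i = 0; i < poly1.length - 1; i++) {
    for (let j = 0; j < poly2.length - 1; j++) {
      if (segmentsCross(poly1[i], poly1[i + 1], poly2[j], poly2[j + 1])) {
        return true;
      }
    }
  }
  return false;
}

// Axis-aligned 2x2 squares, closed by repeating the first point,
// just as the calling code does:
const square = (x, y) => [
  { x, y }, { x: x + 2, y }, { x: x + 2, y: y + 2 },
  { x, y: y + 2 }, { x, y }
];
```

One caveat of the edge-crossing test: a polygon completely inside another, with no edges touching, would be missed. That’s fine here, since the cars and borders are of comparable scale.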

Create Polygon:

  • It calculates the four points of the polygon representing the car’s position and shape.

Move Method:

  • Handles the car’s movement based on controls, speed, and friction.
  • Adjusts the car’s position and angle accordingly.

Draw Method:

  • Draws the car on the canvas.
  • If there’s a sensor and we want to visualize it, it draws the sensor as well.

Now, this code is only for car.js, but there are many more files with even more code. To see them, you can check out the project here → GitHub

Learning

The car collects data every time we run the code. Sometimes it produces good results, but most of the time it produces bad ones. By good results, we mean the car follows the path without crashing; bad results are the opposite. The car crashes a lot at first because it is still learning, but over time it crashes less and less often. This mirrors real life: self-driving cars are tested continuously, capturing data the more they drive, and they get better with every mile.
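A note on how this kind of project “learns”: there is no backpropagation here. A common setup for this style of simulation is evolutionary: spawn many cars with slightly mutated copies of the best brain found so far, keep whichever drives farthest, and repeat. Below is a hedged sketch of such a mutation step, assuming a brain stored as levels of `weights` and `biases` (the shape the Car class’s `NeuralNetwork` suggests; the training file isn’t shown above).

```javascript
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Nudge every weight and bias toward a fresh random value in [-1, 1].
// amount = 0 leaves the brain untouched; amount = 1 fully randomizes it.
// (Illustrative sketch, not the project's exact mutate function.)
function mutate(brain, amount = 0.1) {
  for (const level of brain.levels) {
    for (let i = 0; i < level.biases.length; i++) {
      level.biases[i] = lerp(level.biases[i], Math.random() * 2 - 1, amount);
    }
    for (const row of level.weights) {
      for (let i = 0; i < row.length; i++) {
        row[i] = lerp(row[i], Math.random() * 2 - 1, amount);
      }
    }
  }
  return brain;
}
```

With `amount = 0.1`, each child brain is a small variation on its parent; over many generations the surviving variations crash less and less, which is the behavior described above.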

With the use of Fleet Learning, cars operate as a connected fleet, sharing anonymized data and insights. Fleet learning allows each vehicle to benefit from the collective experiences of the entire network, fostering a smarter and more adaptive system. Such a system can evolve to make ever more nuanced decisions, adapting not only to the immediate environment but also to long-term objectives, including strategic route planning for efficiency and improved energy management.

Neural Network (NN)

Imagine a Neural Network (NN) as a digital brain inspired by how our brains work. It’s like a team of interconnected workers, each responsible for processing information. These workers, or nodes, communicate through weighted connections, like teammates sharing information. During training, the NN learns from experience, tweaking these weights to get better at tasks. In the self-driving car project, the NN is the brain making decisions based on what the car ‘sees’ through its sensors. The display on the right side is like a window into this digital brain: while the car is moving, the NN reacts in real time, showing how it ‘thinks’ and adapts to different situations, much like how we learn from our surroundings.
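Those “workers and weighted connections” map directly onto code. Here’s a minimal sketch of one feed-forward level, assuming (as this project’s network appears to) that each output node fires as 1 when the weighted sum of its inputs exceeds its bias, and stays 0 otherwise. The weight and bias values below are made up for illustration.

```javascript
// One level of the network: outputs[j] fires when the weighted sum
// of sensor inputs crosses that node's bias threshold.
// (Sketch of a thresholded feed-forward pass, not a general NN.)
function feedForwardLevel(inputs, weights, biases) {
  return biases.map((bias, j) => {
    let sum = 0;
    for (let i = 0; i < inputs.length; i++) {
      sum += inputs[i] * weights[i][j]; // weights[i][j]: input i -> output j
    }
    return sum > bias ? 1 : 0;
  });
}

// Two sensor readings (1 = obstacle very close, 0 = nothing seen)
// feeding two output nodes, e.g. "forward" and "left":
const outputs = feedForwardLevel(
  [1, 0],                    // left ray sees something, right ray doesn't
  [[0.5, -0.4], [0.2, 0.9]], // illustrative weights
  [0.1, 0.0]                 // illustrative biases (firing thresholds)
); // → [1, 0]
```

The first output fires (0.5 > 0.1) and the second doesn’t (-0.4 ≤ 0), so the car would keep driving forward without turning left, exactly the kind of split-second decision described above.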

My name is Krish, I’m a 17-year-old high school student with a passion for automated vehicles. If you have any questions, suggestions, or comments, I would love to hear them. You can reach out to me on LinkedIn or my Twitter, and make sure to drop a follow to be updated with what I’m working on. Thank you for taking the time to read my article and I hope you learned something new!
