The Bellman-Ford Algorithm

Ishita Gupta · Published in Analytics Vidhya · May 3, 2020

You’re given a weighted graph. You know the source and need to reach all the other vertices through the shortest path. What do you do to solve this problem? You choose Dijkstra’s algorithm. But what if there are negative weights involved? Dijkstra’s can’t handle that problem. We now need a new algorithm.

The Bellman-Ford algorithm is similar to Dijkstra’s: it finds the shortest paths from a single source vertex to all other vertices in a weighted graph. The difference is that it keeps working even when some of the weights are negative.
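Before we dive in, a quick note on representation: in code, a weighted graph like the ones in this post is often stored as a plain list of (from, to, weight) edges. Here is a minimal Python sketch of that idea; the vertex names and weights below are made up for illustration, not taken from any figure in this post.

# A tiny weighted, directed graph as an edge list: (from, to, weight).
# The negative edge is exactly what rules Dijkstra out and calls for Bellman-Ford.
vertices = {"A", "B", "C"}
edges = [
    ("A", "B", 4),
    ("B", "C", -3),   # a negative weight
    ("A", "C", 5),
]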

THE GLOOMY GRAPH

A gloomy graph is what I call a graph with negative weights. It’s not actually called this, but the name kind of suits it, doesn’t it? A negative weight is just like a positive weight: a value on top of an edge. Now, why would anyone have a graph with negative weights? Because they are not as useless as they may seem. Negative weights can model a lot of phenomena. Take your savings, where a positive edge can represent money spent and a negative edge is the one you would want to take, since it represents cash gained. Or heat reactions, where each positive weight stands for heat dissipation, each negative weight shows heat absorption, and you have to find the set of reactions with minimum total energy.

But then what about the gloomy part? Here it comes: make way for negative cycles. This is something to be careful of. A cycle is a path where the first and the last vertex are the same, that is, a closed path. A negative cycle, then, is a cycle whose edge weights sum to a negative value. Shortest path algorithms cannot handle such cycles and give incorrect results; this is something that even the Bellman-Ford algorithm can’t defeat, although it can at least detect them. Look at the illustration below to get a better idea.

In this image, the vertices B, C, and D form a cycle, where the starting node is B, which is also the ending node. This cycle is a negative cycle because its edge weights sum to a negative value, -1.
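To make the idea concrete, here is a tiny Python check. The individual edge weights from the illustration aren’t written out in this post, so the numbers below are made-up values whose total is -1, matching the cycle B → C → D → B described above.

# Hypothetical weights for the cycle B -> C -> D -> B (illustrative only).
cycle_edges = [("B", "C", 2), ("C", "D", 1), ("D", "B", -4)]

# The cycle is "negative" because its weights sum to a value below zero.
total = sum(weight for _, _, weight in cycle_edges)
print(total)  # -1: every trip around the cycle makes the path "shorter"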

THE ALGORITHM

The first thing to know about the algorithm is that it doesn’t use a greedy approach like Dijkstra’s. Yes, the two are similar, but they are not the same, duh! Bellman-Ford takes a dynamic programming approach instead. Moving on to understanding the algorithm in more detail.

function bellmanFord(G, S)
    for each vertex V in G
        D[V] <- infinity

    D[S] <- 0

    repeat |V| - 1 times
        for each edge (U, V) in G
            tempDistance <- D[U] + edge_weight(U, V)
            if tempDistance < D[V]
                D[V] <- tempDistance

    for each edge (U, V) in G
        if D[U] + edge_weight(U, V) < D[V]
            Error: Negative Cycle Exists
            return

    print D[]

Don’t get into panic mode just yet. We’ll discuss every bit.

Starting with the definition: the Bellman-Ford algorithm works by first overestimating the length of the path from the starting vertex to every other vertex. It then iteratively relaxes those estimates by finding new paths that are shorter than the previously overestimated ones. Finally, it checks for negative cycles.

In simpler terms, let V be the number of vertices, E the number of edges, S the starting node, and D an array that tracks the best known distance from the source node to every other vertex. Okay?

Following the step of overestimation, we set each entry in the array to +infinity, just like in Dijkstra’s. This is because the distance to each node is initially unknown, so we assign the highest value possible. Then we assign D[S] = 0 for obvious reasons: the minimum distance from the source to itself is, take a guess? Obviously 0. So we reach the state shown below.

for each vertex V in G
    D[V] <- infinity
D[S] <- 0
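As a rough Python sketch of this initialization step (assuming the graph’s vertices are available as a collection, and using float("inf") to play the role of infinity):

INF = float("inf")

def initialize(vertices, source):
    # Every tentative distance starts at infinity, except the source itself.
    D = {v: INF for v in vertices}
    D[source] = 0
    return D

D = initialize(["A", "B", "C"], "A")
print(D)  # {'A': 0, 'B': inf, 'C': inf}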

Now, infinite levels are too high for us, and stress is building up. So it’s time to relaaaaax! In this step, we aim to find what we have been looking for all along: the shortest path to each vertex. We run the relaxation loop V - 1 times over all the edges, because in the worst case a shortest path can contain at most V - 1 edges. Inside the loop, for each edge we take the value of the vertex the edge starts from (D[U]) and add the edge cost to it. This sum is then compared to the value of the vertex where the edge ends (D[V]). If the sum is smaller, the end vertex value D[V] is updated to the sum.

repeat |V| - 1 times
    for each edge (U, V) in G
        tempDistance <- D[U] + edge_weight(U, V)
        if tempDistance < D[V]
            D[V] <- tempDistance
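In Python, the relaxation step might look like the sketch below, with edges assumed to be a list of (u, v, weight) tuples and D the distance dictionary from the initialization step:

def relax_edges(vertices, edges, D):
    # A shortest path can use at most len(vertices) - 1 edges,
    # so that many passes over the edge list are enough.
    for _ in range(len(vertices) - 1):
        for u, v, weight in edges:
            # If going through u gives a shorter route to v, keep it.
            if D[u] + weight < D[v]:
                D[v] = D[u] + weight
    return D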

Taking an example, let’s go through a few steps to understand how this works.

In this graph, 0 is the source vertex. Starting the loop, the first edge we take is 0 → 1, after which 1 is assigned the value 5. Next, the edges 1 → 2, 1 → 5 and 1 → 6 are taken, due to which the value of 6 becomes (5 + 60) = 65 (the cost of the source vertex 1 added to the cost of the edge, 60), 2 becomes (5 + 20) = 25, and 5 becomes (5 + 30) = 35. Similarly, the value of 3 becomes 35.

After determining the cost of 3, we take the next edges, 3 → 2 and 2 → 4. This makes the value of 2 equal to (35 - 15) = 20 and the value of 4 equal to 100. 20 is an improvement over the earlier 25.

Continuing in the loop, the edge 4 → 9 makes the value of 9 equal to 200. Now comes another point of optimization to notice carefully. We take the edge 5 → 6, which makes the value of 6 (35 + 5) = 40. Similarly, taking the edge 5 → 4 brings the value of 4 down to 60. Both values are smaller, that is, more optimized, than the previous ones.
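Using only the numbers from this walkthrough, the relaxation of the edge 5 → 6 looks like this in code:

D = {5: 35, 6: 65}   # distances reached so far in the example
edge_weight = 5      # cost of the edge 5 -> 6

if D[5] + edge_weight < D[6]:   # 35 + 5 = 40 is less than 65
    D[6] = D[5] + edge_weight

print(D[6])  # 40, the improved estimate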

So that is how the step of relaxation works.

Moving on to the third and last step: spotting our enemy, the negative cycles. Now, why does our algorithm fail in front of negative cycles? It’s because Bellman-Ford relaxes all the edges to find a globally optimal solution, so if there is a negative cycle, the algorithm could keep going indefinitely: it would always keep finding a more optimized, that is, a more negative, value than before. So it’s necessary to identify these cycles. But how?

We run the same loop over the edges one more time, trying to relax them. Since we have already reached the optimal values, if we can still relax an edge, it means we have encountered a negative cycle. As soon as that happens, the if condition becomes true and the return statement is executed, ending the function; otherwise, the array D is printed.

for each edge (U, V) in G
    if D[U] + edge_weight(U, V) < D[V]
        Error: Negative Cycle Exists
        return

print D[]
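As a Python sketch, reusing the edges list and the distance dictionary D from the earlier snippets, the check could look like this:

def has_negative_cycle(edges, D):
    # If any edge can still be relaxed after the V - 1 passes,
    # a negative cycle must be feeding it ever-smaller values.
    for u, v, weight in edges:
        if D[u] + weight < D[v]:
            return True
    return False

if has_negative_cycle(edges, D):
    print("Error: Negative Cycle Exists")
else:
    print(D)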

We have now successfully completed the Bellman-Ford algorithm. Yay!
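Putting the three steps together, here is one possible end-to-end sketch in Python. The test graph at the bottom is made up for illustration; it is not the graph from the figures above.

def bellman_ford(vertices, edges, source):
    # Step 1: overestimate every distance, except the source.
    INF = float("inf")
    D = {v: INF for v in vertices}
    D[source] = 0

    # Step 2: relax every edge V - 1 times.
    for _ in range(len(D) - 1):
        for u, v, weight in edges:
            if D[u] + weight < D[v]:
                D[v] = D[u] + weight

    # Step 3: one more pass; any further improvement means a negative cycle.
    for u, v, weight in edges:
        if D[u] + weight < D[v]:
            raise ValueError("Negative Cycle Exists")

    return D

# A small made-up example: one negative edge, but no negative cycle.
vertices = ["A", "B", "C", "D"]
edges = [("A", "B", 4), ("A", "C", 2), ("B", "C", -3), ("C", "D", 3)]
print(bellman_ford(vertices, edges, "A"))
# {'A': 0, 'B': 4, 'C': 1, 'D': 4}

Note that this sketch raises an error instead of printing one, which is just a design choice; adapt it however you like.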

COMPARING DIJKSTRA AND BELLMAN-FORD

We have already gone through the main differences:

  • Both are shortest path algorithms, but Dijkstra lowers its weapons against negative weights, whereas Bellman-Ford wins the war.
  • Dijkstra uses the greedy approach, whereas Bellman-Ford uses dynamic programming.

The difference that we haven’t touched on so far is

  • Dijkstra is fast. Bellman-Ford’s time complexity, O(V·E), is higher than Dijkstra’s, which is O((V + E) log V) with a binary heap. But if running time is not the highest priority, and especially if negative weights might appear, Bellman-Ford is no doubt the better choice of shortest path algorithm.

This completes our journey through the Bellman-Ford algorithm. I’m sure Richard Bellman and Lester Ford Jr. would be proud of you, just sleeping and smiling in their graves. (Yes, I sneaked in a little history fact there!)

I hope you guys liked this blog. Do leave some feedback; I am really looking forward to it. Enjoy!
