Concurrency Fundamentals In Java

Robin Srivastava
7 min read · Apr 28, 2024


What exactly is Concurrency ?

Broadly, Concurrency is an attempt to increase the throughput of an application by delegating different tasks to different execution entities.
However, it differs from simultaneous/parallel execution in that the execution entities may not be executing different tasks at the same point in time.

It may increase throughput and decrease latency, because within the same time frame the application may complete more tasks than it would without Concurrency.

P.S. : Please note the emphasis on the word "may".
A later section of this article provides insight into situations where Concurrency may instead increase system latency.

Prerequisite to understanding Concurrency :

This article aims to lay the foundation for leveraging Concurrency in our day-to-day application design and development in Java.

As preparation, the reader is advised to go through this article on the memory model in Java, which serves as one of the building blocks for leveraging Concurrency.

Why do we even need to understand Concurrency ?

As the technological landscape evolves, we have easy and relatively cheap access to multicore hardware, which can compute multiple instructions simultaneously.
Exploiting this promises increased throughput and reduced latency, which eventually leads to better business outcomes and profit.

Conversely, if we are unable to leverage multicore hardware, which is now ubiquitous, we waste the computing power of our infrastructure and in turn invest relatively more money to achieve the desired business outcome.

Building blocks for Concurrency :

To understand the fundamentals of Concurrency in Java, we need to get hold of the following technical constructs :

  1. Process :
    The fundamental construct that represents our application at runtime.
    e.g. when we run a Java program by executing its main class, its instructions are loaded into memory and treated as a unit that can be managed independently.
    This independent unit is termed a Process.
    A process is allocated a dedicated address space which is used solely by the instructions pertaining to that process.
  2. Thread :
    While the process is the whole application, a thread is the construct that actually executes instructions sequentially.
    Threads share the address space of the parent process, and each CPU core can execute instructions from only one thread at any point in time.
  3. Critical Section :
    Within the JVM, concurrency is thread-based : a process spawns multiple threads which share memory,
    I/O resources etc. to perform the intended programmed instructions.

    However, this sharing of resources may lead to situations where multiple threads contend for the same memory locations to complete their assigned work.
    Such shared memory locations/instructions are called the Critical Section, and the whole trick of using Concurrency effectively lies in
    how well the critical section is guarded to prevent inconsistency in the application.
  4. Scheduler :
    Since the fundamental unit of work is the thread, we need a way to assign threads from different applications to the
    underlying cores.
    Hence, threads are preemptive : the Scheduler time-slices them to give the threads of all applications a fair chance to perform their intended operations.
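To see why guarding a critical section matters, here is a minimal sketch (the class name RaceDemo is illustrative, not from the article) in which two threads increment a shared counter without synchronization, so increments can be lost :

```java
public class RaceDemo {
    // Shared mutable state: the critical section of this sketch.
    private static int counter = 0;

    // Runs two threads that each increment the counter 100,000 times
    // without any synchronization, then returns the final value.
    static int runDemo() throws InterruptedException {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        // One might expect 200000, but interleaved updates are often lost,
        // so the printed value is typically smaller and varies per run.
        System.out.println("Counter = " + runDemo());
    }
}
```

Wrapping the increment in a synchronized block on a shared monitor, as the article does below, removes the lost updates.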

P.S. : Each of these building blocks deserves a dedicated article, which will be covered in future articles.

Understanding Concurrency using monitor, synchronized, wait and notify constructs :

To consolidate our understanding, let's implement a use case in which two threads generate odd and even numbers alternately.

As explained in the building blocks section above, let's start by creating a main class, which at runtime will be represented by a Process.

package assignment.blogs;

public class GenerateOddEven {
    public static void main(String[] args) {

        Object lock = new Object();
        OddGeneratorTask oddTask = new OddGeneratorTask(lock);
        EvenGeneratorTask evenTask = new EvenGeneratorTask(lock);

        Thread threadOdd = new Thread(oddTask, "Thread-Odd");
        Thread threadEven = new Thread(evenTask, "Thread-Even");

        threadEven.start();
        threadOdd.start();
    }
}

The code block above is our application code, and once execution starts it is what we term a "Process".

On careful inspection, the reader will observe that we create two threads, named threadOdd and threadEven respectively.

We also need to assign each thread the task it will execute once it is spawned from the main thread.

The line Thread threadOdd = new Thread(oddTask, "Thread-Odd"); instructs the JVM to spawn a new thread from the main thread and assigns it the task oddTask, which the created thread will execute.

Let's define the task that threadOdd will execute as follows :

package assignment.blogs;

public class OddGeneratorTask implements Runnable {
    private final Object lock;

    public OddGeneratorTask(Object lock) {
        this.lock = lock;
    }

    /**
     * Executed by the spawned thread once start() is called on it.
     * Prints the odd numbers 1 to 9, handing the monitor over to the
     * even-generator thread after each number.
     */
    @Override
    public void run() {
        int i = 1;
        synchronized (lock) {
            try {
                while (i <= 10) {
                    System.out.printf("Executing %s via thread : %s \n", i, Thread.currentThread().getName());
                    i += 2;
                    lock.notify(); // wake the thread waiting on the shared monitor
                    lock.wait(1000); // release the monitor and wait for our next turn
                }
            } catch (InterruptedException exception) {
                exception.printStackTrace();
            } finally {
                lock.notify(); // let the other thread finish its remaining numbers
            }
        }
    }
}

To define a task in Java, we define a class implementing the Runnable interface and implement its run() method.
The spawned thread executes only this assigned task.
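As a side note, since Runnable is a functional interface, a task can also be expressed as a lambda instead of a dedicated class. A minimal sketch (the class name LambdaTaskDemo and the thread name Worker-1 are illustrative) :

```java
public class LambdaTaskDemo {
    // Runs a lambda-based task on a named thread and returns the
    // thread name observed inside the task.
    static String runAndGetName() throws InterruptedException {
        StringBuilder seenName = new StringBuilder();
        // Runnable is a functional interface, so the task can be a lambda.
        Runnable task = () -> seenName.append(Thread.currentThread().getName());
        Thread worker = new Thread(task, "Worker-1");
        worker.start();
        worker.join(); // wait for the spawned thread to finish
        return seenName.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Task ran on: " + runAndGetName());
    }
}
```

The article uses dedicated task classes because each task carries state (the shared lock), but for small stateless tasks the lambda form is common.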

Next, let's define the other task, which will be executed by the thread threadEven, as follows :

package assignment.blogs;

public class EvenGeneratorTask implements Runnable {
    private final Object lock;

    public EvenGeneratorTask(Object lock) {
        this.lock = lock;
    }

    /**
     * Executed by the spawned thread once start() is called on it.
     * Prints the even numbers 0 to 10, handing the monitor over to the
     * odd-generator thread after each number.
     */
    @Override
    public void run() {
        int i = 0;
        synchronized (lock) {
            try {
                while (i <= 10) {
                    System.out.printf("Executing %s via thread : %s \n", i, Thread.currentThread().getName());
                    i += 2;
                    lock.notify(); // wake the thread waiting on the shared monitor
                    lock.wait(1000); // release the monitor and wait for our next turn
                }
            } catch (InterruptedException exception) {
                exception.printStackTrace();
            } finally {
                lock.notify(); // let the other thread finish its remaining numbers
            }
        }
    }
}

Now, as soon as the main thread (the one the JVM creates to start execution of our application) executes the statements threadOdd.start() and threadEven.start(), the corresponding threads are spawned by the JVM, and each thread executes the instructions in the run() method of its assigned task.

Our use case demands generating odd and even numbers alternately, which requires the two threads to communicate so that while one thread runs, the other waits.

To achieve this, we leverage the concept of the Critical Section from the building blocks section.

In our task implementations, we pass in a reference to an object named lock.
This object serves as the monitor, which prevents multiple threads from executing the shared instructions simultaneously.

The synchronized (lock) { } block is implemented such that only one thread at a time may proceed past it with its assigned task; any other thread reaching a synchronized (lock) { } block on the same monitor must wait there.

So, using the shared lock reference and the synchronized construct, we define a critical section shared between the threads, which ensures that only one of them executes its assigned task at any point in time.

Note : It is extremely important to get hold of the concept explained in above paragraph.

Now, since only one thread is allowed to execute its task at a time, we need a way to notify the other thread that it can proceed, while making the calling thread wait() for its next turn.

This is achieved via the built-in Object methods wait() and notify().
wait() releases the monitor (the shared lock object) and suspends the calling thread, enabling other threads to acquire the monitor.
The Scheduler decides which waiting thread gets the critical section next, so with more than two threads the order of execution is not guaranteed.

Similarly, notify() wakes one thread that is waiting on the monitor, making it eligible to be scheduled again.

The difference between the two is that a thread calling wait() releases the lock and blocks immediately, whereas a thread calling notify() wakes another thread and continues its own execution.
A thread blocked in wait() is brought back to the runnable state by calling notify() on the shared monitor object, as we have done above.

So, to alternate the even and odd threads, whichever thread executes first notifies the other and then blocks by calling wait(), and the other thread follows the same sequence of operations.
This ensures the threads run alternately, communicating with each other via the wait and notify methods of the shared monitor object.
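One refinement worth knowing : the Javadoc for Object.wait() recommends always calling it inside a loop that re-checks a condition, because a waiting thread can wake up spuriously. A sketch of that guarded-wait idiom (the class name GuardedWaitDemo and the evenTurn flag are illustrative additions, not part of the code above) :

```java
public class GuardedWaitDemo {
    private final Object lock = new Object();
    private boolean evenTurn = true; // the condition threads wait on

    // Blocks until it is the caller's turn (wantEven == evenTurn).
    void awaitTurn(boolean wantEven) throws InterruptedException {
        synchronized (lock) {
            // Standard idiom: wait in a loop that re-checks the condition,
            // since wait() may return spuriously or after a stale notify.
            while (evenTurn != wantEven) {
                lock.wait();
            }
        }
    }

    // Flips the turn and wakes all waiters so they re-check the condition.
    void passTurn() {
        synchronized (lock) {
            evenTurn = !evenTurn;
            lock.notifyAll();
        }
    }
}
```

The article's version compensates for missed notifications with the wait(1000) timeout; an explicit condition flag like this makes the hand-off deterministic without relying on timeouts.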

The output of the above code is the numbers 0 through 10, printed alternately by Thread-Even and Thread-Odd (whichever thread acquires the monitor first prints first).

Caveat with Concurrency :

As mentioned in the building blocks section above, threads are preemptively time-sliced by the Scheduler.
This context switching is an overhead that ultimately adds to the latency of our application.
Consequently, blindly creating many threads in the hope of increasing throughput and decreasing latency does not always work.

On the contrary, creating too many threads increases memory consumption, since each thread has its own dedicated stack,
and the context switching may in some cases lead to higher latency than an application without multithreading.
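A common mitigation for this caveat is to reuse a bounded pool of threads instead of creating one thread per task. A sketch using ExecutorService from java.util.concurrent (the class name PoolDemo and the task count are illustrative) :

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Runs numTasks trivial tasks on a pool bounded by the core count
    // and returns how many of them completed.
    static int runTasks(int numTasks) throws InterruptedException {
        // Bound the thread count to the available cores instead of
        // spawning one thread per task.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < numTasks; i++) {
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown(); // stop accepting tasks, let queued ones finish
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Completed tasks: " + runTasks(20));
    }
}
```

With a fixed pool, the number of OS threads (and hence stack memory and context switches) stays constant regardless of how many tasks are submitted. Executor Service is among the topics the author plans to cover in depth.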

Future Articles scoped around Concurrency :

Concurrency in Java is a critical concept to grasp for day-to-day work.
To that end, I will be covering in-depth articles on Process, Thread, Scheduler, Context Switching, Executor Service, Semaphore, Critical Section and a few more topics that will enable readers to confidently apply them to their business use cases.

That’s all folks :)

I intend to demystify fundamental building blocks of Software Architecture and Development.
Do subscribe if you find my articles helpful.

