- The semantics of volatile described here are specific to Java. It's easier to understand volatile if you understand the problem it solves.
- Suppose a thread is working on a variable, say a counter. The thread may keep a copy of the counter in a CPU cache and manipulate that copy rather than writing to main memory. The JVM decides when to sync the cached value back to main memory, so other threads reading the counter from main memory may end up reading a stale value.
- If a variable is declared volatile, then whenever a thread reads or writes that variable, the read or write always happens against main memory.
- As a further guarantee, when a thread writes to a volatile variable, all the other variables visible to the writing thread are flushed to main memory alongside it. Similarly, when a thread reads a volatile variable, it also sees the latest values of all the other variables that were visible to the writer.
- Volatile comes into play because of the multiple levels of memory in hardware architectures, introduced for performance.
- If there’s a single thread that writes to the volatile variable and other threads only read it, then just using volatile is enough.
- However, if there’s a possibility of multiple threads writing to the volatile variable, then “synchronized” would be required to ensure atomic writes: a compound operation such as counter++ is a read-modify-write, not a single atomic write, and volatile does not make it atomic.
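The visibility guarantees above can be sketched with a single-writer example. This is a minimal illustration, not library code; the class and field names (VolatilePublish, payload, published) are made up for the demo. Note how the non-volatile payload is "piggybacked" on the volatile flag:

```java
// Sketch: a single writer publishes data through a volatile flag.
// The volatile write/read pair guarantees the reader also sees the
// earlier, ordinary write to `payload` (the piggybacking guarantee).
public class VolatilePublish {
    static int payload = 0;                 // deliberately NOT volatile
    static volatile boolean published = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!published) { }          // spin on the volatile read
            // Guaranteed to print 42: the volatile read of `published`
            // also made the earlier write to `payload` visible.
            System.out.println("payload = " + payload);
        });
        reader.start();

        payload = 42;                       // ordinary write...
        published = true;                   // ...flushed alongside the volatile write
        reader.join();
    }
}
```

With two writer threads incrementing payload, this guarantee would no longer be enough; that is where synchronized comes in.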
- Java’s primary tool for rendering interactions between threads predictable is the synchronized keyword. Many programmers think of synchronized strictly in terms of enforcing a mutual exclusion semaphore (mutex) to prevent execution of critical sections by more than one thread at a time. Unfortunately, that intuition does not fully describe what synchronized does.
- Synchronized is more than mutual exclusion: the semantics of synchronized do indeed include mutual exclusion of execution based on the status of a semaphore, but they also include rules about the synchronizing thread's interaction with main memory. In particular, the acquisition or release of a lock triggers a memory barrier -- a forced synchronization between the thread's local memory and main memory. (Some processors -- like the Alpha -- have explicit machine instructions for performing memory barriers.) When a thread exits a synchronized block, it performs a write barrier -- it must flush out any variables modified in that block to main memory before releasing the lock. Similarly, when entering a synchronized block, it performs a read barrier -- it is as if the local memory has been invalidated, and it must fetch from main memory any variables that will be referenced in the block.
- The proper use of synchronization guarantees that one thread will see the effects of another in a predictable manner. Only when threads A and B synchronize on the same object will the JMM guarantee that thread B sees the changes made by thread A, and that changes made by thread A inside the synchronized block appear atomically to thread B (either the whole block executes or none of it does).
- Furthermore, the JMM ensures that synchronized blocks that synchronize on the same object will appear to execute in the same order as they do in the program.
- Java’s answer to the traditional mutex is the reentrant lock, which comes with additional bells and whistles.
- It is similar to the implicit monitor lock accessed when using synchronized methods or blocks.
- With the reentrant lock, you are free to lock and unlock it in different methods, but not from different threads. If a thread attempts to unlock a reentrant lock object that it didn't lock, it gets an IllegalMonitorStateException. This behavior is similar to a thread attempting to unlock a pthread mutex it doesn't own.
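Both points can be shown in one short sketch; the class and method names (LockOwnership, begin, end) are invented for the illustration:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a ReentrantLock may be acquired in one method and released in
// another, but only by the thread that currently holds it. Unlocking from
// a non-owner thread throws IllegalMonitorStateException.
public class LockOwnership {
    static final ReentrantLock lock = new ReentrantLock();

    static void begin() { lock.lock(); }    // acquire in one method
    static void end()   { lock.unlock(); }  // release in another: allowed

    public static void main(String[] args) throws InterruptedException {
        begin();
        end();
        System.out.println("same-thread lock/unlock across methods: ok");

        lock.lock();                         // main thread now owns the lock
        Thread intruder = new Thread(() -> {
            try {
                lock.unlock();               // not the owner
            } catch (IllegalMonitorStateException e) {
                System.out.println("intruder unlock failed: "
                        + e.getClass().getSimpleName());
            }
        });
        intruder.start();
        intruder.join();
        lock.unlock();
    }
}
```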
- We saw how each Java object exposes three methods, wait(), notify(), and notifyAll(), which can be used to suspend threads until some condition becomes true.
- You can think of Condition as factoring out these three methods of the object monitor into separate objects so that there can be multiple wait-sets per object.
- As a reentrant lock replaces synchronized blocks or methods, a condition replaces the object monitor methods. In the same vein, one can't invoke the condition variable's methods without acquiring the associated lock, just like one can't wait on an object's monitor without synchronizing on the object first.
- In fact, a reentrant lock exposes an API to create new condition variables, like so:

```java
Lock lock = new ReentrantLock();
Condition myCondition = lock.newCondition();
```
- Notice how we can now have multiple condition variables associated with the same lock. In the synchronized paradigm, we could only have one wait-set associated with each object.
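A minimal sketch of the await/signal protocol, playing the role of wait()/notify() for a ReentrantLock (the class ConditionDemo and the flag field are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a Condition must be used while holding its associated lock,
// just as wait()/notify() require synchronizing on the object first.
public class ConditionDemo {
    static final Lock lock = new ReentrantLock();
    static final Condition ready = lock.newCondition();
    static boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            lock.lock();
            try {
                while (!flag) {          // guard against spurious wakeups
                    ready.await();       // releases the lock while waiting
                }
                System.out.println("waiter saw flag");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        });
        waiter.start();

        lock.lock();
        try {
            flag = true;
            ready.signal();              // wake the waiting thread
        } finally {
            lock.unlock();
        }
        waiter.join();
    }
}
```

Calling ready.await() or ready.signal() without holding the lock would throw IllegalMonitorStateException.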
Differences Between Lock and Synchronized Block
- A synchronized block is fully contained within a single method, whereas the Lock API’s lock() and unlock() operations can be placed in separate methods.
- A synchronized block doesn’t support fairness: once the lock is released, any thread can acquire it, and no preference can be specified. With the Lock API we can achieve fairness by setting the fairness property, which ensures that the longest-waiting thread is given access to the lock.
- A thread gets blocked if it can’t get access to the synchronized block. The Lock API provides the tryLock() method: the thread acquires the lock only if it’s available and not held by another thread, which reduces the time threads spend blocked waiting for the lock.
- A thread that is waiting to acquire access to a synchronized block can’t be interrupted. The Lock API provides lockInterruptibly(), which allows the thread to be interrupted while it’s waiting for the lock.
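The last two differences can be demonstrated together. This is a sketch under illustrative assumptions (class name TryLockDemo, the 100 ms sleep); the main thread holds the lock so the second thread's tryLock() fails fast, and its lockInterruptibly() wait is then interrupted:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: tryLock() lets a thread back off instead of blocking, and
// lockInterruptibly() lets a waiting thread be interrupted.
public class TryLockDemo {
    static final ReentrantLock lock = new ReentrantLock(true); // fair lock

    public static void main(String[] args) throws InterruptedException {
        lock.lock();                        // main thread holds the lock

        Thread t = new Thread(() -> {
            if (!lock.tryLock()) {          // returns false immediately
                System.out.println("tryLock failed, doing something else");
            }
            try {
                lock.lockInterruptibly();   // blocks, but can be interrupted
                lock.unlock();
            } catch (InterruptedException e) {
                System.out.println("interrupted while waiting for the lock");
            }
        });
        t.start();
        Thread.sleep(100);
        t.interrupt();                      // wake t out of lockInterruptibly()
        t.join();
        lock.unlock();
    }
}
```

A plain synchronized block offers neither option: the thread would simply block, uninterruptibly, until the monitor became free.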