A solution for the “optimistic locking failed” database transaction issue.

Damith Neranjan Samarakoon
Javarevisited
6 min read · Feb 18, 2023


Over the last couple of days I struggled with this issue. In this post I’m going to share my experience with it and how I resolved it.

This error message typically occurs when there is a concurrency conflict in a database transaction: two or more transactions try to modify the same data at the same time. Optimistic locking is a technique for handling this situation by allowing multiple transactions to read the same data concurrently, and detecting conflicting writes at update time (via a version check) instead of blocking up front. In this case, the error occurred because the data being modified was already updated or deleted by another transaction before the current transaction could commit its change.

The error can also be caused by an incorrect unsaved-value mapping in the entity configuration (Hibernate uses the unsaved-value setting to decide whether an entity is new or detached).

To resolve the issue, you may need to retry the transaction or implement a different concurrency control mechanism, such as pessimistic locking or versioning.

The solution to this error message depends on the specific context and application architecture, but generally involves one or more of the following:

  1. Implementing optimistic or pessimistic locking mechanisms to prevent multiple transactions from modifying the same data simultaneously.
  2. Using versioning or other concurrency control strategies to detect and resolve conflicts when multiple transactions access the same data.
  3. Retrying the transaction to ensure that the most up-to-date data is being modified.
  4. Checking the database configuration, as the error may be due to incorrect mapping of unsaved values.
  5. Reviewing the code to identify any race conditions or other concurrency issues that may be causing the error.

In general, the best approach is to understand the cause of the error and design a solution that addresses the specific needs of the application while ensuring that data consistency is maintained.

OK, here is an example of how you can handle optimistic locking in a Spring Boot application using Java:

Let’s assume we have an entity called “Product” with id, name, and quantity fields. We want to ensure that only one user can update the quantity of a product at a time, to prevent concurrency issues.

First, we need to annotate the entity with @Version to enable optimistic locking. The @Version annotation marks a version field that will be used to detect concurrent modifications.

@Entity
public class Product {

    @Id
    private Long id;

    private String name;

    private int quantity;

    @Version
    private Long version;

    // Getters and setters
}
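Under the hood, Hibernate enforces the version check by including the version column in the WHERE clause of every UPDATE. A sketch of the SQL it generates (assuming a product table) looks like this:

```sql
-- Sketch of the SQL Hibernate issues for a versioned update.
UPDATE product
SET name = ?, quantity = ?, version = version + 1
WHERE id = ? AND version = ?;
-- If 0 rows are updated, another transaction changed the row first,
-- and Hibernate reports an optimistic locking failure.
```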

Next, we can use a Spring Data JPA repository to handle database operations, and catch the ObjectOptimisticLockingFailureException (Spring’s wrapper around Hibernate’s StaleObjectStateException) to handle optimistic locking conflicts.

@Repository
public interface ProductRepository extends JpaRepository<Product, Long> {

    @Lock(LockModeType.OPTIMISTIC_FORCE_INCREMENT)
    Optional<Product> findById(Long id);
}

Here, we’re using the @Lock annotation with LockModeType.OPTIMISTIC_FORCE_INCREMENT to specify that we want optimistic locking; the FORCE_INCREMENT mode increments the version even when the entity is only read, so concurrent writers are detected. If a concurrent update is detected, Hibernate throws a StaleObjectStateException, which Spring wraps as ObjectOptimisticLockingFailureException and which we can catch in the calling code.

Finally, we can use this repository in our service layer to update the quantity of a product, annotating the method with @Transactional.
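For example, a service method might look like this (a sketch; the class name ProductService, the method updateQuantity, and the setter name are hypothetical, assuming the Product and ProductRepository shown above):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ProductService {

    @Autowired
    private ProductRepository productRepository;

    // Hypothetical method: updates the quantity inside a transaction.
    @Transactional
    public void updateQuantity(Long id, int newQuantity) {
        Product product = productRepository.findById(id)
                .orElseThrow(() -> new IllegalArgumentException("No product with id " + id));
        product.setQuantity(newQuantity);
        // On flush/commit, Hibernate checks the version column; if another
        // transaction updated the row in the meantime, Spring throws
        // ObjectOptimisticLockingFailureException.
        productRepository.save(product);
    }
}
```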

OK, so how does this behave when using the save() or saveAll() methods in Spring Data JPA?

Let’s look at that next.

The behavior of optimistic locking when using the saveAll() method in Spring Data JPA depends on the specific implementation and database configuration.

In general, when you use saveAll() to save a collection of entities, Spring Data JPA (via Hibernate) can perform the save operation in JDBC batches, grouping the individual INSERT or UPDATE statements and sending them to the database in fewer round trips, rather than issuing one round trip per entity.

If any of the entities in the collection have been modified by another transaction since they were fetched, an optimistic locking exception will be thrown, and the entire save operation will fail.

However, some databases and JDBC drivers do not reliably report per-row update counts for batched statements, which Hibernate needs in order to detect stale versions. In that case, Spring Data JPA will fall back to performing individual save operations for each entity in the collection, which results in more database round trips and potentially slower performance.

To ensure that optimistic locking works correctly when using saveAll(), you should make sure that your database and JPA provider support batch updates with optimistic locking, and that you have configured them appropriately. Additionally, you should handle optimistic locking exceptions in a way that makes sense for your application, such as retrying the save operation or notifying the user of the conflict.
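A common way to handle the conflict is a bounded retry. Here is a minimal, framework-free sketch of that pattern (RetryUtil is a hypothetical helper; in a Spring application the exception to retry on would typically be OptimisticLockingFailureException):

```java
import java.util.function.Supplier;

// Minimal retry helper (a sketch): retries an operation a fixed number of
// times when it throws the given exception type, as one might do around a
// saveAll() call that can fail with an optimistic-lock conflict.
public class RetryUtil {

    public static <T> T retry(Supplier<T> op,
                              Class<? extends RuntimeException> retryOn,
                              int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                if (!retryOn.isInstance(e)) {
                    throw e; // not a conflict: propagate immediately
                }
                last = e; // conflict: loop and try again
            }
        }
        throw last; // all attempts exhausted
    }
}
```

In a real service you would re-read the entities inside the retried operation so each attempt works with fresh data (and the current version values).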

For example, let me show how to use optimistic locking with batch updates in Spring Data JPA for PostgreSQL using Java.

First, we need to configure our PostgreSQL database to use optimistic locking. This can be done by adding a version column to the table that corresponds to our entity. Here's an example SQL statement that adds a version column to a products table.

ALTER TABLE products ADD COLUMN version BIGINT DEFAULT 0 NOT NULL;

Next, we need to modify our entity class to include the @Version annotation, which tells Spring Data JPA to use optimistic locking for this entity.

@Entity
public class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    private int quantity;

    @Version
    private Long version;

    // Getters and setters
}

In this example, we’ve added a version field with the @Version annotation to our Product entity class.

Next, we need to configure Spring Data JPA to use batch updates with optimistic locking. This can be done by setting the spring.jpa.properties.hibernate.jdbc.batch_versioned_data property to true in our application.properties file.

spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true
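Note that batch_versioned_data only tells Hibernate that it is safe to batch updates on versioned entities; JDBC batching itself stays off until you also set a batch size. A typical configuration (the value 30 is just an illustrative choice) might be:

```properties
# Enable JDBC batching (Hibernate does no batching by default).
spring.jpa.properties.hibernate.jdbc.batch_size=30
# Allow batching of statements on @Version-ed entities.
spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true
# Optional: order UPDATEs by entity to form larger batches.
spring.jpa.properties.hibernate.order_updates=true
```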

Finally, we can use the saveAll() method to update multiple entities with optimistic locking.

@Service
public class ProductService {

    @Autowired
    private ProductRepository productRepository;

    @Transactional
    public void updateProducts(List<Product> products) {
        productRepository.saveAll(products);
    }
}

In this example, we’re using the saveAll() method to update a list of Product entities. If any entity in the list has been modified by another transaction since it was fetched, an optimistic locking exception will be thrown and the entire save operation will fail. However, since we’ve enabled batch updates with optimistic locking, Spring Data JPA will send the UPDATE statements to the database in JDBC batches rather than one round trip per entity, improving performance.

All done. But even after this implementation, you may run into the following issue with your database transactions.

Problem: what happens if the same record is inserted multiple times into the database when the same request is sent multiple times?

Solution: this is likely because the database transaction is not committed after the record is inserted, so the duplicate is not detected when the next request arrives.

To prevent this issue, you can make sure that the database transaction is committed after the record is inserted. You can do this by adding the @Transactional annotation to your service method that inserts the record, and by making sure that the transaction is committed at the end of the method.

Here’s an example of how you can do this in your service method:

@Service
public class ProductService {

    @Autowired
    private ProductRepository productRepository;

    @Transactional
    public void createProduct(Product product) {
        productRepository.save(product);
        // flush() pushes the pending INSERT to the database so that
        // constraint violations surface immediately; the transaction
        // itself commits when this method returns.
        productRepository.flush();
    }
}

In this example, we’re using the @Transactional annotation to ensure the insert runs inside a transaction that commits when the method returns. Note that flush() does not itself commit the transaction; it synchronizes the persistence context with the database, so any unique-constraint violation is raised immediately inside the method rather than later at commit time.

With this in place, if the same request is sent multiple times, the record will not be inserted multiple times, provided the table has a suitable unique constraint: the first request inserts the record, and subsequent requests fail with a unique constraint violation (or another database constraint violation), which you can handle appropriately in your application.
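How you handle that violation depends on the application; one option is to catch Spring’s DataIntegrityViolationException and translate it into a domain-level result. This is a sketch (the class and method names ProductCreationService and createProductIdempotently are hypothetical, assuming the Product and ProductRepository shown earlier and a unique constraint on the table):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ProductCreationService {

    @Autowired
    private ProductRepository productRepository;

    // Hypothetical method: returns false instead of failing when the
    // product already exists (requires a unique constraint in the table).
    @Transactional
    public boolean createProductIdempotently(Product product) {
        try {
            productRepository.save(product);
            productRepository.flush(); // surface the constraint violation here
            return true;
        } catch (DataIntegrityViolationException e) {
            return false; // duplicate request: the row was already inserted
        }
    }
}
```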

That’s all about it, and now I’m on the safe side :-).

Thanks for reading and happy coding…!!!
