Hibernate: Mistakes to Avoid
Hibernate is the most popular and widely used object-relational mapping (ORM) tool for Java. While using Hibernate we may run into unexpected behaviour because of a few details that are easy to miss.
Here are some of the mistakes I made during development that can easily be avoided:
1. Not specifying the fetch type for the to-one side of relationships.
The default fetch type for the to-one side of a relationship is eager. FetchType.EAGER causes Hibernate to load all associated entities as soon as you load the parent entity, which executes a lot of unwanted queries. This becomes a serious performance issue when a large number of records has to be fetched.
So always remember to explicitly set FetchType.LAZY if you do not want associated entities to be loaded at the same time as the parent entity.
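A minimal sketch of a lazy to-one mapping (class and field names are illustrative; older JPA versions use javax.persistence instead of jakarta.persistence):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;

@Entity
public class Address {

    @Id
    @GeneratedValue
    private Long id;

    // @ManyToOne defaults to FetchType.EAGER; declare LAZY explicitly
    // so the Person is loaded only when it is actually accessed.
    @ManyToOne(fetch = FetchType.LAZY)
    private Person person;
}
```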
2. Not specifying the allocation size for a SQL-generated sequence.
Hibernate provides the facility to use a custom SQL sequence for generating ids for your entities.
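A mapping along these lines is a typical way to use such a sequence (entity and generator names are illustrative):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.SequenceGenerator;

@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "person_generator")
    // allocationSize defaults to 50 if omitted; it must match the
    // INCREMENT BY of the database sequence to avoid id clashes.
    @SequenceGenerator(name = "person_generator",
                       sequenceName = "person_sequence",
                       allocationSize = 1)
    private Long id;
}
```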
When using such a generator, you must create person_sequence in the database via SQL. The important factor to note here is the allocation size of your sequence: the default allocationSize is 50, and a mismatch with the database sequence may result in unexpected negative values.
If the database sequence starts at 1 and increments by 1, the generator takes the returned value 1 as the upper end of a block of 50 ids, so it assigns ids from the range -49 to 1, mostly producing negative values. To avoid this problem, the increment of the database sequence and the allocationSize must be set to the same value.
3. Not using @JsonManagedReference and @JsonBackReference for bidirectional relationships.
Suppose we have two associated entities, Person and Address, where Person holds a List of Address and each Address references its Person.
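A sketch of such a pair of entities (each class in its own file; field names are illustrative):

```java
// Person.java
import java.util.List;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.OneToMany;

@Entity
public class Person {
    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(mappedBy = "person")
    private List<Address> addresses;
}

// Address.java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;

@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    private Person person;
}
```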
When we try to serialise an instance of Address, Jackson throws a JsonMappingException. This is caused by infinite recursion: Address holds a Person, which in turn holds a List of Address. To avoid this, we need to annotate the two sides of the relationship with @JsonManagedReference and @JsonBackReference.
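With Jackson's reference annotations the cycle is broken; only the relevant fields are shown here (annotations come from com.fasterxml.jackson.annotation):

```java
// In Person.java: the forward side of the reference, serialized normally
@OneToMany(mappedBy = "person")
@JsonManagedReference
private List<Address> addresses;

// In Address.java: the back side of the reference, skipped during serialization
@ManyToOne
@JsonBackReference
private Person person;
```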
This allows Jackson to manage the relationship. Here @JsonManagedReference marks the forward part of the reference, which gets serialized normally.
@JsonBackReference marks the back part of the reference, which is not considered during serialization. Alternatively, we can use @JsonIgnore to tell Jackson which property to ignore during serialization.
4. Using cascade type REMOVE or ALL for many-to-many relationships.
It is wrong to use CascadeType.REMOVE or CascadeType.ALL (which includes REMOVE, since it cascades all operations) on a many-to-many association. Deleting one instance would cascade the removal to all records associated with it, but those child records may still be associated with other parent instances.
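A safer sketch limits cascading to non-destructive operations (the exact cascade choice is an assumption for illustration):

```java
import java.util.Set;
import jakarta.persistence.CascadeType;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToMany;

@Entity
public class Person {
    @Id
    @GeneratedValue
    private Long id;

    // Cascade only PERSIST and MERGE; never REMOVE or ALL on many-to-many,
    // because a removed Address may still belong to other Person rows.
    @ManyToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private Set<Address> addresses;
}
```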
5. Defining many-to-many relationships as a List
It is a crucial mistake to model the many-to-many side of a relationship with the wrong collection type. If you declare your association as a List, then on every operation (insertion or deletion) Hibernate internally deletes all existing join-table records for that association and re-inserts one row for each managed relationship.
When I insert or delete an Address, Hibernate deletes all rows for that association from the person_address table and re-inserts all of them along with the change, which is completely unnecessary.
If I use a Set instead of a List and carry out the same operation, Hibernate just inserts or removes the single affected row in the person_address table.
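The association might then be declared like this (the join-table column names are illustrative):

```java
// imports: jakarta.persistence.*, java.util.HashSet, java.util.Set

// Declaring the association as a Set lets Hibernate insert or delete
// only the single affected row in person_address, instead of
// deleting and re-inserting the whole collection.
@ManyToMany
@JoinTable(name = "person_address",
           joinColumns = @JoinColumn(name = "person_id"),
           inverseJoinColumns = @JoinColumn(name = "address_id"))
private Set<Address> addresses = new HashSet<>();
```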
6. Setting the wrong value for hibernate.hbm2ddl.auto
The property hibernate.hbm2ddl.auto customises the DDL generation of your project during deployment. The possible values are:
validate : validates that the existing schema matches the mappings, making no changes to the database
create : drops the existing schema objects and creates the schema from the entity mappings
update : updates the existing schema, adding missing tables and columns
create-drop : creates the schema on startup and drops it again on shutdown
It is best to use validate in production environments, since one should avoid relying on Hibernate for critical database operations. The other values can result in losing data, and even a small mistake such as a typo may break your application. Use migration tools for schema creation and updates, and then just validate your schema with the validate value.
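In a plain Hibernate configuration this might look like the following (the file and property placement depend on your setup; Spring Boot users set the equivalent spring.jpa.hibernate.ddl-auto property instead):

```
<!-- hibernate.cfg.xml -->
<property name="hibernate.hbm2ddl.auto">validate</property>
```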
7. Trying to save an associated object that was fetched from a cache or from outside the current transaction
If you retrieve an associated object from a cache, assign it to a parent object, and then save the parent, Hibernate throws the error "detached entity passed to persist". The cause is that Hibernate maintains a session for all database transactions, and if an entity does not belong to the current transactional context, it is considered detached. Since we fetched the entity from a cache and not from the database, Hibernate does not find it associated with the current transaction, hence the error. To avoid it, we can use the merge operation provided by the EntityManager.
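A sketch of the merge approach (the cache and variable names are assumptions for illustration):

```java
// 'cachedAddress' was loaded earlier (e.g. from a cache) and is now detached.
// merge() returns a managed copy attached to the current persistence context.
Address managedAddress = entityManager.merge(cachedAddress);

person.setAddress(managedAddress);   // associate the managed instance
entityManager.persist(person);       // no "detached entity passed to persist"
```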
As you can see, these are very small mistakes, but they can have a huge impact on performance and cause unexpected behaviour. Watching out for them during development lets you easily avoid some of the most common pitfalls of using Hibernate.