Red Hat

In Relation To

The Hibernate team blog on everything data.

Hibernate Performance Tuning and Best Practices

Posted by    |       |    Tagged as Discussions Hibernate ORM

This article is based on the latest chapter that’s been added to the Hibernate User Guide. The Performance Tuning and Best Practices chapter aims to help the application developer to get the most out of their Hibernate persistence layer.

Every enterprise system is unique. However, having a very efficient data access layer is a common requirement for many enterprise applications. Hibernate comes with a great variety of features that can help you tune the data access layer.

Schema management

Although Hibernate provides the update option for the hibernate.hbm2ddl.auto configuration property, this feature is not suitable for a production environment.

An automated schema migration tool (e.g. Flyway, Liquibase) allows you to use any database-specific DDL feature (e.g. Rules, Triggers, Partitioned Tables). Every migration should have an associated script, which is stored in the Version Control System along with the application source code.

When the application is deployed to a production-like QA environment and the deployment works as expected, pushing the release to production should be straightforward, since the latest schema migration has already been tested.

You should always use an automatic schema migration tool and have all the migration scripts stored in the Version Control System.
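With Flyway, for instance, a migration is just a plain SQL file whose name encodes the version. The file name, table, and column below are hypothetical examples following Flyway's default V<version>__<description>.sql naming convention:

```sql
-- src/main/resources/db/migration/V1_1_0__add_person_created_on.sql
-- Hypothetical migration script, checked into the VCS together with
-- the application release that requires it.
ALTER TABLE person ADD COLUMN created_on TIMESTAMP;

CREATE INDEX idx_person_created_on ON person (created_on);
```

Because each script is versioned, the tool can bring any environment (developer machine, QA, production) to the same schema revision deterministically.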


Whenever you’re using a framework that generates SQL statements on your behalf, you have to ensure that the generated statements are the ones that you intended in the first place.

There are several alternatives for logging SQL statements. You can log statements by configuring the underlying logging framework. For Log4j, the following logger categories do the trick:

### log just the SQL ###
log4j.logger.org.hibernate.SQL=debug

### log JDBC bind parameters ###
log4j.logger.org.hibernate.type=trace

However, there are some other alternatives like using datasource-proxy or p6spy. The advantage of using a JDBC Driver or DataSource Proxy is that you can go beyond simple SQL logging:

Another advantage of using a DataSource proxy is that you can assert the number of executed statements at test time. This way, you can have the integration tests fail when an N+1 query issue is automatically detected.

While simple statement logging is fine, using datasource-proxy or p6spy is even better.

JDBC batching

JDBC allows us to batch multiple SQL statements and send them to the database server in a single request. This saves database roundtrips, and so it reduces response time significantly.

Not only INSERT and UPDATE statements, but even DELETE statements can be batched as well. For INSERT and UPDATE statements, make sure that you have all the right configuration properties in place, like ordering inserts and updates and activating batching for versioned data. Check out this article for more details on this topic.

For DELETE statements, there is no option to order parent and child statements, so cascading can interfere with the JDBC batching process.

Unlike frameworks that don’t automate SQL statement generation, Hibernate makes it very easy to activate JDBC-level batching, as indicated in the Batching chapter of our User Guide.
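As a sketch, the relevant configuration properties look like this (the batch size of 20 is just an example; tune it for your workload):

```properties
# Group up to 20 statements per JDBC batch
hibernate.jdbc.batch_size=20

# Order inserts and updates so that statements targeting the same
# table end up adjacent and can actually be batched together
hibernate.order_inserts=true
hibernate.order_updates=true

# Allow batching for entities that use @Version-based optimistic locking
hibernate.jdbc.batch_versioned_data=true
```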


Choosing the right mappings is very important for a high-performance data access layer. From the identifier generators to associations, there are many options to choose from, yet not all choices are equal from a performance perspective.


When it comes to identifiers, you can either choose a natural id or a synthetic key.

For natural identifiers, the assigned identifier generator is the right choice.

For synthetic keys, the application developer can either choose a randomly generated fixed-size value (e.g. UUID) or a numerical identifier. Numerical identifiers are very practical, being more compact than their UUID counterparts, and there are multiple generators to choose from:

  • IDENTITY

  • SEQUENCE

  • TABLE
Although the TABLE generator addresses the portability concern, in reality, it performs poorly because it requires emulating a database sequence using a separate transaction and row-level locks. For this reason, the choice is usually between IDENTITY and SEQUENCE.

If the underlying database supports sequences, you should always use them for your Hibernate entity identifiers.

Only if the relational database does not support sequences (e.g. MySQL 5.7) should you use the IDENTITY generator. However, keep in mind that the IDENTITY generator disables JDBC batching for INSERT statements.

If you’re using the SEQUENCE generator, then you should be using the enhanced identifier generators that were enabled by default in Hibernate 5. The pooled and the pooled-lo optimizers are very useful to reduce the number of database roundtrips when writing multiple entities per database transaction.
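To see why the pooled optimizers help, here is a plain-Java sketch of the pooled-lo allocation logic. This is a simplification for illustration, not Hibernate's actual implementation: the database sequence is defined with an increment equal to the pool size, and each sequence call yields the lower bound of a whole block of identifiers that can then be assigned in memory.

```java
public class PooledLoSimulation {

    static final int INCREMENT_SIZE = 3;

    // Simulated database sequence, defined with INCREMENT BY 3
    static long sequenceValue = 1;

    // In-memory identifier pool: [lo, hi]
    static long lo = 0;
    static long hi = -1;

    // One "database roundtrip"
    static long nextSequenceValue() {
        long current = sequenceValue;
        sequenceValue += INCREMENT_SIZE;
        return current;
    }

    // pooled-lo: the sequence value is the low end of the next block
    static long nextId() {
        if (lo > hi) {
            lo = nextSequenceValue();
            hi = lo + INCREMENT_SIZE - 1;
        }
        return lo++;
    }

    public static void main(String[] args) {
        // 7 identifiers, but only 3 sequence roundtrips (blocks 1-3, 4-6, 7-9)
        for (int i = 0; i < 7; i++) {
            System.out.println(nextId());
        }
    }
}
```

Persisting 7 entities therefore costs 3 sequence roundtrips instead of 7, and with a larger increment size the savings grow accordingly.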


JPA offers four entity association types:

  • @ManyToOne

  • @OneToOne

  • @OneToMany

  • @ManyToMany

And an @ElementCollection for collections of embeddables.

Because object associations can be bidirectional, there are many possible combinations of associations. However, not every possible association type is efficient from a database perspective.

The closer the association mapping is to the underlying database relationship, the better it will perform.

On the other hand, the more exotic the association mapping, the better the chance of being inefficient.

Therefore, the @ManyToOne and the @OneToOne child-side associations are the best choices to represent a FOREIGN KEY relationship.

The parent-side @OneToOne association requires bytecode enhancement so that the association can be loaded lazily. Otherwise, the parent-side is always fetched even if the association is marked with FetchType.LAZY.

For this reason, it’s best to map the @OneToOne association using @MapsId so that the PRIMARY KEY is shared between the child and the parent entities. When using @MapsId, the parent-side association becomes redundant since the child entity can easily be fetched using the parent entity identifier.
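As an illustration, a shared primary key mapping might look like this (hypothetical Post/PostDetails entities; only the mapping-relevant parts are shown):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.MapsId;
import javax.persistence.OneToOne;

@Entity
public class PostDetails {

    @Id
    private Long id;

    // Child-side association; @MapsId makes the PRIMARY KEY of
    // PostDetails double as a FOREIGN KEY referencing Post.
    @OneToOne
    @MapsId
    private Post post;
}
```

With this mapping, loading the details for a given Post is just entityManager.find(PostDetails.class, post.getId()), so no parent-side @OneToOne is needed.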

For collections, the association can be either:

  • unidirectional

  • bidirectional

For unidirectional collections, Sets are the best choice because they generate the most efficient SQL statements. Unidirectional Lists are less efficient than a @ManyToOne association.

Bidirectional associations are usually a better choice because the @ManyToOne side controls the association.

Embeddable collections (@ElementCollection) are unidirectional associations, hence Sets are the most efficient, followed by ordered Lists, whereas bags (unordered Lists) are the least efficient.

The @ManyToMany annotation is rarely a good choice because it treats both sides as unidirectional associations.

For this reason, it’s much better to map the link table as depicted in the Bidirectional many-to-many with link entity lifecycle User Guide section. Each FOREIGN KEY column will be mapped as a @ManyToOne association. On each parent-side, a bidirectional @OneToMany association is going to map to the aforementioned @ManyToOne relationship in the link entity.

Just because you have support for collections, it does not mean that you have to turn any one-to-many database relationship into a collection.

Sometimes, a @ManyToOne association is sufficient, and the collection can be simply replaced by an entity query which is easier to paginate or filter.
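For example, instead of mapping a comments collection on the parent, a query like the following fetches the children on demand and paginates naturally (the Post/PostComment entities and attribute names are hypothetical):

```sql
select pc
from PostComment pc
where pc.post.id = :postId
order by pc.createdOn
```

Combined with setFirstResult/setMaxResults, this gives you paging and filtering that a mapped collection cannot offer.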


JPA offers SINGLE_TABLE, JOINED, and TABLE_PER_CLASS to deal with inheritance mapping, and each of these strategies has advantages and disadvantages.

  • SINGLE_TABLE performs the best in terms of executed SQL statements. However, you cannot use NOT NULL constraints at the column level. You can still use triggers and rules to enforce such constraints, but it’s not as straightforward.

  • JOINED addresses the data integrity concerns because every subclass is associated with a different table. Polymorphic queries or @OneToMany base class associations don’t perform very well with this strategy. However, polymorphic @ManyToOne associations are fine, and they can provide a lot of value.

  • TABLE_PER_CLASS should be avoided since it does not render efficient SQL statements.


Fetching too much data is the number one performance issue for the vast majority of JPA applications.

Hibernate supports both entity queries (JPQL/HQL and Criteria API) and native SQL statements. Entity queries are useful only if you need to modify the fetched entities, therefore benefiting from the automatic dirty checking mechanism.

For read-only transactions, you should fetch DTO projections because they allow you to select just as many columns as you need to fulfill a certain business use case. This has many benefits like reducing the load on the currently running Persistence Context because DTO projections don’t need to be managed.
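A DTO projection is typically expressed with a JPQL constructor expression; the PostDTO class, its package, and the Post entity below are hypothetical:

```sql
select new com.example.PostDTO( p.id, p.title )
from Post p
where p.createdOn >= :since
```

Only the selected columns are read, and the resulting PostDTO instances are plain objects that the Persistence Context does not need to manage.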

Fetching associations

Related to associations, there are two major fetch strategies:


  • EAGER

  • LAZY

Prior to JPA, Hibernate used to have all associations LAZY by default. However, when the JPA 1.0 specification emerged, it was thought that not all providers would use Proxies. Hence, the @ManyToOne and the @OneToOne associations are EAGER by default.

The EAGER fetching strategy cannot be overwritten on a per-query basis, so the association is always going to be retrieved even if you don’t need it. Moreover, if you forget to JOIN FETCH an EAGER association in a JPQL query, Hibernate will initialize it with a secondary statement, which in turn can lead to N+1 query issues.

So, EAGER fetching is to be avoided. For this reason, it’s better if all associations are marked as LAZY by default.

However, LAZY associations must be initialized prior to being accessed. Otherwise, a LazyInitializationException is thrown. There are good and bad ways to treat the LazyInitializationException.

The best way to deal with LazyInitializationException is to fetch all the required associations prior to closing the Persistence Context. The JOIN FETCH directive is good for @ManyToOne and @OneToOne associations, and for at most one collection (e.g. @OneToMany or @ManyToMany). If you need to fetch multiple collections, then, to avoid a Cartesian Product, you should use secondary queries, which are triggered either by navigating the LAZY association or by calling the Hibernate#initialize(proxy) method.
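As a sketch, a query that initializes an association up front looks like this (hypothetical Post entity with a comments collection):

```sql
select p
from Post p
left join fetch p.comments
where p.id = :id
```

The returned Post can then be passed beyond the Persistence Context boundary, and accessing its comments no longer risks a LazyInitializationException.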


Hibernate has two caching layers: the first-level cache (the Persistence Context) and the second-level cache.

The first-level cache is not a caching solution "per se", being more useful for ensuring REPEATABLE READ(s) even when using the READ COMMITTED isolation level.

While the first-level cache is short lived, being cleared when the underlying EntityManager is closed, the second-level cache is tied to an EntityManagerFactory. Some second-level caching providers offer support for clusters. Therefore, a node needs only to store a subset of the whole cached data.

Although the second-level cache can reduce transaction response time since entities are retrieved from the cache rather than from the database, there are other options to achieve the same goal, and you should consider these alternatives prior to jumping to a second-level cache layer:

  • tuning the underlying database cache so that the working set fits into memory, therefore reducing Disk I/O traffic.

  • optimizing database statements through JDBC batching, statement caching, and indexing can reduce the average response time, therefore increasing throughput as well.

  • database replication is also a very valuable option to increase read-only transaction throughput

After properly tuning the database, to further reduce the average response time and increase the system throughput, application-level caching becomes inevitable.

Typically, a key-value application-level cache like Memcached or Redis is a common choice for storing data aggregates. If you can duplicate all data in the key-value store, you have the option of taking down the database system for maintenance without completely losing availability, since read-only traffic can still be served from the cache.

One of the main challenges of using an application-level cache is ensuring data consistency across entity aggregates. That’s where the second-level cache comes to the rescue. Being tightly integrated with Hibernate, the second-level cache can provide better data consistency since entries are cached in a normalized fashion, just like in a relational database. Changing a parent entity only requires a single entry cache update, as opposed to cache entry invalidation cascading in key-value stores.

The second-level cache provides four cache concurrency strategies: READ_ONLY, NONSTRICT_READ_WRITE, READ_WRITE, and TRANSACTIONAL.

READ_WRITE is a very good default concurrency strategy since it provides strong consistency guarantees without compromising throughput. The TRANSACTIONAL concurrency strategy uses JTA. Hence, it’s more suitable when entities are frequently modified.

Both READ_WRITE and TRANSACTIONAL use write-through caching, while NONSTRICT_READ_WRITE is a read-through caching strategy. For this reason, NONSTRICT_READ_WRITE is not very suitable if entities are changed frequently.

When using clustering, the second-level cache entries are spread across multiple nodes. When using Infinispan distributed cache, only READ_WRITE and NONSTRICT_READ_WRITE are available for read-write caches. Bear in mind that NONSTRICT_READ_WRITE offers a weaker consistency guarantee since stale updates are possible.
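If you do decide to adopt the second-level cache, enabling it boils down to a few configuration properties plus a per-entity cache annotation. The snippet below is a sketch for Hibernate 5 with Ehcache; the region factory class differs per provider:

```properties
hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory

# optional: cache query results as well
hibernate.cache.use_query_cache=true
```

Each entity then opts in via @Cache(usage = CacheConcurrencyStrategy.READ_WRITE), or another of the concurrency strategies discussed above.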

For more about Hibernate Performance Tuning, check out the High-Performance Hibernate presentation from Devoxx France.

Hibernate Community Newsletter 19/2016

Posted by    |       |    Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.


Michael Simons wrote a very good article about using Hibernate Search with Elasticsearch on Pivotal Cloud Foundry.

On our blog, I wrote an article about configuring Hibernate to use a custom timezone (e.g. UTC) that’s different than the default JVM one.

For our Russian-speaking readers, Николай Алименков has written a very good post about Hibernate usage scenarios. Nikolai is one of the organizers of JEEConf, and together with Igor Dmitriev, they gave several presentations about Hibernate performance tuning.

If you want to learn what is the best way to handle the infamous LazyInitializationException, check out this article.

Alejandro Gervasio wrote a very good article, Persisting Java Objects the Easy Way on SitePoint.

Hibernate offers various ways to load entity attributes lazily. Check out this article for more details.

Thomas Kratz has written a very good article about using Hibernate JSON types with Kotlin.

When developing Hibernate, we are using the hibernate-testing module to simplify Session/Transaction management, as described in this article. The hibernate-testing library is available on Maven Central, so it’s very easy to make use of it in your project. Happy testing!


Patrycja Wegrzynowicz gave a very interesting presentation at JavaOne about second-level caching. The presentation goes from defining what response time is to how you can improve it using caching. At the end of the presentation, Patrycja discusses how Ehcache and Infinispan implement the Hibernate second-level caching contract.

JBoss Community Asylum - I git your flow

Posted by    |       |    Tagged as asylum git

A while back, this tweet was posted by Andrew Lee Rubinger about merge commits. It started a nice discussion, which we sat down last week and tried to put into podcast form.

And here it is - Emmanuel, Andrew and I talking about merge vs rebase, tab vs spaces and other things in the realm of git workflows.

Episode 42 - Show notes and podcast.


Hibernate ORM 5.0.11.Final and 5.1.2.Final released

Posted by    |       |    Tagged as Hibernate ORM Releases

Hibernate ORM 5.0.11.Final:

Hibernate ORM 5.1.2.Final:

For information on consuming the release via your favorite dependency-management-capable build tool, see


Hibernate 5.2 has migrated to Java 1.8. In this article, I’m going to show you how easily you can now test JPA logic using Java 1.8 lambdas.

Integration testing

Hibernate has thousands of integration tests, and each unit test runs in isolation. Traditionally, every test required opening an EntityManager, as well as coordinating the underlying database transaction by calling begin, commit, and rollback.

EntityManager entityManager = getOrCreateEntityManager();
entityManager.getTransaction().begin();
try {
    entityManager.persist( item );
    assertTrue( entityManager.contains( item ) );
    entityManager.getTransaction().commit();
}
catch (Exception e) {
    if ( entityManager.getTransaction() != null &&
         entityManager.getTransaction().isActive() ) {
        entityManager.getTransaction().rollback();
    }
    throw e;
}
finally {
    entityManager.close();
}

That’s very verbose because we need to ensure that the EntityManager always gets closed so that connections are released back to the pool. Moreover, the database transaction must be rolled back on every failure, as otherwise locks might be held, preventing the schema-drop process from completing.

For this reason, we decided to extract the whole EntityManager and JPA transaction management logic into a common utility class:

import static org.hibernate.testing.transaction.TransactionUtil.*;

What’s great about these utilities is that you don’t even need to create them. We’ve got you covered!

You only have to add the following dependency to your Maven pom.xml:
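Presumably something along these lines (the version shown is a placeholder; use the one matching your Hibernate release):

```xml
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-testing</artifactId>
    <version>${hibernate.version}</version>
    <scope>test</scope>
</dependency>
```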


Therefore, the previous test case is reduced to four lines of code:

doInJPA( this::entityManagerFactory, entityManager -> {
    entityManager.persist( item );
    assertTrue( entityManager.contains( item ) );
} );


The aforementioned example relies on the presence of the entityManagerFactory() method that returns an EntityManagerFactory instance.

If you prefer the Hibernate native API, you can do it as follows:

doInHibernate( this::sessionFactory, session -> {
    session.persist( item );
    assertTrue( session.contains( item ) );
} );

Analogous to the previous example, a sessionFactory() method is assumed to exist in the test class. However, you can name these methods any way you like; just make sure to update the first argument of the doInJPA or doInHibernate methods.

Behind the scenes

If you’re interested in the underlying implementations, then check out the source code on GitHub.

Enjoy testing!


When it comes to time zones, there is no easy way of handling timestamps generated across the globe. Prior to Java 1.8, the Date and Calendar APIs did not make this any easier.

While the java.time package allows better handling of Date/Time objects, when it comes to relational databases in general and JDBC in particular, the API is still very much based on the dreadful java.util.Calendar.

In this post, I’m going to unravel a new configuration property that we added so that we can better handle timestamps at the JDBC level.

The problem

Let’s assume we have the following JPA entity:

@Entity(name = "Person")
public class Person {

    @Id
    private Long id;

    private Timestamp createdOn;
}

Since our application is available over the Internet, it is much simpler if every user saves all timestamp values in UTC format, which is a very common convention.

When Alice, who’s living in Los Angeles, inserts the following Person entity into the database:

doInHibernate( this::sessionFactory, session -> {
    Person person = new Person();
    person.id = 1L;

    //Y2K - 946684800000L
    long y2kMillis = LocalDateTime.of( 2000, 1, 1, 0, 0, 0 )
        .atZone( ZoneId.of( "UTC" ) )
        .toInstant()
        .toEpochMilli();
    assertEquals(946684800000L, y2kMillis);

    person.createdOn = new Timestamp(y2kMillis);
    session.persist( person );
} );

Hibernate executes the following INSERT statement:

INSERT INTO Person (createdOn, id)
VALUES (?, ?)

-- binding parameter [1] as [TIMESTAMP] - [1999-12-31 16:00:00.0]
-- binding parameter [2] as [BIGINT]    - [1]

The timestamp value is set as 1999-12-31 16:00:00.0, and that’s what we get when we query it from the database:

s.doWork( connection -> {
    try (Statement st = connection.createStatement()) {
        try (ResultSet rs = st.executeQuery(
                "SELECT to_char(createdon, 'YYYY-MM-DD HH24:MI:SS.US') " +
                "FROM person" )) {
            while ( rs.next() ) {
                assertEquals(
                    "1999-12-31 16:00:00.000000",
                    rs.getString( 1 )
                );
            }
        }
    }
} );

What’s just happened?

Because Alice’s time zone, Pacific Standard Time, is 8 hours behind UTC (UTC-8), the timestamp value was transposed to the local JVM time zone. But why?

To answer this question, we have to first check how Hibernate saves the underlying timestamp value inside Hibernate 5.2.2 TimestampTypeDescriptor:

st.setTimestamp( index, timestamp );

If we take a look in the PostgreSQL JDBC Driver version 9.4.1210.jre7 (Sep. 2016) PreparedStatement.setTimestamp() implementation, we are going to find the following logic:

public void setTimestamp(int parameterIndex, Timestamp x)
    throws SQLException {
    setTimestamp(parameterIndex, x, null);
}

public void setTimestamp(int i, Timestamp t, java.util.Calendar cal)
    throws SQLException {

    if (t == null) {
      setNull(i, Types.TIMESTAMP);
      return;
    }

    int oid = Oid.UNSPECIFIED;
    if (t instanceof PGTimestamp) {
      PGTimestamp pgTimestamp = (PGTimestamp) t;
      if (pgTimestamp.getCalendar() == null) {
        oid = Oid.TIMESTAMP;
      } else {
        oid = Oid.TIMESTAMPTZ;
        cal = pgTimestamp.getCalendar();
      }
    }
    if (cal == null) {
      cal = getDefaultCalendar();
    }
    bindString(i, connection.getTimestampUtils().toString(cal, t), oid);
}

So, if there is no Calendar being passed, the following default Calendar is going to be used:

private Calendar getDefaultCalendar() {
    TimestampUtils timestampUtils = connection.getTimestampUtils();

    if (timestampUtils.hasFastDefaultTimeZone()) {
      return timestampUtils.getSharedCalendar(null);
    }
    Calendar sharedCalendar = timestampUtils.getSharedCalendar(defaultTimeZone);
    if (defaultTimeZone == null) {
      defaultTimeZone = sharedCalendar.getTimeZone();
    }
    return sharedCalendar;
}

So, unless we are providing a default java.util.Calendar, PostgreSQL is going to use a default one, which falls back to the underlying JVM time zone.
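The effect is easy to reproduce with plain JDK classes, without Hibernate or PostgreSQL involved: formatting the same epoch-millis value in two different time zones shows the 8-hour shift that Alice observed.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimeZoneShift {

    // Render the given instant in the given time zone
    static String format(long epochMillis, String zoneId) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone(zoneId));
        return fmt.format(new Date(epochMillis));
    }

    public static void main(String[] args) {
        long y2kMillis = 946684800000L; // 2000-01-01T00:00:00 UTC

        System.out.println(format(y2kMillis, "UTC"));
        // prints 2000-01-01 00:00:00

        System.out.println(format(y2kMillis, "America/Los_Angeles"));
        // prints 1999-12-31 16:00:00 (PST is UTC-8 in winter)
    }
}
```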

A workaround

Traditionally, to overcome this issue, the JVM time zone should be set to UTC:

Either declaratively:

java -Duser.timezone=UTC ...

or programmatically:

TimeZone.setDefault( TimeZone.getTimeZone( "UTC" ) );

If the JVM time zone is set to UTC, Hibernate is going to execute the following insert statement:

INSERT INTO Person (createdOn, id)
VALUES (?, ?)

-- binding parameter [1] as [TIMESTAMP] - [2000-01-01 00:00:00.0]
-- binding parameter [2] as [BIGINT]    - [1]

The same is true when fetching the timestamp value from the database:

s.doWork( connection -> {
    try (Statement st = connection.createStatement()) {
        try (ResultSet rs = st.executeQuery(
                "SELECT to_char(createdon, 'YYYY-MM-DD HH24:MI:SS.US') " +
                "FROM person" )) {
            while ( rs.next() ) {
                String timestamp = rs.getString( 1 );
                assertEquals("2000-01-01 00:00:00.000000", timestamp);
            }
        }
    }
} );

Unfortunately, sometimes we cannot change the default time zone of the JVM because the UI requires it to render UTC-based timestamps into the user-specific locale and current time zone.

The JDBC time zone setting

Starting from Hibernate 5.2.3, you’ll be able to provide a JDBC-level time zone so that you don’t have to change the default JVM setting.

This is done via the hibernate.jdbc.time_zone SessionFactory-level configuration property:

settings.put(
    AvailableSettings.JDBC_TIME_ZONE,
    TimeZone.getTimeZone( "UTC" )
);

Once set, Hibernate is going to call the PreparedStatement#setTimestamp(int parameterIndex, Timestamp x, Calendar cal) JDBC method, passing a Calendar instance based on the configured time zone.

Now, when executing the insert statement, Hibernate is going to log the following query parameters:

INSERT INTO Person (createdOn, id)
VALUES (?, ?)

-- binding parameter [1] as [TIMESTAMP] - [1999-12-31 16:00:00.0]
-- binding parameter [2] as [BIGINT]    - [1]

This is expected since the java.sql.Timestamp uses Alice’s JVM Calendar (e.g. Los Angeles) to display the underlying date/time value. When fetching the actual timestamp value from the database, we can see that the UTC value was actually saved:

s.doWork( connection -> {
    try (Statement st = connection.createStatement()) {
        try (ResultSet rs = st.executeQuery(
                "SELECT " +
                "   to_char(createdon, 'YYYY-MM-DD HH24:MI:SS.US') " +
                "FROM person" )) {
            while ( rs.next() ) {
                String timestamp = rs.getString( 1 );
                assertEquals("2000-01-01 00:00:00.000000", timestamp);
            }
        }
    }
} );

You can even override this setting on a per-Session basis:

Session session = sessionFactory()
    .withOptions()
    .jdbcTimeZone( TimeZone.getTimeZone( "UTC" ) )
    .openSession();

Since many applications tend to use the same time zone (usually UTC) when storing timestamps, this change is going to be very useful, especially for front-end nodes which need to retain the default JVM time zone for UI rendering.

Hibernate Community Newsletter 18/2016

Posted by    |       |    Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.


Both Gunnar and I participated in the wonderful JavaZone conference.

Gunnar’s presentation outlines the benefits of Hibernate Search and demonstrates how easily you can propagate JPA entity changes to ElasticSearch.

My High-Performance Hibernate presentation shows how you can run Hibernate ORM at warp-speed.


The pick for this newsletter is Michael Simons' article about Hibernate Search. In this blog post, Michael uses Hibernate Search to index his tweets and then demonstrates the power of the Fulltext-Search Query DSL.

Because Concurrency Control is very important for every enterprise application, Hibernate offers both optimistic and pessimistic locking right out of the box. However, sometimes you might need to coordinate the child entity state with a common parent entity, in which case, you need to provide some custom entity listeners. This article demonstrates how you can trigger a parent entity version change whenever a child entity is added, removed or even modified.

Vertabelo’s blog features an article about Composite Primary Keys and how to map them using Hibernate and jOOQ. The article makes a remark which might confuse the reader into thinking that Hibernate doesn’t support a Database-First approach:

Hibernate is a Java object-relational mapper that follows the “Object-First” approach. This means that the appropriate database structures are generated based on the Java code.

Not only can Hibernate be used to generate a database schema from the entity mapping, but in reality, you want to have the database schema managed by an automatic migration tool such as FlywayDB.

Thorben Janssen shows you how you can count queries using Hibernate Statistics. You can take this idea one step further and monitor any SQL statement type (e.g. insert, update, delete), and you can even build a tool that automatically discovers N+1 statements, as demonstrated in this blog post.

I also wrote an article which aims to warn the reader about the dangers of using the hibernate.enable_lazy_load_no_trans configuration property.

Time to upgrade

Hibernate Search is getting closer and closer to the Final release of the Elasticsearch integration. Now is the right time to shape this new API, so we’d really love you to give us feedback on this topic.

Hibernate Validator 5.3.0.CR1 is out, so feel free to try it out and tell us what you think.

This summer was relatively quiet in terms of releases, but many have been testing and improving the Beta1 release of our Hibernate Search / Elasticsearch integration.

So today we release version 5.6.0.Beta2 with 45 fixes and enhancements!

For a detailed list of all improvements, see this JIRA query.

The day of a Final release gets closer, but highly depends on your feedback. Please keep the feedback coming!

Please let us know of any problem or suggestion by creating an issue on JIRA, by sending an email to the developer’s mailing list, or by posting on the forums.

We also monitor Stack Overflow; when posting on SO please use the tag hibernate-search.

How to get this release

Everything you need is available on Hibernate Search’s web site.

Get it from Maven Central using the following coordinates:


Downloads from Sourceforge are available as well.

Notes on compatibility

This version is compatible with Apache Lucene versions from 5.3.x to 5.5.x, and with Hibernate ORM versions 5.0.x and 5.1.x.

Compatibility with Hibernate ORM 5.2.x is not a reality yet - we expect to see that materialize in early October. Compatibility with Lucene 6.x is scheduled for Hibernate Search 6.0, which will take longer - probably early 2017.

Finally, the Elasticsearch version we used for all development and testing of this release was Elasticsearch 2.3.1. We will soon upgrade to the latest version and discuss strategies for testing against multiple versions.

Hibernate Validator 5.3.0.CR1 is out

Posted by    |       |    Tagged as Hibernate Validator Releases

We are making good progress working on Bean Validation 2.0 and we decided to move the target version for BV 2.0 to Hibernate Validator 6 (see more at the end of this post!).

We did not want to leave 5.3 half-baked, so we are preparing a 5.3 release with bug fixes, additional translations…​ and a few new features. This Candidate Release 1 is the first step in this process. Expect a quick 5.3.0.Final release after that, so please test this version as thoroughly as possible and report any bugs you may find!

Programmatic API for constraint definition and declaration

The experimental notion of ConstraintDefinitionContributor has been removed in favor of a new fluent API, more consistent with what already existed in Validator.

If you want to define a new ValidPassengerCount constraint annotation which relies on a ValidPassengerCountValidator validator, you can use the API as follows:

ConstraintMapping constraintMapping = configuration.createConstraintMapping();

constraintMapping
    .constraintDefinition( ValidPassengerCount.class )
        .validatedBy( ValidPassengerCountValidator.class );

It can also be used to replace the implementation of the validator used for a given constraint annotation. Say you need to support International Domain Names (IDN) in your URL validation; the default URLValidator won’t work for you, as it relies on java.net.URL, which does not support IDN. We provide an alternative RegexpURLValidator, and you might want to use it in this case:

ConstraintMapping constraintMapping = configuration.createConstraintMapping();

constraintMapping
    .constraintDefinition( URL.class )
        .includeExistingValidators( false )
        .validatedBy( RegexpURLValidator.class );

Constraint mapping contributors

Thanks to the new hibernate.validator.constraint_mapping_contributors property, you can now declare several constraint mapping contributors separated by a comma, whereas you were limited to only one before.

Note that in 5.3, the existing hibernate.validator.constraint_mapping_contributor property is still supported but has been deprecated.

The deprecated hibernate.validator.constraint_mapping_contributor property will be removed as of Hibernate Validator 6.

Email validation

We changed the way email validation is done. It is now both more correct and stricter. We know of a few people running random tests on the constraints, and they might have to update their tests: the domain of the email now needs to be a valid domain, with each label (the part between two dots) being at most 63 characters long. So you can’t just generate an 80-character-long domain with random characters; you need to be a bit more careful.


We added a few new translations of messages of the constraints we provide:

  • an Arabic translation thanks to Kathryn Killebrew

  • a Russian translation thanks to Andrey Derevyanko

Several other translations were updated.

What else is there?

Other changes of this release are an upgrade of all our Maven dependencies and a few fixes and polishing here and there.

You can find the complete list of all addressed issues in the change log.

To get the release with Maven, Gradle etc. use the GAV coordinates org.hibernate:{hibernate-validator|hibernate-validator-cdi|hibernate-validator-annotation-processor}:5.3.0.CR1.

Alternatively, a distribution bundle containing all the bits is provided on SourceForge (TAR.GZ, ZIP).

To get in touch, use the following channels:

Next stop?

We are actively working on Bean Validation 2.0 and Hibernate Validator 6 with a strong focus on supporting Java 8 new features (and much more!). The more the merrier, so feel free to join us: drop ideas, comment on others' proposals, now is the time to define the future of Bean Validation. You can find all the necessary information on the Bean Validation website.

Hibernate Community Newsletter 17/2016

Posted by    |       |    Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.


Fitbit is using Hibernate for data persistence. On their engineering blog, they published an article about connection provider instrumentation. Although this is a very clever solution, in case you need a solution for monitoring connection pool usage, I’d suggest using FlexyPool instead.

I’ve run a survey on my Twitter account to find out the JPA provider market share in 2016. Just like in 2014 and 2015, Hibernate is leading by a very large margin. Thanks for choosing Hibernate, and stay tuned for even more great features.

You can use the Fluent Interface pattern with Hibernate and JPA. Check out this post for more details.

Anghel Leonard has added a comprehensive list of test cases for both Hibernate ORM and OGM on his blog and GitHub repository.

Thorben Janssen wrote a series of articles related to:

Hibernate has great support for concurrency control. In this article, you can find out how you can increment a root entity version whenever any child entity is being added/removed or even modified.
