Red Hat

In Relation To

The Hibernate team blog on everything data.


When I started writing High-Performance Java Persistence, I decided to install four database systems on my current machine:

  • Oracle XE

  • SQL Server Express Edition

  • PostgreSQL

  • MySQL

These four relational databases are the ones most commonly referenced on our forum, on StackOverflow, and in JIRA issues. However, these four top-ranked databases are not enough because, from time to time, we need to integrate Pull Requests for other database systems, like Informix or DB2.

Since installing a plethora of databases on a single machine is not very practical, we can do better than that. Many database providers have published Docker images for their products, and this post is going to show you how easily we can start an Informix database.

Running Informix on Docker

IBM offers Docker images for both Informix Innovator-C and DB2 Express-C.

As explained on Docker Hub, you have to start the container using the following command:

docker run -it --name iif_innovator_c --privileged -p 9088:9088 -p 27017:27017 -p 27018:27018 -p 27883:27883 -e LICENSE=accept ibmcom/informix-innovator-c:latest

To run the Informix Docker container, you have to execute the following command:

docker start iif_innovator_c

After the Docker container is started, we can attach a new shell to it:

docker exec -it iif_innovator_c bash

We have a databases.gradle configuration file which contains the connection properties for all databases we use for testing, and, for Informix, we have the following entry:

informix : [
    'db.dialect' : 'org.hibernate.dialect.InformixDialect',
    'jdbc.driver': 'com.informix.jdbc.IfxDriver',
    'jdbc.user'  : 'informix',
    'jdbc.pass'  : 'in4mix',
    'jdbc.url'   : 'jdbc:informix-sqli://;user=informix;password=in4mix'
]

With this configuration in place, I only need to set up the current configuration file to use Informix:

gradle clean testClasses -Pdb=informix

Now I can run any Informix integration test right from my IDE.
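For a quick connectivity check outside the test suite, a throwaway JDBC smoke test can come in handy. This is just a sketch: it assumes the Informix JDBC driver is on the classpath, the localhost/9088 coordinates are taken from the container's port mapping above, and a real informix-sqli URL may need additional parameters (such as INFORMIXSERVER) depending on the setup:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class InformixSmokeTest {

    // Mirrors the databases.gradle entry above; host and port are
    // assumptions based on the -p 9088:9088 mapping of the container.
    static String jdbcUrl(String host, int port) {
        return "jdbc:informix-sqli://" + host + ":" + port
                + ";user=informix;password=in4mix";
    }

    public static void main(String... args) {
        try ( Connection connection = DriverManager.getConnection( jdbcUrl( "localhost", 9088 ) ) ) {
            System.out.println( "Connected to: " + connection.getMetaData().getDatabaseProductName() );
        }
        catch (SQLException e) {
            System.out.println( "Container not reachable: " + e.getMessage() );
        }
    }
}
```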

When I’m done, I stop the Docker container with the following command:

docker stop iif_innovator_c

As simple as that!

Bean Validation and the Jigsaw Liaison

Posted by Gunnar Morling    |    Tagged as Discussions

Unless you’ve been living under a rock for the last months and years, you’ve probably heard about the efforts for adding a module system to the Java platform, code-named "Project Jigsaw".

Defining a module system and modularizing a huge system like the JDK is by no means a trivial task, so it’s no surprise that the advent of Jigsaw has been delayed several times. But I think by now it’s a rather safe bet to expect Jigsaw to be released as part of JDK 9 eventually (the exact release date remains to be defined), especially since it became part of the early access builds a while ago.

This means that if you are an author of a library or framework, you should grab the latest JDK preview build and make sure your lib can be used a) on Java 9 and b) within modularized applications using Jigsaw.

The latter is what we are going to discuss in more detail in the following, taking Bean Validation and its reference implementation, Hibernate Validator as an example. We’ll see what is needed to convert them into Jigsaw modules and use them in a modularized environment.

Now one might ask why having a module system and providing libraries as modules on top of such a system is a good thing. There are many facets to that, but I think a good answer is that modularization is a great tool for building software systems from encapsulated, loosely-coupled and re-usable components with clearly defined interfaces. It makes API design a very conscious decision and, on the other hand, gives library authors the freedom to change internal implementation aspects of their module without risking compatibility issues with clients.

If you are not yet familiar with Jigsaw at all, it’s recommended to take a look at the project home page. It contains links to many useful resources, especially check out "The State of the Module System" which gives a great overview. I also found this two-part introduction very helpful.

Getting started

In order to follow our little experiment of "Jigsaw-ifying" Bean Validation, you should be using a Bash-compatible shell and be able to run commands such as wget. On systems lacking support for Bash by default, Cygwin can be used. You also need git to download some source code from GitHub.

Let’s get started by downloading and installing the latest JDK 9 early access build (build 122 has been used when writing this post). Then run java -version to confirm that JDK 9 is enabled. You should see an output like this:

java version "9-ea"
Java(TM) SE Runtime Environment (build 9-ea+122)
Java HotSpot(TM) 64-Bit Server VM (build 9-ea+122, mixed mode)

After that, create a base directory for our experiments:

mkdir beanvalidation-with-jigsaw

Change into that directory and create some more sub-directories for storing the required modules and 3rd-party libraries:

cd beanvalidation-with-jigsaw

mkdir sources
mkdir modules
mkdir automatic-modules
mkdir tools

As tooling support for Java 9 / Jigsaw still is rather limited at this point, we are going to use plain javac and java commands to compile and test the code. Although that’s not as bad as it sounds and it’s indeed a nice exercise to learn about the existing and new compiler options, I admit I’m looking forward to the point where the known build tools such as Maven will fully support Jigsaw and allow compiling and testing modularized source code. But for now, the plain CLI tools will do the trick :)

Download the source code for Bean Validation and Hibernate Validator from GitHub:

git clone sources/beanvalidation-api
git clone sources/hibernate-validator

As we cannot leverage Maven’s dependency management, we fetch the dependencies required by Hibernate Validator via wget, storing them in the automatic-modules (dependencies) and tools (the JBoss Logging annotation processor needed for generating logger implementations) directory, respectively:

wget -P automatic-modules
wget -P automatic-modules
wget -P automatic-modules
wget -P automatic-modules
wget -P automatic-modules
wget -P automatic-modules
wget -P automatic-modules
wget -P automatic-modules

wget -P tools
wget -P tools
wget -P tools

Automatic modules are a means for Jigsaw to work with libraries which have not yet been modularized themselves in a modularized environment. Essentially, an automatic module is a module which exports all its packages and reads all other named modules.

Its module name is derived from the JAR file name, applying some rules for splitting artifact name and version and replacing hyphens with dots. So e.g. jboss-logging-annotations-2.0.1.Final.jar will have the automatic module name jboss.logging.annotations.
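As a rough sketch, that derivation rule can be expressed in a few lines of Java (a simplification of the actual algorithm, which handles a few more cases, but good enough for the JARs used here):

```java
public class AutomaticModuleNames {

    // Approximation of the JDK's rule: drop the ".jar" extension, cut the
    // version off at the first hyphen that is followed by a digit, then
    // turn the remaining hyphens into dots.
    static String deriveModuleName(String jarFileName) {
        String name = jarFileName.replaceFirst( "\\.jar$", "" );
        name = name.replaceFirst( "-\\d.*", "" );
        return name.replace( '-', '.' );
    }

    public static void main(String... args) {
        System.out.println( deriveModuleName( "jboss-logging-annotations-2.0.1.Final.jar" ) );
    }
}
```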

Creating modules for Bean Validation API and Hibernate Validator

Currently, the Bean Validation API and Hibernate Validator are in a state where they can be compiled out of the box using Java 9. But what’s still missing are the required module descriptors which describe a module’s name, its public API, its dependencies to other modules and some other things.

Module descriptors are Java files named module-info.java which live in the root of a given module. Create the descriptor for the Bean Validation API with the following contents:

module javax.validation {(1)
    exports javax.validation;(2)
    exports javax.validation.bootstrap;
    exports javax.validation.constraints;
    exports javax.validation.constraintvalidation;
    exports javax.validation.executable;
    exports javax.validation.groups;
    exports javax.validation.metadata;
    exports javax.validation.spi;

    uses javax.validation.spi.ValidationProvider;(3)
}
1 Module name
2 All the packages the module exports (as this is an API module, all contained packages are exported)
3 The usage of the ValidationProvider service


Services have been present in Java for a long time. Originally added as an internal component in the JDK, the service loader mechanism became an official part of the platform as of Java 6.

Since then it has seen wide adoption for building extensible applications from loosely coupled components. With its help, service consumers can solely be implemented against a well-defined service contract, without knowing upfront about a specific service provider and its implementation. Jigsaw embraces the existing service concept and makes services first-class citizens of the modularized world.

Luckily, Bean Validation has been using the service mechanism for locating providers (such as Hibernate Validator) from the get go, so things play out nicely with Jigsaw. As we’ll see in a minute, Hibernate Validator provides an implementation of the ValidationProvider service, allowing the user to bootstrap it without depending on this specific implementation.
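If the service loader is new to you, here is the pattern in its plainest form, with a made-up Greeter contract standing in for ValidationProvider. A real provider would be registered via a META-INF/services file (or, under Jigsaw, a provides clause); since none is registered in this sketch, the loader simply yields no implementations:

```java
import java.util.ServiceLoader;

// A hypothetical service contract; Bean Validation's equivalent is
// javax.validation.spi.ValidationProvider.
interface Greeter {
    String greet(String name);
}

public class GreeterConsumer {

    // The consumer only knows the contract; implementations are discovered
    // at runtime. With no provider registered, the loader yields nothing.
    static int countProviders() {
        int count = 0;
        for ( Greeter greeter : ServiceLoader.load( Greeter.class ) ) {
            count++;
        }
        return count;
    }

    public static void main(String... args) {
        System.out.println( "Greeter providers found: " + countProviders() );
    }
}
```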

But for now let’s compile the Bean Validation module:

export BASE=`pwd`
cd sources/beanvalidation-api
javac -d $BASE/modules/javax.validation $(find src/main/java -name "*.java")
cd $BASE

After compilation, the built module can be found under modules/javax.validation. Note that modules usually will be packaged and redistributed as JAR files, but to keep things simple let’s just work with class directory structures here.

Things get a bit more interesting when it comes to Hibernate Validator. Its module descriptor should look like this:

module org.hibernate.validator.engine {(1)
    exports org.hibernate.validator;(2)
    exports org.hibernate.validator.cfg;
    exports org.hibernate.validator.cfg.context;
    exports org.hibernate.validator.cfg.defs;
    exports org.hibernate.validator.constraints;
    exports org.hibernate.validator.constraintvalidation;
    exports org.hibernate.validator.constraintvalidators;
    exports org.hibernate.validator.engine;
    exports org.hibernate.validator.messageinterpolation;
    exports org.hibernate.validator.parameternameprovider;
    exports org.hibernate.validator.path;
    exports org.hibernate.validator.resourceloading;
    exports org.hibernate.validator.spi.cfg;
    exports org.hibernate.validator.spi.resourceloading;
    exports org.hibernate.validator.spi.time;
    exports org.hibernate.validator.spi.valuehandling;
    exports org.hibernate.validator.valuehandling;

    exports org.hibernate.validator.internal.util.logging to jboss.logging;(3)
    exports org.hibernate.validator.internal.xml to java.xml.bind;

    requires javax.validation;(4)
    requires joda.time;
    requires javax.el.api;
    requires jsoup;
    requires jboss.logging.annotations;
    requires jboss.logging;
    requires classmate;
    requires paranamer;
    requires hibernate.jpa;
    requires java.xml.bind;
    requires java.xml;
    requires java.scripting;
    requires javafx.base;

    provides javax.validation.spi.ValidationProvider with
        org.hibernate.validator.HibernateValidator;(5)

    uses javax.validation.ConstraintValidator;(6)
}
1 The module name
2 All the packages the module exports; Hibernate Validator always had a very well defined public API, with all the code parts not meant for public usage living in an internal package. Naturally, only the non-internal parts are exported. Things will be more complex when modularizing an existing component without such clearly defined public API. Likely you’ll need to move some classes around first, untangling public API and internal implementation parts.
3 Two noteworthy exceptions are o.h.v.internal.util.logging and o.h.v.internal.xml which are exported via so-called "qualified exports". This means that only the jboss.logging module may access the logging package and only java.xml.bind may access the XML package. This is needed as these modules require reflective access to the logging and XML classes, respectively. Using qualified exports, this exposure of internal classes can be limited to the smallest degree possible.
4 All the modules which this module requires. These are the javax.validation module we just built, all the automatic modules we downloaded before and some modules coming with the JDK itself (java.xml.bind, javafx.base etc).
Some of these dependencies might be considered optional at runtime, e.g. Joda Time would only be needed at runtime when actually validating Joda Time types with @Past or @Future. Unfortunately - and in contrast to OSGi or JBoss Modules - Jigsaw doesn’t support the notion of optional module requirements, meaning that all module requirements must be satisfied at compile time as well as runtime. That’s a pity, as it prevents a common pattern for libraries which expose certain functionality depending on what dependencies/classes are available at runtime or not.
The "right answer" with Jigsaw would be to extract these optional features into their own modules (e.g. hibernate.validator.joda.time, hibernate.validator.jsoup etc.) but this comes at the price of making things more complex for users which then need to deal with all these modules.
5 The module provides an implementation of the ValidationProvider service
6 The module uses the ConstraintValidator service, see below

With the module descriptor in place, we can compile the Jigsaw-enabled Hibernate Validator module:

cd sources/hibernate-validator/engine
mkdir -p target/generated-sources/jaxb

xjc -enableIntrospection -p org.hibernate.validator.internal.xml \(1)
    -extension \
    -target 2.1 \
    -d target/generated-sources/jaxb \
    src/main/xsd/validation-configuration-1.1.xsd src/main/xsd/validation-mapping-1.1.xsd \
    -b src/main/xjb/binding-customization.xjb

javac -addmods java.xml.bind,java.annotations.common \(2)
    -g \
    -modulepath $BASE/modules:$BASE/automatic-modules \
    -processorpath $BASE/tools/jboss-logging-processor-2.0.1.Final.jar:$BASE/tools/jdeparser-2.0.0.Final.jar:$BASE/tools/jsr250-api-1.0.jar:$BASE/automatic-modules/jboss-logging-annotations-2.0.1.Final.jar:$BASE/automatic-modules/jboss-logging-3.3.0.Final.jar \
    -d $BASE/modules/org.hibernate.validator.engine \
    $(find src/main/java -name "*.java") $(find target/generated-sources/jaxb -name "*.java")

cp -r src/main/resources/* $BASE/modules/org.hibernate.validator.engine;(3)

cp -r src/main/xsd/* $BASE/modules/org.hibernate.validator.engine/META-INF;(4)
cd $BASE
1 The xjc utility is used to create some JAXB types from the XML constraint descriptor schemas
2 Compile the source code via javac
3 Copy error message resource bundle into the module directory
4 Copy XML schema files into the module directory

Note how the module path used for compilation refers to the modules directory (containing the javax.validation module) and the automatic-modules directory (containing all the dependencies such as Joda Time etc.).

The resulting module is located under modules/org.hibernate.validator.engine.

Giving it a test ride

Having converted Bean Validation API and Hibernate Validator into proper Jigsaw modules, it’s about time to give these modules a test ride. Create a new compilation unit for that:

mkdir -p sources/com.example.acme/src/main/java/com/example/acme

Within that directory structure, create a very simple domain class and a class with a main method for validating it:

package com.example.acme;

import java.util.List;

import javax.validation.constraints.Min;

public class Car {

    @Min(1)
    public int seatCount;

    public List<String> passengers;

    public Car(int seatCount, List<String> passengers) {
        this.seatCount = seatCount;
        this.passengers = passengers;
    }
}

package com.example.acme;

import java.util.Collections;
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

public class ValidationTest {

    public static void main(String... args) {
        Validator validator = Validation.buildDefaultValidatorFactory()
            .getValidator();

        Set<ConstraintViolation<Car>> violations = validator.validate( new Car( 0, Collections.emptyList() ) );

        System.out.println( "Validation error: " + violations.iterator().next().getMessage() );
    }
}

This obtains a Validator object via Validation#buildDefaultValidatorFactory() (which internally uses the service mechanism described above) and performs a simple validation of a Car object.

Of course we need a module-info.java, too:

module com.example.acme {
    exports com.example.acme;

    requires javax.validation;
}

That should look familiar by now: we just export the single package (so Hibernate Validator can access the state of the Car object) and depend on the Bean Validation API module.

Also compilation of this module isn’t much news:

cd sources/com.example.acme
javac \
    -g \
    -modulepath $BASE/modules:$BASE/automatic-modules \
    -d $BASE/modules/com.example.acme $(find src/main/java -name "*.java")
cd $BASE

And with that, we finally can run a first test of Bean Validation under Jigsaw:

java \
    -modulepath modules:automatic-modules \
    -m com.example.acme/com.example.acme.ValidationTest

Similar to javac, there is a new modulepath option for the java command, pointing to one or more directories with Jigsaw modules. The -m switch specifies the main class to run by giving its module name and fully qualified class name.

Mh, that was not really successful:

HV000149: An exception occurred during message interpolation
Caused by: java.lang.UnsupportedOperationException: ResourceBundle.Control not supported in named modules
    at java.util.ResourceBundle.checkNamedModule(java.base@9-ea/
    at java.util.ResourceBundle.getBundle(java.base@9-ea/
    at org.hibernate.validator.resourceloading.PlatformResourceBundleLocator.loadBundle(org.hibernate.validator.engine/

What’s that about? Hibernate Validator is using the Control class in order to merge the contents (error messages) of several resource bundles with the same name found on the classpath. This is not supported in the modularized environment any longer, hence the exception above is raised. Eventually, Hibernate Validator should handle this situation automatically (this is tracked under HV-1073).
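Note that it is the explicit ResourceBundle.Control argument which the named-module check rejects; code running from the classpath (i.e. the unnamed module) may still pass one. A small illustration of the call shape (NoSuchBundle is a made-up name, so on the classpath the lookup merely ends in a MissingResourceException rather than the UnsupportedOperationException above):

```java
import java.util.Locale;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

public class ControlDemo {

    // getBundle with an explicit Control is what trips the named-module
    // check in JDK 9; from the unnamed module it is still permitted, so
    // here the lookup simply fails because the bundle does not exist.
    static String lookup() {
        ResourceBundle.Control control =
                ResourceBundle.Control.getControl( ResourceBundle.Control.FORMAT_DEFAULT );
        try {
            ResourceBundle.getBundle( "NoSuchBundle", Locale.getDefault(), control );
            return "found";
        }
        catch (MissingResourceException e) {
            return "missing";
        }
    }

    public static void main(String... args) {
        System.out.println( "Lookup result: " + lookup() );
    }
}
```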

For now let’s hack around it and disable the troublesome bundle aggregation in Hibernate Validator’s AbstractMessageInterpolator. To do so, change true to false in the constructor invocation on line 165:

new PlatformResourceBundleLocator(

Re-compile the Hibernate Validator module. After running the test again, you should now see the following output on the console:

Validation error: must be greater than or equal to 1

Tada, the first successful bean validation in the Jigsaw environment :)

Let me quickly recap what has happened so far:

  • We added a module descriptor to the Bean Validation API, making it a proper Jigsaw module

  • We added a module descriptor to Hibernate Validator; this Bean Validation provider will be discovered by the API module using the service mechanism

  • We created a test module with a main method, which uses the Bean Validation API to perform a simple object validation

(Not) overstepping boundaries

Now let’s be nasty and see whether the module system actually is doing its job as expected. For that, add a module requirement for Hibernate Validator to the module descriptor (so it’s considered for compilation at all) and cast the validator to the internal implementation type in ValidationTest:

module com.example.acme {
    exports com.example.acme;

    requires javax.validation;
    requires org.hibernate.validator.engine;
}

package com.example.acme;

import javax.validation.Validation;

import org.hibernate.validator.internal.engine.ValidatorImpl;

public class ValidationTest {

    public static void main(String... args) throws Exception {
        ValidatorImpl validator = (ValidatorImpl) Validation.buildDefaultValidatorFactory()
            .getValidator();
    }
}

Running javac again, you should now get a compilation error, complaining about the type not being found. So Jigsaw prevents accesses to non-exported types. If you like, try referencing anything from the packages exported by Hibernate Validator, which will work.

That’s a great advantage over the traditional flat classpath, where you might have organized your code base into public and internal parts but then had to hope for users of your library not to step across the line and - accidentally or intentionally - access internal classes.

Custom constraints

With the modules basically working, it’s time to get a bit more advanced and create a custom Bean Validation constraint. This one should make sure that a car does not have more passengers than seats available.

For that we need an annotation type:

package com.example.acme;

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.Payload;

@Documented
@Constraint(validatedBy = { PassengersDontExceedSeatCountValidator.class })
@Target({ TYPE })
@Retention(RUNTIME)
public @interface PassengersDontExceedSeatCount {

    String message() default "{com.example.acme.PassengersDontExceedSeatCount.message}";
    Class<?>[] groups() default { };
    Class<? extends Payload>[] payload() default { };
}

And also a constraint validator implementation:

package com.example.acme;

import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

import com.example.acme.PassengersDontExceedSeatCount;

public class PassengersDontExceedSeatCountValidator implements
        ConstraintValidator<PassengersDontExceedSeatCount, Car> {

    public void initialize(PassengersDontExceedSeatCount constraintAnnotation) {}

    public boolean isValid(Car car, ConstraintValidatorContext constraintValidatorContext) {
        if ( car == null ) {
            return true;
        }

        return car.passengers == null || car.passengers.size() <= car.seatCount;
    }
}

A resource bundle with the error message for the constraint is needed, too:

com.example.acme.PassengersDontExceedSeatCount.message=Passenger count must not exceed seat count

Now we can put the new constraint type to the Car class and finally validate it:

@PassengersDontExceedSeatCount
public class Car {
    // ...
}

package com.example.acme;

import java.util.Arrays;
import java.util.Set;

import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ConstraintViolation;

public class ValidationTest {

    public static void main(String... args) throws Exception {
        Validator validator = Validation.buildDefaultValidatorFactory()
            .getValidator();

        Set<ConstraintViolation<Car>> violations = validator.validate(
            new Car( 2, Arrays.asList( "Anna", "Bob", "Alice" ) )
        );

        System.out.println( "Validation error: " + violations.iterator().next().getMessage() );
    }
}

Compile the example module again; don’t forget to copy the resource bundle to the module directory:

cd sources/com.example.acme
javac \
    -g \
    -modulepath $BASE/modules:$BASE/automatic-modules \
    -d $BASE/modules/com.example.acme $(find src/main/java -name "*.java")
cp -r src/main/resources/* $BASE/modules/com.example.acme
cd $BASE

Run it as before, and you should get a nice error message. But what’s that:

Validation error: {com.example.acme.PassengersDontExceedSeatCount.message}

It seems the error message wasn’t resolved properly, so the raw, un-interpolated message key from the annotation definition has been returned. Now why is this?

Bean Validation error messages are loaded through java.util.ResourceBundle, and due to the strong encapsulation of the modularized environment the Hibernate Validator module cannot "see" the resource bundle provided in the example module.

The updated JavaDocs of ResourceBundle make it clear that only bundles located in the same module as the caller of ResourceBundle#getBundle() can be accessed. In order to access resource bundles from other modules, the service loader mechanism is to be used as of Java 9; a new SPI interface, ResourceBundleProvider, has been added to the JDK for that purpose.
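One way to see the caller-module rule succeed (here simply on the classpath) is a class-based bundle sitting next to the caller; Messages is a made-up stand-in for a ValidationMessages.properties file:

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

// A class-based resource bundle, equivalent to a properties file living
// next to the caller.
class Messages extends ListResourceBundle {
    @Override
    protected Object[][] getContents() {
        return new Object[][] {
            { "com.example.acme.PassengersDontExceedSeatCount.message",
              "Passenger count must not exceed seat count" }
        };
    }
}

public class BundleLookup {

    // The lookup succeeds because Messages is visible to this caller; in a
    // named module, a bundle contributed by *another* module would have to
    // come through the ResourceBundleProvider SPI instead.
    static String message() {
        ResourceBundle bundle = ResourceBundle.getBundle( "Messages", Locale.getDefault() );
        return bundle.getString( "com.example.acme.PassengersDontExceedSeatCount.message" );
    }

    public static void main(String... args) {
        System.out.println( message() );
    }
}
```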

Ultimately, Bean Validation should take advantage of that mechanism, but how can we make things work out for now? As it turns out, Hibernate Validator has its own extension point for customizing the retrieval of resource bundles, ResourceBundleLocator.

This comes in very handy now: we just need to create an implementation of that SPI in the example module:

package com.example.acme.internal;

import java.util.Locale;
import java.util.ResourceBundle;

import org.hibernate.validator.spi.resourceloading.ResourceBundleLocator;

public class MyResourceBundleLocator implements ResourceBundleLocator {

    public ResourceBundle getResourceBundle(Locale locale) {
        return ResourceBundle.getBundle( "ValidationMessages", locale );
    }
}

When bootstrapping the validator factory, configure a message interpolator using that bundle locator like this:

import org.hibernate.validator.messageinterpolation.ResourceBundleMessageInterpolator;
import com.example.acme.internal.MyResourceBundleLocator;


Validator validator = Validation.byDefaultProvider()
    .configure()
    .messageInterpolator( new ResourceBundleMessageInterpolator( new MyResourceBundleLocator() ) )
    .buildValidatorFactory()
    .getValidator();

As the call to ResourceBundle#getBundle() now originates from the same module that declares the ValidationMessages bundle, the bundle can be found and the error message will be interpolated correctly. Success!

Keeping your privacy

With the custom constraint in place, let’s think about encapsulation a bit more. Wouldn’t it be nice if the constraint validator implementation didn’t live in the exported package but rather somewhere under internal? After all, that class is an implementation detail and should not be referenced directly by users of the @PassengersDontExceedSeatCount constraint.

Another feature of Hibernate Validator is helpful here: service-loader based discovery of constraint validators.

This allows us to remove the reference from the constraint annotation to its validator (just use an empty @Constraint(validatedBy = { }) annotation) and relocate the validator implementation to the internal package:

mv sources/com.example.acme/src/main/java/com/example/acme/ \

Also adapt the package declaration in the source file accordingly and add an import for the Car type. We then need to declare the constraint validator as service provider in the module descriptor:

module com.example.acme {

    provides javax.validation.ConstraintValidator
        with com.example.acme.internal.PassengersDontExceedSeatCountValidator;
}

Compile and run the example module again. You should get an error like this:

java.lang.IllegalAccessException: class org.hibernate.validator.internal.util.privilegedactions.NewInstance (in module org.hibernate.validator.engine) cannot access class com.example.acme.internal.PassengersDontExceedSeatCountValidator (in module com.example.acme) because module com.example.acme does not export com.example.acme.internal to module org.hibernate.validator.engine

This originates from the fact that Hibernate Validator is using the service loader mechanism only for detecting validator types and then instantiates them for each specific constraint usage. As the internal package has not been exported, this instantiation is bound to fail. You have two options now:

  • Use a qualified export in the module descriptor to expose that package to the Hibernate Validator module

  • Use the new -XaddExports option of the java command to dynamically add this export when running the module

Following the latter approach, the java invocation would look like this:

java \
    -modulepath modules:automatic-modules \
    -XaddExports:com.example.acme/com.example.acme.internal=org.hibernate.validator.engine \
    -m com.example.acme/com.example.acme.ValidationTest

While this approach works, it can become a bit tedious when other libraries that need to perform reflective operations on non-exported types enter the picture. JPA providers such as Hibernate ORM and dependency injection frameworks are just two examples.

Luckily, the OpenJDK team is aware of that issue and there is an entry for it in the requirements list for the Java Module System: ReflectiveAccessToNonExportedTypes. I sincerely hope that this one gets addressed before Java 9 gets finalized.

XML configuration

As the last part of our journey of "Jigsaw-ifying" Bean Validation, let’s take a look at XML-based configuration of constraints. This is a useful alternative if you cannot add constraint metadata to a model via annotations or e.g. want to override existing annotation-based constraints externally.

The Bean Validation spec defines a validation mapping file for this, which in turn can point to one or more constraint mapping XML files. Create the following files in order to override the @Min constraint of the Car class:

<?xml version="1.0" encoding="UTF-8"?>
<validation-config
    xsi:schemaLocation=" validation-configuration-1.1.xsd"
    xmlns="" version="1.1">

    <constraint-mapping>META-INF/constraints-car.xml</constraint-mapping>
</validation-config>

<?xml version="1.0" encoding="UTF-8"?>
<constraint-mappings
    xsi:schemaLocation=" validation-mapping-1.1.xsd"
    xmlns="" version="1.1">

    <bean class="com.example.acme.Car" ignore-annotations="true">
        <field name="seatCount">
            <constraint annotation="javax.validation.constraints.Min">
                <element name="value">2</element>
            </constraint>
        </field>
    </bean>
</constraint-mappings>

Traditionally, Bean Validation will look for META-INF/validation.xml on the classpath and resolve any linked constraint mapping files relative to that. If you’ve followed this article that far, you won’t be surprised that this is not going to work with Jigsaw. There is no notion of a "flat classpath" any longer, and thus a validation provider cannot see XML files in other modules, akin to the case of error message bundles discussed above.

More specifically, the method ClassLoader#getResourceAsStream(), which is used by Hibernate Validator to open mapping files, won’t work for named modules as of JDK 9. That change will be a tough nut to crack for many projects when migrating to Java 9, as it renders strategies for resource loading known from existing modular environments such as OSGi inoperable. E.g. Hibernate Validator allows passing in a classloader for loading user-provided resources. In OSGi this can be used to pass in what’s called the bundle loader, allowing Hibernate Validator to access constraint mapping files and other things provided by the user. Unfortunately this pattern cannot be employed with Jigsaw, as getResourceAsStream() "does not find resources in named modules".

But Bean Validation has a way out for this issue, too, as it allows passing in constraint mappings as InputStreams opened by the bootstrapping code. Class#getResourceAsStream() continues to work for resources from the same module, so things will work out as expected when bootstrapping the validator factory like this (don’t forget to close the stream afterwards):

InputStream constraintMapping = ValidationTest.class.getResourceAsStream( "/META-INF/constraints-car.xml" );

Validator validator = Validation.byDefaultProvider()
    .configure()
    .addMapping( constraintMapping )
    .buildValidatorFactory()
    .getValidator();

That way the constraint mapping is opened by code from the same module and thus can be accessed and passed to the Bean Validation provider.

In the longer term, APIs such as Bean Validation should foresee some kind of SPI contract for conveying all required configuration information, as suggested by Jigsaw spec lead Mark Reinhold. The user would then expose an instance of that contract via a service implementation.

Wrapping it up

With that, we conclude our experiment of making Bean Validation ready for the Jigsaw module system. Overall, things work out pretty well, and without too much effort Bean Validation and Hibernate Validator can be made first-class citizens in a fully-modularized world.

Some observations from the experiment:

  • The lack of optional module requirements poses a usability issue in my opinion, as it prevents libraries from exposing additional functionality based on what classes and modules are present at runtime in a given environment. This means that users of the library either need to provide other modules they don’t actually need or - if the library has been cut into several modules, one per optional dependency - need to add several modules now where they could have worked with a single one before.

  • The need to explicitly expose internal packages for reflective access by libraries such as Hibernate Validator, but also JPA providers or DI containers, can become tedious. I hope there will be a way to enable such access in a more global way, e.g. by whitelisting "trustworthy modules" such as the aforementioned libraries.

  • The changed behaviors around loading of resources such as configuration files or resource bundles provided by the user of a library will potentially affect many applications when migrating to Java 9. The established pattern of accepting an external classloader for loading user resources will not work anymore, so libraries need to adapt either by providing dedicated extension points (akin to addMapping(InputStream) in Bean Validation) or by migrating to service based approaches as envisioned by the makers of Jigsaw.

  • Tools such as Maven (including plug-ins, e.g. Surefire for running tests) or Gradle, but also IDEs, still need to catch up with Jigsaw. Using plain javac and java can be fun for a while, but you quickly wish for more powerful tools :)

  • Converting Hibernate Validator into a Jigsaw module is relatively easy, as we luckily were very careful about a proper API/implementation split from the beginning. Modularizing existing libraries or applications without such clear distinction will be a much tougher exercise, as it may require lots of types to be moved around and unwanted (package) dependencies to be broken up. There are some tools that can help with that, but that might be a topic for a future blog post by itself.

One thing is for sure: interesting times lie ahead! While migration might be painful here and there, I think it’s overdue that Java gets its proper module system and I look forward to seeing it as an integrated part of the platform very much.

Got feedback from following the steps described above or from your own experiments with Jigsaw? Let’s all together learn from our different experiences and insights, so please share any thoughts on the topic below.

Many thanks to Sander Mak, Sanne Grinovero and Guillaume Smet for reviewing this post!


When you’re using JDBC or if you are generating SQL statements by hand, you always know what statements are sent to the database server. Although there are situations when a native query is the most obvious solution to a given business use case, most statements are simple enough to be generated automatically. That’s exactly what JPA and Hibernate do, and the application developer can focus on entity state transitions instead.

Nevertheless, the application developer must always assert that Hibernate generates the expected statements, as well as the number of statements being generated (to avoid N+1 query issues).

Proxying the underlying JDBC Driver or DataSource

In production, it’s very common to proxy the underlying Connection providing mechanism so that the application benefits from connection pooling, or for monitoring connection pool usage. For this purpose, the underlying JDBC Driver or DataSource can be proxied using tools such as P6Spy or datasource-proxy. In fact, this is also a very convenient way of logging JDBC statements along with their bind parameters.
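The interception idea behind such tools can be sketched with nothing but the JDK: a dynamic proxy around the Connection records every SQL string passed to prepareStatement before delegating the call. This is only a minimal illustration (the class name is made up), not a substitute for the pooling and monitoring features of the tools above:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper, not part of any library: records every SQL string
// passed to Connection#prepareStatement before delegating to the real Connection.
class SqlLoggingConnectionWrapper {

    private final List<String> loggedSql = new ArrayList<>();

    public Connection wrap(Connection target) {
        InvocationHandler handler = ( proxy, method, args ) -> {
            if ( "prepareStatement".equals( method.getName() )
                    && args != null && args[0] instanceof String ) {
                loggedSql.add( (String) args[0] );
            }
            // Delegate every call to the real Connection
            return method.invoke( target, args );
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                handler
        );
    }

    public List<String> getLoggedSql() {
        return loggedSql;
    }
}
```

Every SQL statement prepared through the wrapped Connection is then available for logging or assertions.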

While for many applications it’s not an issue to add yet another dependency, when you are developing an open source framework, you strive to minimize the number of dependencies your project needs to depend on. Luckily, for Hibernate, we don’t even need an external dependency for intercepting JDBC statements, and this post is going to show you how easily you can tackle this requirement.


For many use cases, the StatementInspector is the only thing you need to capture all SQL statements that are executed by Hibernate. The StatementInspector must be provided during SessionFactory bootstrapping as follows:

public class SQLStatementInterceptor {

    private final LinkedList<String> sqlQueries = new LinkedList<>();

    public SQLStatementInterceptor(SessionFactoryBuilder sessionFactoryBuilder) {
        sessionFactoryBuilder.applyStatementInspector(
            (StatementInspector) sql -> {
                sqlQueries.add( sql );
                return sql;
            } );
    }

    public LinkedList<String> getSqlQueries() {
        return sqlQueries;
    }
}
With this utility, we can easily verify the Oracle follow-on locking mechanism, which is triggered by the FOR UPDATE clause restrictions imposed by the database engine:


List<Product> products = session.createQuery(
    "select p from Product p", Product.class )
.setLockOptions( new LockOptions( LockMode.PESSIMISTIC_WRITE ) )
.setFirstResult( 40 )
.setMaxResults( 10 )
.getResultList();
assertEquals( 10, products.size() );
assertEquals( 11, sqlStatementInterceptor.getSqlQueries().size() );

So far, so good. But as simple as the StatementInspector may be, it does not mix well with JDBC batching. StatementInspector intercepts the prepare phase, whereas for batching we need to intercept the addBatch and executeBatch method calls.

Even without native support for such a feature, we can easily design a custom ConnectionProvider that can intercept all PreparedStatement method calls.

First, we start with the ConnectionProviderDelegate, which is capable of substituting any other ConnectionProvider that would otherwise be picked up by Hibernate (e.g. DatasourceConnectionProviderImpl, DriverManagerConnectionProviderImpl, HikariCPConnectionProvider) for the current configuration properties.

public class ConnectionProviderDelegate implements
        ConnectionProvider,
        Configurable,
        ServiceRegistryAwareService {

    private ServiceRegistryImplementor serviceRegistry;

    private ConnectionProvider connectionProvider;

    @Override
    public void injectServices(ServiceRegistryImplementor serviceRegistry) {
        this.serviceRegistry = serviceRegistry;
    }

    @Override
    public void configure(Map configurationValues) {
        Map<String, Object> settings = new HashMap<>( configurationValues );
        settings.remove( AvailableSettings.CONNECTION_PROVIDER );
        connectionProvider = ConnectionProviderInitiator.INSTANCE.initiateService(
                settings,
                serviceRegistry
        );
        if ( connectionProvider instanceof Configurable ) {
            Configurable configurableConnectionProvider = (Configurable) connectionProvider;
            configurableConnectionProvider.configure( settings );
        }
    }

    @Override
    public Connection getConnection() throws SQLException {
        return connectionProvider.getConnection();
    }

    @Override
    public void closeConnection(Connection conn) throws SQLException {
        connectionProvider.closeConnection( conn );
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return connectionProvider.supportsAggressiveRelease();
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return connectionProvider.isUnwrappableAs( unwrapType );
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return connectionProvider.unwrap( unwrapType );
    }
}
With the ConnectionProviderDelegate in place, we can now implement the PreparedStatementSpyConnectionProvider which, using Mockito, returns a Connection spy instead of an actual JDBC Driver Connection object:

public class PreparedStatementSpyConnectionProvider
        extends ConnectionProviderDelegate {

    private final Map<PreparedStatement, String> preparedStatementMap = new LinkedHashMap<>();

    @Override
    public Connection getConnection() throws SQLException {
        Connection connection = super.getConnection();
        return spy( connection );
    }

    private Connection spy(Connection connection) {
        if ( new MockUtil().isMock( connection ) ) {
            return connection;
        }
        Connection connectionSpy = Mockito.spy( connection );
        try {
            doAnswer( invocation -> {
                PreparedStatement statement = (PreparedStatement) invocation.callRealMethod();
                PreparedStatement statementSpy = Mockito.spy( statement );
                String sql = (String) invocation.getArguments()[0];
                preparedStatementMap.put( statementSpy, sql );
                return statementSpy;
            } ).when( connectionSpy ).prepareStatement( anyString() );
        }
        catch ( SQLException e ) {
            throw new IllegalArgumentException( e );
        }
        return connectionSpy;
    }

    /**
     * Clears the recorded PreparedStatements and resets the associated Mocks.
     */
    public void clear() {
        preparedStatementMap.keySet().forEach( Mockito::reset );
        preparedStatementMap.clear();
    }

    /**
     * Get one and only one PreparedStatement associated to the given SQL statement.
     *
     * @param sql SQL statement.
     *
     * @return matching PreparedStatement.
     *
     * @throws IllegalArgumentException If there is no matching PreparedStatement or multiple instances, an exception is being thrown.
     */
    public PreparedStatement getPreparedStatement(String sql) {
        List<PreparedStatement> preparedStatements = getPreparedStatements( sql );
        if ( preparedStatements.isEmpty() ) {
            throw new IllegalArgumentException(
                    "There is no PreparedStatement for this SQL statement " + sql );
        }
        else if ( preparedStatements.size() > 1 ) {
            throw new IllegalArgumentException( "There are " + preparedStatements
                    .size() + " PreparedStatements for this SQL statement " + sql );
        }
        return preparedStatements.get( 0 );
    }

    /**
     * Get the PreparedStatements that are associated to the given SQL statement.
     *
     * @param sql SQL statement.
     *
     * @return list of recorded PreparedStatements matching the SQL statement.
     */
    public List<PreparedStatement> getPreparedStatements(String sql) {
        return preparedStatementMap.entrySet()
                .stream()
                .filter( entry -> entry.getValue().equals( sql ) )
                .map( Map.Entry::getKey )
                .collect( Collectors.toList() );
    }

    /**
     * Get the PreparedStatements that were executed since the last clear operation.
     *
     * @return list of recorded PreparedStatements.
     */
    public List<PreparedStatement> getPreparedStatements() {
        return new ArrayList<>( preparedStatementMap.keySet() );
    }
}
To use this custom provider, we just need to provide an instance via the hibernate.connection.provider_class configuration property:

private PreparedStatementSpyConnectionProvider connectionProvider =
    new PreparedStatementSpyConnectionProvider();

@Override
protected void addSettings(Map settings) {
    settings.put(
            AvailableSettings.CONNECTION_PROVIDER,
            connectionProvider
    );
}
Now, we can assert that the underlying PreparedStatement is batching statements according to our expectations:

Session session = sessionFactory().openSession();
session.setJdbcBatchSize( 3 );
session.beginTransaction();

try {
    for ( long i = 0; i < 5; i++ ) {
        Event event = new Event(); = id++; = "Event " + i;
        session.persist( event );
    }
    session.getTransaction().commit();
}
finally {
    session.close();
}
PreparedStatement preparedStatement = connectionProvider.getPreparedStatement(
    "insert into Event (name, id) values (?, ?)" );

verify(preparedStatement, times( 5 )).addBatch();
verify(preparedStatement, times( 2 )).executeBatch();

The PreparedStatement is not a mock but a real object spy, which can intercept method calls while also propagating them to the underlying actual JDBC Driver PreparedStatement object.

Although getting the PreparedStatement by its associated SQL String is useful for the aforementioned test case, we can also get all executed PreparedStatements like this:

List<PreparedStatement> preparedStatements = connectionProvider.getPreparedStatements();
assertEquals(1, preparedStatements.size());
preparedStatement = preparedStatements.get( 0 );

verify(preparedStatement, times( 5 )).addBatch();
verify(preparedStatement, times( 2 )).executeBatch();

Hibernate Community Newsletter 12/2016

Posted by Vlad Mihalcea    |       |    Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.


Hibernate ORM 5.2 release

Posted by Steve Ebersole    |       |    Tagged as Hibernate ORM Releases

The 5.2.0 release of Hibernate ORM has just been tagged and published.

Many of the changes in 5.2.0 have important ramifications in terms of both usage and extension. Be sure to read the 5.2 Migration Guide for details.

The complete list of changes can be found here. Below is a discussion of the major changes.

Java 8 baseline

5.2 moves to Java 8 as its baseline, both for JDK and JRE. This means:

  • The hibernate-java8 module has been removed; that functionality has been consolidated into hibernate-core.

  • Native support for Java 8 date/time types as Query parameters.

  • Support for streaming ( query results.

  • Support for java.util.Optional as return from methods that may return null.

  • Leveraging Java 8 "default methods" when introducing new methods to extension points.
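The last point is worth a tiny illustration: adding a method to an extension point as a Java 8 "default method" keeps existing implementations source- and binary-compatible. A minimal sketch (the interface is made up for the example, not an actual Hibernate contract):

```java
// Hypothetical extension point. The one-argument method is the original
// contract; the two-argument overload was "added later" as a default method,
// so implementations compiled against the old interface keep working.
interface StatementLogger {

    void log(String sql);

    default void log(String sql, long executionTimeMillis) {
        log( sql + " [" + executionTimeMillis + " ms]" );
    }
}
```

An implementation written against the one-argument method keeps compiling and automatically inherits the timed variant.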

Consolidating JPA support into hibernate-core

That effectively means that the hibernate-entitymanager module no longer exists. Its functionality has been consolidated into hibernate-core.

JCache support

Support for using any JCache-compliant cache implementation as a second-level caching provider. See HHH-10770.

Session-level batch size support

Support has been added for specifying a batch size for write operations per Session. See HHH-10431.

Getting it

For information on consuming the release via your favorite dependency-management-capable build tool, see

For those of you allergic to dependency-management-capable build tools, the release bundles can be obtained in ZIP or TGZ format from SourceForge.

Hibernate Community Newsletter 11/2016

Posted by Vlad Mihalcea    |       |    Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.


Meet Mark Paluch, a true open-source champion

Posted by Vlad Mihalcea    |       |    Tagged as Discussions Hibernate OGM

In this post, I’d like you to meet Mark Paluch, who, among other projects, is one of our Hibernate OGM project contributors.

  1. Hi, Mark. Would you like to introduce yourself and tell us what you are currently working on?

    I am Mark Paluch, and I am working for Pivotal Software as Spring Data Engineer. I am a member of the JSR 365 EG (CDI 2.0), project lead of the lettuce Redis driver, and I run a couple of other open source projects. I enjoy tinkering on Internet of Things projects in my spare time. Before I joined Pivotal, I worked since the early 2000’s as a freelancer in a variety of projects using Java SE/EE and web technologies. My focus lies now on Spring Data with Redis, Cassandra, and MongoDB in particular.

  2. You have contributed a lot to the Hibernate OGM Redis module. Can you please tell us a little bit about Redis?

    I was not the first one bringing up the idea of Redis support in Hibernate OGM. In fact, Seiya Kawashima did a pretty decent job with his pull-request but at some point, Hibernate OGM development and the PR diverged. I came across the pull request and picked it up from there.

    Redis is an in-memory data structure store, used as a database, cache, and message broker. It originated as a key-value store but evolved by supporting various data structures like lists, sets, hashes, and much more. Redis is blazing fast although it runs mostly single-threaded. Its performance originates in a concise implementation and in the fact that all operations are performed in-memory. This does not mean that Redis has no persistence. Redis is configured by default to store data on disk, and disk I/O is asynchronous. Through its versatile nature, Redis facilitates an enormous number of use cases such as caching, queues, remote locking, just storing data, and much more. An important fact to me is that I’d never use Redis for data I cannot recover, as wiping data from Redis is just too easy, but using it as a semi-persistent store is the perfect use.

  3. You are also the author of the Lettuce open source project. How does it compare to Hibernate OGM?

    Hibernate OGM and lettuce are projects with different aims. Lettuce is a driver/Java-binding for Redis. It gives Java developers access to the Redis API using synchronous, asynchronous and reactive API bindings. You can invoke the Redis API with lettuce directly and get the most out of Redis if you need it. Any JDBC driver is situated on a similar abstraction level as lettuce except for some specific features. lettuce does not require connection-pooling and dealing with broken connections as it allows users to benefit from auto-reconnection and thread-safe connections. Hibernate OGM Redis uses this infrastructure and provides its data mapping features on top of lettuce.

  4. What benefit do you think Hibernate OGM offers to application developers compared to using the NoSQL API directly?

    Each NoSQL data store has its own, very specific API. Native APIs require developers not only to get familiar with the data store traits but also with its API. The Redis API comes with over 150 commands, which translate to 650+ commands with sub-command permutations.

    Every Redis command is very specific and behaves on its own. The Redis command documentation provides detailed insight into commands, but users are required to spend a fair amount of their time getting along with the native API.

    Hibernate OGM applies elements from the JPA spec to NoSQL data stores and comes up with an API that Java EE/JPA developers are familiar with. Hibernate OGM lowers barriers to entry. Hibernate OGM comes with the purpose of mapping data into a NoSQL data store. Mapping simple JPA entities to the underlying data store works fine, but some concepts, like associations or transactions, do not map well to MongoDB and Redis. Users of Hibernate OGM need to be aware of the underlying persistence technology to get familiar with its concepts and strengths as well as with its limitations.

    I also see a great advantage in the uniform configuration mechanism of Hibernate OGM. Every individual datastore driver comes with its own configuration method. Hibernate OGM unifies these styles into a common approach. One item on my wish list for Hibernate OGM is JNDI/WildFly configuration support to achieve similar flexibility as is possible with JDBC data sources.

  5. Do you plan on supporting Hibernate OGM in Spring Data as well?

    Hibernate OGM and Spring Data both follow the idea of supporting NoSQL data stores. Hibernate OGM employs several features from NoSQL data stores to enhance its data mapping centered around JPA. JPA is an inherently relational API, which talks about concepts that are not necessarily transferable to the NoSQL world. Spring Data comes with modules for various popular data stores and takes a different approach: providing a consistent programming model for the supported stores without trying to force everything into a single abstracting API. Spring Data modules provide multiple levels of abstraction on top of the NoSQL data store APIs. Core concepts of NoSQL data stores are exposed through an API that commonly looks and feels like Spring endpoints. Hibernate OGM can already be used together with Spring Data JPA. A good use case is the Spring Data repositories abstraction, which provides a uniform interface to access data from various data stores, does not require users to write in a query language, and leverages store-specific features.

Thank you, Mark, for taking the time. It is a great honor to have you here. To reach Mark, you can follow him on Twitter.

After over 60 resolved tasks, we’re proud to release Hibernate Search version 5.6.0.Beta1.

The Elasticsearch integration made significant progress, and we believe it to be ready for wider usage.

Progress of the Elasticsearch integration

Improvements since the previous milestone:


significantly better performance, as it now uses bulk operations.

Calendar, Dates, numbers and mapping details

several corrections and improvements were made to produce a cleaner schema.

Cluster state

we now wait for a newly started Elasticsearch cluster to be "green" - or optionally "yellow" - before starting to use it.

WildFly modules

a critical bug was resolved, the modules should work fine now.

Many more

for a full list of all 63 improvements, see this JIRA query.

What is still missing?

Performance testing

we didn’t do much performance testing yet, so it’s probably not as efficient as it could be.

Relax the expected Elasticsearch version

it’s being tested with version 2.3.1 but we have plans to support a wider range of versions.

Explicit refresh requests

we plan to add methods to issue an index reader refresh request, as the changes pushed to Elasticsearch are not immediately visible by default.

Your Feedback!

we think it’s in pretty good shape; it would be great for more people to try it out and let us know what is missing and how it’s working for you.

Notable differences between using embedded Lucene vs Elasticsearch

When using the Lucene backend, unless you reconfigure Hibernate Search to use an async worker, the changes to the index are applied as soon as you commit a transaction, and any subsequent search will "see" the changes. On Elasticsearch the default is different: changes received by the cluster are only "visible" to searches after some seconds (1 by default).

You can reconfigure Hibernate Search to force a refresh of indexes after each write operation by using the configuration setting.

This setting defaults to false as that’s the recommended setting for optimal performance on Elasticsearch. You might want to set this to true to make it simpler to write unit tests, but you should take care to not rely on the synchronous behaviour for your production code.

Improvements for embedded Lucene users

While working on Elasticsearch, we also applied some performance improvements which apply to users of the traditional embedded Lucene backend.

Special thanks to Andrej Golovnin, who contributed several patches to reduce allocation of objects on the hot path and improve overall performance.

How to get this release

Everything you need is available on Hibernate Search’s web site.

Get it from Maven Central using the above coordinates.

Downloads from Sourceforge are available as well.


Feedback always welcome!

Please let us know of any problem or suggestion by creating an issue on JIRA, by sending an email to the developers’ mailing list, or by posting on the forums.

We also monitor Stack Overflow; when posting on SO please use the tag hibernate-search.

Meet Sergey Chernolyas

Posted by Vlad Mihalcea    |       |    Tagged as Discussions Hibernate OGM

In this post, I’d like you to meet Sergey Chernolyas who is one of our Hibernate OGM project contributors.

  1. Hi, Sergey. You are one of the people who contributed to the Hibernate OGM project. Can you please introduce yourself?

    Hi, Vlad! My name is Sergey Chernolyas. I am from Russia, and I am 38 years old. I have been working with Java technologies since 2000. During my career, I got four certificates on Java technologies from Oracle and got involved in many development and integration projects.

  2. Can you tell us what project are you currently working on and if it uses Hibernate OGM?

    Now, I am working on a new module for Hibernate OGM, which aims to integrate the OrientDB NoSQL database. With this module, OGM will support a total of 7 NoSQL databases. Although my work at my current job is not related to NoSQL solutions or Hibernate OGM, I am interested in this topic, and that’s why I pushed myself to learn Hibernate OGM and explore NoSQL databases.

  3. Can you tell us a little about OrientDB?

    OrientDB is a graph-oriented and document-oriented database, and it is built using Java technologies. Briefly, the main advantages of using OrientDB are:

    1. It can operate in several modes: as an in-memory database, through a network connection, or by storing data in a local file.

    2. It offers join-less entity associations.

    3. It supports stored procedures that may be written in Java, JavaScript and any other language implementing the JSR-223 specification (e.g. Groovy, JRuby, etc.).

    4. It has good performance and is Big Data-oriented.

      For more details about OrientDB, you can visit the official documentation. Recently, the OrientDB team released the 2.2 GA version, so it’s worth giving it a try.

  4. What is the main benefit of using Hibernate OGM for accessing OrientDB over using their native API?

    The main benefit of using Hibernate OGM over the native API is the standard way for application development. Also, Hibernate OGM hides many low-level operations for creating and managing database connections, or for executing queries.

    While implementing the first version of the OrientDB Hibernate OGM module, I was faced with some OrientDB issues that prevented me from integrating all the features that ought to be supported by any Hibernate OGM module. Luckily, the OrientDB team was helpful and supportive, and I hope that by the time I finish this integration, the OrientDB team will have fixed my previously reported issues.

Thank you, Sergey, for taking the time, and keep up the good work.

Hibernate OGM 5 is out!

Posted by Davide D'Alto    |       |    Tagged as Hibernate OGM Releases

Hibernate OGM 5.0.0.Final is finally here!

What’s new?

Compared to the 4.2.Final, this version includes:

There are also several bug fixes and you can find all the details in the changelog.

A nice Getting started guide is available on our website for people who want to start playing with it. A more in depth explanation of all the details around Hibernate OGM is in the reference documentation.

If you need to upgrade from a previous version, you can find help on the migration notes.

What’s coming next?

Now that Hibernate OGM 5 is out, we can focus on working on some new integrations like Neo4j remote and Hot Rod.

You can also have a look at the roadmap for up-to-date news about what’s coming next.

If you think that something is missing or if you have some opinion about what we should include, please, let us hear your voice.

Where can I get it?

You can get Hibernate OGM 5.0.0.Final core via Maven using the following coordinates:

  • org.hibernate.ogm:hibernate-ogm-core:5.0.0.Final

and these are the back-ends currently available:

  • Cassandra: org.hibernate.ogm:hibernate-ogm-cassandra:5.0.0.Final

  • CouchDB: org.hibernate.ogm:hibernate-ogm-couchdb:5.0.0.Final

  • Infinispan: org.hibernate.ogm:hibernate-ogm-infinispan:5.0.0.Final

  • Ehcache: org.hibernate.ogm:hibernate-ogm-ehcache:5.0.0.Final

  • MongoDB: org.hibernate.ogm:hibernate-ogm-mongodb:5.0.0.Final

  • Neo4j: org.hibernate.ogm:hibernate-ogm-neo4j:5.0.0.Final

  • Redis: org.hibernate.ogm:hibernate-ogm-redis:5.0.0.Final

Alternatively, you can download archives containing all the binaries, source code and documentation from SourceForge.

How can I get in touch?

You can find us through the following channels:

We are looking forward to hearing your feedback!

back to top