In Relation To Discussions

Hibernate Community Newsletter 8/2017

Tagged as Discussions, Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

The Hibernate ResultTransformer is extremely useful to customize the way you fetch data from the database. Check out this article to learn more about this topic.

JPA and Hibernate use the first-level cache as a staging buffer for every read/write operation, and understanding its inner workings is very important if you want to use JPA effectively. For more details, you should definitely read this article.

Marco Behler offers 6 very practical video tutorials for Hibernate beginners.

Dealing with difficult software problems is easier than you might think. Check out this article for more details on how you can solve any software issue with the help of our wonderful software community.

If you wonder why you should choose Hibernate over plain JDBC, this article gives you 15 reasons why Hibernate is worth using.

This very short article offers a quick introduction to mapping a bidirectional one-to-many association. If you want to know the best way to map a one-to-many database relationship, then you should definitely read this article as well.

Database concurrency control is a very complex topic. PostgreSQL advisory locks are a very useful concurrency control API which you can use to implement multi-node coordination mechanisms. Check out this article for more details on this topic.

Time to upgrade

Hibernate ORM 5.2.10 has been released, as well as Hibernate Search 5.8.0.Beta1, which is now compatible with Elasticsearch 5.

Accessing private state of Java 9 modules

Tagged as Discussions

Data-centric libraries often need to access private state of classes provided by the library user.

An example is Hibernate ORM. When the @Id annotation is given on a field of an entity, Hibernate will by default directly access fields - as opposed to calling property getters and setters - to read and write the entity’s state.

Usually, such fields are private. Accessing them from outside code has never been a problem, though: the Java reflection API allows making private members accessible and then accessing them from other classes. With the advent of the module system in Java 9, however, the rules for this change a bit.

In the following, we’ll explore the options that authors of a library provided as a Java 9 module have for accessing private state of classes defined in other modules.

An example

As an example, let’s consider a simple method which takes an object - e.g. an instance of an entity type defined by the user - and a field name and returns the value of the object’s field of that name. Using reflection, this method could be implemented like this (for the sake of simplicity, we are ignoring the fact that a security manager could be present):

package com.example.library;

import java.lang.reflect.Field;

public class FieldValueAccessor {

    public Object getFieldValue(Object object, String fieldName) {
        try {
            Class<?> clazz = object.getClass();
            Field field = clazz.getDeclaredField( fieldName );
            field.setAccessible( true );
            return field.get( object );
        }
        catch (NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e) {
            throw new RuntimeException( e );
        }
    }
}

By calling Field#setAccessible() we can subsequently obtain the field’s value, even if it has been declared private. The module descriptor for the library module is trivial: it just exports the package of the accessor class:

module com.example.library {
    exports com.example.library;
}

In a second module, representing our application, let’s define a simple "entity":

package com.example.entities;

public class MyEntity {

    private String name;

    public MyEntity(String name) {
        this.name = name;
    }

    // ...
}

And also a simple main method which makes use of the accessor to read a field from the entity:

package com.example.entities;

import com.example.library.FieldValueAccessor;

public class Main {
    public static void main(String... args) {
        FieldValueAccessor accessor = new FieldValueAccessor();
        Object fieldValue = accessor.getFieldValue( new MyEntity( "hey there" ), "name" );
        assert "hey there".equals( fieldValue );
    }
}

As this module uses the library module, we need to declare that dependency in the entity module’s descriptor:

module com.example.myapp {
    requires com.example.library;
}

With the example classes in place, let’s run the code and see what happens. It would have been fine on Java 8, but as of Java 9 we’ll see this exception instead:

java.lang.reflect.InaccessibleObjectException:
Unable to make field private final java.lang.String com.example.entities.MyEntity.name accessible:
module com.example.myapp does not "opens com.example.entities" to module com.example.library

The call to setAccessible() fails, as by default code in one module isn’t allowed to perform so-called "deep reflection" on code in another (named) module.

Open this module!

Now what can be done to overcome this issue? The error message already gives us the right hint: the package with the type to reflect on must be opened to the module containing the code invoking setAccessible().

If a package has been opened to another module, that module can access the package’s types reflectively at runtime. Note that opening a package will not make it accessible to other modules at compile time; this would require the package to be exported instead (as in the case of the library module above).

There are several options for opening a package. The first is to make the module an open module:

open module com.example.myapp {
    requires com.example.library;
}

This opens up all packages in this module for reflection by all other modules (i.e. this would be the behavior as known from other module systems such as OSGi). In case you’d like some more fine-grained control, you can opt to open specific packages only:

module com.example.myapp {
    opens com.example.entities;
    requires com.example.library;
}

This will allow for deep reflection on the entities package, but not on other packages within the application module. Finally, there is the possibility to limit an opens clause to one or more specific target modules:

module com.example.myapp {
    opens com.example.entities to com.example.library;
    requires com.example.library;
}

That way the library module is allowed to perform deep reflection, but not any other module.

No matter which of these options we use, the library module can now make private fields of types in the entities package of the entities module accessible and subsequently read or write their value.

Opening up packages in one way or another lets library code written prior to Java 9 continue to function as before. It requires some implicit knowledge, though: application developers need to know which libraries need reflective access to which types, so they can open up the right packages. This can become tough to manage in more complex applications with multiple libraries performing reflection.

Luckily, there’s a more explicit approach in the form of variable handles.

Can you handle the var?

Variable handles - defined by JEP 193 - are a very powerful addition to the Java 9 API, providing "read and write access to [variables] under a variety of access modes". Describing them in detail would go far beyond the scope of this post (refer to the JEP and this article if you would like to learn more). For our purposes let’s focus on their capability for accessing fields, representing an alternative to the traditional reflection-based approach.

So how could our FieldValueAccessor class be implemented using variable handles?

Var handles are obtained via the MethodHandles.Lookup class. If such a lookup has "private access" to the entities module, it will let us access private fields of that module’s types. To get hold of such a lookup, we let the client code pass one in when bootstrapping the library code:

Lookup lookup = MethodHandles.lookup();
FieldValueAccessor accessor = new FieldValueAccessor( lookup );

In FieldValueAccessor#getFieldValue() we can now use the method MethodHandles#privateLookupIn() which will return a new lookup granting private access to the given entity instance. From that lookup we can eventually obtain a VarHandle which allows us to get the object’s field value:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodHandles.Lookup;
import java.lang.invoke.VarHandle;
import java.lang.reflect.Field;

public class FieldValueAccessor {

    private final Lookup lookup;

    public FieldValueAccessor(Lookup lookup) {
        this.lookup = lookup;
    }

    public Object getFieldValue(Object object, String fieldName) {
        try {
            Class<?> clazz = object.getClass();
            Field field = clazz.getDeclaredField( fieldName );

            MethodHandles.Lookup privateLookup = MethodHandles.privateLookupIn( clazz, lookup );
            VarHandle handle = privateLookup.unreflectVarHandle( field );

            return handle.get( object );
        }
        catch (NoSuchFieldException | IllegalAccessException e) {
            throw new RuntimeException( e );
        }
    }
}

Note that this only works if the original lookup has been created by code in the entities module.

This is because MethodHandles#lookup() is a caller-sensitive method, i.e. the returned value depends on the direct caller of that method. privateLookupIn() checks whether the given lookup is allowed to perform deep reflection on the given class. Thus a lookup obtained in the entities module does the trick, whereas a lookup retrieved in the library module wouldn’t be of any use.

Which route to take?

Both discussed approaches let libraries access private state from Java 9 modules.

The var handle approach makes the requirements of the library module more explicit, which I like. Expecting a lookup instance during bootstrap should be less error-prone than the rather implicit requirement for opening up packages or modules.

Mails by the OpenJDK team also seem to suggest that - together with their siblings, method handles - var handles are the way to go in the long term. Of course this requires the application module to be cooperative and pass the required lookup. It remains to be seen what this could look like in container / app server scenarios, where libraries typically aren’t bootstrapped by the application code but by the server runtime. Injecting some helper code for obtaining the lookup object upon deployment may be one possible solution.

As var handles are only introduced in Java 9, you might want to refrain from using them if your library is supposed to run with older Java versions, too (actually, you can do both by building multi-release JARs). A very similar approach can be implemented in earlier Java versions using method handles (see MethodHandles.Lookup#findGetter()). Unfortunately, though, there’s no official way to obtain method handles with private access prior to Java 9 and the introduction of privateLookupIn(). Ironically, the only way to get such handles is to employ some reflection.
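That "ironic" pre-Java 9 workaround might be sketched like this (the class name MethodHandleAccessor is ours, for illustration): the field is first made accessible via reflection, then unreflected into a getter handle.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Field;

public class MethodHandleAccessor {

    public Object getFieldValue(Object object, String fieldName) {
        try {
            Field field = object.getClass().getDeclaredField( fieldName );
            // No privateLookupIn() before Java 9: make the field accessible
            // reflectively, then unreflect it into a method handle
            field.setAccessible( true );
            MethodHandle getter = MethodHandles.lookup().unreflectGetter( field );
            return getter.invoke( object );
        }
        catch (Throwable e) {
            throw new RuntimeException( e );
        }
    }
}
```

The access check happens when unreflecting, so the returned handle could be cached and invoked later without further checks.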

The final thing to note is that there may be a performance advantage to using var and method handles, as access checking is done only once, when obtaining them (as opposed to upon every invocation). Some proper benchmarking would be in order, though, to see what the difference is in a given use case.
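To illustrate the amortization argument, here is a hypothetical caching variant of the accessor (the name CachingFieldValueAccessor and the cache-key scheme are ours, not taken from any library): the access check is paid once per class/field pair, and the handle is reused afterwards.

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodHandles.Lookup;
import java.lang.invoke.VarHandle;
import java.lang.reflect.Field;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingFieldValueAccessor {

    private final Lookup lookup;

    // one VarHandle per class/field pair; access checking happens only on creation
    private final Map<String, VarHandle> handles = new ConcurrentHashMap<>();

    public CachingFieldValueAccessor(Lookup lookup) {
        this.lookup = lookup;
    }

    public Object getFieldValue(Object object, String fieldName) {
        Class<?> clazz = object.getClass();

        VarHandle handle = handles.computeIfAbsent( clazz.getName() + "#" + fieldName, key -> {
            try {
                Field field = clazz.getDeclaredField( fieldName );
                return MethodHandles.privateLookupIn( clazz, lookup ).unreflectVarHandle( field );
            }
            catch (NoSuchFieldException | IllegalAccessException e) {
                throw new RuntimeException( e );
            }
        } );

        return handle.get( object );
    }
}
```

Subsequent calls for the same field skip both the reflective lookup and the access check entirely.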

As always, your feedback is very welcome. Which approach do you prefer? Or perhaps you’ve found yet another solution we’ve missed so far? Let us know in the comments below.

As the Bean Validation 2.0 spec is making good progress, you may want to try out the features of the new spec revision with your existing Java EE applications.

WildFly, as a compatible Java EE 7 implementation, comes with Bean Validation 1.1 and its reference implementation Hibernate Validator 5 out of the box. In the following we’ll show you how easy it is to upgrade the server’s modules to the latest Bean Validation release, using a patch file provided by Hibernate Validator.

Getting the patch files

First download the patch file of the Hibernate Validator version you want to upgrade to. The latest release of Hibernate Validator at this point is 6.0.0.Alpha2 which implements the 2.0.0.Alpha2 release of Bean Validation. You can simply fetch the patch file for this release from Maven Central. If you are interested in the latest 5.x release of Hibernate Validator - which is the current stable implementation of Bean Validation 1.1 - you’d grab this file instead.

Both patches can be applied to WildFly 10.1 instances. Generally, we provide patch files for the latest stable version of WildFly at the time of the Hibernate Validator release.

Applying and undoing the patch

Once you’ve downloaded the patch file, change to the installation directory of your WildFly instance and run the following command to apply the patch:

./bin/jboss-cli.sh --command="patch apply hibernate-validator-modules-6.0.0.Alpha2-wildfly-10.1.0.Final-patch.zip"

Now (re-)start WildFly and you can begin to experiment with the new features of Hibernate Validator 6 and Bean Validation 2.0, such as the validation of container elements (think List<@Email String> emails), support for the Java 8 date/time API or the new built-in constraints such as @NotEmpty or @Email.

The nice thing about the patch mechanism is that patches don’t actually modify the patched WildFly instance. Any modules contained in the patch are just added in a separate "overlays" directory.

So in case you want to go back to the version of Hibernate Validator that originally came with the server, just run this command to undo the patch:

./bin/jboss-cli.sh --command="patch rollback --reset-configuration=true"

You can learn more about the WildFly patching infrastructure in general here and here.

Bonus: Fully automated update for integration tests

In case you are running integration tests for your applications using the fabulous Arquillian framework, you can add the following configuration to your pom.xml. This will first download and extract WildFly, then download the patch file and apply it to the unzipped server:

...
<properties>
    <wildfly.version>10.1.0.Final</wildfly.version>
    <wildfly.core.version>1.0.1.Final</wildfly.core.version>
    <hibernate.validator.version>6.0.0.Alpha2</hibernate.validator.version>
</properties>
...
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <version>3.0.0</version>
            <executions>
                <execution>
                    <id>unpack-wildfly</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>unpack</goal>
                    </goals>
                    <configuration>
                        <artifactItems>
                            <artifactItem>
                                <groupId>org.wildfly</groupId>
                                <artifactId>wildfly-dist</artifactId>
                                <version>${wildfly.version}</version>
                                <type>tar.gz</type>
                                <overWrite>false</overWrite>
                                <outputDirectory>${project.build.directory}</outputDirectory>
                            </artifactItem>
                        </artifactItems>
                    </configuration>
                </execution>
                <execution>
                    <id>copy-patch</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>copy</goal>
                    </goals>
                    <configuration>
                        <artifactItems>
                            <artifactItem>
                                <groupId>org.hibernate.validator</groupId>
                                <artifactId>hibernate-validator-modules</artifactId>
                                <version>${hibernate.validator.version}</version>
                                <classifier>wildfly-${wildfly.version}-patch</classifier>
                                <type>zip</type>
                                <outputDirectory>${project.build.directory}</outputDirectory>
                            </artifactItem>
                        </artifactItems>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.wildfly.plugins</groupId>
            <artifactId>wildfly-maven-plugin</artifactId>
            <version>1.1.0.Final</version>
            <dependencies>
                <!-- Contains the patch command -->
                <dependency>
                    <groupId>org.wildfly.core</groupId>
                    <artifactId>wildfly-patching</artifactId>
                    <version>${wildfly.core.version}</version>
                </dependency>
            </dependencies>
            <executions>
                <!-- Currently the WF Maven plug-in cannot apply offline commands,
                     although patch itself wouldn't require a running server;
                     see https://issues.jboss.org/projects/WFMP/issues/WFMP-11 -->
                <execution>
                    <id>start-wildfly-for-patching</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>start</goal>
                    </goals>
                </execution>
                <execution>
                    <id>apply-patch-file</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>execute-commands</goal>
                    </goals>
                    <configuration>
                        <fail-on-error>false</fail-on-error>
                        <commands>
                            <command>patch apply --path ${project.build.directory}/hibernate-validator-modules-${hibernate.validator.version}-wildfly-${wildfly.version}-patch.zip</command>
                        </commands>
                    </configuration>
                </execution>
                <execution>
                    <id>shutdown-wildfly-for-patching</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>shutdown</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <jbossHome>${project.build.directory}/wildfly-${wildfly.version}/</jbossHome>
            </configuration>
        </plugin>
    </plugins>
</build>
...

Applying this configuration will give you a WildFly instance with the latest release of Bean Validation and Hibernate Validator which you then can use as a deployment target for your integration tests, employing the latest Bean Validation 2.0 features. You can find a complete Maven project with a simple Arquillian test on GitHub in the hibernate-demos repository.

As you see it’s not difficult to upgrade WildFly 10 to Bean Validation 2.0, so don’t hesitate and give it a try. Your feedback on the new spec revision is always welcome on the mailing list or in the forum.

Hibernate Community Newsletter 7/2017

Tagged as Discussions, Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Upcoming events

Java Gruppen Danemark is hosting a High-Performance Hibernate webinar. If you want to participate, then you should register on Eventbrite.

Hibernate OGM will be at Devoxx UK 2017. If you want to learn more about Hibernate OGM and Infinispan, then you should definitely come to see Sanne’s presentation.

Articles

A very handy feature when working with JDBC batch updates is to find out which statement triggered a batch failure. Read this article for more info on this topic.

You can use Hibernate with CockroachDB. Check out this tutorial to see how easily you can integrate them.

Mapping the @OneToMany association is not as easy as you might think. Check out this article which shows you the best way to map a @OneToMany relationship.

Alon Segal shows you how to implement Multitenancy with Spring and Hibernate. Multitenancy is very handy when you need to support multiple customers on a single platform so that each customer is limited to its own tenant.

I read this article, which describes a way to provide a JPA AttributeConverter to support Java 8 Date/Time types. However, this is not needed since Hibernate has supported them for quite a while now.

For our Czech readers, Roman Pichlík wrote a very good article about all sorts of application performance issues, and Open Session in View is mentioned as well.

For our French readers, eXo platform has written an article that shows you how to integrate eXo with JPA and Hibernate.

Time to upgrade

Hibernate Validator 5.4.1 and 6.0.0 Alpha 2 are out.

GORM 6.1 has been released with support for Hibernate 5.2.

New foray into serverless - announcing library shift

Tagged as Discussions

Red Hat has been exploring serverless (aka FaaS) through the Fabric8 Funktion project. It has long been due for us to get serious on the subject. We are turning all of our libraries into services, starting today. This is going to be a multi-year effort, but we are committed to it.

We thought long and hard about which service to start with. In our performance lab, we realized that the slowest and most expensive part of serverless functions was CPU branch misprediction. In a serverless approach, you want to squeeze as many operations per CPU cycle as possible. Any misprediction has huge consequences and kills any of the mechanical sympathy effort we put in libraries like vert.x.

In our experiments, we found that the optimal solution was to get rid of if branches in serverless functions. We are proud to introduce IF as a Service or IFaaS (pronounced aye face). Your code changes from:

user = retrieveUserProfile();
if ( user.isCustomer() ) {
    displayAds();
}
user.extractRevenu();

to

user = retrieveUserProfile();
if ( user.isCustomer(), "displayAdFunction" );
user.extractRevenu();

We have been using it for a year now and forked the javac compiler to convert each if branch into a proper if service call. But we also completely reshaped how we write code into pure linear code, changing all if primitives to their service call equivalents. This is great because it also fixed the tab vs space problem: we no longer have any indenting in our code. Two stones in one bird! I cannot emphasize enough how much development speed we gained in our team by fighting less on each pull request against these tab bastards.

The good thing about this external if service is that you can add a sidecar proxy to cache and simulate branch predictions on OpenShift and then scale them horizontally ad nauseam. This is a huge benefit compared to the hardcoded and embedded system that is a CPU. We typically change the branch implementation 3 to 5 times a day depending on user needs.

FAQ

Is else a function too?

Else is not part of the MVP, but we have it working in the labs. We are currently struggling to implement for loops, more specifically nested ones: we keep getting HTTP error 310 (too many redirects).

Where can I download it ?

You can’t, it’s a service, dummy. Go to <ifaas.io>.

Can I use it in Go?

Google did not accept our pull request. We have a fork called NoGo.

What is the pricing model?

The basic if service is and will remain free. We are working on added values with pro and enterprise plans.

Hibernate Community Newsletter 6/2017

Tagged as Discussions, Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

Implementing the soft delete pattern with Hibernate is trivial. Check out this article for more details.

Sri Vikram Sundar wrote a very detailed tutorial about integrating Spring MVC, MySQL, and Hibernate.

Stefan Pröll wrote two articles about using Hibernate Search and Spring Boot:

Baeldung features an article about using the Hibernate-specific @Immutable annotation to mark entities that should never be modified, which allows Hibernate to enable some flush-time performance optimizations.

For our Portuguese readers, Rafael Ponte wrote a guide to controlling transactions programmatically in Legacy Systems using Java 8 Lambdas and the Template Pattern. For non-Portuguese readers, you can use Google Translate since most Romance languages are easily translated into English.

Time to upgrade

Hibernate ORM 5.1.5 and 5.2.9 have been released.

Hibernate Community Newsletter 5/2017

Tagged as Discussions, Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Interviews

Don’t miss our Hibernate developer interviews with Marco Pivetta and Kevin Peters.

If you want to share your story about Hibernate, let us know, and we can share it with our huge community of passionate developers.

Books

Javin Paul, a long-time Java blogger, gives a review of the two best Hibernate books for Java developers.

Thorben Janssen is now writing a Hibernate Tips book, and you can get a free copy if you want to review it.

Articles

Nowadays, many RDBMS support JSON column types, and Hibernate makes it very easy to use JSON objects as entity attributes, as this article demonstrates.

Encrypting and decrypting column values is easy-peasy when using Hibernate. Check out this article for a detailed tutorial on this topic.

Arnold Gálovics wrote a very good article about how the LazyInitializationException works in Hibernate.

Craig Andrews is building a Hibernate SpringCache prototype which acts like a Hibernate second-level cache implementation on top of Spring Cache. The idea is very interesting, and we are looking for your feedback on this topic.

Our colleague Chris Cranford gave a talk about Hibernate Performance at DevNexus, and here are the slides.

If you’re using MySQL, then you should know that we refactored the MySQL Dialects so that it’s much easier for you to match a Hibernate Dialect with a given MySQL server version.

Concurrency Control is a very interesting topic, and if you ever wondered how MVCC (Multi-Version Concurrency Control) works, then this article unravels how INSERT, UPDATE, and DELETE statements work in MVCC-based database engines.

Thorben Janssen wrote two articles about Hibernate Search, one about custom Analyzers and another one about Facets.

Time to upgrade

Microservices, data and patterns

Tagged as Discussions

One thing you don’t hear enough about in the microservices world is data. There is plenty of info on how your application should be stateless, cloud native, yadayadayada. But at the end of the day, you need to deal with state and store it somewhere.

I can’t blame anyone for this blind spot. Data is hard. Data is even harder in an unstable universe where your containers will be killed randomly and eventually. These problems are being tackled, though, on many fronts, and we do our share.

But once you have dealt with the elasticity problem, you need to address a second problem: data evolution. This is even more pernicious in a microservices universe where:

  • The data structure can and will evolve faster per microservice. Remember this individual puppy is supposed to be small, manageable and as agile as a ballet dancer.

  • For your services to be useful, data must flow from one microservice to another without interlock. So they share data structures directly or via copy, implicitly or via an explicit schema, etc. Requiring microservices A, B and C to be released together because they share a common data structure is a big no-no.

My colleague Edson Yanaga has written a short but insightful book on exactly those problems: how to deal with data in a zero-downtime microservices universe, how to evolve your data structure in a safe and incremental way, how to do that with your good old legacy RDBMS, and a few more subjects.

If you are interested in, or embarking on, a microservices journey, I recommend you read this focused book. It will make you progress in your thinking.

The cool thing is that this book is free (or rather, Red Hat paid the bill). Go grab a copy of Migrating to Microservice Databases from O’Reilly (you’ll need to register).

Today we’ll be talking about Hibernate Validator and how you can provide your own constraints and/or validators in a fully self-contained manner. Meaning packaging it all into its own JAR file, in a way that others can use your library by simply adding it to the classpath.

This functionality is based on Hibernate Validator’s use of Java’s ServiceLoader mechanism, which allows registering additional constraint definitions. But more on the details later.

What could be a real-life scenario for building and sharing your own library of constraints? Well, let’s say that you are building a library with data classes that users might want to validate. As it may be tough to keep track of all such libraries and write/maintain all those constraints for them, Hibernate Validator gives the authors of such libraries the possibility to write and share their own validation extensions, which can then be picked up by Hibernate Validator and used to validate your data classes.

This ServiceLoader mechanism also solves another problem. As you are trying to be a good developer and provide the end users of your library only with the relevant classes, hiding implementation details from them, you may not want to expose your validator implementation by mentioning it in the validatedBy() parameter of the @Constraint annotation. By using the approach described in this post, you can achieve all these things.

For our examples we will be creating Maven projects with two modules: one will contain the validators and represent a "library" that can be shared; the other module will be a consumer of this library and will contain some tests.

Enough of the talking, let’s validate some beans! That’s why we all gathered here, right? :)

Using custom annotations and validators

First let’s consider a case of adding your own constraint annotation and a corresponding validator.

Time, it needs time …​

While Hibernate Validator 5.4 supports a wide range of the Java 8 date/time API (and Bean Validation 2.0 will move this support to the specification level), there are some types not supported, one of them being Duration. This type does not describe a point in time, so the regular date/time constraints (@Future / @Past) do not make sense for it. So if, for instance, we wanted to validate that a given duration has a specified minimum length, a new constraint is needed. Let’s call it @DurationMin.

Our new constraint annotation might look like this:

DurationMin.java
@Documented
@Constraint(validatedBy = { })
@Target({ METHOD, FIELD, ANNOTATION_TYPE, CONSTRUCTOR, PARAMETER, TYPE_USE })
@Retention(RUNTIME)
@Repeatable(List.class)
@ReportAsSingleViolation
public @interface DurationMin {

    String message() default "{com.acme.validation.constraints.DurationMin.message}";
    Class<?>[] groups() default { };
    Class<? extends Payload>[] payload() default { };

    long value() default 0;
    ChronoUnit units() default ChronoUnit.NANOS;

    /**
     * Defines several {@code @DurationMin} annotations on the same element.
     */
    @Target({ METHOD, FIELD, ANNOTATION_TYPE, CONSTRUCTOR, PARAMETER, TYPE_USE })
    @Retention(RUNTIME)
    @Documented
    @interface List {
        DurationMin[] value();
    }
}

Now that we have an annotation, we need to create a corresponding constraint validator. To do that you need to implement the ConstraintValidator<DurationMin, Duration> interface, which contains two methods:

  • initialize() - initializes the validator based on annotation parameters

  • isValid() - performs actual validation

An implementation might look like this:

DurationMinValidator.java
public class DurationMinValidator implements ConstraintValidator<DurationMin, Duration> {

    private Duration duration;

    @Override
    public void initialize(DurationMin constraintAnnotation) {
        this.duration = Duration.of( constraintAnnotation.value(), constraintAnnotation.units() );
    }

    @Override
    public boolean isValid(Duration value, ConstraintValidatorContext context) {
        // null values are valid
        if ( value == null ) {
            return true;
        }
        return duration.compareTo( value ) < 1;
    }
}
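Note the comparison convention in isValid(): Duration.compareTo() returns a negative number, zero, or a positive number, so compareTo( value ) < 1 holds exactly when the configured minimum is less than or equal to the validated value. A quick stdlib sketch of that logic (the sample durations are our own, not from the post):

```java
import java.time.Duration;

public class DurationMinLogicDemo {

    public static void main(String[] args) {
        // the minimum as it would be configured via @DurationMin(value = 2, units = ChronoUnit.HOURS)
        Duration min = Duration.ofHours( 2 );

        // compareTo( value ) < 1 means min <= value, i.e. the value is valid
        System.out.println( min.compareTo( Duration.ofHours( 3 ) ) < 1 );    // true: 3h meets the minimum
        System.out.println( min.compareTo( Duration.ofHours( 2 ) ) < 1 );    // true: the boundary is valid
        System.out.println( min.compareTo( Duration.ofMinutes( 30 ) ) < 1 ); // false: too short
    }
}
```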

As we are creating a new constraint annotation, we should also provide a default message for it. This can be done by placing a ContributorValidationMessages.properties file on the classpath. This file should contain a key/message pair, where the key is the one used in the annotation declaration (in our case com.acme.validation.constraints.DurationMin.message) and the message is the one you would like to show when validation fails. Our file looks like this:

ContributorValidationMessages.properties
com.acme.validation.constraints.DurationMin.message = must be greater than or equal to {value} {units}

The bundle ContributorValidationMessages is queried by Hibernate Validator if the standard ValidationMessages bundle doesn’t contain a given message key, allowing library authors to provide default messages for their constraints as part of their JAR.
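During interpolation, the {value} and {units} placeholders in the message are replaced with the corresponding annotation attribute values. A rough stdlib sketch of that substitution (our own simplification, not Hibernate Validator’s actual interpolation engine, which also handles escaping and EL expressions):

```java
import java.util.Map;

public class InterpolationSketch {

    public static void main(String[] args) {
        String template = "must be greater than or equal to {value} {units}";

        // attribute values as they would come from @DurationMin(value = 2, units = ChronoUnit.HOURS)
        Map<String, String> attributes = Map.of( "value", "2", "units", "HOURS" );

        // replace each {name} placeholder with the matching attribute value
        String message = template;
        for ( Map.Entry<String, String> e : attributes.entrySet() ) {
            message = message.replace( "{" + e.getKey() + "}", e.getValue() );
        }

        System.out.println( message ); // must be greater than or equal to 2 HOURS
    }
}
```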

If you leave everything else as is, your constraint annotation will live its own life without knowing about the presence of the validator, and Hibernate Validator will not know about the validator either. So to make sure that Hibernate Validator discovers your DurationMinValidator, you need to create the file META-INF/services/javax.validation.ConstraintValidator and put the fully qualified name of the validator implementation in it:

META-INF/services/javax.validation.ConstraintValidator
com.acme.validation.validators.DurationMinValidator

After all of this, your new constraint annotation on Duration elements can be used like this:

Task.java
public class Task {

    private String taskName;
    @DurationMin(value = 2, units = ChronoUnit.HOURS)
    private Duration timeSpent;

    public Task(String taskName, Duration timeSpent) {
        this.taskName = taskName;
        this.timeSpent = timeSpent;
    }
}

The project structure should look similar to this:

(figure: project structure)

The whole source code presented here can be found in the hibernate-demos repository on GitHub.

Using standard constraints for non-standard classes

Now let’s consider the case where you want a standard Bean Validation constraint to support a type beyond the ones already covered.

ThreeTen Extra types validation

As we were talking about date/time related validation, let’s stay on the same topic for this example as well. In this section we will look at ThreeTen Extra types - a great library that provides additional date and time classes to complement those already present in Java.

Bean Validation provides support for validating temporal types via the @Past/@Future annotations. So we would want to use these annotations on ThreeTen Extra types as well. To keep this example simple we will provide validators only for YearWeek and YearQuarter.

Let’s start by implementing the ConstraintValidator<Future, YearWeek> interface:

FutureYearWeekValidator.java
public class FutureYearWeekValidator implements ConstraintValidator<Future, YearWeek> {

    @Override
    public void initialize(Future constraintAnnotation) {
    }

    @Override
    public boolean isValid(YearWeek value, ConstraintValidatorContext context) {
        if ( value == null ) {
            return true;
        }
        return YearWeek.now().isBefore( value );
    }
}
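The @Past counterparts simply flip the comparison. Since ThreeTen Extra’s YearWeek mirrors the JDK’s own partial temporals, the same now()-based pattern can be sketched with the stdlib’s YearMonth (our own analogy, not code from the library):

```java
import java.time.YearMonth;

public class PastFutureSketch {

    // mirrors FutureYearWeekValidator: valid if the value lies strictly after "now"
    static boolean isFuture(YearMonth value) {
        return value == null || YearMonth.now().isBefore( value );
    }

    // the @Past counterpart flips the comparison
    static boolean isPast(YearMonth value) {
        return value == null || YearMonth.now().isAfter( value );
    }

    public static void main(String[] args) {
        YearMonth nextYear = YearMonth.now().plusYears( 1 );
        System.out.println( isFuture( nextYear ) ); // true
        System.out.println( isPast( nextYear ) );   // false
    }
}
```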

The next step is to list the implemented validators in the META-INF/services/javax.validation.ConstraintValidator file:

META-INF/services/javax.validation.ConstraintValidator
com.acme.validation.validators.FutureYearQuarterValidator
com.acme.validation.validators.FutureYearWeekValidator
com.acme.validation.validators.PastYearQuarterValidator
com.acme.validation.validators.PastYearWeekValidator

After this we can package it all in a JAR file, and we are ready to use our validators and share them with the world!

In the end our project structure should look similar to this:

(figure: project structure)

Now you can place @Past/@Future annotations on YearQuarter and YearWeek types like this:

PastEvent.java
public class PastEvent {

    @Past
    private YearWeek yearWeek;
    @Past
    private YearQuarter yearQuarter;

    public PastEvent(YearWeek yearWeek, YearQuarter yearQuarter) {
        this.yearWeek = yearWeek;
        this.yearQuarter = yearQuarter;
    }
}
FutureEvent.java
public class FutureEvent {

    @Future
    private YearWeek yearWeek;
    @Future
    private YearQuarter yearQuarter;

    public FutureEvent(YearWeek yearWeek, YearQuarter yearQuarter) {
        this.yearWeek = yearWeek;
        this.yearQuarter = yearQuarter;
    }
}

You also can find this example at GitHub.

Conclusion

So, as you can see, custom constraint validators can be built and shared in a fully self-contained way, in just a few simple steps:

  • create a validator implementing the ConstraintValidator interface

  • reference this validator’s fully qualified name in a META-INF/services/javax.validation.ConstraintValidator file

  • (optional) define custom/default messages by adding a ContributorValidationMessages.properties file

  • package it all as a JAR

  • you are ready to share your constraints; people can use them by simply adding your JAR to the classpath

Meet Kevin Peters

Posted by    |       |    Tagged as Discussions Hibernate ORM Interview

In this post, I’d like you to meet Kevin Peters, a Software Developer from Germany and Hibernate aficionado.

(photo: Kevin Peters)

Hi, Kevin. Would you like to introduce yourself and tell us a little bit about your developer experience?

My name is Kevin Peters, and I live in Germany where I work as a Software Developer. My first contact with the Java language was around 2005 during my vocational training, and I fell in love with it immediately.

I worked for several companies leveraging Java and Spring to implement ERP extensions, customizing eCommerce systems and PIM solutions. Nearly one year ago, I joined the GBTEC Software + Consulting AG, one of the leading suppliers of business process management (BPM) software, and there we are now reimplementing a BPM system in a cloud-based manner using Dockerized Spring Boot microservices.

You have recently mentioned on Twitter a DataSource proxy solution for validating auto-generated statements. Can you tell us about this tool and how it works?

We use Spring Data JPA with Hibernate as JPA provider to implement our persistence layer, and we really enjoy the convenience coming along with it. But we also know about the "common" obstacles like Cartesian Products or the N+1 query problem while working with an ORM framework.

In our daily technical discussions and during knowledge transfer sessions we try to raise awareness for these topics among our colleagues, and in my opinion, the best way to achieve this is implementing tests and real world code examples showing that practically.

I started to prepare a small mapping example for one of our technical meetings, called "techtime", to demonstrate the "unordered element collection recreation" issue, and I wanted to show the unexpected amount of queries fired in this simple use case.

Fortunately, I came across the ttddyy/datasource-proxy GitHub project, which helped me a lot in making that problem tangible. The datasource-proxy project lets you wrap your existing datasource with a proxy and count all executed queries, separated by query type (e.g. INSERT, UPDATE, etc.). With that, you can not only write tests asserting that your use cases do the right thing, but also check whether you are doing it in an efficient way and avoid the traps I mentioned before.

Just as our Coding Architect Ingo Griebsch suggested using this approach to enhance our test environment by automating the hunt for performance penalties, you caught us talking about your article on Twitter.

Proxies are a great way to add cross-cutting concerns without cluttering business logic. For instance, FlexyPool brings Monitoring and Fallback capabilities to connection pools. Are you using Proxies for other concerns as well, like logging statements?

There are many ways to enrich application code with proxies, facades, or aspects: logging with a facade like SLF4J, using Spring Security for access control, Hystrix for service-to-service communication, or even "basic" stuff like transactions in Spring Data. All these features work through proxies, and we wouldn’t want to do without them anymore.

Why did you choose Hibernate for that particular project, and did it meet your expectations, especially when it comes to application performance?

Hibernate provides a lot of convenience to us, especially if we combine it with Spring Data JPA. But the fact I enjoy most is that you can still switch to Hibernate specific features like Hibernate Named Queries or special Hibernate annotations.

It’s important to know when you can relax using "magic" ORM features and when the opposite is needed - forgo bidirectional relations and write HQL instead or even using database native queries to receive complex data. In our opinion, Hibernate offers the best balance between convenience and performance if one knows how to use it.

Since we have a quite complex data model and customers that store a lot of data, it’s vital for our software to fetch and write data in a performant way in every one of our use cases. And in case of any doubts, your articles help us get things done right.

We always value feedback from our users, so can you tell us what you’d like us to improve or are there features that we should add support for?

In general, we love the feature set of Hibernate. Support for UNION in HQL queries and the Criteria API is the only feature we have missed recently; it would be an awesome addition.

Thank you, Kevin, for taking your time. It is a great honor to have you here. To reach Kevin, you can follow him on Twitter.
