In Relation To | Emmanuel Bernard

Handling queries on complex types in Hibernate Search

Posted by Emmanuel Bernard    |       |    Tagged as Hibernate Search

Writing queries using complex types can be a bit surprising in Hibernate Search. For these multi-field types, the key is to target each individual field in the query. Let’s discuss how this works.

What’s a complex type?

Hibernate Search lets you write custom types that take a Java property and create Lucene fields in a document. As long as there is a one-property-to-one-field relationship, you are good. It becomes more subtle if your custom bridge stores the property in several Lucene fields. Say an Amount type which has a numeric part and a currency part.

Let’s take a real example from a user of Infinispan's search engine - proudly served by Hibernate Search.

The FieldBridge
public class JodaTimeSplitBridge implements TwoWayFieldBridge {

    /**
     * Set year, month and day in separate fields
     */
    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        DateTime datetime = (DateTime) value;
        luceneOptions.addFieldToDocument(
            name + ".year", String.valueOf(datetime.getYear()), document
        );
        luceneOptions.addFieldToDocument(
            name + ".month", String.format("%02d", datetime.getMonthOfYear()), document
        );
        luceneOptions.addFieldToDocument(
            name + ".day", String.format("%02d", datetime.getDayOfMonth()), document
        );
    }

    @Override
    public Object get(String name, Document document) {
        IndexableField fieldyear = document.getField(name + ".year");
        IndexableField fieldmonth = document.getField(name + ".month");
        IndexableField fieldday = document.getField(name + ".day");
        String strdate = fieldday.stringValue() + "/" + fieldmonth.stringValue() + "/" + fieldyear.stringValue();
        // rebuild the DateTime from the three stored fields
        return DateTime.parse(strdate, DateTimeFormat.forPattern("dd/MM/yyyy"));
    }

    @Override
    public String objectToString(Object date) {
        DateTime datetime = (DateTime) date;
        int year = datetime.getYear();
        int month = datetime.getMonthOfYear();
        int day = datetime.getDayOfMonth();
        return String.format("%02d", day) + "/" + String.format("%02d", month) + "/" + year;
    }
}
The entity using the bridge

class BlogEntry {

    @Field(store=Store.YES, index=Index.YES)
    @FieldBridge(impl=JodaTimeSplitBridge.class)
    DateTime creationdate;
}

Let’s query this field

A naive but intuitive query looks like this.

Incorrect query
QueryBuilder qb = sm.buildQueryBuilderForClass(BlogEntry.class).get();
Query q = qb.keyword().onField("creationdate").matching(new DateTime()).createQuery();
CacheQuery cq = sm.getQuery(q, BlogEntry.class);

Unfortunately that query will always return zero results. Can you spot the problem?

It turns out that Hibernate Search does not know about these subfields creationdate.year, creationdate.month and creationdate.day. A FieldBridge is a bit of a black box for the Hibernate Search query DSL, so it assumes that you index the data in the field name provided by the name parameter (creationdate in this example).

We have plans to address that problem in a not-so-distant version of Hibernate Search. It will only require you to provide a bit of metadata when you write such an advanced custom field bridge. But that’s the future, so what to do now?

Use a single field

I am cheating here, but as much as you can, try and keep the one property = one field mapping. Life will be much simpler for you. In this specific JodaTime type example, this is extremely easy. Use the custom bridge but, instead of creating three fields (for year, month, day), keep it as a single field in the form of yyyymmdd.

Let’s again use our user's real-life solution.

A bridge using one field
public class JodaTimeSingleFieldBridge implements TwoWayFieldBridge {

    /**
     * Store the data in a single field in yyyymmdd format
     */
    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        DateTime datetime = (DateTime) value;
        luceneOptions.addFieldToDocument(
            name, datetime.toString(DateTimeFormat.forPattern("yyyyMMdd")), document
        );
    }

    @Override
    public Object get(String name, Document document) {
        IndexableField strdate = document.getField(name);
        return DateTime.parse(strdate.stringValue(), DateTimeFormat.forPattern("yyyyMMdd"));
    }

    @Override
    public String objectToString(Object date) {
        DateTime datetime = (DateTime) date;
        return datetime.toString(DateTimeFormat.forPattern("yyyyMMdd"));
    }
}

In this case, it would be even better to use a Lucene numeric format field: it is more compact and more efficient at range queries. Use luceneOptions.addNumericFieldToDocument( name, numericDate, document );.
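For reference, here is a minimal sketch of what the numeric variant of set() could look like; encoding the date as a single yyyyMMdd integer is an assumption, any order-preserving numeric encoding works:

public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
    DateTime datetime = (DateTime) value;
    // encode the date as a single sortable integer, e.g. 2015-07-13 -> 20150713
    int numericDate = datetime.getYear() * 10000
            + datetime.getMonthOfYear() * 100
            + datetime.getDayOfMonth();
    luceneOptions.addNumericFieldToDocument( name, numericDate, document );
}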

The query above will work as expected now.

But my type must have multiple fields!

OK, OK. I won’t avoid the question. The solution is to disable the Hibernate Search query DSL magic and target the fields directly.

Let’s see how to do it based on the first FieldBridge implementation.

Query targeting multiple fields
DateTime datetime = new DateTime();
int year = datetime.getYear();
int month = datetime.getMonthOfYear();
int day = datetime.getDayOfMonth();

QueryBuilder qb = sm.buildQueryBuilderForClass(BlogEntry.class).get();
Query q = qb.bool()
    .must( qb.keyword().onField("creationdate.year").ignoreFieldBridge().ignoreAnalyzer()
                // match the exact zero-padded strings stored by the bridge
                .matching(String.valueOf(year)).createQuery() )
    .must( qb.keyword().onField("creationdate.month").ignoreFieldBridge().ignoreAnalyzer()
                .matching(String.format("%02d", month)).createQuery() )
    .must( qb.keyword().onField("creationdate.day").ignoreFieldBridge().ignoreAnalyzer()
                .matching(String.format("%02d", day)).createQuery() )
    .createQuery();

CacheQuery cq = sm.getQuery(q, BlogEntry.class);

The key is to:

  • target each field directly,

  • disable the field bridge conversion for the query,

  • and it’s probably a good idea to disable the analyzer.

It’s a rather advanced topic and the query DSL will do the right thing most of the time. No need to panic just yet.

But in case you hit such complex type needs, it’s interesting to understand what is going on underneath.

IntelliJ IDEA, Mac OS X and Hibernate ORM

Posted by Emmanuel Bernard    |       |    Tagged as Hibernate ORM

Hibernate ORM, IntelliJ IDEA and Mac OS X have been a pretty tough combination to work with since our move to Gradle. It’s not exactly anyone’s fault but the combination of

  • gradle / IntelliJ integration

  • JDK 6 running IntelliJ IDEA

  • Hibernate ORM advanced use of Gradle and of custom plugins

made for a long and painful journey.

These days are over. Steve found the last blocking issue and you can now import Hibernate ORM natively in IntelliJ IDEA.

Importing Hibernate ORM in IntelliJ IDEA on Mac OS X

  • git clone the hibernate-orm repository

  • Open IntelliJ IDEA

  • File → Open

  • Select hibernate-orm

  • The defaults should be good

    • Make especially sure that you select Use default gradle wrapper

    • I set Gradle JVM to 1.8

  • Click OK and let it work and download the internet

You now have a properly imported Hibernate ORM project :)

This worked on Mac OS X 10.10.4 and IntelliJ IDEA CE 14.1.4.

Importing Hibernate ORM in NetBeans on Mac OS X

And this incidentally fixed an import bug for NetBeans.

  • Open NetBeans

  • Install the Gradle plugin

  • Open project

  • Select hibernate-orm

This worked on Mac OS X 10.10.4 and NetBeans 8.0.2.

Hibernate Search, JMS and transactions

Posted by Emmanuel Bernard    |       |    Tagged as Hibernate Search

Hibernate Search sends the indexing requests in the post transaction phase. Until now. The JMS backend can now send its indexing requests transactionally with the database changes. Why is that useful? Read on.

A bit of context

When you change indexed entities, Hibernate Search collects these changes during the database transaction. It then waits for the transaction to be successful before pushing them to the backend.

Hibernate Search has a few backends:

  • lucene: this one uses Lucene to index the entities

  • JMS: this one sends a JMS message with the list of index changes. This JMS queue is then read by a master which uses the Lucene backend.

  • and a few more that are not interesting here

Running the backend after the transaction (in the afterTransaction phase to be specific) is generally what you want. Just to name a few reasons:

  • you don’t want index changes to be executed if you end up rolling back the database transaction

  • you don’t necessarily want your database changes to fail because the indexing fails: you can always rebuild the index from your database.

  • and most backends don’t support transactions anyway

Hibernate Search lets you register an error callback so that you are notified of these indexing problems when they happen and can react the way you want (log, raise an exception, retry, etc.).
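If you have never written one, here is a minimal sketch of such a callback, assuming Hibernate Search 5's ErrorHandler contract and plain logging as the reaction; you register it through the hibernate.search.error_handler property:

import org.hibernate.search.backend.LuceneWork;
import org.hibernate.search.exception.ErrorContext;
import org.hibernate.search.exception.ErrorHandler;

public class LoggingErrorHandler implements ErrorHandler {

    @Override
    public void handle(ErrorContext context) {
        // inspect the indexing operations that failed and react (log, retry...)
        for ( LuceneWork work : context.getFailingOperations() ) {
            System.err.println( "Indexing failed for " + work.getEntityClass() );
        }
        context.getThrowable().printStackTrace();
    }

    @Override
    public void handleException(String errorMsg, Throwable exception) {
        System.err.println( errorMsg );
        exception.printStackTrace();
    }
}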

So why make the JMS backend join the transaction?

If you make the JMS backend join the transaction, then either the database changes happen and the JMS messages are received by the queue, or nothing happens (no database change and no JMS message).

The non-transactional approach is still our recommended approach. But there are a few reasons why you might want to go transactional.

No code to handle the message failure

It eliminates the need to write an error callback and handle this problematic case.

Simpler operational processes

It simplifies your operational processes. You can focus on monitoring your JMS queue (rate of messages coming in, rate of messages going out), which will give you an accurate status of the health of Hibernate Search’s work.

Transactional mass indexing

When making changes to lots of indexed entities, it is common to use the following pseudo-pattern to avoid an OutOfMemoryError:

for ( int i = 0 ; i < workLoadSize ; i++ ) {
    // do changes
    if ( i % 500 == 0 ) {
        // periodically flush the changes and free memory
        session.flush();
        session.clear();
    }
}

If you use the transactional JMS backend, then all the messages will be sent, or none of them.

Make sure your JMS implementation and your JTA transaction manager are smart and don’t keep the messages in memory, or you might face an OutOfMemoryError.

More consistent batching frameworks flow

If you use a batching framework like Spring Batch which keeps its "done" status in a database, you have a guarantee that changes, indexing requests and batch status are all consistent.

How to use the feature

This feature is now integrated in master and will be in a Hibernate Search release soon.

We kept the configuration as simple as possible. Simply add the following property:
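A sketch of the expected configuration; the exact property name is an assumption, double-check it against the release documentation:

# make the JMS backend enlist in the current JTA transaction
hibernate.search.default.worker.enlist_in_transaction = true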

If you try and use this option on a non transactional backend (i.e. not JMS), Hibernate Search will yell at you.

Make sure to use an XA JMS queue and that your database supports XA, as we are talking about coordinated transactional systems.

Many thanks to Yoann, one of our customers, who helped us refine the why and how of that feature.

Sanne is going to do a virtual JBoss User Group session on Tuesday, July 14th at 6PM BST / 5PM UTC / 1PM EDT / 10AM PDT. He is going to talk about Lucene in Java EE.

He will also describe some projects dear to our hearts. If you want to know what Hibernate Search and Infinispan bring to the Lucene table and how they use Lucene internally, that’s the event to attend!

Apache Lucene is the de-facto standard open source library for Java developers to implement full-text-search capabilities.

While it’s thriving in its field, it is rarely mentioned in the scope of Java EE development.

In this talk we will see for which features many developers love Lucene, make some concrete examples of common problems it elegantly solves, and see some best practices about using it in a Java EE stack.

Finally we’ll see how some popular OSS projects such as Hibernate ORM (JPA provider), WildFly (Java EE runtime) and Infinispan (in-memory datagrid, JCache implementor) actually provide great Lucene integration capabilities.

If you are interested, get some more info on Meetup and sign up.

Multitenancy and current session

Posted by Emmanuel Bernard    |       |    Tagged as Hibernate ORM

Today let’s discuss the interaction between multitenancy and the current session feature.

Multitenancy lets you isolate Session operations between different tenants. This is useful to create a single application isolating different customers from one another.

The current session feature returns the same session for a given context, typically a (JTA) transaction. This facilitates the one session per view/transaction/conversation pattern and avoids the one session per operation anti-pattern.

Session session = sessionFactory.getCurrentSession();
// do some other work
// later in the same context (e.g. JTA Transaction)
Session session2 = sessionFactory.getCurrentSession();

// semantically we have
assert session == session2;

The two features work well together: simply implement CurrentTenantIdentifierResolver. That will give Hibernate ORM the expected tenant id when the current session is created.
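Here is a minimal sketch of such a resolver, assuming the tenant id travels in a thread-local populated by, say, a servlet filter (the CurrentTenantHolder helper is hypothetical):

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class MyTenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        // read the tenant id of the current request/operation
        return CurrentTenantHolder.get();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        // ask Hibernate ORM to verify that an existing current session
        // matches the tenant id resolved above
        return true;
    }
}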

How current is current?

When discussing with Florian and the ToulouseJUG, we came across a small case where things might not work as you expect. Hibernate ORM considers that, for a given context (e.g. transaction), there can only be a single current Session.

So if in the same context (e.g. transaction):

  • your CurrentTenantIdentifierResolver implementation returns different values of tenant id,

  • you use getCurrentSession()

you will get a TenantIdentifierMismatchException.

However, it is sometimes useful to be able to reach several tenants from the same context. You have two options:

  • Manually create the Session

  • Implement a custom CurrentSessionContext

Manually create the Session

You can use the SessionFactory API to create a Session for the tenant id you are looking for.

Session session1 = sessionFactory.withOptions()
        .tenantIdentifier( tenant1 )
        .openSession();
Session session2 = sessionFactory.withOptions()
        .tenantIdentifier( tenant2 )
        .openSession();

But you have to make sure to close these sessions. If you are used to CDI or Spring handling sessions for you, or if you rely on the current session feature to propagate the session across your stack, this might be annoying.

Implement a custom CurrentSessionContext

The alternative is to implement your own version of CurrentSessionContext, avoid raising the TenantIdentifierMismatchException, and keep Session instances per both context and tenant id.

In practice, current sessions are stored in a ConcurrentHashMap keyed by context identifier; you just need to improve that part. Start with JTASessionContext and hack away! A conceptual sketch follows.
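This is a conceptual sketch only, not a drop-in implementation: the idea is to key sessions by both the context and the tenant id. Transaction lookup, session cleanup on transaction completion and error handling are all omitted.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.hibernate.context.spi.CurrentSessionContext;
import org.hibernate.engine.spi.SessionFactoryImplementor;

public class TenantAwareSessionContext implements CurrentSessionContext {

    private final SessionFactoryImplementor factory;
    // sessions keyed by context (e.g. JTA transaction) first, tenant id second
    private final ConcurrentMap<Object, ConcurrentMap<String, Session>> sessions =
            new ConcurrentHashMap<>();

    public TenantAwareSessionContext(SessionFactoryImplementor factory) {
        this.factory = factory;
    }

    @Override
    public Session currentSession() throws HibernateException {
        Object contextId = currentContextIdentifier();
        String tenantId = factory.getCurrentTenantIdentifierResolver()
                .resolveCurrentTenantIdentifier();
        return sessions
                .computeIfAbsent( contextId, c -> new ConcurrentHashMap<>() )
                .computeIfAbsent( tenantId, t -> factory.withOptions()
                        .tenantIdentifier( t )
                        .openSession() );
    }

    private Object currentContextIdentifier() {
        // e.g. the current JTA transaction; omitted in this sketch
        throw new UnsupportedOperationException( "left as an exercise" );
    }
}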

What now?

We are currently discussing whether the default CurrentSessionContext implementations should partition by tenant id or raise the exception. If you have your opinion, chime in!

In the mean time, use one of the options above.

Revamped blog

Posted by Emmanuel Bernard    |       |    Tagged as Discussions

Welcome to the newly revamped Hibernate and friends blog.

As you can see, we gave it a brand new look, and we took the opportunity to clean up the tags to make them more useful. But we had other reasons to migrate.

Blog early, blog often

We have been very happy with the Asciidoctor-based writing experience of both our reference documentation and of our websites. We are bringing this ease of writing to our blog, and the not-so-secret objective is for us to blog more. So if you don’t see more content here, you can start yelling at us :)

Write once, display anywhere

There is an added bonus to using Asciidoctor for all of our writing: we can move content easily between systems. What starts as a blog can easily become a how-to page on the site or a section in the reference documentation.


We had a few issues with the old infrastructure that were keeping us away from our IDEs a bit too often for comfort.

This blog is a statically generated site; we use Awestruct. This means that we can make it scale very easily and with very low maintenance. Nothing beats plain old HTML :)

We did migrate all the old entries and comments from the old blog: fear not, nothing is lost.

If you see any issue, let us know in the comments or by tweet. Oh, and subscribe to our new feed URL.

Célébrons l'open source et le partage (English version below).

Place pour Devoxx France à gagner

Devoxx France vous offre une place: je ne suis que le messager :) Elle sera gagnée par l'un d'entre vous. La règle est simple.

Contribuer à un projet open source (code, doc, etc) entre maintenant et dimanche 29 mars 2015 et tweeter le lien vers la pull request ou le patch à @Hibernate.

  • Quel projet ? N'importe pourvu que la licence soit open source. Donc pas limité à Hibernate.
  • Comment sera choisi le vainqueur ? La contribution que je préfère sera choisie. Super bonus si c'est votre première contribution à ce projet : il faut que ça brasse :)
  • Mais si on contribue à un projet Red Hat, on a plus de chance ? Non, tous les projets open source sont (libres et) égaux.
  • Je suis employé Red Hat, je peux gagner? Non. Contacte-moi directement.
  • Et ? C'est tout.

Devoxx France se déroule du 8 au 10 avril à Paris (Palais des Congrès). J'y parlerai entre autres des sujets Object Mappers dans le NoSQL et j'animerai une BoF Hibernate.

Allez, au boulot !

Free pass for Devoxx France

Devoxx France is giving away a free pass through me. I am just the messenger :) One of you will win it. The rule is simple.

Contribute to an open source project (code, doc, etc) between now and Sunday evening 29th of March 2015 and tweet the link to the pull request or the patch to @Hibernate.

  • Which project? Any project released under an open source license. Not limited to Hibernate.
  • How will you choose the winner? The contribution I prefer will be chosen. Extra bonus if that's your first contribution to that project: let's get things moving!
  • But if I contribute to a Red Hat project, do I have a better chance? No, all open source projects are (free and) equal.
  • I am a Red Hat employee, can I win? No. Contact me directly instead.

Devoxx France happens in Paris (Palais des Congrès) from the 8th to the 10th of April. I will be speaking about a few topics, including Object Mappers in NoSQL, and will host a Hibernate BoF.

Now go contribute!

PS légal: cette place est offerte par Devoxx France et Emmanuel Bernard à titre personnel, pas par Red Hat. Bref, je fais ce que je veux, avec mes cheveux. Legal PS: this pass is given away by Devoxx France and Emmanuel Bernard as an individual and free human being, not by Red Hat.

First Hibernate OGM release (aka 4.1 Final)

Posted by Emmanuel Bernard    |       |    Tagged as Hibernate OGM Releases

Today is a big day. The first release of Hibernate OGM with final status. Ever! Don't be fooled by the 4.1 number. Hibernate OGM is an object mapper for various NoSQL stores and offers the familiar JPA APIs. This final version offers mapping for MongoDB, Neo4J, Infinispan and Ehcache.

It has been a long journey to reach this release, much longer than we thought. And there is a long journey ahead of us to implement our full (and exciting!) vision. But today is the time to celebrate: download this puppy and try it out.

What is Hibernate OGM?

Hibernate OGM is an object mapper. It persists object graphs into your NoSQL datastore.

We took great care to map the object structures the most natural way possible; we considered all the best practices for each of the NoSQL stores we support. Storing an association in a document store is vastly different from storing the same association in a graph database.

// Unidirectional one-to-many association

// Basket collection
{
  "_id" : "davide_basket",
  "owner" : "Davide",
  "products" : [ "Beer", "Pretzel" ]
}

// Products collection
{
  "_id" : "Pretzel",
  "description" : "Glutino Pretzel Sticks"
}
{
  "_id" : "Beer",
  "description" : "Tactical nuclear penguin"
}

Hibernate OGM is 90% Hibernate ORM. We changed the parts that are specific to SQL and JDBC, but most of the engine remains untouched. Same power, same flexibility.

What is the API like?

Very simple. It's JPA. Or Hibernate ORM. Map your entities with JPA annotations (or via XML), then use JPA or the Hibernate native APIs to manipulate your objects.

@PersistenceContext EntityManager em;

// the transaction boundary is really here to express the flush time
public void createSomeUser() {
    Employer redHat = (Employer) em
        .createQuery("from Employer e where e.name = :name")
        .setParameter("name", "Red Hat")
        .getSingleResult();
    User emmanuel = new User("Emmanuel", "Bernard");
    // associate the new user with the employer we just loaded, then persist
    emmanuel.setEmployer(redHat);
    em.persist(emmanuel);
}

Our goal is to have a zero barrier of entry to NoSQL object mappers for people familiar with JPA or Hibernate ORM.

Hibernate OGM also has a flexible option system that lets you customize some of the NoSQL store specifics or mapping options: for example, what the MongoDB write concern is for a given entity (see code example below), or whether associations should be stored in the owning entity document.

@WriteConcern(JOURNALED) // MongoDB write concern
public class User {

And queries?

We cannot talk about JPA without mentioning JP-QL. Offering JP-QL support is challenging at many levels. To mention only two: joins usually don't exist in NoSQL, and each store has a very different set of query capabilities.

Hibernate OGM can convert JP-QL queries to the underlying native query language of the datastore. This functionality is still limited, however. Besides, some queries will never map to JP-QL. So we also let you write native queries specific to your NoSQL store and map the results to managed entities.

// native query using CypherQL
String query = "MATCH ( n:Poem { name:'Portia', author:'Oscar Wilde' } ) RETURN n";
Poem poem = (Poem) em.createNativeQuery( query, Poem.class ).getSingleResult();

Where can I use Hibernate OGM?

It works anywhere Hibernate ORM or any JPA provider works. Java SE, Java EE, all should be good. We do require JPA 2.1 though. If you use WildFly (8.2), we have a dedicated module to make things even easier.

For which NoSQL store?

MongoDB, Neo4J, Infinispan and Ehcache are the ones we consider stable. We are working on CouchDB and Cassandra. But really, any motivated person can try and map other NoSQL stores: that's how a few got started. We have an API that has proven flexible enough so far.

Can Hibernate OGM do X, Y, Z?

Probably. Maybe not.

The best is to talk to us in our forums, check our documentation (we spent a lot of time on it), and simply give it a try.

Generally, the mapping support is complete. Our query support is still a bit limited compared to where we want it to be. It will improve quickly now that the foundations are here.

We want to know what you need out of Hibernate OGM, what feature you miss, which one should be changed. Come and talk to us in our forum or anywhere you can find us.

How to get started?

Most of what you need is available in our web site. There is a getting started guide, and the more complete reference documentation. Get the full Hibernate OGM distribution. And last but not least, for any help, reach us via our forum.

It would be impossible to mention all the persons that contributed to Hibernate OGM and how: conversations, support, code, documentation, bootstrapping new datastore providers... Many thanks to all of you for making this a reality.

We are not done yet, far from it. We have plenty of ideas on where we want to bring Hibernate OGM. That's a discussion for another day.

Hibernate OGM 4.1.0.CR1 - stable mapping

Posted by Emmanuel Bernard    |       |    Tagged as Hibernate OGM Releases

After a huge push, we are now one release away from our Final version. So without further ado, I present you Hibernate OGM 4.1.0.CR1. To sum it up:

  • stable mapping for each of the supported datastores (Infinispan, Ehcache, MongoDB, Neo4J)
  • new and better one cache per entity structure for key/value stores
  • improvements in Neo4J and MongoDB around embedded objects and composite ids
  • better documentation

Our goal for Hibernate OGM 4.1 is to offer a good Object Mapper for each of the primary datastores we target. Go test it before the final and give us feedback!

Mapping stability and documentation

This CR release signals the stable version of how we persist each data structure on the various datastores. We strive to offer a mapping that is natural to each individual datastore. We made some final improvements and are confident we can support this version.

For each datastore, we documented how each JPA mapping is persisted (entities, star-to-one and star-to-many associations, embedded ids, etc.). It makes for tree-killer documentation but shows the truth for each mapping.

We took the opportunity to improve the documentation even further and plan to finish that work for the final version.

Additional key/value cache structure

Our tests have shown that storing each entity type and association in a dedicated cache is actually more efficient than sharing the same cache for all entities. Since we also think it is a more natural mapping, we now offer this option and make it the default.

User and Address entities would lead to the following caches:

  • User: contains the users
  • Address: contains the addresses
  • associations_User_Address: contains the navigation from a user to its list of addresses

An interesting side effect is that it makes the keys smaller in size and faster to compare. Both Infinispan and Ehcache are benefiting from this.

Improvements around embedded objects, embedded ids and properties

In Neo4J, (non id) embedded objects are now represented as individual nodes. It is more in line with the connected nature of graph databases.

In MongoDB, embedded id foreign keys have been improved and are now represented as nested documents, like embedded ids already were. We did not hear complaints about this one, so we think you guys don't use composite ids with MongoDB. That's good, keep doing that :)

Null properties are no longer stored in any of the datastores. While it might make some queries involving null values a bit harder, it is the more natural mapping for the datastores.

Go, go, go

You missed the point like Mars Climate Orbiter

This post is about validation, Bean Validation specifically. And what it solves. It is a reaction to a post by Julien Tournay which basically claims - if I sum it up - that:

  • Scala is faster than Java (he is coming from far far away on that one - du diable vauvert as we say in French)
  • Bean Validation is kind of crap
  • The new Play Unified Validation API is awesome

I usually am old and wise enough not to answer this kind of post. But I want to feel the blood of my youth again, damn it! After all, I am from an era when The Server Side was all the rage, trolls had real teeth, and things were black or white. That was good fun.

Why the You missed the point like Mars Climate Orbiter title? With a sensationalist title like Scala is faster than Java, I had to step up my game. Plus Mars Climate Orbiter missed Mars due to a conversion error and we will talk about conversion today.

I won't refute things point by point but rather highlight a few fundamental misunderstandings and explain why things are like they are in Bean Validation and why it makes sense.

Java was limiting

IMHO, @emmanuelbernard a créer son API avec les outils a sa disposition - @skaalf
IMHO, @emmanuelbernard has created his API with the tools he had at his disposal - @skaalf

We certainly pushed the boundaries of Java when both CDI and Bean Validation were designed, and I wish I had had more freedom here and there. But no, Bean Validation is like it is because that's the most natural way validation is expressed in the Java ecosystem for what we wanted to achieve. It is no good to offer something that does not fit into the ecosystem.

For example, in Java, people use mutable objects. Mutable, believe it or not, is not a swear word in this language. My understanding is that the Play Unified Validation API makes use of the Scala community's inclination for immutability.

Validation, conversion and marshalling

In the use case Julien takes as an example, a JSON string is unmarshalled, conversion is applied (for dates - darn JSON) and then values are validated. He considers it weird that Bean Validation only does the latter and finds the flow wrong.

We kept this separation on purpose. The expert group vastly agreed that Bean Validation was not in the business of conversion and that these steps ((un)marshalling, conversion, validation) should be separated. Here is a key reason:

  • marshalling and conversion happen only at the Java boundaries (web frameworks, service endpoints, datastore endpoints, etc.)
  • validation happens both at these boundaries but also at key lifecycle events within the Java boundaries

Conceptually, separating validation from the rest makes a lot of sense to enable technology composition and avoid repetitions - more on that latter point later. What is important is that for a given boundary, they are properly integrated. It's not unlike inlining in compilation.

Bean Validation is in the business of not being called

One key point in the design of Bean Validation is that it has been built to be integrated within a cohesive stack more than to be used individually.

JPA transparently calls Bean Validation on your objects at the right time. JSF calls Bean Validation transparently before populating the beans. JAX-RS calls Bean Validation on the inbound and outbound resources transparently. Note that these three technologies do address the marshalling and conversion problem already. That's their job.

So the key is that an application rarely has to call the Bean Validation API. I consider it a failure or an unfinished work when it has to.

If you need to explicitly validate some piece of data in a fire and forget way, Bean Validation might not be the best tool. And that's OK.

Back to the JSON case. When the JSON Binding spec is out, we will have the same integration that is currently present in Play's unified parsing, conversion and validation API (it could also be done today in Jackson; I'm not sure what extension points that library offers). While the three layers marshalling / conversion / validation will be conceptually separated - and I'm sure will report different types of errors for better tracking - the implementation will use some specific APIs of Bean Validation to inline validation with the unmarshalling and conversion. That API exists BTW: it is Validator.validateValue, which lets you validate a value without creating the associated POJO. It is used by web frameworks already. It is pretty much what the Play Unified Validation API does.
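A minimal sketch of that API in action; the User class and its constrained email property are illustrative, not from the original post:

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

public class ValidateValueExample {
    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

        // validate a raw value against the constraints declared on User.email,
        // without ever instantiating a User
        Set<ConstraintViolation<User>> violations =
                validator.validateValue( User.class, "email", "not-an-email" );

        violations.forEach( v -> System.out.println( v.getPropertyPath() + " " + v.getMessage() ) );
    }
}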

As for JAX-B and XML binding, well, let's say that there is an X in it and that it's not safe for children. More seriously, we have integration plans with mapping between the XSD and the Bean Validation constraints, but we haven't gotten around to it yet.

Bean Validation is in the business of declaring constraints once

From what I see of the Play Unified Validation API, you put the declaration / implementation of the validation next to the marshalling logic. In other words, you cannot share the constraint declaration between different marshalling operations or even between object transformations.

Bean Validation has been designed to let the user declare the constraints once and have them validated across the whole stack. From the web form and service entry points down to the database input/output and schema definition. Some even use our metadata facility to propagate the constraints up to the client side (JavaScript).
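To make that concrete, here is a minimal sketch of declaring constraints once on a bean; the class and constraints are illustrative, and @Email came from Hibernate Validator before becoming standard in Bean Validation 2.0:

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import org.hibernate.validator.constraints.Email;

public class User {

    @NotNull
    @Size(min = 1, max = 80)
    private String name;

    // the same constraint is enforced on the web form, by JPA before flush,
    // and can be propagated to the client side via the metadata API
    @Email
    private String email;

    // getters and setters omitted
}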

And that's future proof, when we add JSON-B support, boom the constraints already declared are used automatically and for free.


Bean Validation cannot be understood if you take it in isolation. It is useful and works in isolation for sure but it shines when integrated in a platform from top to bottom. We did and keep doing it in Java EE but we also make sure to keep our APIs and SPIs open for other platforms to do the same. Spring famously has been doing some of it.

Let's mention a few key features of Bean Validation that are not addressed at all in Play's approach:

  • i18n
  • constraint inheritance
  • method validation
  • custom and context sensitive programmatic error report
  • partially valid object graph - yes that's important in a mutable world
  • etc

Even a few of these would make Play's stuff much much different.

Now with this context properly set, if you read back Julien's complaints about Bean Validation (especially about its design), they pretty much fall one by one. I have to admit their composition approach is much nicer than Bean Validation's, which relies on annotated annotations. That's the main thing I miss.

Design is choice. Without knowledge of the designer's choices, you can too easily misunderstand her design and motives. Julien has been comparing apples with oranges, which is ironic for someone working on one of the most type-safe languages out there :o)

But it's good that more people take data validation holistically and seriously and step up the game. Less work for app developers, safer apps, happier customers. Judging by the location, I could even share some thoughts over a few rounds of whisky with Julien :)

