In Relation To Gunnar Morling

Emulating property literals with Java 8 method references

Posted by Gunnar Morling    |       |    Tagged as Discussions

One of the things library developers often miss in Java is property literals. In this post I’m going to show how to make creative use of Java 8 method references to emulate property literals, with the help of some byte code generation.

Akin to class literals (e.g. Customer.class), property literals would allow you to refer to the properties of a bean class in a type-safe manner. This would be useful for designing APIs that run actions on specific bean properties or apply some means of configuration to them. Consider, for example, the API for programmatic configuration of index mappings in Hibernate Search:

new SearchMapping().entity( Address.class )
    .indexed()
    .property( "city", ElementType.METHOD )
        .field();

Or the validateValue() method from the Bean Validation API for validating a value against the constraints of a single property:

Set<ConstraintViolation<Address>> violations = validator.validateValue( Address.class, "city", "Purbeck" );

In both cases a String is used to represent the city property of the Address class.

That’s error-prone on several levels:

  • The Address class might not have a property city at all. Or one could forget to update that String reference when renaming the property.

  • In the case of validateValue(), there is no way to make sure that the passed value actually satisfies the type of the city property.

Users of the APIs will only find out about these issues when actually running their application. Wouldn’t it be nice if instead the compiler and the language’s type system prevented such wrong usages from the beginning? If Java had property literals, that’d be exactly what you’d get (invented syntax, this does not compile!):

mapping.entity( Address.class )
    .indexed()
    .property( Address::city, ElementType.METHOD )
        .field();

And:

validator.validateValue( Address.class, Address::city, "Purbeck" );

The issues mentioned above would be avoided: a typo in a property literal would cause a compilation error which you’d notice right in your IDE. It’d allow the configuration API in Hibernate Search to be designed so that it accepts only properties of Address when configuring the Address entity. And in the case of Bean Validation’s validateValue(), literals could help to ensure that only values assignable to the property type in question can be passed.

Java 8 method references

While Java 8 has no real property literals (and their introduction isn’t planned for Java 9 either), it provides an interesting way to emulate them to some degree: method references. Introduced to improve the developer experience when working with lambda expressions, method references can also be leveraged as poor man’s property literals.

The idea is to consider a reference to a getter method as a property literal:

validator.validateValue( Address.class, Address::getCity, "Purbeck" );

Obviously, this will only work if there actually is a getter. But if your classes follow JavaBeans conventions - which often is the case - that’s fine.

Now what would the definition of the validateValue() method look like? The key is using the new Function type:

public <T, P> Set<ConstraintViolation<T>> validateValue(Class<T> type, Function<? super T, P> property, P value);

By using two type parameters, we make sure that the bean type, the property and the value passed for the property all correctly match. So API-wise, we’ve got what we need: it’s safe to use, and the IDE will even auto-complete method names after you start typing Address::. But how do we derive the property name from the Function in the implementation of validateValue()?
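Before answering that, here is a quick self-contained illustration of the compile-time checking, using a simplified stand-in for the validateValue() signature above (all names here are made up for the demo):

```java
import java.util.function.Function;

public class TypeSafeDemo {

    // A made-up bean, standing in for Address from the post
    public static class Address {
        public String getCity() { return "Purbeck"; }
    }

    // Simplified stand-in for the validateValue() signature discussed above;
    // the actual validation logic is elided
    public static <T, P> boolean validateValue(Class<T> type, Function<? super T, P> property, P value) {
        return value != null;
    }

    public static void main(String[] args) {
        // Compiles: Address::getCity yields a String, and "Purbeck" is a String
        validateValue( Address.class, Address::getCity, "Purbeck" );

        // Would not compile: String::length cannot be applied to an Address
        // validateValue( Address.class, String::length, 42 );
    }
}
```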

That’s where the fun begins, as the Function interface just defines a single method, apply(), which runs the function against a given instance of T. That doesn’t seem exactly helpful, does it?

ByteBuddy to the rescue

As it turns out, applying the function actually does the trick! By creating a proxy instance of the T type, we have a target for invoking the method and can obtain its name in the proxy’s method invocation handler.

Java comes with support for dynamic proxies out of the box, but that’s limited to proxying interfaces only. As our API should work with any kind of bean, including actual classes, I’m going to use a neat tool called ByteBuddy instead. ByteBuddy provides an easy-to-use DSL for creating classes on the fly, which is exactly what we need.
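For the interface-only case, the JDK’s built-in proxies would already get us there. A minimal, self-contained sketch (the Named interface and all names are illustrative, not from the API discussed in this post):

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;

public class JdkProxyCapture {

    // A hypothetical bean interface, just for this demo
    public interface Named {
        String getName();
    }

    // Captures the name of the method invoked via the given function;
    // works for interfaces only, which is why actual classes need ByteBuddy
    public static <T> String capture(Class<T> iface, Function<? super T, ?> property) {
        AtomicReference<String> methodName = new AtomicReference<>();

        @SuppressWarnings("unchecked")
        T proxy = (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                (p, method, args) -> {
                    methodName.set( method.getName() );
                    return null; // the actual return value is irrelevant for capturing
                }
        );

        property.apply( proxy );
        return methodName.get();
    }

    public static void main(String[] args) {
        System.out.println( capture( Named.class, Named::getName ) ); // prints "getName"
    }
}
```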

Let’s begin by defining an interface which just allows storing and obtaining the name of a property obtained from a method reference:

public interface PropertyNameCapturer {

    String getPropertyName();

    void setPropertyName(String propertyName);
}

Now let’s use ByteBuddy to programmatically create a proxy class which is assignable to the type of interest (e.g. Address) as well as PropertyNameCapturer:

public <T> T /* & PropertyNameCapturer */ getPropertyNameCapturer(Class<T> type) {
    DynamicType.Builder<?> builder = new ByteBuddy()                                       (1)
            .subclass( type.isInterface() ? Object.class : type );

    if ( type.isInterface() ) {                                                            (2)
        builder = builder.implement( type );
    }

    Class<?> proxyType = builder
        .implement( PropertyNameCapturer.class )                                           (3)
        .defineField( "propertyName", String.class, Visibility.PRIVATE )
        .method( ElementMatchers.any() )                                                   (4)
            .intercept( MethodDelegation.to( PropertyNameCapturingInterceptor.class ) )
        .method( named( "setPropertyName" ).or( named( "getPropertyName" ) ) )             (5)
            .intercept( FieldAccessor.ofBeanProperty() )
        .make()
        .load(                                                                             (6)
             PropertyNameCapturer.class.getClassLoader(),
             ClassLoadingStrategy.Default.WRAPPER
        )
        .getLoaded();

    try {
        @SuppressWarnings("unchecked")
        Class<T> typed = (Class<T>) proxyType;
        return typed.newInstance();                                                        (7)
    }
    catch (InstantiationException | IllegalAccessException e) {
        throw new HibernateException(
            "Couldn't instantiate proxy for method name retrieval", e
        );
    }
}

The code may appear a bit dense, so let me run you through it. First we obtain a new ByteBuddy instance (1) which is the entry point into the DSL. It is used to create a new dynamic type that either extends the given type (if it is a class) or extends Object and implements the given type if it is an interface (2).

Next, we let the type implement the PropertyNameCapturer interface and add a field for storing the name of the specified property (3). Then we say that invocations to all methods should be intercepted by PropertyNameCapturingInterceptor (we’ll come to that in a moment) (4). Only setPropertyName() and getPropertyName() (as declared in the PropertyNameCapturer interface) should be routed to write and read access of the field created before (5). Finally, the class is built, loaded (6) and instantiated (7).

That’s all that’s needed to create the proxy type; thanks to ByteBuddy, this is done in a few lines of code. Now let’s take a look at the interceptor we configured before:

public class PropertyNameCapturingInterceptor {

    @RuntimeType
    public static Object intercept(@This PropertyNameCapturer capturer, @Origin Method method) {         (1)
        capturer.setPropertyName( getPropertyName( method ) );                                           (2)

        if ( method.getReturnType() == byte.class ) {                                                    (3)
            return (byte) 0;
        }
        else if ( ... ) {
            // ... handle all the other primitive types
        }
        else {
            return null;
        }
    }

    private static String getPropertyName(Method method) {                                               (4)
        final boolean hasGetterSignature = method.getParameterTypes().length == 0
                && method.getReturnType() != void.class;

        String name = method.getName();
        String propName = null;

        if ( hasGetterSignature ) {
            if ( name.startsWith( "get" ) ) {
                propName = name.substring( 3, 4 ).toLowerCase() + name.substring( 4 );
            }
            else if ( name.startsWith( "is" ) ) {
                propName = name.substring( 2, 3 ).toLowerCase() + name.substring( 3 );
            }
        }
        else {
            throw new HibernateException( "Only property getter methods are expected to be passed" );    (5)
        }

        return propName;
    }
}

intercept() accepts the Method being invoked as well as the target of the invocation (1). The annotations @Origin and @This are used to designate the respective parameters so ByteBuddy can generate the correct invocations of intercept() into the dynamic proxy type.

Note that there is no strong dependency from this interceptor to any types of ByteBuddy, meaning that ByteBuddy is only needed when creating that dynamic proxy type but not later on, when actually using it.

Via getPropertyName() (4) we then obtain the name of the property represented by the passed method object and store it in the PropertyNameCapturer (2). If the given method doesn’t represent a getter method, an exception is raised (5). The return value of the invoked getter is irrelevant, so we just make sure to return a sensible "null value" matching the property type (3).

With that, we got everything in place to get hold of the property represented by a method reference passed to validateValue():

public <T, P> Set<ConstraintViolation<T>> validateValue(Class<T> type, Function<? super T, P> property, P value) {
    T capturer = getPropertyNameCapturer( type );
    property.apply( capturer );
    String propertyName = ( (PropertyNameCapturer) capturer ).getPropertyName();

    // perform validation of the property value...
}

When the function is applied to the property name capturing proxy, the interceptor kicks in, obtains the property name from the Method object and stores it in the capturer instance, from where it can finally be retrieved.

And there you have it, some byte code magic lets us make creative use of Java 8 method references for emulating property literals.

That said, having real property literals as part of the language (dreaming for a moment, maybe Java 10?) would still be very beneficial. It’d allow dealing with private properties and, hopefully, one could refer to property literals from within annotations. Real property literals would also be more concise (no "get" prefix) and it’d generally feel a tad less hackish ;)

During my talk at VoxxedVienna on using Hibernate Search with Elasticsearch earlier this week, there was an interesting question which I couldn’t answer right away:

"When running a full-text query with a projection of fields, is it possible to return the result as a list of POJOs rather than as a list of arrays of Object?"

The answer is: Yes, it is possible, result transformers are the right tool for this.

Let’s assume you want to convert the result of a projection query against the VideoGame entity shown in the talk into the following DTO (data transfer object):

public static class VideoGameDto {

    private String title;
    private String publisherName;
    private Date release;

    public VideoGameDto(String title, String publisherName, Date release) {
        this.title = title;
        this.publisherName = publisherName;
        this.release = release;
    }

    // getters...
}

This is how you could do it via a result transformer:

FullTextEntityManager ftem = ...;

QueryBuilder qb = ftem.getSearchFactory()
    .buildQueryBuilder()
    .forEntity( VideoGame.class )
    .get();

FullTextQuery query = ftem.createFullTextQuery(
    qb.keyword()
        .onField( "tags" )
        .matching( "round-based" )
        .createQuery(),
    VideoGame.class
    )
    .setProjection( "title", "publisher.name", "release" )
    .setResultTransformer( new BasicTransformerAdapter() {
        @Override
        public VideoGameDto transformTuple(Object[] tuple, String[] aliases) {
            return new VideoGameDto( (String) tuple[0], (String) tuple[1], (Date) tuple[2] );
        }
    } );

List<VideoGameDto> results = query.getResultList();

I’ve pushed this example to the demo repo on GitHub.

There are also some ready-made implementations of the ResultTransformer interface which you might find helpful. So be sure to check out its type hierarchy. For this example I found it easiest to extend BasicTransformerAdapter and implement the transformTuple() method by hand.
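For instance, if you give the projection fields names that line up one-to-one with the DTO’s property names, Hibernate’s ready-made AliasToBeanResultTransformer can do the mapping for you. A sketch only: it populates the DTO via matching setters or fields and requires a no-arg constructor, so a nested path such as "publisher.name" would not match out of the box:

```java
// Sketch: works when the projected field names match the DTO's property names
query.setProjection( "title", "release" )
     .setResultTransformer( new AliasToBeanResultTransformer( VideoGameDto.class ) );
```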

To the person asking the question: Thanks, and I hope this answer is helpful to you!

Hibernate Validator 5.2.4.Final is out

Posted by Gunnar Morling    |       |    Tagged as Hibernate Validator Releases

It’s my pleasure to announce the release of Hibernate Validator 5.2.4.Final!

This is a rather small bugfix release which addresses two nasty issues around one of the more advanced features of Bean Validation, redefined default group sequences.

Please refer to the issues themselves (HV-1055, HV-1057) for all the details.

Where do I get it?

Use the GAV coordinates org.hibernate:{hibernate-validator|hibernate-validator-cdi|hibernate-validator-annotation-processor}:5.2.4.Final to fetch the release with Maven, Gradle etc. Alternatively, you can find distribution bundles containing all the bits on SourceForge (TAR.GZ, ZIP).

Found a bug? Have a feature request? Then let us know through the following channels:

Hibernate Validator 5.2.3.Final is out

It’s my pleasure to announce the release of Hibernate Validator 5.2.3.Final!

Wait, didn’t we already do another Hibernate Validator release earlier this month? That’s right, indeed we pushed out the first Alpha of the 5.3 family a couple of days ago. And normally, that’d mean that there would be no further releases of earlier version families.

But in this case we decided to make an exception to the rule, as we noticed that Hibernate Validator couldn’t be used with Java 9 (check out issue HV-1048 if you are interested in the details). As we don’t want to keep integrators and users of Hibernate Validator from testing their own software on Java 9, we decided to fix that issue on the current stable release line (in fact we strongly encourage you to test your applications on Java 9 to learn as early as possible about any potential changes you might need to make).

While we were at it, we backported some further bugfixes from 5.3 to 5.2, amongst them one ensuring compatibility with Google App Engine. As always, you can find the complete list of fixes in the changelog.

Where do I get it?

Use the GAV coordinates org.hibernate:{hibernate-validator|hibernate-validator-cdi|hibernate-validator-annotation-processor}:5.2.3.Final to fetch the release with Maven, Gradle etc. Alternatively, you can find distribution bundles containing all the bits on SourceForge (TAR.GZ, ZIP).

Found a bug? Have a feature request? Then let us know through the following channels:

Hibernate Validator 5.3.0.Alpha1 is out

Posted by Gunnar Morling    |       |    Tagged as Hibernate Validator Releases

It’s my pleasure to announce the first release of Hibernate Validator 5.3!

The overarching idea for the 5.3 timeline is to prototype several features which may potentially be standardized in the Bean Validation 2.0 specification. For instance we’ll work on a solution for the long-standing request for sorting the constraints on single properties.

If you’d like to see any specific features addressed in that prototyping work (and eventually included in BV 2.0), then please get in touch and let us know which are the most important things you are missing from the spec. We’ve compiled a first list of issues we are considering for inclusion in BV 2.0. For sure we cannot address all of them, so it’ll greatly help if you tell us what would be most helpful to you.

Dynamic payloads for constraints

To get things rolling, the Alpha 1 allows you to enrich custom constraint violations with additional context data. Code examining constraint violations can access and interpret this data in a safer way than by parsing string-based constraint violation messages. Think of it as a dynamic variant of the existing Bean Validation payload feature.

As an example, let’s assume we have a constraint @Matches for making sure a long property matches a given value with some tolerance:

public static class Package {

    @Matches(value=1000, tolerance=100)
    public long weight;
}

If the annotated value is invalid, the resulting constraint violation should have a specific severity, depending on whether the value lies within the given tolerance or not. That severity value could then for instance be used for formatting the error specifically in a UI.

The definition of the @Matches constraint is nothing new; it’s just a regular custom constraint annotation:

@Retention(RUNTIME)
@Constraint(validatedBy = { MatchesValidator.class })
@Documented
public @interface Matches {

    public enum Severity { WARN, ERROR; }

    String message() default "Must match {value} with a tolerance of {tolerance}";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};

    long value();
    long tolerance();
}

The constraint validator is where it gets interesting:

public class MatchesValidator implements ConstraintValidator<Matches, Long> {

    private long tolerance;
    private long value;

    @Override
    public void initialize(Matches constraintAnnotation) {
        this.value = constraintAnnotation.value();
        this.tolerance = constraintAnnotation.tolerance();
    }

    @Override
    public boolean isValid(Long value, ConstraintValidatorContext context) {
        if ( value == null ) {
            return true;
        }

        if ( this.value == value.longValue() ) {
            return true;
        }

        HibernateConstraintValidatorContext hibernateContext = context.unwrap(
                HibernateConstraintValidatorContext.class
        );
        hibernateContext.withDynamicPayload(
                Math.abs( this.value - value ) < tolerance ? Severity.WARN : Severity.ERROR
        );

        return false;
    }
}

In isValid() the severity object is set via HibernateConstraintValidatorContext#withDynamicPayload(). Note that the payload object must be serializable in case constraint violations are sent to remote clients, e.g. via RMI.

Validation clients may then access the dynamic payload like so:

HibernateConstraintViolation<?> violation = violations.iterator()
    .next()
    .unwrap( HibernateConstraintViolation.class );

if ( violation.getDynamicPayload( Severity.class ) == Severity.ERROR ) {
    // ...
}

What else is there?

Other features of the Alpha 1 release are improved OSGi support (many thanks to Benson Margulies for this!), optional relaxation of parameter constraints (kudos to Chris Beckey!) and several bug fixes and improvements, amongst them support for cross-parameter constraints in the annotation processor (cheers to Nicola Ferraro!).

You can find the complete list of all addressed issues in the change log. To get the release with Maven, Gradle etc. use the GAV coordinates org.hibernate:{hibernate-validator|hibernate-validator-cdi|hibernate-validator-annotation-processor}:5.3.0.Alpha1. Alternatively, a distribution bundle containing all the bits is provided on SourceForge (TAR.GZ, ZIP).

To get in touch, use the following channels:

Hibernate Validator 5.2.2 released

Posted by Gunnar Morling    |       |    Tagged as Hibernate Validator Releases

I am happy to announce the availability of Hibernate Validator 5.2.2.Final.

This release fixes several bugs, including a nasty regression around private property declarations in inheritance hierarchies and a tricky issue related to classloading in environments such as OSGi.

We also closed a gap in the API for constraint declaration, which now allows ignoring annotation-based constraints for specific methods, parameters etc.:

HibernateValidatorConfiguration config = Validation.byProvider( HibernateValidator.class ).configure();

ConstraintMapping mapping = config.createConstraintMapping();
mapping.type( OrderService.class )
    .method( "placeOrder", Item.class, int.class )
        .ignoreAnnotations( true )
        .parameter( 0 )
            .ignoreAnnotations( false );
config.addMapping( mapping );

Validator validator = config.buildValidatorFactory().getValidator();

Please refer to the change log for the complete list of all issues. You can get the release with Maven, Gradle etc. using the GAV coordinates org.hibernate:hibernate-validator:5.2.2.Final. Alternatively, a distribution bundle is provided on SourceForge (TAR.GZ, ZIP).

Get in touch through the following channels:

Order, ooorder! Sorting results in Hibernate Search 5.5

Posted by Gunnar Morling    |       |    Tagged as Hibernate Search

"Order, ooorder!" - Sometimes not only the honourable members of the House of Commons need to be called to order, but also the results of Hibernate Search queries need to be ordered in a specific way.

To do so, just pass a Sort object to your full-text query before executing it, specifying the field(s) to sort on:

FullTextSession session = ...;
QueryParser queryParser = ...;

FullTextQuery query = session.createFullTextQuery( queryParser.parse( "summary:lucene" ), Book.class );
Sort sort = new Sort( new SortField( "title", SortField.Type.STRING, false ) );
query.setSort( sort );
List<Book> result = query.list();

As of Lucene 5 (which is what Hibernate Search 5.5 is based on), there is a big performance gain if the fields to sort on are known up front. In this case these fields can be stored as so-called "doc value fields", which is much faster and less memory-consuming than the traditional approach of index un-inverting.

For that purpose, Hibernate Search provides the new annotation @SortableField (and its multi-valued companion, @SortableFields) for tagging those fields that should be available for sorting. The following example shows how to do it:

@Entity
@Indexed(index = "Book")
public class Book {

    @Id
    private Integer id;

    @Field
    @SortableField
    @DateBridge(resolution = Resolution.DAY)
    private Date publicationDate;

    @Fields({
        @Field,
        @Field(name = "sortTitle", analyze = Analyze.NO, store = Store.NO, index = Index.NO)
    })
    @SortableField(forField = "sortTitle")
    private String title;

    @Field
    private String summary;

    // constructor, getters, setters ...
}

@SortableField is used next to the known @Field annotation. In case a single field exists for a given property (e.g. publicationDate) just specifying the @SortableField annotation is enough to make that field sortable. If several fields exist (see the title property), specify the field name via @SortableField#forField().

Note that sort fields must not be analyzed. In case you want to index a given property analyzed for searching purposes, just add another, un-analyzed field for sorting, as shown for the title property. If the field is needed for sorting only and nothing else, you may configure it as un-indexed and un-stored, thus avoiding unnecessary index growth.

Nothing has changed about using the configured sort fields when querying: just specify a Sort with the required field(s):

FullTextQuery query = ...;
Sort sort = new Sort(
    new SortField( "publicationDate", SortField.Type.LONG, false ),
    new SortField( "sortTitle", SortField.Type.STRING, false )
);
query.setSort( sort );

Now what happens if you sort on fields which you have not explicitly declared as sortable, e.g. when migrating an existing application to Hibernate Search 5.5? The good news is that the sort will be applied as expected from a functional perspective: Hibernate Search detects the missing sort fields and transparently falls back to index un-inverting.

But be aware that this comes with a performance penalty (it is also quite memory-intensive, as un-inverting is a RAM-only operation), and this functionality might even be removed from Lucene in a future version altogether. Thus watch out for messages like the following in your log files and follow the advice to declare the missing sort fields:

WARN ManagedMultiReader:142 - HSEARCH000289: Requested sort field(s) summary_forSort are not configured for entity \
type org.hibernate.search.test.query.Book mapped to index Book, thus an uninverting reader must be created. You \
should declare the missing sort fields using @SortField.

When migrating an existing application, be sure to rebuild the affected index(es) as described in the reference guide.
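Such a rebuild can be done with the mass indexer, for example. A minimal sketch (error handling and tuning omitted; note that startAndWait() throws InterruptedException):

```java
// Rebuild the index for the Book entity from the database contents
FullTextSession ftSession = Search.getFullTextSession( session );
ftSession.createIndexer( Book.class ).startAndWait();
```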

With all the required sort fields configured, your search results will be in order, just like the honourable members of the British parliament after being called to order by Mr. Speaker.

It’s my pleasure to announce the first Alpha release of Hibernate OGM 5!

This release is based on Hibernate ORM 5.0 Final which we released just last week. The update should be smooth in general, but you should be prepared for some changes if you are bootstrapping Hibernate OGM manually through the Hibernate API and not via JPA. If you are using Hibernate OGM on WildFly, you need to adapt your application to the changed module/slot name of the Hibernate OGM core module which has changed from org.hibernate:ogm to org.hibernate.ogm:main.

Check out the Hibernate OGM migration notes to learn more about migrating from earlier versions of Hibernate OGM to 5.x. Also the Hibernate ORM migration guide is a recommended read.

Experimental support for Redis

Hibernate OGM 5 brings tech preview support for Redis which is a high-performance key/value store with many interesting features.

A huge thank you goes out to community member Mark Paluch for this fantastic contribution! Originally started by Seiya Kawashima, Mark took up the work on this backend and delivered a great piece of work in no time. Looking forward to many more of his contributions to come!

The general mapping approach is to store JSON documents as values in Redis. For instance consider the following entity and embeddables:

@Entity
public class Account {

    @Id
    private String login;
    private String password;
    private Address homeAddress;

    // getters, setters etc.
}
@Embeddable
public class Address {

    private String street;
    private String city;
    private String zipCode;
    private String country;
    private AddressType type;

    // getters, setters etc.
}
@Embeddable
public class AddressType {

    private String name;

    // getters, setters etc.
}

This will be persisted into Redis as a JSON document like this under the key "Account:piere":

{
    "homeAddress": {
        "country": "France",
        "city": "Paris",
        "zipCode": "75007",
        "street": "1 avenue des Champs Elysees",
        "type": {
            "name": "main"
        }
    },
    "password": "like I would tell ya"
}

Refer to the Redis chapter of the reference guide to learn more about this new dialect and its capabilities. It is quite powerful already (almost all tests of the Hibernate OGM backend TCK pass) and there is support for using it in WildFly, too.

While JSON is a popular choice for storing structured data amongst Redis users, we will investigate alternative mapping approaches, too. Specifically, one interesting approach would be to store entity properties using Redis hashes. This poses some interesting challenges, though, e.g. regarding type conversion (only String values are supported in hashes) as well as handling of embedded objects and associations.

So if you are faced with the challenge of persisting object models into Redis, give this new backend a try and let us know what you think, open feature requests etc.

Improved mapping of Map properties

Map-typed entity properties are persisted in a more natural format now in MongoDB (and also with the new Redis backend). The following shows an example:

@Entity
public class User {

    @Id
    private String id;

    @OneToMany
    @MapKeyColumn(name = "addressType")
    private Map<String, Address> addresses = new HashMap<>();

    // getters, setters etc.
}

In previous versions of Hibernate OGM this would have been mapped to a MongoDB document like this:

{
    "id" : 123,
    "addresses" : [
        { "addressType" : "home", "addresses_id" : 456 },
        { "addressType" : "work", "addresses_id" : 789 }
    ]
}

This is not what one would expect from a document store mapping, though. Therefore Hibernate OGM 5 will create the following, more natural representation instead:

{
    "id" : 123,
    "addresses" : {
        "home" : 456,
        "work" : 789
    }
}

This representation is more concise and should improve interoperability with other clients working on the same database. If needed - e.g. for migration purposes - you can enforce the previous mapping through the hibernate.ogm.datastore.document.map_storage property. Check out the reference guide for the details.

The optimized mapping is currently only applied if the map key comprises a single column of type String. For other key types, e.g. Long, or for composite map keys, the previous mapping is used, since JSON/BSON only supports field names which are strings.

An open question for us is whether other key column types should be converted into a string or not. If for instance the addresses map was of type Map<Long, Address> one could think of storing the map keys using field names such as "1", "2" etc. Let us know whether that’s something you’d find helpful or not.

Support for multi-get operations

One of the many optimizations in Hibernate ORM is batch fetching of lazily loaded entities. This is controlled using the @BatchSize annotation. So far, Hibernate OGM did not support batch fetching, resulting in more round trips to the datastore than actually needed.
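As a reminder of what that looks like on the mapping side (the entity here is illustrative, not from the post):

```java
// Hibernate will load up to 25 uninitialized lazy instances of this type
// in a single batch instead of one round trip per instance
@Entity
@BatchSize(size = 25)
public class Author {

    @Id
    private String id;

    // getters, setters etc.
}
```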

This situation has been improved by introducing MultigetGridDialect, an optional "capability interface" which Hibernate OGM backends can implement. If a backend supports this contract, the Hibernate OGM engine will take advantage of it and fetch entities configured for lazy loading in batches, resulting in better performance.

At the moment the new Redis backend makes use of this, with the MongoDB and Neo4j backends following soon.

Upgrade to MongoDB driver 3.0

We have upgraded to version 3.0 of the MongoDB driver. Most users of Hibernate OGM should not be affected by this, but down the road it will allow for some nice performance optimizations and support for some new functionality.

Together with the driver update we have reorganized the connection-level options of the MongoDB backend. All String, int and boolean MongoDB client options can be configured now through the hibernate.ogm.mongodb.driver.* namespace:

hibernate.ogm.mongodb.driver.connectTimeout=10000
hibernate.ogm.mongodb.driver.serverSelectionTimeout=3000
hibernate.ogm.mongodb.driver.socketKeepAlive=true

These options will be passed on to MongoDB’s client builder as-is. Note that the previously existing option hibernate.ogm.mongodb.connection_timeout has been removed in favor of this new approach.

Where can I get it?

You can retrieve Hibernate OGM 5.0.0.Alpha1 via Maven etc. using the following coordinates:

  • org.hibernate.ogm:hibernate-ogm-core:5.0.0.Alpha1 for the Hibernate OGM core module

  • org.hibernate.ogm:hibernate-ogm-<%BACKEND%>:5.0.0.Alpha1 for the NoSQL backend you want to use, with <%BACKEND%> being one of "mongodb", "redis", "neo4j" etc.

Alternatively, you can download archives containing all the binaries, source code and documentation from SourceForge.

As always, we are very much looking forward to your feedback. The change log tells in detail what’s in there for you. Get in touch through the following channels:

Map me if you can - Advanced embeddable mappings

Posted by Gunnar Morling    |       |    Tagged as Hibernate ORM

The other day I came across an interesting mapping challenge which I thought may be worth sharing. If you are a seasoned JPA user, it will probably be nothing new to you, but those not as experienced may find it helpful :)

TL;DR - JPA lets you override database columns not only for embedded objects but also for collections of embedded objects; @AttributeOverride and @AssociationOverride can be used for that.

Let’s assume the following entity model representing a person and their home and business address:

Entity model

The interesting part is that Person has two associations with Address, one with the "homeAddress" role and another one with the "businessAddress" role. Address in turn has one or more address lines.

When mapping these types with JPA, Person naturally becomes an entity. As there is a composition relationship between Person and Address, the latter is mapped with @Embeddable. The same goes for AddressLine, which also is an @Embeddable, contained within an element collection owned by Address.

So you’d end up with these classes:

@Entity
public class Person {

    @Id
    private long id;
    private String name;
    private Address homeAddress;
    private Address businessAddress;

    // constructor, getters and setters...
}
@Embeddable
public class Address {

    private boolean active;

    @ElementCollection
    private List<AddressLine> lines = new ArrayList<AddressLine>();

    // constructor, getters and setters...
}
@Embeddable
public class AddressLine {

    private String value;

    // constructor, getters and setters...
}

@AttributeOverride

Let’s fire up a session factory with these types (this uses the brand-new bootstrapping API coming in Hibernate ORM 5) and see how that goes:

StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
    .applySetting( AvailableSettings.SHOW_SQL, true )
    .applySetting( AvailableSettings.FORMAT_SQL, true )
    .applySetting( AvailableSettings.HBM2DDL_AUTO, "create-drop" )
    .build();

SessionFactory sessionFactory = new MetadataSources( registry )
    .addAnnotatedClass( Person.class )
    .buildMetadata()
    .buildSessionFactory();

Hum, that did not go too well:

org.hibernate.MappingException: Repeated column in mapping for entity:
org.hibernate.bugs.Person column: active (should be mapped with insert="false" update="false")

Of course this makes sense: as Address is embedded twice within Person, its properties must be mapped to unique column names within the Person table. The @AttributeOverride annotation can be used for that:

@Entity
public class Person {

    @Id
    private long id;
    private String name;

    @AttributeOverride(name = "active", column = @Column(name = "home_address_active"))
    private Address homeAddress;

    @AttributeOverride(name = "active", column = @Column(name = "business_address_active"))
    private Address businessAddress;

    // constructor, getters and setters...
}

With that, the session factory boots successfully. But let’s take a look at the tables that get created:

create table Person (
    id bigint not null,
    business_address_active boolean,
    home_address_active boolean,
    name varchar(255),
    primary key (id)
)

create table Person_lines (
    Person_id bigint not null,
    "value" varchar(255)
)

@AssociationOverride

The Person table looks alright, but there is only a single table for the address lines. That’s a problem as it means there is no way to tell apart the lines of the business address from the lines of the home address when reading back a person from the database.

So what to do? @AttributeOverride is of no help this time, as it’s not a single column which needs to be re-defined but an entire table. But luckily, @AttributeOverride has a companion, @AssociationOverride. This annotation can be used to configure the required tables in this case:

@Entity
public class Person {

    @Id
    private long id;
    private String name;

    @AttributeOverride(name = "active", column = @Column(name = "home_address_active"))
    @AssociationOverride(name = "lines", joinTable = @JoinTable(name = "Person_HomeAddress_Line"))
    private Address homeAddress;

    @AttributeOverride(name = "active", column = @Column(name = "business_address_active"))
    @AssociationOverride(name = "lines", joinTable = @JoinTable(name = "Person_BusinessAddress_Line"))
    private Address businessAddress;

    // constructor, getters and setters...
}

Et voilà, now you’ll get the DDL for creating the Person table and two different tables for the address lines:

create table Person (
    id bigint not null,
    business_address_active boolean,
    home_address_active boolean,
    name varchar(255),
    primary key (id)
)

create table Person_BusinessAddress_Line (
    Person_id bigint not null,
    "value" varchar(255)
)

create table Person_HomeAddress_Line (
    Person_id bigint not null,
    "value" varchar(255)
)

Welcome back to our tutorial series “NoSQL with Hibernate OGM”!

In this part you will learn how to use Hibernate OGM from within a Java EE application running on the WildFly server. Using the entity model you already know from the previous parts of this tutorial, we will build a small REST-based application for managing hikes. In case you haven’t read the first two installments of this series, you can find them here:

In the following you will learn how to prepare WildFly for using it with Hibernate OGM, configure a JPA persistence unit, create repository classes for accessing your data and provide REST resources on top of these. In this post we will primarily focus on the aspects related to persistence, so some basic experience with REST/JAX-RS may help. The complete source code of this tutorial is hosted on GitHub.

Preparing WildFly

The WildFly server runtime is based on the JBoss Modules system. This provides a modular class-loading environment where each library (such as Hibernate OGM) is its own module, declaring the list of other modules it depends on and only “seeing” classes from those other dependencies. This isolation provides an escape from the dreaded “classpath hell”.

ZIP files containing all the required modules for Hibernate OGM are provided on SourceForge. Hibernate OGM 4.2 - which we released yesterday - supports WildFly 9, so download hibernate-ogm-modules-wildfly9-4.2.0.Final.zip for that. If you are on WildFly 8, use Hibernate OGM 4.1 and get hibernate-ogm-modules-wildfly8-4.1.3.Final.zip instead.

Unzip the archive corresponding to your WildFly version into the modules directory of the application server. If you prefer that the original WildFly directories remain unchanged, you also can unzip the Hibernate OGM modules archive to any other folder and configure this as the “module path” to be used by the server. To do so, export the following two environment variables, matching your specific environment:

export JBOSS_HOME=/path/to/wildfly
export JBOSS_MODULEPATH=$JBOSS_HOME/modules:/path/to/ogm/modules

In case you are working with the Maven WildFly plug-in, e.g. to launch WildFly during development, you’d achieve the same with the following plug-in configuration in your POM file:

...
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>1.1.0.Alpha1</version>
    <configuration>
        <jboss-home>/path/to/wildfly</jboss-home>
        <modules-path>/path/to/ogm/modules</modules-path>
    </configuration>
</plugin>
...

Setting up the project

Start by creating a new Maven project using the “war” packaging type. Add the following to your pom.xml:

...
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.hibernate.ogm</groupId>
            <artifactId>hibernate-ogm-bom</artifactId>
            <type>pom</type>
            <version>4.2.0.Final</version>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
...

This makes sure you get matching versions of Hibernate OGM’s modules and any (optional) dependencies. Then add the dependency to the Java EE 7 API and one of the Hibernate OGM backend modules, e.g. Infinispan, JBoss’ high-performance, distributed key/value data grid (any other such as hibernate-ogm-mongodb or the brand-new hibernate-ogm-cassandra module would work as well):

...
<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>7.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.hibernate.ogm</groupId>
        <artifactId>hibernate-ogm-infinispan</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>
...

The provided scope makes these dependencies available for compilation but prevents them from being added to the resulting WAR file. That is because the Java EE API is part of WildFly already, and Hibernate OGM will be contributed through the modules you unzipped before.

Just adding these modules to the server doesn’t cut it, though. They also need to be registered as a module dependency with the application. To do so, add the file src/main/webapp/WEB-INF/jboss-deployment-structure.xml with the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure
    xmlns="urn:jboss:deployment-structure:1.2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <deployment>
        <dependencies>
            <module name="org.hibernate" slot="ogm" services="import" />
            <module name="org.hibernate.ogm.infinispan" services="import" />
            <module name="org.hibernate.search.orm" services="import" />
        </dependencies>
    </deployment>
</jboss-deployment-structure>

This will make Hibernate OGM core and the Infinispan backend as well as Hibernate Search available to your application. The latter will be used to run JP-QL queries in a bit.

Adding entity classes and repositories

With the basic project infrastructure in place, it’s time to add the entity classes and repository classes for accessing them. The entity types are basically the same as seen in part 1, only now they are annotated with @Indexed in order to allow them to be queried via Hibernate Search and Lucene:

@Entity
@Indexed
public class Person {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String firstName;
    private String lastName;

    @OneToMany(
        mappedBy = "organizer",
        cascade = { CascadeType.PERSIST, CascadeType.MERGE },
        fetch = FetchType.EAGER
    )
    private Set<Hike> organizedHikes = new HashSet<>();

    // constructors, getters and setters...
}
@Entity
@Indexed
public class Hike {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String description;
    private Date date;
    private BigDecimal difficulty;

    @ManyToOne
    private Person organizer;

    @ElementCollection(fetch = FetchType.EAGER)
    @OrderColumn(name = "sectionNo")
    private List<HikeSection> sections;

    // constructors, getters and setters...
}
@Embeddable
public class HikeSection {

    private String start;
    private String end;

    // constructors, getters and setters...
}

In order to use these entities, a JPA persistence unit must be defined. To do so, create the file src/main/resources/META-INF/persistence.xml:

<?xml version="1.0" encoding="utf-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
    version="1.0">

    <persistence-unit name="hike-PU" transaction-type="JTA">
        <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>

        <class>org.hibernate.ogm.demos.ogm101.part3.model.Person</class>
        <class>org.hibernate.ogm.demos.ogm101.part3.model.Hike</class>

        <properties>
            <property name="hibernate.ogm.datastore.provider" value="INFINISPAN" />
            <property name="hibernate.ogm.datastore.database" value="hike_db" />
            <property name="hibernate.ogm.datastore.create_database" value="true" />
        </properties>
    </persistence-unit>
</persistence>

Here we define a persistence unit named “hike-PU”. Infinispan is a fully transactional datastore, and using JTA as the transaction type allows the persistence unit to participate in container-managed transactions. Specifying HibernateOgmPersistence as the provider class enables Hibernate OGM (instead of Hibernate ORM), which is configured with some properties for setting the backend (INFINISPAN in this case), the database name etc.

Note that it actually should not be required to specify the entity types in persistence.xml when running in a Java EE container such as WildFly; instead they should be picked up automatically. When using Hibernate OGM this unfortunately is needed at the moment. This is a known limitation (see OGM-828) which we hope to fix soon.

The next step is to implement repository classes for accessing hike and organizer data. As an example, the following shows the PersonRepository class:

@ApplicationScoped
public class PersonRepository {

    @PersistenceContext
    private EntityManager entityManager;

    public Person create(Person person) {
        entityManager.persist( person );
        return person;
    }

    public Person get(String id) {
        return entityManager.find( Person.class, id );
    }

    public List<Person> getAll() {
        return entityManager.createQuery( "FROM Person p", Person.class ).getResultList();
    }

    public Person save(Person person) {
        return entityManager.merge( person );
    }

    public void remove(Person person) {
        entityManager.remove( person );
        for ( Hike hike : person.getOrganizedHikes() ) {
            hike.setOrganizer( null );
        }
    }
}

The implementation is straight-forward; by means of the @ApplicationScoped annotation, the class is marked as an application-scoped CDI bean (i.e. one single instance of this bean exists throughout the lifecycle of the application). It obtains a JPA entity manager through dependency injection and uses it to implement some simple CRUD methods (Create, Read, Update, Delete).

Note how the getAll() method uses a JP-QL query to return all person objects. Upon execution this query will be transformed into an equivalent Lucene index query which will be run through Hibernate Search.

The hike repository looks very similar, so it’s omitted here for the sake of brevity. You can find its source code on GitHub.

Exposing REST services

JAX-RS makes building RESTful web services a breeze. It defines a declarative programming model where you annotate plain old Java classes to provide implementations for the GET, POST, PUT etc. operations of an HTTP endpoint.

Describing JAX-RS in depth is beyond the scope of this tutorial; refer e.g. to the Java EE 7 tutorial if you would like to learn more. Let’s just have a look at some of the methods of a resource class for managing persons as an example:

@Path("/persons")
@Produces("application/json")
@Consumes("application/json")
@Stateless
public class Persons {

    @Inject
    private PersonRepository personRepository;

    @Inject
    private ResourceMapper mapper;

    @Inject
    private UriMapper uris;

    @POST
    @Path("/")
    public Response createPerson(PersonDocument request) {
        Person person = personRepository.create( mapper.toPerson( request ) );
        return Response.created( uris.toUri( person ) ).build();
    }

    @GET
    @Path("/{id}")
    public Response getPerson(@PathParam("id") String id) {
        Person person = personRepository.get( id );
        if ( person == null ) {
            return Response.status( Status.NOT_FOUND ).build();
        }
        else {
            return Response.ok( mapper.toPersonDocument( person ) ).build();
        }
    }

    @GET
    @Path("/")
    public Response listPersons() { /* ... */ }

    @PUT
    @Path("/{id}")
    public Response updatePerson(PersonDocument request, @PathParam("id") String id) { /* ... */ }

    @DELETE
    @Path("/{id}")
    public Response deletePerson(@PathParam("id") String id) { /* ... */ }
}

The @Path, @Produces and @Consumes annotations are defined by JAX-RS. They bind the resource methods to specific URLs, expecting and creating JSON based messages. @GET, @POST, @PUT and @DELETE configure for which HTTP verb each method is responsible.

The @Stateless annotation defines this POJO as a stateless session bean. Dependencies such as the PersonRepository can be obtained via @Inject-based dependency injection. Implementing a session bean gives you the comfort of transparent transaction management by the container. Invocations of the methods of Persons will automatically be wrapped in a transaction, and all the interactions of Hibernate OGM with the datastore will participate in the same. This means that any changes you do to managed entities - e.g. by persisting a new person via PersonRepository#create() or by modifying a Person object retrieved from the entity manager - will be committed to the datastore after the method call returns.

Mapping models

Note that the methods of our REST service do not return and accept the managed entity types themselves, but rather specific transport structures such as PersonDocument:

public class PersonDocument {

    private String firstName;
    private String lastName;
    private Set<URI> organizedHikes;

    // constructors, getters and setters...
}

The reasoning for that is to represent the elements of associations (Person#organizedHikes, Hike#organizer) in the form of URIs, which enables a client to fetch these linked resources as required. E.g. a GET call to http://myserver/ogm-demo-part3/hike-manager/persons/123 may return a JSON structure like the following:

{
    "firstName": "Saundra",
    "lastName": "Johnson",
    "organizedHikes": [
        "http://myserver/ogm-demo-part3/hike-manager/hikes/456",
        "http://myserver/ogm-demo-part3/hike-manager/hikes/789"
    ]
}

The mapping between the internal model (e.g. entity Person) and the external one (e.g. PersonDocument) can quickly become a tedious and boring task, so some tool-based support for this is desirable. Several tools exist for this job, most of which use reflection or runtime byte code generation for propagating state between different models.

Another approach for this is pursued by MapStruct, which is a spare time project of mine and generates bean mapper implementations at compile time (e.g. with Maven or in your IDE) via a Java annotation processor. The code it generates is type-safe, fast (it uses plain method calls, no reflection) and dependency-free. You just need to declare Java interfaces with mapping methods for the source and target types you need and MapStruct will generate an implementation as part of the compilation process:

@Mapper(
    // allows to obtain the mapper via @Inject
    componentModel = "cdi",

    // a hand-written mapper class for converting entities to URIs; invoked by the generated
    // toPersonDocument() implementation for mapping the organizedHikes property
    uses = UriMapper.class
)
public interface ResourceMapper {

    PersonDocument toPersonDocument(Person person);

    List<PersonDocument> toPersonDocuments(Iterable<Person> persons);

    @Mapping(target = "date", dateFormat = "yyyy-MM-dd'T'HH:mm:ss.SSSZ")
    HikeDocument toHikeDocument(Hike hike);

    // other mapping methods ...
}

The generated implementation can then be used in the Persons REST resource to map from the internal to the external model and vice versa. If you would like to learn more about this approach for model mappings, check out the complete mapper interface on GitHub or the MapStruct reference documentation.
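To make the idea more tangible, here is roughly what such a mapper does when written out by hand. The classes below are minimal stand-ins for the tutorial’s entity and transport types, not the actual classes from the sample project, and the base URI follows the JSON example above:

```java
import java.net.URI;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ResourceMapperSketch {

    // Minimal stand-ins for the tutorial's entity and transport types
    static class Hike {
        final String id;
        Hike(String id) { this.id = id; }
    }

    static class Person {
        final String firstName;
        final String lastName;
        final List<Hike> organizedHikes;
        Person(String firstName, String lastName, List<Hike> organizedHikes) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.organizedHikes = organizedHikes;
        }
    }

    static class PersonDocument {
        String firstName;
        String lastName;
        Set<URI> organizedHikes = new LinkedHashSet<>();
    }

    static final String BASE = "http://myserver/ogm-demo-part3/hike-manager";

    // Plain field-by-field assignments; each association element is mapped
    // to a URI, as the hand-written UriMapper would do in the real application
    static PersonDocument toPersonDocument(Person person) {
        PersonDocument doc = new PersonDocument();
        doc.firstName = person.firstName;
        doc.lastName = person.lastName;
        for (Hike hike : person.organizedHikes) {
            doc.organizedHikes.add(URI.create(BASE + "/hikes/" + hike.id));
        }
        return doc;
    }

    public static void main(String[] args) {
        Person person = new Person("Saundra", "Johnson",
                Arrays.asList(new Hike("456"), new Hike("789")));
        System.out.println(toPersonDocument(person).organizedHikes);
    }
}
```

MapStruct generates exactly this kind of plain, reflection-free code for you at compile time, so the mapping stays fast and type-safe without the boilerplate.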

Wrap-up

In this part of our tutorial series you learned how to add Hibernate OGM to the WildFly application server and use it to access Infinispan as the data storage for a small REST application.

WildFly is a great runtime environment for applications using Hibernate OGM, as it provides most of the required building blocks out of the box (e.g. JPA/Hibernate ORM, JTA, transaction management etc.), tightly integrated and ready to use. Our module ZIP allows you to put the Hibernate OGM modules into the mix very easily, without the need to re-deploy them each time with your application. With WildFly Swarm there is also support for the micro-services architectural style, but we’ll leave it for another time to show how to use Hibernate OGM with WildFly Swarm (currently JPA support is still lacking from WildFly Swarm).

You can find the sources of the project on GitHub. To build the project run mvn clean install (which executes an integration test for the REST services using Arquillian, an exciting topic on its own). Alternatively, the Maven WildFly plug-in can be used to fire up a WildFly instance and deploy the application via mvn wildfly:run, which is great for manual testing e.g. by sending HTTP requests through curl or wget.

If you have any questions, let us know in the comments below or send us a tweet at @Hibernate. Also, your wishes for future parts of this tutorial are welcome. Stay tuned!
