
In Relation To

The Hibernate team blog on everything data.

It’s my pleasure to announce the first Alpha release of Hibernate OGM 5!

This release is based on Hibernate ORM 5.0 Final, which we released just last week. The update should be smooth in general, but be prepared for some changes if you are bootstrapping Hibernate OGM manually through the Hibernate API rather than via JPA. If you are using Hibernate OGM on WildFly, you need to adapt your application to the new module/slot name of the Hibernate OGM core module, which has changed from org.hibernate:ogm to org.hibernate.ogm:main.

Check out the Hibernate OGM migration notes to learn more about migrating from earlier versions of Hibernate OGM to 5.x. Also the Hibernate ORM migration guide is a recommended read.

Experimental support for Redis

Hibernate OGM 5 brings tech preview support for Redis which is a high-performance key/value store with many interesting features.

A huge thank you goes out to community member Mark Paluch for this fantastic contribution! Originally started by Seiya Kawashima, Mark took up the work on this backend and delivered a great piece of work in no time. Looking forward to many more of his contributions to come!

The general mapping approach is to store JSON documents as values in Redis. For instance consider the following entity and embeddables:

@Entity
public class Account {

    @Id
    private String login;
    private String password;
    private Address homeAddress;

    // getters, setters etc.
}
@Embeddable
public class Address {

    private String street;
    private String city;
    private String zipCode;
    private String country;
    private AddressType type;

    // getters, setters etc.
}
@Embeddable
public class AddressType {

    private String name;

    // getters, setters etc.
}

This will be persisted into Redis as a JSON document like this under the key "Account:piere":

{
    "homeAddress": {
        "country": "France",
        "city": "Paris",
        "postalCode": "75007",
        "street": "1 avenue des Champs Elysees",
        "type": {
            "name": "main"
        }
    },
    "password": "like I would tell ya"
}

Refer to the Redis chapter of the reference guide to learn more about this new dialect and its capabilities. It is quite powerful already (almost all tests of the Hibernate OGM backend TCK pass) and there is support for using it in WildFly, too.

While JSON is a popular choice for storing structured data amongst Redis users, we will investigate alternative mapping approaches, too. Specifically, one interesting approach would be to store entity properties using Redis hashes. This poses some interesting challenges, though, e.g. regarding type conversion (only String values are supported in hashes) as well as handling of embedded objects and associations.

So if you are faced with the challenge of persisting object models into Redis, give this new backend a try and let us know what you think, open feature requests etc.

Improved mapping of Map properties

Map-typed entity properties are persisted in a more natural format now in MongoDB (and also with the new Redis backend). The following shows an example:

@Entity
public class User {

    @Id
    private String id;

    @OneToMany
    @MapKeyColumn(name = "addressType")
    private Map<String, Address> addresses = new HashMap<>();

    // getters, setters etc.
}

In previous versions of Hibernate OGM this would have been mapped to a MongoDB document like this:

{
    "id" : 123,
    "addresses" : [
        { "addressType" : "home", "addresses_id" : 456 },
        { "addressType" : "work", "addresses_id" : 789 }
    ]
}

This is not what one would expect from a document store mapping, though. Therefore Hibernate OGM 5 will create the following, more natural representation instead:

{
    "id" : 123,
    "addresses" : {
        "home" : 456,
        "work" : 789
    }
}

This representation is more concise and should improve interoperability with other clients working on the same database. If needed - e.g. for migration purposes - you can enforce the previous mapping through the hibernate.ogm.datastore.document.map_storage property. Check out the reference guide for the details.

The optimized mapping is currently only applied if the map key comprises a single column of type String. For other types, e.g. Long, or for composite map keys, the previous mapping is used, since JSON/BSON only supports field names which are strings.

An open question for us is whether other key column types should be converted into a string or not. If for instance the addresses map was of type Map<Long, Address> one could think of storing the map keys using field names such as "1", "2" etc. Let us know whether that’s something you’d find helpful or not.

Support for multi-get operations

One of the many optimizations in Hibernate ORM is batch fetching of lazily loaded entities. This is controlled using the @BatchSize annotation. So far, Hibernate OGM did not support batch fetching, resulting in more round trips to the datastore than actually needed.

This situation has been improved by introducing MultigetGridDialect which is an optional "capability interface" that Hibernate OGM backends can implement. If a backend happens to support this contract, the Hibernate OGM engine will take advantage of it and fetch entities configured for lazy loading in batches, resulting in a better performance.

At the moment the new Redis backend makes use of this, with the MongoDB and Neo4j backends following soon.
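
As a quick illustration, here is a minimal sketch of how batch fetching is enabled on the mapping side (the Author entity is hypothetical, not from the reference guide); whether the loads are actually collapsed into multi-get calls depends on the backend implementing MultigetGridDialect:

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.BatchSize;

// Hypothetical entity: when several lazy Author references need to be
// initialized, up to 10 of them can be loaded in one round trip, provided
// the backend implements the MultigetGridDialect contract.
@Entity
@BatchSize(size = 10)
public class Author {

    @Id
    private String id;
    private String name;

    // getters, setters etc.
}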

Upgrade to MongoDB driver 3.0

We have upgraded to version 3.0 of the MongoDB driver. Most users of Hibernate OGM should not be affected by this, but down the road it will allow us to apply some nice performance optimizations and support some new functionality.

Together with the driver update we have reorganized the connection-level options of the MongoDB backend. All String, int and boolean MongoDB client options can be configured now through the hibernate.ogm.mongodb.driver.* namespace:

hibernate.ogm.mongodb.driver.connectTimeout=10000
hibernate.ogm.mongodb.driver.serverSelectionTimeout=3000
hibernate.ogm.mongodb.driver.socketKeepAlive=true

These options will be passed on to MongoDB’s client builder as-is. Note that the previously existing option hibernate.ogm.mongodb.connection_timeout has been removed in favor of this new approach.

Where can I get it?

You can retrieve Hibernate OGM 5.0.0.Alpha1 via Maven etc. using the following coordinates:

  • org.hibernate.ogm:hibernate-ogm-core:5.0.0.Alpha1 for the Hibernate OGM core module

  • org.hibernate.ogm:hibernate-ogm-<%BACKEND%>:5.0.0.Alpha1 for the NoSQL backend you want to use, with <%BACKEND%> being one of "mongodb", "redis", "neo4j" etc.

Alternatively, you can download archives containing all the binaries, source code and documentation from SourceForge.

As always, we are very much looking forward to your feedback. The change log describes in detail what’s in there for you. Get in touch through the usual channels.

Hibernate ORM 5.0 has gone Final!

Posted by Steve Ebersole    |    Tagged as Hibernate ORM Releases

Today I have released Hibernate ORM 5.0 (5.0.0.Final). This has been a long time coming and is the result of the efforts of many folks. Thanks to everyone who helped us get here with fixes, bug reports, suggestions, input and encouragement!

A lot of development has gone into 5.0. Here are the big points:

New bootstrap API

The venerable way to bootstrap Hibernate (build a SessionFactory) has been to use its Configuration class. Historically, Configuration allowed users to iteratively add settings and mappings in any order and to query the state of settings and mapping information in the middle of that process. This meant that building the mapping information could not effectively rely on any settings being available, which led to many limitations and problems.

5.0 introduces a new bootstrapping API aimed at alleviating those limitations and problems, while allowing better determinism and better integration. See the Bootstrap chapter in the User Guide for details on using the new API.
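
As a rough sketch (the entity class and settings below are placeholders, not taken from the guide), bootstrapping with the new API looks along these lines:

import org.hibernate.SessionFactory;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;

// Build the service registry (settings) first, then the metadata (mappings),
// and finally the SessionFactory - a deterministic, ordered process.
StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
        .applySetting( "hibernate.hbm2ddl.auto", "update" )
        .build();

SessionFactory sessionFactory = new MetadataSources( registry )
        .addAnnotatedClass( Person.class ) // hypothetical entity
        .buildMetadata()
        .buildSessionFactory();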

Configuration is still available for use, although in a limited sense. Some of its methods have been removed. Under the covers Configuration makes use of the new bootstrap API.

Spatial/GIS support

Hibernate Spatial is a project that has been around for a number of years. Karel Maesen has done an amazing job with it.

Starting in 5.0 Hibernate Spatial is now part of the Hibernate project proper to allow it to better keep up with upstream development. It is available as org.hibernate:hibernate-spatial. If your application has need for GIS data, we highly recommend giving hibernate-spatial a try.

Java 8 support

Well, ok.. not all of Java 8. Specifically, we have added support for the Java 8 Date and Time API, so that attributes in your domain model using the Java 8 Date and Time types can easily be mapped to the database. This support is available under the dedicated hibernate-java8 artifact (to isolate the Java 8 dependencies). For additional information, see the Basic Types chapter in the Domain Model Mapping Guide.
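
For example (a hypothetical entity, assuming the hibernate-java8 artifact is on the classpath), a java.time type can be mapped like any other basic attribute:

import java.time.LocalDate;

import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical entity: with hibernate-java8 present, LocalDate is handled
// as a basic type (typically mapped to a DATE column) without any converter.
@Entity
public class Person {

    @Id
    private Long id;

    private LocalDate birthDate;

    // getters, setters etc.
}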

Expanded AUTO id generation support

JPA defines support for GenerationType#AUTO limited to just Number types. Starting in 5.0 Hibernate offers expandable support for a broader set of types, including built-in support for both Number types (Integer, Long, etc) and UUID. Users are also free to plug in custom strategies for interpreting GenerationType#AUTO via the new org.hibernate.boot.model.IdGeneratorStrategyInterpreter extension.
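
For instance (a hypothetical entity), a UUID-typed identifier can now simply use AUTO generation:

import java.util.UUID;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// Hypothetical entity: with the expanded AUTO support, Hibernate picks a
// suitable generation strategy for this non-Number identifier type.
@Entity
public class Invoice {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private UUID id;

    // getters, setters etc.
}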

Naming strategy split

NamingStrategy has been removed in favor of a better-designed API; two distinct ones, actually:

  • org.hibernate.boot.model.naming.ImplicitNamingStrategy - used whenever a table or column is not explicitly named to determine the name to use

  • org.hibernate.boot.model.naming.PhysicalNamingStrategy - used to convert a "logical name" (either implicit or explicit) name of a table or column into a physical name (e.g. following corporate naming guidelines)
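
To give an idea of the second contract, here is a minimal sketch of a custom PhysicalNamingStrategy, assuming a made-up corporate convention of prefixing physical table names; it could then be registered e.g. via the hibernate.physical_naming_strategy setting:

import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl;
import org.hibernate.engine.jdbc.env.spi.JdbcEnvironment;

// A minimal sketch: apply a hypothetical "TBL_" prefix to every physical
// table name, delegating everything else to the standard implementation.
public class PrefixingPhysicalNamingStrategy extends PhysicalNamingStrategyStandardImpl {

    @Override
    public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
        return Identifier.toIdentifier( "TBL_" + name.getText(), name.isQuoted() );
    }
}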

Attribute Converter support

5.0 offers significantly improved support for JPA 2.1 AttributeConverters:

  • fully supported for non-@Enumerated enum values

  • applicable in conjunction with @Nationalized support

  • now called to handle null values

  • settable in hbm.xml by using type="converter:fully.qualified.AttributeConverterName"

  • integrated with hibernate-envers

  • collection values, map keys

  • support for conversion of parameterized types
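
For reference, this is what a plain JPA 2.1 converter looks like (a made-up yes/no example); its fully-qualified name is also what the hbm.xml type="converter:…" syntax mentioned above would point to:

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Hypothetical converter mapping Boolean attributes to Y/N column values.
// Note that with 5.0 the converter is also called to handle null values.
@Converter(autoApply = true)
public class YesNoConverter implements AttributeConverter<Boolean, String> {

    @Override
    public String convertToDatabaseColumn(Boolean attribute) {
        if ( attribute == null ) {
            return null;
        }
        return attribute ? "Y" : "N";
    }

    @Override
    public Boolean convertToEntityAttribute(String dbData) {
        if ( dbData == null ) {
            return null;
        }
        return "Y".equals( dbData );
    }
}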

Better "bulk id table" support

Support for "bulk id tables" has been completely redesigned to better fit what different databases support.

Transaction management

The transaction SPI underwent a major redesign as part of 5.0 as well. From a user perspective this generally only comes into view in terms of configuration. Previously applications would work with the different backend transaction strategies directly via the org.hibernate.Transaction API. In 5.0 a level of indirection has been added here. The API implementation of org.hibernate.Transaction is always the same now. On the backend, the org.hibernate.Transaction impl talks to a org.hibernate.resource.transaction.TransactionCoordinator which represents the "transactional context" for a given Session according to the backend transaction strategy. Users generally do not need to care about the distinction.

The change is noted here because it might affect your bootstrap configuration. Whereas previously applications would specify hibernate.transaction.factory_class and refer to a org.hibernate.engine.transaction.spi.TransactionFactory FQN, with 5.0 the new contract is org.hibernate.resource.transaction.TransactionCoordinatorBuilder and is specified using the hibernate.transaction.coordinator_class setting. See org.hibernate.cfg.AvailableSettings.TRANSACTION_COORDINATOR_STRATEGY JavaDocs for additional details.

The following short-names are recognized:

  • jdbc (the default) - says to use JDBC-based transactions (org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl)

  • jta - says to use JTA-based transactions (org.hibernate.resource.transaction.backend.jta.internal.JtaTransactionCoordinatorImpl)
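
For example, an application that wants to opt in to JTA-based transaction coordination would set:

hibernate.transaction.coordinator_class=jta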

See the User Guide for additional details.

Schema Tooling

5.0 offers much improvement in the area of schema tooling (export, validation and migration).

Typed Session API

Hibernate’s native APIs (Session, etc) have been updated to be typed. No more casting!
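
A tiny before/after sketch of what this means in practice (Person is a hypothetical entity, session an org.hibernate.Session):

// Before 5.0, the native API returned Object and required a cast:
Person before = (Person) session.get( Person.class, 1L );

// With the typed API in 5.0, no cast is needed:
Person after = session.get( Person.class, 1L );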

Improved OSGi support

Really this started with a frustration over the fragility of hibernate-osgi tests. The first piece was a better testing setup using Pax Exam and Karaf. This led to us generating (and now publishing!) a Hibernate Karaf features file.

OSGi support has undergone some general improvement as well thanks to feedback from some Karaf and Pax developers and users.

See the Getting Started Guide for additional details on using the new Karaf features file.

Improved bytecode enhancement capabilities

Work on documentation

A lot of work has gone into the documentation for 5.0. It’s still not complete (is documentation ever "complete"?), but it is much improved.

See the revamped documentation page for details.

BinTray

For now the plan is to publish the release bundles (zip and tgz) to BinTray, while continuing to publish to SourceForge as well.

Ultimately we will start to publish the "maven" artifacts there as well.

This is all a work in progress.

How to get it

See http://hibernate.atlassian.net/projects/HHH/versions/20851 for the complete list of changes.

See http://hibernate.org/orm/downloads/ for information on obtaining the releases.

In the last couple of months we’ve been working to upgrade Hibernate Search to use Apache Lucene 5, to keep up with the greatest releases from the Lucene community.

Today Hibernate Search 5.5.0.Alpha1 is available!

As the version suggests this is the first cut, but we’ll also need your feedback and suggestions to better assess the needed steps to evolve this into a great, efficient and stable Final release.

A stable API

As Uwe Schindler - a great developer from the Lucene community - suggested I try, it turns out it was possible to update from Apache Lucene 4 to 5 (a major upgrade) within a minor upgrade of Hibernate Search: many things improved, some things are slightly different, but overall the API was not affected by any significant change.

You might be affected by some minor changes of the Lucene API if you were writing power-user extensions to Hibernate Search.

In that case, please let us know if there are difficulties in upgrading: we can try to improve on that, or at least improve the migration guide. If you don’t tell us, we might not know, and many other developers will have a hard time!

Improved Performance

Several changes in the Apache Lucene code relate to improved efficiency, in particular of memory usage.

However our aim for this release was functionality and correctness; we didn’t have time to run performance tests, and to be fair, even if we had the time, nothing beats feedback from the community of users.

We would highly appreciate your help in trying it out and running some performance tests, especially if you can compare with previous versions of Hibernate Search (based on earlier versions of Apache Lucene).

If you think you can try this, it’s a unique opportunity to both contribute and to get our insight and support to analyse and understand your performance reports!

Sorting and Faceting: use the right types!

It appears that when using Apache Lucene 4, if you were running a Numeric Faceting Query but the target field was not Numeric, it would still work - or appear to work.

Similarly, if you Sort a query using the wrong SortType, Apache Lucene 4 appears to be forgiving.

When using Apache Lucene 5 now, and also because of some internal changes we had to make to keep your migration cost low, the validation of the types got much stricter.
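
As a hedged illustration of what "using the right type" means (entity and field names are made up; this assumes an open org.hibernate.Session named session and a Person entity with an "age" field indexed as a numeric int): the SortField type should match how the field was indexed.

import java.util.List;

import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.hibernate.search.FullTextQuery;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;
import org.hibernate.search.query.dsl.QueryBuilder;

FullTextSession fullTextSession = Search.getFullTextSession( session );
QueryBuilder qb = fullTextSession.getSearchFactory()
        .buildQueryBuilder().forEntity( Person.class ).get();

FullTextQuery query = fullTextSession.createFullTextQuery( qb.all().createQuery(), Person.class );

// SortField.Type.INT matches the numeric field; picking STRING here would
// now fail with a type mismatch instead of silently appearing to work.
query.setSort( new Sort( new SortField( "age", SortField.Type.INT ) ) );
List<?> results = query.list();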

Please be aware that you’ll get runtime exceptions like the following when mismatching the types:

Type mismatch: ageForIntSorting was indexed with multiple values per document, use SORTED_SET instead

I’m aware those are confusing, so we’ll be working on improving the errors: HSEARCH-1951; still, even with those error messages, the meaning is that you have to fix your mapping. No shame in that, we figured this problem out as several unit tests were too relaxed ;-)

How to get it

Everything you need is available on Hibernate Search’s web site. Download the full distribution from here, or get it from Maven Central, and don’t hesitate to reach us in our forums or mailing lists.

Hibernate ORM 4.2.20.Final was released 24-July-2015. At the time it was released, SourceForge was out of commission so distributions could not be uploaded. I decided to delay the announcement until SourceForge was back in commission and I was able to release 4.3.11.Final on 5-Aug-2015.

Notable bugfixes

In both 4.3.11.Final and 4.2.20.Final:

  • HHH-2851 fixed a longstanding bug affecting dialects that require the type for binding null parameter values to a query predicate like (:param IS NULL OR alias.someField = :param). This bug affected Oracle and SQL Server dialects, and possibly others. This bug was easily worked around, but it was clearly a headache for people using those dialects.

In 4.3.11.Final only:

  • HHH-9287 fixed a bug that caused pooled optimizer identifiers to be reused if an external (to Hibernate) system inserted a row using the same sequence.

  • Hibernate’s support for AttributeConverter was improved. HHH-8804 adds support for a parameterized type as an AttributeConverter type parameter (e.g., AttributeConverter<Set<Category>, String>); HHH-8854 fixed a bug extracting the ParameterizedType representation of an AttributeConverter definition from an implementation that did not directly implement AttributeConverter, i.e., when a superclass implements AttributeConverter.

  • There were some bugfixes related to lazy (byte-code instrumented) properties. HHH-5255 fixed a longstanding bug merging a detached entity with a lazy property that has been initialized (this only works for "property" access); HHH-7573 fixed a bug processing lazy properties after an EntityManager PreUpdate callback; HHH-9629 fixed a bug in cache key generation for an entity with inheritance when fetching lazy property.

How to get it

Fourth Candidate Release for ORM 5.0

Posted by Steve Ebersole    |    Tagged as Hibernate ORM Releases

Today I released a fourth candidate release for Hibernate ORM 5.0 (5.0.0.CR4). The purpose was entirely to change the defaults for some settings. This allowed some additional fixes and additional documentation work to make it in.

Default ImplicitNamingStrategy

The default ImplicitNamingStrategy (hibernate.implicit_naming_strategy) has changed to the JPA-compliant one. Additionally I added some short-names for the Hibernate-provided implementations.

  • "default" → org.hibernate.boot.model.naming.ImplicitNamingStrategyJpaCompliantImpl

  • "jpa" → org.hibernate.boot.model.naming.ImplicitNamingStrategyJpaCompliantImpl

  • "legacy-jpa" → org.hibernate.boot.model.naming.ImplicitNamingStrategyLegacyJpaImpl

  • "legacy-hbm" → org.hibernate.boot.model.naming.ImplicitNamingStrategyLegacyHbmImpl

  • "component-path" → org.hibernate.boot.model.naming.ImplicitNamingStrategyComponentPathImpl

The previous default was "legacy-jpa". Existing applications that previously used the default naming strategy and want to continue to use that implicit naming strategy should specify hibernate.implicit_naming_strategy=legacy-jpa in their configuration settings. Alternatively, they can call MetadataBuilder#setImplicitNamingStrategy(ImplicitNamingStrategyLegacyJpaImpl.INSTANCE).

Identifier generator mapping

Back in 3.6 I developed a new set of identifier generator strategies aimed at database portability based on the JPA expectations for @SequenceGenerator and @TableGenerator. Between 3.6 and now the default has been to continue to use the legacy generator strategies, but we added a setting (hibernate.id.new_generator_mappings) to allow applications to request the newer strategies be used. The default for this setting had been false. The default is now true.

Existing applications updating to CR4 and then Final that experience issues with identifier generator strategy selection should try setting this back to false if they wish to keep using the legacy mappings.

Keyword auto-quoting

This is a new feature in 5.0, but previously the default had been to auto-quote any SQL identifiers believed to be a keyword in the underlying database. That feature is now disabled by default.

Applications that wish to use this feature should explicitly enable it by specifying hibernate.auto_quote_keyword=true in their configuration settings.
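
Putting the three settings discussed above together, in properties form (these example values restore the pre-CR4 behavior; adjust as needed):

hibernate.implicit_naming_strategy=legacy-jpa
hibernate.id.new_generator_mappings=false
hibernate.auto_quote_keyword=true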

More work on the documentation

Still in progress, but a lot more content has been added.

How to get it

Additionally many other improvements and bugfixes are included. See https://hibernate.atlassian.net/projects/HHH/versions/20752 for the complete list of changes.

As always, see http://hibernate.org/orm/downloads/ for information on obtaining the releases.

While Hibernate ORM was applying some more polish before releasing the final version, the Hibernate Search team did the same.

Hibernate Search version 5.4.0.CR2 is now available! It was built and tested with Hibernate ORM 5.0.0.CR3; again, we’re just waiting for that to be final, but decided to release another Candidate Release as some fixes and improvements were recently applied.

The "Meridian crossing issue"

The Spatial query feature was affected by a math sign mistake which would reveal itself on distance-based queries crossing the 180° meridian. Thanks to Alan Field for the test and Nicholas Helleringer for debugging and fixing the issue!

Serialization issue on enabling Term Vectors

Kudos to Benoit Guillon, who debugged and fixed a mistake in the Avro based serializer, for which values of Term Vectors would not be indexed when enabling Term Vectors in combination with a remote backend. See also HSEARCH-1936.

Various small improvements

You can now have the MassIndexer set a different timeout for the internal transactions it will start, so if you’re running Hibernate Search in a container like WildFly you no longer have to make a choice between having a deadline of 5 minutes or changing the default timeout of the whole container.
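
A minimal sketch, assuming the new option is exposed on the MassIndexer as a transactionTimeout(int seconds) method (check the MassIndexer API for the exact name and unit; Book is a hypothetical indexed entity):

import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

// Hypothetical usage: reindex Book entities using a 30 minute timeout for
// the internal transactions, independently of the container default.
FullTextSession fullTextSession = Search.getFullTextSession( session );
fullTextSession.createIndexer( Book.class )
        .transactionTimeout( 1800 )
        .startAndWait();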

  • Improved validation when choosing an invalid Sort definition

  • The Infinispan module will now get you a warning if your Maven project is using the old artifact ids

  • The performance integration tests have been re-enabled, and fixed to run on the latest WildFly and on Java 8 and Java 9

  • Fixed a potential issue with timezone encoding for indexed dates

  • Improved the code generating the path for index storage to handle "." elements in the configured paths

  • Upgraded to Hibernate ORM 5.0.0.CR3 and WildFly 10.0.0.Alpha6 (WildFly only used for testing)

Where to download it from

Everything you need is available on Hibernate Search’s web site. Download the full distribution from here, or get it from Maven Central, and don’t hesitate to reach us in our forums.

What’s next?

Soon we’ll be releasing a preview of 5.5 to have you upgrade to Apache Lucene 5.2. The Hibernate Search version 5.4.0.Final will be released when Hibernate ORM 5 is released as stable.

Third Candidate Release for ORM 5.0

Posted by Steve Ebersole    |    Tagged as Hibernate ORM Releases

Yesterday I released the third candidate release for Hibernate ORM 5.0 (5.0.0.CR3). We felt another CR was warranted because we had some minor integration (SPI) work that needed to make it into Final, but too much development had happened since the second CR to consider it risk-free to include everything in Final. At any rate, CR3 got lots of great TLC :) The complete set of changes can be seen in the Jira changelog. The main changes include:

Minor changes to the caching SPI

Essentially passing Session along to the various region access strategy methods to allow integrating with non-JDBC transactions.

Work on schema tooling

Improved namespace (catalog/schema) support overall in schema tools. Improved handling of views and synonyms for migrating and validating.

Work on bytecode enhancement

Lots of fixes based on feedback.

Consistency in Transaction API

A few changes were made to the JDBC-based TransactionCoordinator to work more like in JTA environments. Specifically:

  • implemented support for marking the Transaction for rollback-only.

  • transaction is now rolled back automatically on a failed commit.

Work on the documentation

Besides updating the content, it has been split into 3 separate guides:

  • User Guide

  • Domain Model Mapping Guide

  • Integrations Guide

How to get it

Additionally many other improvements and bugfixes are included. See https://hibernate.atlassian.net/projects/HHH/versions/20150 for the complete list of changes.

As always, see http://hibernate.org/orm/downloads/ for information on obtaining the releases.

Hibernate Validator 5.2.1.Final*

Posted by Hardy Ferentschik    |    Tagged as Hibernate Validator

I know you have been waiting in anticipation, but now it is available - Hibernate Validator 5.2.1.Final :-). Given that it is a drop-in replacement for all 5.x releases, there is no reason to delay an upgrade. Just go and get it.

For the more cautious, here again are the highlights of the 5.2 release with pointers to more information.

The highlights

  • Java 8 Support, most notably the ability to validate Optional and the new date/time types. Types which do not represent an instant on the time-line, however, are not supported out of the box. This includes, for example, LocalDate and its sub-types. Semantically it does not make sense to talk about future and past if your data type cannot be unambiguously tied to an instant in time. This is still a bit of an open discussion (see HV-981) though. Contact us if you have an opinion.

    Support for Java 8 also includes the use of type annotation constraints like:

    private List<@AcmeEmail String> emails;

    In this case each element of the emails list (or more generally of any Iterable) will be validated using @AcmeEmail. Note the use of a custom constraint and not the Hibernate Validator provided @Email. The reason is that all provided constraints are missing java.lang.annotation.ElementType.TYPE_USE in their definition for now. Adding it would break backwards compatibility with Java 6.

    Java 8 is not a requirement for Hibernate Validator 5.2. It is still backward compatible with Java 6. Java 8 specific features are only enabled in case a Java 8 runtime is detected.

  • Ability to provide custom constraints discoverable via the Java ServiceLoader mechanism. Under the hood this uses the new ConstraintMappingContributor SPI. read more…​

  • Ability to use Hibernate Validator without a dependency on the Expression Language libraries by using the new ParameterMessageInterpolator (see the bootstrap sketch after this list). read more…​

  • Ability to provide an external ClassLoader. Potentially handy for modularized environments. read more…​

  • Apache Karaf feature file. read more…​

  • Extension of the Path API for nodes of type ElementKind.PROPERTY, which allows obtaining the value of the represented property. read more…​

  • TimeProvider contract. read more…​

  • Tons of bug fixes
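
As referenced above, a minimal sketch of bootstrapping a Validator with the ParameterMessageInterpolator, so that no Expression Language implementation is needed on the classpath:

import javax.validation.Validation;
import javax.validation.Validator;

import org.hibernate.validator.HibernateValidator;
import org.hibernate.validator.messageinterpolation.ParameterMessageInterpolator;

// Interpolates message parameters only; EL expressions in constraint
// messages will not be evaluated with this interpolator.
Validator validator = Validation.byProvider( HibernateValidator.class )
        .configure()
        .messageInterpolator( new ParameterMessageInterpolator() )
        .buildValidatorFactory()
        .getValidator();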

Where to get it

Maven artifacts can be found in the JBoss Maven repository, as well as in Maven Central. The GAV is org.hibernate:hibernate-validator:5.2.1.Final. Distribution bundles will be uploaded to SourceForge once it has recovered from its storage fault.

What’s next

Further development will be driven by the upcoming Bean Validation update - Bean Validation 1.2. Most likely this will align with a Hibernate Validator 6 requiring Java 8. This will be necessary to make use of all the new features Java 8 offers, amongst others the use of @Repeatable. This is in alignment with other technologies which are part of the Java EE 8 standard.

Watch for changes to the Hibernate Validator roadmap, as well as announcements on this and the Bean Validation blog.

Let us know what you think

Feedback and questions are welcome via the Hibernate Validator forum or on stackoverflow using the hibernate-validator tag. If that is not enough, check out the other ways you can get in contact with us.

Thank you

Last but not least, a big thank you to all of you who were lending a helping hand along the way, be it via a bug report or more hands on via a pull request. Special thanks to Khalid who kicked off the work on the 5.2 release series with his Google Summer of Code work. Thank you also for the smaller contributions provided by Denis Tiago, Nicolas Francois, Xavier Sosnovsky, dernasherbrezon, stawny and tonnyyi. Finally a thank you to all other Hibernate team members who were involved.

Enjoy!


* In case you are wondering what happened with 5.2.0.Final, it became obsolete while waiting for SourceForge to recover. In the meantime an IBM JVM specific issue (HV-1007) was reported and fixed.

Handling queries on complex types in Hibernate Search

Posted by Emmanuel Bernard    |    Tagged as Hibernate Search

Writing queries using complex types can be a bit surprising in Hibernate Search. For these multi-field types, the key is to target each individual field in the query. Let’s discuss how this works.

What’s a complex type?

Hibernate Search lets you write custom types that take a Java property and create Lucene fields in a document. As long as there is a one-property-to-one-field relationship, you are good. It becomes more subtle if your custom bridge stores the property in several Lucene fields, say an Amount type which has a numeric part and a currency part.

Let’s take a real example from a user using Infinispan's search engine - proudly served by Hibernate Search.

The FieldBridge
public class JodaTimeSplitBridge implements TwoWayFieldBridge {

    /**
     * Set year, month and day in separate fields
     */
    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneoptions) {
        DateTime datetime = (DateTime) value;
        luceneoptions.addFieldToDocument(
            name+".year", String.valueOf(datetime.getYear()), document
        );
        luceneoptions.addFieldToDocument(
            name+".month", String.format("%02d", datetime.getMonthOfYear()), document
        );
        luceneoptions.addFieldToDocument(
            name+".day", String.format("%02d", datetime.getDayOfMonth()), document
        );
    }

    @Override
    public Object get(String name, Document document) {
        IndexableField fieldyear = document.getField(name+".year");
        IndexableField fieldmonth = document.getField(name+".month");
        IndexableField fieldday = document.getField(name+".day");
        String strdate = fieldday.stringValue()+"/"+fieldmonth.stringValue()+"/"+fieldyear.stringValue();
        // rebuild the DateTime property value from the three stored fields
        return DateTime.parse(strdate, DateTimeFormat.forPattern("dd/MM/yyyy"));
    }

    @Override
    public String objectToString(Object date) {
        DateTime datetime = (DateTime) date;
        int year = datetime.getYear();
        int month = datetime.getMonthOfYear();
        int day = datetime.getDayOfMonth();
        String value = String.format("%02d",day)+"/"+String.format("%02d",month)+"/"+String.valueOf(year);
        return String.valueOf(value);
    }
}
The entity using the bridge
[...]
@Indexed
class BlogEntry {
    [...]

    @Field(store=Store.YES, index=Index.YES)
    @FieldBridge(impl=JodaTimeSplitBridge.class)
    DateTime creationdate;
}

Let’s query this field

A naive but intuitive query looks like this.

Incorrect query
QueryBuilder qb = sm.buildQueryBuilderForClass(BlogEntry.class).get();
Query q = qb.keyword().onField("creationdate").matching(new DateTime()).createQuery();
CacheQuery cq = sm.getQuery(q, BlogEntry.class);
System.out.println(cq.getResultSize());

Unfortunately that query will always return 0 results. Can you spot the problem?

It turns out that Hibernate Search does not know about these subfields creationdate.year, creationdate.month and creationdate.day. A FieldBridge is a bit of a black box for the Hibernate Search query DSL, so it assumes that you index the data in the field name provided by the name parameter (creationdate in this example).

We have plans for a not-so-distant version of Hibernate Search to address that problem. It will only require you to provide a bit of metadata when you write such an advanced custom field bridge. But that’s the future, so what to do now?

Use a single field

I am cheating here, but as much as you can, try to keep the one property = one field mapping. Life will be much simpler for you. In this specific JodaTime type example, this is extremely easy. Use the custom bridge, but instead of creating three fields (for year, month, day), keep it as a single field in the form of yyyymmdd.

Let’s again use our user’s real-life solution.

A bridge using one field
public class JodaTimeSingleFieldBridge implements TwoWayFieldBridge {

    /**
     * Store the data in a single field in yyyymmdd format
     */
    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneoptions) {
        DateTime datetime = (DateTime) value;
        luceneoptions.addFieldToDocument(
            name, datetime.toString(DateTimeFormat.forPattern("yyyyMMdd")), document
        );
    }


    @Override
    public Object get(String name, Document document) {
        IndexableField strdate = document.getField(name);
        return DateTime.parse(strdate.stringValue(), DateTimeFormat.forPattern("yyyyMMdd"));
    }

    @Override
    public String objectToString(Object date) {
        DateTime datetime = (DateTime) date;
        return datetime.toString(DateTimeFormat.forPattern("yyyyMMdd"));
    }
}

In this case, it would even be better to use a Lucene numeric format field. They are more compact and more efficient at range queries. Use luceneOptions.addNumericFieldToDocument( name, numericDate, document );.

The query above will work as expected now.

But my type must have multiple fields!

OK, OK. I won’t avoid the question. The solution is to disable the Hibernate Query DSL magic and target the fields directly.

Let’s see how to do it based on the first FieldBridge implementation.

Query targeting multiple fields
int year = datetime.getYear();
int month = datetime.getMonthOfYear();
int day = datetime.getDayOfMonth();

QueryBuilder qb = sm.buildQueryBuilderForClass(BlogEntry.class).get();
Query q = qb.bool()
    .must( qb.keyword().onField("creationdate.year").ignoreFieldBridge().ignoreAnalyzer()
                .matching(year).createQuery() )
    .must( qb.keyword().onField("creationdate.month").ignoreFieldBridge().ignoreAnalyzer()
                .matching(month).createQuery() )
    .must( qb.keyword().onField("creationdate.day").ignoreFieldBridge().ignoreAnalyzer()
                .matching(day).createQuery() )
   .createQuery();

CacheQuery cq = sm.getQuery(q, BlogEntry.class);
System.out.println(cq.getResultSize());

The key is to:

  • target directly each field,

  • disable the field bridge conversion for the query,

  • and it’s probably a good idea to disable the analyzer.

It’s a rather advanced topic and the query DSL will do the right thing most of the time. No need to panic just yet.

But in case you do hit such complex type needs, it’s interesting to understand what is going on underneath.

Map me if you can - Advanced embeddable mappings

Posted by Gunnar Morling    |    Tagged as Hibernate ORM

The other day I came across an interesting mapping challenge which I thought may be worth sharing. If you are a seasoned JPA user, it will probably be nothing new to you, but those not as experienced may find it helpful :)

TL;DR - JPA lets you override database columns for embedded objects but also for collections of embedded objects; @AttributeOverride and @AssociationOverride can be used for that.

Let’s assume the following entity model representing a person and their home and business address:

Entity model

The interesting part is that Person has two associations with Address, one with the "homeAddress" role and another one with the "businessAddress" role. Address in turn has one or more address lines.

When mapping these types with JPA, Person naturally becomes an entity. As there is a composition relationship between Person and Address, the latter is mapped with @Embeddable. The same goes for AddressLine, which also is an @Embeddable, contained within an element collection owned by Address.

So you’d end up with these classes:

@Entity
public class Person {

    @Id
    private long id;
    private String name;
    private Address homeAddress;
    private Address businessAddress;

    // constructor, getters and setters...
}
@Embeddable
public class Address {

    private boolean active;

    @ElementCollection
    private List<AddressLine> lines = new ArrayList<AddressLine>();

    // constructor, getters and setters...
}
@Embeddable
public class AddressLine {

    private String value;

    // constructor, getters and setters...
}

@AttributeOverride

Let’s fire up a session factory with these types (this uses the brand-new bootstrapping API coming in Hibernate ORM 5) and see how that goes:

StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
    .applySetting( AvailableSettings.SHOW_SQL, true )
    .applySetting( AvailableSettings.FORMAT_SQL, true )
    .applySetting( AvailableSettings.HBM2DDL_AUTO, "create-drop" )
    .build();

SessionFactory sessionFactory = new MetadataSources( registry )
    .addAnnotatedClass( Person.class )
    .buildMetadata()
    .buildSessionFactory();

Hum, that did not go too well:

org.hibernate.MappingException: Repeated column in mapping for entity:
org.hibernate.bugs.Person column: active (should be mapped with insert="false" update="false")

Of course this makes sense; as Address is embedded twice within Person, its properties must be mapped to unique column names within the Person table. The @AttributeOverride annotation can be used for that:

@Entity
public class Person {

    @Id
    private long id;
    private String name;

    @AttributeOverride(name = "active", column = @Column(name = "home_address_active"))
    private Address homeAddress;

    @AttributeOverride(name = "active", column = @Column(name = "business_address_active"))
    private Address businessAddress;

    // constructor, getters and setters...
}

With that, the session factory boots successfully. But let’s take a look at the tables that get created:

create table Person (
    id bigint not null,
    business_address_active boolean,
    home_address_active boolean,
    name varchar(255),
    primary key (id)
)

create table Person_lines (
    Person_id bigint not null,
    "value" varchar(255)
)

@AssociationOverride

The Person table looks alright, but there is only a single table for the address lines. That’s a problem as it means there is no way to tell apart the lines of the business address from the lines of the home address when reading back a person from the database.

So what to do? @AttributeOverride is of no help this time, as it’s not a single column which needs to be re-defined but an entire table. But luckily, @AttributeOverride has a companion, @AssociationOverride. This annotation can be used to configure the required tables in this case:

@Entity
public class Person {

    @Id
    private long id;
    private String name;

    @AttributeOverride(name = "active", column = @Column(name = "home_address_active"))
    @AssociationOverride(name = "lines", joinTable = @JoinTable(name = "Person_HomeAddress_Line"))
    private Address homeAddress;

    @AttributeOverride(name = "active", column = @Column(name = "business_address_active"))
    @AssociationOverride(name = "lines", joinTable = @JoinTable(name = "Person_BusinessAddress_Line"))
    private Address businessAddress;

    // constructor, getters and setters...
}

Et voilà, now you’ll get the DDL for creating the Person table and two different tables for the address lines:

create table Person (
    id bigint not null,
    business_address_active boolean,
    home_address_active boolean,
    name varchar(255),
    primary key (id)
)

create table Person_BusinessAddress_Line (
    Person_id bigint not null,
    "value" varchar(255)
)

create table Person_HomeAddress_Line (
    Person_id bigint not null,
    "value" varchar(255)
)