Red Hat

In Relation To

The Hibernate team blog on everything data.

Hibernate Validator 6.0.2.Final released

Posted by    |       |    Tagged as Hibernate Validator Releases

Thanks to our users providing us feedback on our 6.0 release, we were able to fix a few annoying issues and we are happy to announce the release of Hibernate Validator 6.0.2.Final.

This is a recommended upgrade for everyone using Hibernate Validator 6.0.x and it is a drop-in replacement of 6.0.1.Final.

What’s new since 6.0.1.Final?

We fixed 2 annoying bugs:

  • HV-1471, reported by William Eddy - An important issue in the new value extraction infrastructure: it could lead to the engine trying to extract a value from an already unwrapped value;

  • HV-1470, reported by Jean-Sébastien Roy - The annotation processor might report an error in a perfectly valid use case: a constraint validator declared through the programmatic API or the XML configuration. This issue is only relevant to people using the annotation processor.

The Brazilian Portuguese translation was updated by Hilmer Chona and Thiago Mouta.

Marko also made a few improvements to the annotation processor regarding the value extraction support in HV-1395.

The complete list of fixed issues can be found on our JIRA.

Getting 6.0.2.Final

To get the release with Maven, Gradle etc. use the GAV coordinates org.hibernate.validator:{hibernate-validator|hibernate-validator-cdi|hibernate-validator-annotation-processor}:6.0.2.Final. Note that the group id has changed from org.hibernate (Hibernate Validator 5 and earlier) to org.hibernate.validator (from Hibernate Validator 6 onwards).
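For example, the core artifact can be declared in a Maven POM as follows (a sketch; the CDI and annotation-processor artifacts follow the same pattern with their respective artifact ids):

```xml
<dependency>
   <groupId>org.hibernate.validator</groupId>
   <artifactId>hibernate-validator</artifactId>
   <version>6.0.2.Final</version>
</dependency>
```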

Alternatively, a distribution bundle containing all the bits is provided on SourceForge (TAR.GZ, ZIP).

If you want to benefit from the new features of this version on WildFly, we also provide WildFly patches for WildFly 10.1 and WildFly 11.0 Beta1 (wait for them to be synchronized to Maven Central). You can read about how to apply such patches here.

What’s next?

We will continue to publish maintenance releases to quickly fix the issues reported by our users.

With your help, we hope to update the translations of the constraint messages in future releases.

And finally, we also intend to explore potential features for a future spec revision. Always good to have feedback from the field before setting the API in stone.

Feedback, issues, ideas?

To get in touch, use the usual channels:

Meet Anghel Leonard

Posted by    |       |    Tagged as Discussions Hibernate ORM Interview

In this post, I’d like you to meet Anghel Leonard, a software developer, blogger, book author, and Java EE aficionado.

Hi, Leonard. Would you like to introduce yourself and tell us a little bit about your developer experience?

Hi Vlad, thanks for having me. My name is Anghel Leonard (@anghelleonard on Twitter), I’m living in a small village in Romania, and I’ve been a software developer for the last 17 years, mainly focusing on Java and Web development. I’m also a blogger and author of several books.

My career started as a Java developer in the oil field (Petrom S.A.). There, I was mainly part of a team whose goal was to develop applications for oil field simulation (Java desktop applications meant to apply specific mathematical models). After some time, we switched to web applications (based on HTML, Servlets/JSP and Struts) and we brought databases into the equation as well (especially MySQL and Visual FoxPro). Around that time, I started working with the Hibernate ORM "native" API.

Shortly after, I started to learn Java EE (mainly JSF, EJB, and JPA) and Spring. Later, I worked for many years in the GIS field developing RIA and SPA web applications. Since then, I have been constantly using Hibernate's implementation of the Java Persistence API (JPA) specification. By their nature, GIS RIA/SPA applications process a lot of data (spatial and non-spatial) and must run fast, so I have always been interested in optimizing the performance of the persistence layer. I can say that I have seen Hibernate "growing" and I have constantly tried to learn about every new feature and improvement it brought :)

Currently, I’m working as a Java CA. My main task here is to perform code reviews on a significant number of projects of different sizes, technologies, and areas of interest. Many of these projects use Hibernate "native" API and Hibernate JPA.

You are a very prolific writer, having published several books about JSF, Omnifaces, Hibernate, MongoDB and even JBoss Tools. Now that you are self-publishing your latest book, can you tell us the difference between working with a publisher and going on your own?

Well, I think it is obvious that choosing between a publisher and self-publishing is a matter of trade-offs. From my experience, there are pros and cons on both sides, and I can highlight the following aspects:

Publisher vs self-publishing pros:

  • publishers provide grammar and spell checking (this can be a major advantage for non-native English speakers like me) and technical reviewers (usually, authors can recommend reviewers as well), while in self-publishing the author must take care of these aspects

  • publishers take care of the book's cover and index, while in self-publishing the author must do it

  • publishers may consider the author very good and contact him for further contracts on the same topic, while this does not happen in self-publishing (at least, I have not heard of it)

  • publishers provide constant and consistent assistance during the writing process (e-mail or Skype) via editors, a project coordinator, technical staff, etc., while in self-publishing this is not available

  • for authors, publishing is 100% cost-free, while in self-publishing the costs can vary considerably

  • publishers can be powerful brands on the market

Publisher vs self-publishing cons:

  • publishers can reject a book proposal for different reasons (e.g. they already have too many books on the suggested topic), while in self-publishing the chances of being accepted are significantly higher

  • publishers work only with deadlines (each chapter has a fixed date and the book has a fixed calendar), while in self-publishing the author decides when to release an updated version and how significant the update will be

  • publishers provide "Errata" for book issues (typos, mistakes, technical gaps, content accuracy issues, etc.), and those issues can be fixed only in subsequent editions of the book, while in self-publishing the author can fix issues immediately and repeatedly

  • publishers usually pay significantly smaller royalties in comparison with self-publishing

  • typically, publishers pay royalties every 6 months, while in self-publishing payments are more frequent

  • publishers are not very flexible about the book's size, appearance, format, writing style, etc., while in self-publishing all of these are very flexible

  • publishers require exclusive ownership of the book's content, and it is forbidden to publish it in any other place or form, while in self-publishing this restriction does not always apply

  • publishers set the price of the book without consulting the author, while in self-publishing the author sets the price (moreover, the author can choose the price within a range of values)

  • publishers decide the discounts and donations policy while in self-publishing the author can provide coupons, discounts and make donations.

Your latest book is called "Java Persistence Performance Illustrated Guide". Could you tell us more about this new project of yours?

Sure thing.

Well, in the beginning, the content of this new book was not meant to be part of any book. The story goes like this: over time, I have collected the best online articles about the Java persistence layer (JPA, Hibernate, SQL, etc.) and, on my desk, I have some of the most amazing books about these topics.

I think there is no surprise for you if I say that your blog and book, High-Performance Java Persistence, are major parts of this collection.

In order to mitigate performance issues related to the persistence layer, I strongly and constantly recommend these resources to developers working on that layer, but one question remains: in a regular day of work, how can the members of a team quickly understand/recognize a performance issue in order to fix it?

Well, there is no time to study in that moment, so I decided on a new approach: make a drawing of a specific performance issue and talk about that drawing for 5-15 minutes (after all, "a picture is worth a thousand words"). This way, the audience can quickly understand/recognize the issue and get the hints needed to fix it.

Further, I published these drawings on Twitter, where I was surprised to see that even without the words (the talk), they were appreciated. Over time, I collected a significant number of drawings and people started asking me if I would publish them somewhere (I remember we had a little talk about this on Twitter as well). And this is how the idea of the book was born. :)

The main reason for choosing the self-publishing approach was that I am not constrained by fixed deadlines. The only extra effort was finding somebody to create the cover - it was designed and drawn by an excellent painter, Mr. Barsan Florian.

Now, the goal of this book is to act as a quick illustrated guide for developers who need to deal with persistence layer performance issues (SQL, JDBC, JPA, Hibernate (most covered) and Hazelcast).

Each drawing is accompanied by a short description of the issue and the solution. It's like "first aid": a quick and condensed recipe that can be followed by an extended and comprehensive article with examples and benchmarks, as you have on your blog.

Most of the applications that I reviewed are Java EE and Spring based applications. Since most of the performance penalties have their roots in the persistence layer, I tried to make a top 10 of the most frequent programming mistakes that cause them (this ranking was computed from ~300 PRs in different projects and is still a work in progress):

  1. Having long or useless transactions (e.g. using @Transactional at the class level on Spring controllers that delegate tasks to "heavy" services or never interact with the database)

  2. Avoiding PreparedStatement bind parameters and using "+" concatenation to set parameters in SQL queries.

  3. Fetching too much data from the database instead of using a combination of DTOs, LIMIT/ROWNUM/TOP and JOINs (e.g. in the worst scenario: a read-only query (marked as read-only or not) fetches all entities, a stream is created over the returned collection, and afterwards the findFirst stream terminal operation is executed in order to fetch and use a single entity).

  4. Wrong batching configuration (the team lives with the impression that batching is working behind the scenes, but they never check/inspect the actual SQL statements and the batch size)

  5. Bad usage or missing transaction boundaries (e.g. omitting @Transactional for read-only queries or executing separate transactions for a bunch of SQL statements that must run in a single transaction)

  6. Ignoring the fact that data is loaded eagerly.

  7. Not relying on a connection pool, or not tuning it (FlexyPool should be promoted intensively). Even worse, increasing the number of connections to 300 or 400.

  8. Using unidirectional one-to-many associations for entity insert and delete operations

  9. Using CriteriaBuilder for all SQL statements and relying on whatever is generated behind the scenes

  10. Lack of knowledge about Hibernate features (e.g. attribute lazy loading, bytecode enhancement, delayed DB connection acquisition, suppressing the DISTINCT keyword sent to the database, etc.)
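To make mistake #2 above concrete, here is a minimal, self-contained sketch (the table, column, and class names are hypothetical, not taken from any reviewed project):

```java
// Sketch: "+" concatenation versus PreparedStatement bind parameters.
public class BindParamsSketch {

    // Anti-pattern: the user input becomes part of the SQL text itself.
    static String naiveQuery(String customer) {
        return "SELECT * FROM orders WHERE customer = '" + customer + "'";
    }

    // Preferred: one fixed parameterized query; the JDBC driver binds the
    // value safely via PreparedStatement.setString(1, customer).
    static final String PARAMETERIZED =
            "SELECT * FROM orders WHERE customer = ?";

    public static void main(String[] args) {
        // A crafted value rewrites the naive query into a tautology:
        String attack = "x' OR '1'='1";
        System.out.println(naiveQuery(attack));
        // With a PreparedStatement, the same value is treated as plain data:
        //   PreparedStatement ps = connection.prepareStatement(PARAMETERIZED);
        //   ps.setString(1, attack);
    }
}
```

Besides preventing SQL injection, a single parameterized string lets the driver and the database reuse the statement plan and makes JDBC batching possible.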

We always value feedback from our users, so can you tell us what you’d like us to improve or are there features that we should add support for?

First, I want to congratulate the whole Hibernate team because it is doing a great job! I really love the latest features and the comprehensive improvements in the documentation. Taking into account the type of applications that I'm involved in, I would like to see the Hibernate-CDI integration ready.

Thank you, Leonard, for taking the time. It is a great honor to have you here. To reach Leonard, you can follow him on Twitter.

Hibernate ORM 5.1.10.Final released

Posted by    |       |    Tagged as Hibernate ORM Releases

We decided to do another release of the 5.1 series to fix some bugs to be included in an upcoming version of WildFly. This may be the last release of the 5.1 series, so we recommend that you migrate to 5.2 for future bugfixes.

Hibernate ORM 5.1.10.Final:

For information on consuming the release via your favorite dependency-management-capable build tool, see http://hibernate.org/orm/downloads/

Hibernate Search 5.8.0.CR1 released

We just published Hibernate Search version 5.8.0.CR1, with bugfixes and improvements over 5.8.0.Beta4.

Version 5.8.0.CR1 is the last opportunity for the community to test it and report bugs before the release.

Hibernate Search 5.8.x, just as 5.7.x, is only compatible with Hibernate ORM 5.2.3 and later.

If you need to use Hibernate ORM 5.0.x or 5.1.x, use the older Hibernate Search 5.6.x.

What’s new in CR1?

Here are the most notable changes:

  • HSEARCH-2831: request signing for Amazon’s proprietary IAM authentication mechanism now requires you to set the hibernate.search.default.elasticsearch.aws.signing.enabled property to true, allowing you to easily disable signing even if the hibernate-search-elasticsearch-aws JAR is in your classpath.

  • HSEARCH-2818 / HSEARCH-2821: sending requests to Elasticsearch is now much less memory-consuming.

  • HSEARCH-2764: we improved the orchestration of index updates before they are sent to the Elasticsearch client:

    • Index updates originating from a single Hibernate Search node will now be sent to Elasticsearch in the order they were generated, even when they come from different threads.

    • Mass indexing will now add documents in parallel, allowing you to take advantage of having multiple connections to the Elasticsearch cluster. Note you can customize the maximum number of connections using the hibernate.search.default.elasticsearch.max_total_connection and hibernate.search.default.elasticsearch.max_total_connection_per_route configuration properties.

    • The internal index update queues are now bounded, thus performing mass indexing on very large data sets will not trigger an OutOfMemoryError anymore.

    • We also made several other internal changes to improve performance (less Refresh API calls, more request bulking, …​).

  • HSEARCH-2839: when using a metadata-providing bridge, the bridge can now implement projection on the default field even if its type was set to OBJECT.

  • HSEARCH-2840: when using a metadata-providing bridge, the bridge can now implement projection on dynamic fields.

  • HSEARCH-2843: changing the limit/offset of a query now properly clears the query’s result cache with Elasticsearch.
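For instance, the properties mentioned above could be set as follows (the values are illustrative only, not recommendations):

```properties
# Explicitly enable AWS request signing (HSEARCH-2831):
hibernate.search.default.elasticsearch.aws.signing.enabled = true

# Tune the Elasticsearch client connection limits used during mass indexing:
hibernate.search.default.elasticsearch.max_total_connection = 20
hibernate.search.default.elasticsearch.max_total_connection_per_route = 10
```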

For a full list of changes since 5.8.0.Beta4, please see the release notes.

How to get this release

All versions are available on Hibernate Search’s web site.

Ideally use a tool to fetch it from Maven Central; these are the coordinates:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-orm</artifactId>
   <version>5.8.0.CR1</version>
</dependency>

To use the experimental Elasticsearch integration you’ll also need:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-elasticsearch</artifactId>
   <version>5.8.0.CR1</version>
</dependency>

To use Amazon’s proprietary IAM authentication mechanism to access your Elasticsearch cluster you’ll also need:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-elasticsearch-aws</artifactId>
   <version>5.8.0.CR1</version>
</dependency>

Downloads from Sourceforge are available as well.

Feedback, issues, ideas?

To get in touch, use the following channels:

Hibernate Community Newsletter 15/2017

Posted by    |       |    Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

On Baeldung, you can find a very good article about JPA and Hibernate pagination. While JPA 2.2 defines support for Java 1.8 Stream operations, pagination is still the preferred way of controlling the amount of data being fetched.

Have you ever wondered how you can map a JPA many-to-many association with extra columns? If you are interested in the best way to map such a relationship, then you should definitely read this article.

If you’re using a relational database, then you should be using a connection pool as well. Check out this article for a performance analysis of the most common Java connection pools. You will also need connection pool monitoring, and the FlexyPool open source framework allows you to do just that.

Hibernate offers a dirty checking mechanism which automatically detects changes to managed entities. While the default mechanism is suitable for most use cases, you can even customize it as explained in this article.

If you have ever encountered the HHH000179: Narrowing proxy to class - this operation breaks == warning message, or wondered how to fix it, then you should read this article written by Marcin Chwedczuk.

Traditionally, storing EAV (Entity-Attribute-Value) data in an RDBMS has required various tricks to handle multiple value types. Now that most relational databases support JSON column types, you can use a custom Hibernate type to store your EAV data as a JsonNode object. Check out this article for a step-by-step tutorial that shows you how to accomplish this task.

Joe Nelson wrote a great article about the difference between various SQL isolation levels with examples for various phenomena like read skew, write skew or lost updates.

Thorben Janssen gives you some tips about mapping the Many-To-One and One-To-Many associations. For more details, check out the best way to map a @OneToMany relationship with JPA and Hibernate article as well.

Questions and answers

Hibernate Validator 6.0 is out

It has been a long ride, more than six months, but here it is: we just released Hibernate Validator 6.0 Final together with the final version of the Bean Validation 2.0 specification.

What’s new since CR3?

For those who have closely followed the development of 6.0, here are the important changes since CR3:

  • Updated documentation

  • Updated translations

  • Performance improvements

  • Reduced memory footprint

  • Support for the latest JDK 9 (build 180)

The complete list of fixed issues can be found on our JIRA.

Why should I use this nifty new version?

We have some new features for you

First and foremost, Hibernate Validator 6.0 is the Reference Implementation of the Bean Validation 2.0 specification so it comes with all its new features:

  • First class support of container element constraints and cascaded validation (think private Map<@Valid @NotNull OrderCategory, List<@Valid @NotNull Order>> orderByCategories;);

  • Support for the new JSR 310 date/time data types for @Past and @Future;

  • New built-in constraints: @Positive, @PositiveOrZero, @Negative, @NegativeOrZero, @PastOrPresent and @FutureOrPresent;

We have also leveraged the new features of JDK 8 (built-in constraints are marked repeatable, parameter names are retrieved via reflection), as it is now the minimum required version.
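As a rough illustration of the features above, here is what a constrained bean might look like (a sketch; the OrderCategory and Order types and all field names are hypothetical, and the javax.validation 2.0 API must be on the classpath):

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import javax.validation.Valid;
import javax.validation.constraints.*;

public class Retailer {

    // Container element constraints with cascaded validation:
    private Map<@Valid @NotNull OrderCategory, List<@Valid @NotNull Order>> orderByCategories;

    // JSR 310 date/time types are now supported by @Past and @Future:
    @Past
    private LocalDate founded;

    // Some of the new built-in constraints:
    @Positive
    private int employeeCount;

    @NegativeOrZero
    private long balanceDue;

    @FutureOrPresent
    private LocalDate nextDelivery;
}
```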

You can read the whole story in the announcements of our Alpha, Beta and Candidate releases:

  • 6.0.0.Alpha1 in which we introduced nested container elements support and lambda based constraint definitions;

  • 6.0.0.Alpha2 with programmatic API and XML definition for container elements;

  • 6.0.0.Beta1 with metadata retrieving support for container elements;

  • 6.0.0.Beta2 with support for non generic container types in value extraction;

  • 6.0.0.CR1, 6.0.0.CR2 and 6.0.0.CR3 where we polished the features and the API.

It’s faster…​

We have done quite a lot of benchmarking and have significantly improved the performance of Hibernate Validator. It can be up to two times faster in various scenarios.

We’re going to publish the results of our benchmarks on this blog soon.

…​and it consumes less memory!

To do its magic, Hibernate Validator collects a lot of metadata on your constrained beans. After a report from the Keycloak developers, we worked on reducing the memory footprint used by the collected metadata.

Hibernate Validator should now consume significantly less memory than before to store your constrained beans' metadata.

Easy upgrade

The first thing you’ll notice is that the groupId of the artifact has changed: it is now org.hibernate.validator (we added validator at the end to better compartmentalize the various Hibernate technologies).

Other than that, it will probably just be a drop-in replacement if you didn’t use experimental features.

If you used the old value handling infrastructure to deal with custom containers, you will need to migrate to the new value extractor infrastructure.
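For reference, a custom extractor for a hypothetical Wrapper<T> container might look like this under the new SPI (a sketch based on the javax.validation.valueextraction package; the Wrapper type and its get() method are assumptions):

```java
import javax.validation.valueextraction.ExtractedValue;
import javax.validation.valueextraction.ValueExtractor;

// Tells the engine how to reach the value wrapped by a hypothetical Wrapper<T>.
public class WrapperValueExtractor implements ValueExtractor<Wrapper<@ExtractedValue ?>> {

    @Override
    public void extractValues(Wrapper<?> originalValue, ValueReceiver receiver) {
        // A null node name means violations are reported on the container itself.
        receiver.value(null, originalValue.get());
    }
}
```

Such an extractor can then be registered, for instance via the service loader mechanism or when bootstrapping the ValidatorFactory.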

The detailed list of potential migration concerns can be found in our migration guide.

If you want to benefit from the new features of this version on WildFly, we also provide WildFly patches for WildFly 10.1 and WildFly 11.0 Alpha1. You can read about how to apply such patches here.

It sounds exciting, how can I help?

First, test it with your applications and report any issues you might encounter.

Then, if you’d like, you can test the new features (the new constraints, the new container element support…​). If you have any issues with the features or the documentation, please report them to us.

Finally, we added a few more built-in constraints in Bean Validation 2.0 and Hibernate Validator 6.0 so we need to update the translations.

English, French, German, Persian, Ukrainian and Brazilian Portuguese are covered, but any help with the others is welcome. Just take a look at the translations in our GitHub repository and create a PR (if you’re not sure how to encode a property file, just send it to us in plain text and we’ll do it).

Getting 6.0.1.Final

To get the release with Maven, Gradle etc. use the GAV coordinates org.hibernate.validator:{hibernate-validator|hibernate-validator-cdi|hibernate-validator-annotation-processor}:6.0.1.Final. Note that the group id has changed from org.hibernate (Hibernate Validator 5 and earlier) to org.hibernate.validator (from Hibernate Validator 6 onwards).

Alternatively, a distribution bundle containing all the bits is provided on SourceForge (TAR.GZ, ZIP).

What about 6.0.0.Final?

6.0.0.Final is essentially the version submitted to the JCP as the Reference Implementation of Bean Validation 2.0. We made quite a lot of improvements on top of it during the Final Approval Ballot period, so we decided to release 6.0.1.Final with all these improvements right away.

What’s next?

Well, first, we will get some rest and wait for your feedback on this version.

With your help, we hope to release a new version soon with updated translations.

And finally, we also intend to explore potential features for a future spec revision. Always good to have feedback from the field before setting the API in stone.

Feedback, issues, ideas?

To get in touch, use the usual channels:

JBoss Community Asylum - minishift

Posted by    |       |    Tagged as asylum minishift

Here is an episode we recorded late one evening with Hardy Ferentschik on the topic of minishift.

Episode 45 - Show notes and podcast.

Enjoy!

Hibernate ORM 5.1.9.Final released

Posted by    |       |    Tagged as Hibernate ORM Releases

We decided to do another release of the 5.1 series to fix some bugs to be included in an upcoming version of WildFly. This may be the last release of the 5.1 series, so we recommend that you migrate to 5.2 for future bugfixes.

Hibernate ORM 5.1.9.Final:

For information on consuming the release via your favorite dependency-management-capable build tool, see http://hibernate.org/orm/downloads/

Hibernate Community Newsletter 14/2017

Posted by    |       |    Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

In this article, Arpit Jain writes about the differences between persist and merge in relation to JPA transaction boundaries. For more details about the persist and merge entity state transitions, check out this article as well.

For functional programming aficionados, TehDev wrote a very interesting article about refactoring towards a transaction monad.

If you’re using Payara Server, check out this article about how you can integrate it with Hibernate 5.

Baeldung published an article about the differences between persist, merge, update, as well as saveOrUpdate.

If you’re using Grails, Michael Scharhag shows you how you can make use of Hibernate filters.

JPA 2.2 has only just been released, but Hibernate has been supporting Java 1.8 Date and Time as well as Hibernate-specific @Repeatable annotations for a while, and, since 5.1, Java 1.8 streams are supported too.

If you’re using MySQL, Thorben Janssen has written a list of tips to take into consideration when using Hibernate. If you are interested in more details, then check out the following articles as well:

Debezium is an open-source project developed by Red Hat which allows you to capture transaction events from relational databases like MySQL and PostgreSQL, or NoSQL solutions such as MongoDB, and push them to Apache Kafka. For more details, check out this tutorial about using Debezium, MySQL and Kafka.

Hibernate Search 5.8.0.Beta4 released

We just published Hibernate Search version 5.8.0.Beta4, with AWS integration as well as bugfixes and improvements over 5.8.0.Beta3.

Hibernate Search 5.8.x, just as 5.7.x, is only compatible with Hibernate ORM 5.2.3 and later.

If you need to use Hibernate ORM 5.0.x or 5.1.x, use the older Hibernate Search 5.6.x.

5.8 status

We completed most of the work on new features and improvements for 5.8, and are now mainly working on performance improvements for the Elasticsearch integration.

As a consequence, you can expect the next version we’ll publish to be a candidate release.

Once the CR is out, we will only fix bugs, and functional improvements will have to wait until the next minor release.

So if you plan on using AWS integration, normalizers, analyzer providers, or SPIs for integration of dependency injection frameworks, now’s the last time to ask for improvements before the actual release!

What’s new in Beta4?

AWS integration

Building on the new SPIs introduced in Beta3, we added a new module allowing you to very simply wire your Hibernate Search instance to an AWS-hosted Elasticsearch cluster using Amazon’s proprietary IAM authentication mechanism.

You can find more information about how to use this integration in the reference documentation.

And more!

A summary of other notable changes:

  • HSEARCH-2783: the buffer_size_on_copy configuration property has been deprecated, because we now use Java NIO for file copy and thus don’t need explicit buffering anymore.

  • HSEARCH-2785: using .phrase() and .keyword() on the QueryBuilder for normalized fields no longer fails with Elasticsearch.

  • HSEARCH-2776 and HSEARCH-2777: javax.transaction dependencies are no longer incorrectly marked as optional in the OSGi manifest.

For a full list of changes since 5.8.0.Beta3, please see the release notes.

How to get this release

All versions are available on Hibernate Search’s web site.

Ideally use a tool to fetch it from Maven Central; these are the coordinates:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-orm</artifactId>
   <version>5.8.0.Beta4</version>
</dependency>

To use the experimental Elasticsearch integration you’ll also need:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-elasticsearch</artifactId>
   <version>5.8.0.Beta4</version>
</dependency>

To use Amazon’s proprietary IAM authentication mechanism to access your Elasticsearch cluster you’ll also need:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-elasticsearch-aws</artifactId>
   <version>5.8.0.Beta4</version>
</dependency>

Downloads from Sourceforge are available as well.

Feedback, issues, ideas?

To get in touch, use the following channels:
