
In Relation To

The Hibernate team blog on everything data.

Hibernate ORM 5.1.8.Final released

Tagged as Hibernate ORM Releases

We decided to do another release of the 5.1 series to fix a number of bugs ahead of its inclusion in an upcoming version of WildFly. This may be the last release of the 5.1 series, so we recommend that you migrate to 5.2 for future bugfixes.

Hibernate ORM 5.1.8.Final:

For information on consuming the release via your favorite dependency-management-capable build tool, see http://hibernate.org/orm/downloads/

Hibernate Community Newsletter 12/2017

Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Presentations

Don’t miss the Virtual JUG presentation about High-Performance Java Persistence and Hibernate. If you are using a relational database, you should definitely attend this session, and, best of all, you can watch it from the comfort of your home.

Articles

The pick of this edition is this article by Arnold Galovics, which reiterates the benefits of using projections when fetching data.

JPA inheritance is a very useful addition to the standard. However, sometimes entity inheritance is not very well understood or applied, so, in this series of articles, I tried to offer a different perspective on why we need entity inheritance in the first place, and what is the best way to do it:

Time to upgrade

Hibernate Search has managed to release three final versions:

  • 5.5.7.Final

  • 5.6.2.Final

  • 5.7.1.Final

as well as a 5.8.0.Beta3 release.

During Riviera Dev / JUDCon, Emmanuel Bernard talked with Heiko Braun and Lance Ball on the topic of WildFly Swarm and Node.js.

Episode 44 - Show notes and podcast.

Enjoy!

We just published Hibernate Search version 5.8.0.Beta3, with bugfixes and improvements over 5.8.0.Beta2, but also new features such as analyzer providers, normalizers, AWS compatibility and SPIs for integration of dependency injection frameworks!

Hibernate Search 5.8.x, just like 5.7.x, is only compatible with Hibernate ORM 5.2.3 and later.

If you need to use Hibernate ORM 5.0.x or 5.1.x, use the older Hibernate Search 5.6.x.

About 5.8

Hibernate Search 5.8 is mainly about:

  • making the Elasticsearch integration compatible with Elasticsearch 5.x (done);

  • improving performance of the Elasticsearch integration (in progress);

  • introducing a new DSL for defining analyzers (done);

  • ensuring that Hibernate Search will work with Java 9 (done, though Java 9 may still change);

  • improving and documenting WildFly Swarm integration (in discussion);

  • removing the need for class definition on master nodes in JMS/JGroups integration (in discussion);

  • and of course, fixing reported bugs.

You can have a look at the roadmap for more details.

What’s new since Beta2?

Programmatic analyzer definitions

You can now define your analyzers programmatically (without annotations), globally (without putting the definition on a particular entity), and in a native way (without using Lucene classes to configure an Elasticsearch analyzer) using analyzer definition providers.

For example, for Lucene your LuceneAnalysisDefinitionProvider might look like this:

public static class CustomAnalyzerProvider implements LuceneAnalysisDefinitionProvider {
    @Override
    public void register(LuceneAnalyzerDefinitionRegistryBuilder builder) {
        builder
                .analyzer( "myAnalyzer" )
                        .tokenizer( StandardTokenizerFactory.class )
                        .charFilter( MappingCharFilterFactory.class )
                                .param( "mapping", "org/hibernate/search/test/analyzer/mapping-chars.properties" )
                        .tokenFilter( ASCIIFoldingFilterFactory.class )
                        .tokenFilter( LowerCaseFilterFactory.class )
                        .tokenFilter( StopFilterFactory.class )
                                .param( "mapping", "org/hibernate/search/test/analyzer/stoplist.properties" )
                                .param( "ignoreCase", "true" );
    }
}

While for Elasticsearch you would have:

public static class CustomAnalyzerProvider implements ElasticsearchAnalysisDefinitionProvider {
    @Override
    public void register(ElasticsearchAnalysisDefinitionRegistryBuilder builder) {
        builder.analyzer( "tweet_analyzer" )
                .withTokenizer( "whitespace" )
                .withCharFilters( "custom_html_strip" )
                .withCharFilters( "p_br_as_space" );

        builder.charFilter( "custom_html_strip" )
                .type( "html_strip" )
                .param( "escaped_tags", "br", "p" );

        builder.charFilter( "p_br_as_space" )
                .type( "pattern_replace" )
                .param( "pattern", "<p/?>|<br/?>" )
                .param( "replacement", " " )
                .param( "tags", "CASE_INSENSITIVE" );
    }
}

As you can see, this means you no longer need to refer to Lucene classes when configuring Elasticsearch analyzers.
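
To hook the providers in, a minimal sketch follows; the configuration property names below are assumptions based on the 5.8 documentation, so verify them against the reference documentation:

// A minimal sketch, assuming the 5.8 configuration property names:
properties.put(
        "hibernate.search.lucene.analysis_definition_provider",
        CustomAnalyzerProvider.class.getName()
);
// For the Elasticsearch integration, the analogous property would be:
properties.put(
        "hibernate.search.elasticsearch.analysis_definition_provider",
        CustomAnalyzerProvider.class.getName()
);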

More details can be found here for Lucene and here for Elasticsearch.

Normalizers for safer sorts

In HSEARCH-2726 and HSEARCH-2659 we introduced normalizers: analyzers that do not perform any kind of tokenization.

We shamelessly borrowed this concept from Elasticsearch, but implemented it in both embedded Lucene mode and Elasticsearch mode. Normalizers are useful for sortable fields: when a field is sortable it should never be tokenized, as this would make the sort order unpredictable; the sort could apply to the first token if you’re lucky, but it could be applied on any other token.

From version 5.8.0.Beta3 onwards, Hibernate Search will log warnings whenever you use an analyzer on a sortable field. To resolve this warning, change your analyzer definition to be a normalizer.

In Lucene, normalizers are just there to help; they work exactly like analyzers. The two differences are that you can’t assign a tokenizer to a normalizer when defining it, and that normalizers have a runtime safety net: should you manage to produce multiple tokens, Hibernate Search will concatenate them back into a single token and log a warning.

In Elasticsearch version 5.2 and above, a normalizer will be translated to a native Elasticsearch normalizer, and a text field with a normalizer will take the keyword datatype.

In Elasticsearch version 5.1 and below, native normalizers are not available, thus normalizers are simply translated to analyzers and a text field with a normalizer will take the text (5.x) or string (2.x) datatype.
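
For illustration, here is a minimal sketch of declaring and using a normalizer with the annotation API; the exact annotation names are assumed from the 5.8 mapping API, so check the reference documentation:

@Entity
@Indexed
@NormalizerDef(name = "lowercase",
        filters = {
                @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
                @TokenFilterDef(factory = LowerCaseFilterFactory.class)
        })
public class Book {

    @Id
    private Long id;

    // Normalized (never tokenized), hence safe to sort on
    @Field(normalizer = @Normalizer(definition = "lowercase"))
    @SortableField
    private String title;
}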

You can find out more about normalizers in the reference documentation:

AWS compatibility

AWS requires specific, dynamically computed headers in HTTP requests to handle authentication, which until now has made it difficult to use Hibernate Search with an AWS-hosted Elasticsearch.

We introduced a new SPI allowing low-level configuration of the HTTP client, which allows you to plug in the code required to perform the required AWS authentication; this same SPI may be used to integrate with other cloud providers.

We currently have our entire test suite running successfully against an Elasticsearch cluster managed by AWS, with security turned on.

At this stage the SPI is available, but we haven’t released the signing component yet; it will be available in the next milestone. See "introduce an AWS module" if you want to help!
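
To give an idea of the shape of such an integration, here is a hypothetical sketch; the SPI type name, its signature, and the signing helper are illustrative assumptions, not the released API:

// Hypothetical sketch only: type names and signatures are assumptions.
public class AwsSigningHttpClientConfigurer implements ElasticsearchHttpClientConfigurer {

    @Override
    public void configure(HttpAsyncClientBuilder clientBuilder, Properties properties) {
        // Sign every outgoing request with AWS Signature Version 4
        clientBuilder.addInterceptorLast( (HttpRequestInterceptor) (request, context) ->
                signWithAwsSigV4( request ) // hypothetical signing helper
        );
    }
}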

Dependency injection in FieldBridges

As part of HSEARCH-1316, we’re experimenting with integration with various dependency injection frameworks.

The integration is about allowing you to use annotations such as @Inject, @PostConstruct and so on in your FieldBridges, which may for example allow you to fetch additional data from your application when indexing a given bean.

Integration is currently known to work with Spring DI and CDI. We don’t provide packages for user consumption, but if you are an integrator, or simply if you feel like it, you can have a look at our integration tests:
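
As an illustration of what this enables, here is a minimal sketch of a FieldBridge using injection; UserRoleService and User are hypothetical application classes:

public class UserRolesFieldBridge implements FieldBridge {

    @Inject
    private UserRoleService userRoleService; // hypothetical application bean

    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        User user = (User) value;
        // Fetch additional data from the application while indexing this bean
        for ( String role : userRoleService.getRolesOf( user ) ) {
            luceneOptions.addFieldToDocument( name, role, document );
        }
    }
}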

And more!

A summary of other notable changes:

  • HSEARCH-2606: the discovery_scheme configuration property is now correctly taken into account. Thanks to Matthieu Vincent for reporting and fixing this issue!

  • HSEARCH-2477: shard filtering now works on Elasticsearch.

  • HSEARCH-2603: we now use the Painless scripting language when doing spatial searches on Elasticsearch 5+. Incidentally, this means that it is no longer necessary to perform any server-side configuration on Elasticsearch 5+ to perform any spatial query.

  • HSEARCH-2734: due to a lot of confusion and incorrect (harmful) use, we have deprecated the "ram" name for the RAMDirectory directory provider. If you need it, please ensure you are not using it in a production environment, read about its limitations in the reference documentation, and use its new name: "local-heap".

  • HSEARCH-2735: index-time boosting features (@Boost, @DynamicBoost) have been deprecated with no replacement, and will need to be removed in a future version because Lucene 7 won’t allow index-time boosting anymore. See the reference documentation for alternatives: the suggestion is to switch to using query-time boosting instead.

  • HSEARCH-2665: IndexingInterceptor is no longer considered experimental.

  • HSEARCH-2666: IndexControlMBean is no longer considered experimental.

For a full list of changes since 5.8.0.Beta2, please see the release notes.

How to get this release

All versions are available on Hibernate Search’s web site.

Ideally use a tool to fetch it from Maven central; these are the coordinates:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-orm</artifactId>
   <version>5.8.0.Beta3</version>
</dependency>

To use the experimental Elasticsearch integration you’ll also need:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-elasticsearch</artifactId>
   <version>5.8.0.Beta3</version>
</dependency>

Downloads from Sourceforge are available as well.

Feedback, issues, ideas?

To get in touch, use the following channels:

Today is a good time for some maintenance releases of Hibernate Search.

We released all three branches currently in maintenance mode:

Version 5.5.7.Final

Maintained as it’s included in WildFly, compatible with Hibernate ORM 5.0 and 5.1: change log.

Version 5.6.2.Final

Latest stable version compatible with Hibernate ORM 5.0 and 5.1, including first experimental support for Elasticsearch: change log.

Version 5.7.1.Final

Stable version compatible with Hibernate ORM 5.2.3.Final and later: change log.

The master branch is also very active! Expect a new Beta release of version 5.8 with support for Elasticsearch 5+ later this week.

Why?

We backported various small fixes which should be welcome but are of low impact. The big deal is HSEARCH-2691: you might fail to notice this problem until testing under load, which is quite inconvenient.

Big thanks to Andrej Golovnin, who spotted the problem and shared a patch; I suspect it wasn’t easy to track down.

Also thanks to Osamu Nagano, who pointed out the importance of this fix and suggested backporting it urgently.

How to get these releases

All versions are available for download on Hibernate Search’s web site.

Ideally use a modern build tool to fetch it from Maven central; these are the coordinates:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-orm</artifactId>
   <version>5.7.1.Final</version>
</dependency>

To use the experimental Elasticsearch integration you’ll also need:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-elasticsearch</artifactId>
   <version>5.7.1.Final</version>
</dependency>

Downloads from Sourceforge are available as well.

Feedback

Please let us know of any problem or suggestion by creating an issue on JIRA, by sending an email to the developers' mailing list, or by posting on the forums.

We also monitor Stack Overflow; when posting on SO please use the tag hibernate-search.

Hibernate Community Newsletter 11/2017

Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

The pick of this edition is this article from Heap’s engineering blog, which demonstrates the benefits of using batch updates, even for reducing database index overhead.

As previously explained, you can speed up integration tests considerably by using a RAM disk or tmpfs. Mark Rotteveel tried this approach, and it looks like it works for Firebird as well.

Hibernate 5.2.10 comes with a very handy connection management optimization for RESOURCE_LOCAL transactions. If you don’t use JTA and you have disabled auto-commit at the connection pool level, then it’s worth setting the hibernate.connection.provider_disables_autocommit configuration property as well; a minimal sketch follows.
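
A minimal sketch of passing the property when bootstrapping the EntityManagerFactory; the persistence unit name "my-pu" is hypothetical:

Map<String, Object> properties = new HashMap<>();
// The pool is assumed to have auto-commit disabled already; this setting
// lets Hibernate skip the auto-commit check on every connection acquisition.
properties.put( "hibernate.connection.provider_disables_autocommit", "true" );
EntityManagerFactory emf = Persistence.createEntityManagerFactory( "my-pu", properties );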

When using Oracle, the fastest way to access a database record is to use the ROWID pseudocolumn. If using ROWID is suitable for your application, then you can annotate your entities with the @RowId annotation and Hibernate will use the ROWID pseudocolumn for UPDATE statements.
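
For example, a minimal sketch; the Product entity is illustrative:

import org.hibernate.annotations.RowId;

@Entity
@RowId("ROWID")
public class Product {

    @Id
    private Long id;

    private String name;

    // Hibernate will reference the row by its ROWID in UPDATE statements
}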

The best way to manage a database schema is to use incremental update scripts with a tool like Flyway. Even then, you can still benefit from the hbm2ddl tool to validate the entity mappings. Check out how you can deal with schema mismatch exceptions, especially for non-trivial mappings.
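
For instance, validation can be enabled alongside the migration tool with the standard configuration property (a minimal sketch):

// Let Flyway own the schema; hbm2ddl only verifies that the mappings match it
properties.put( "hibernate.hbm2ddl.auto", "validate" );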

You can use Hibernate statistics to log query execution time. In reality, however, many enterprise applications are better off using a JDBC DataSource or Driver proxy, which not only allows you to log JDBC statements along with their parameters, but also lets you detect N+1 query issues automatically during testing.
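
For example, such a proxy can be set up with the datasource-proxy library (a minimal sketch; actualDataSource stands for your real pooled DataSource):

import javax.sql.DataSource;

import net.ttddyy.dsproxy.listener.logging.SLF4JLogLevel;
import net.ttddyy.dsproxy.support.ProxyDataSourceBuilder;

DataSource dataSource = ProxyDataSourceBuilder
        .create( actualDataSource )             // the real pooled DataSource
        .name( "data-source-proxy" )            // hypothetical proxy name
        .logQueryBySlf4j( SLF4JLogLevel.DEBUG ) // log statements with parameters
        .countQuery()                           // collect query counts for assertions
        .build();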

Presentations

Jakub Kubryński has a very good presentation about JPA common pitfalls and how you should handle them effectively.

Book discount

Until the 1st of June, High-Performance Java Persistence is 33% off. Considering the reader testimonials as well as the Goodreads and Amazon reviews, it’s quite a deal!

Time to upgrade

  • Hibernate Validator 6.0.0 Beta1 and Beta2 were released.

  • Hibernate ORM 5.1.7 is out, so you should consider updating if you are running the 5.1 version.

Creating Patch Files for WildFly

Tagged as Discussions

The WildFly application server comes with a patching mechanism which makes it very easy to upgrade existing modules of the server or add new ones. For example, Hibernate Validator provides patch files which let you upgrade WildFly 10.1 to the preview releases of Bean Validation 2.0.

But you also can use the patching mechanism to add your own custom libraries to WildFly, making them available to deployed applications. Even if you only ever deploy a single application to one WildFly instance, this can be very useful, as it results in a smaller size of your deployment unit (WAR etc.) and thus faster build and deployment times.

How are WildFly patches created, though? Patch files are generally just ZIP files which contain the module(s) to be added or updated as well as some additional metadata. So in theory you could create them by hand, but there’s the patch-gen tool which greatly simplifies this task.

In the following we’ll describe step by step how to create a WildFly patch using the patch-gen-maven-plugin. As an example, we’ll produce a patch file that adds the Eclipse Collections library to a WildFly instance.

The Module Descriptors

The first thing we need are the module descriptors for the JBoss Modules system which is the underlying basis of WildFly. Eclipse Collections is split into two JARs, one for its API and one for the implementation. So we’ll create the following two descriptors:

src/main/modules/system/layers/base/org/eclipse/collections/api/main/module.xml
<?xml version="1.0" encoding="UTF-8"?>

<module xmlns="urn:jboss:module:1.3" name="org.eclipse.collections.api">
    <resources>
        <resource-root path="eclipse-collections-api-${eclipse.collections.version}.jar" />
    </resources>
</module>
src/main/modules/system/layers/base/org/eclipse/collections/main/module.xml
<?xml version="1.0" encoding="UTF-8"?>

<module xmlns="urn:jboss:module:1.3" name="org.eclipse.collections">
    <resources>
        <resource-root path="eclipse-collections-${eclipse.collections.version}.jar" />
    </resources>

    <dependencies>
        <module name="org.eclipse.collections.api" />
    </dependencies>
</module>

Each descriptor specifies a resource for the corresponding JAR (the version property placeholders are replaced using Maven resource filtering). The implementation module also declares a dependency on the API module.

The Patch Tool Configuration File

The patch-gen tool needs a small configuration file which describes some patch metadata (e.g. the server version to which the patch applies and the type of the patch - one-off vs. cumulative) as well as the patched module(s):

src/main/patch/patch.xml
<?xml version='1.0' encoding='UTF-8'?>

<patch-config xmlns="urn:jboss:patch-config:1.0">
    <name>wildfly-${wildfly.version}-eclipse-collections-${eclipse.collections.version}</name>
    <description>This patch adds Eclipse Collections ${eclipse.collections.version} to a WildFly ${wildfly.version} installation</description>
    <element patch-id="layer-base-wildfly-${wildfly.version}-eclipse-collections-${eclipse.collections.version}">
        <one-off name="base" />
        <description>This patch adds Eclipse Collections ${eclipse.collections.version} to a WildFly ${wildfly.version} installation</description>
        <specified-content>
            <modules>
                <added name="org.eclipse.collections.api" />
                <added name="org.eclipse.collections" />
            </modules>
        </specified-content>
    </element>
    <specified-content/>
</patch-config>

Preparing the Patch Creation

The patch-gen tool takes two directories of the distribution to be patched as input: one directory with the original, unpatched WildFly structure and another directory which contains the original WildFly structure and the added (or updated) modules. We can use the Maven dependency plug-in for creating the two directories by extracting the WildFly distribution twice:

pom.xml
...
<plugin>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>unpack-wildfly</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>unpack</goal>
            </goals>
            <configuration>
                <artifactItems>
                    <artifactItem>
                        <groupId>org.wildfly</groupId>
                        <artifactId>wildfly-dist</artifactId>
                        <version>${wildfly.version}</version>
                        <type>tar.gz</type>
                        <overWrite>false</overWrite>
                        <outputDirectory>${project.build.directory}/wildfly-original</outputDirectory>
                    </artifactItem>
                    <artifactItem>
                        <groupId>org.wildfly</groupId>
                        <artifactId>wildfly-dist</artifactId>
                        <version>${wildfly.version}</version>
                        <type>tar.gz</type>
                        <overWrite>false</overWrite>
                        <outputDirectory>${project.build.directory}/wildfly-patched</outputDirectory>
                    </artifactItem>
                </artifactItems>
            </configuration>
        </execution>
    </executions>
</plugin>
...

Now we need to add the Eclipse Collections JARs to the second directory. Let’s configure another execution of the Maven dependency plug-in for that:

pom.xml
...
<execution>
    <id>add-eclipse-collections</id>
    <phase>prepare-package</phase>
    <goals>
        <goal>copy</goal>
    </goals>
    <configuration>
        <artifactItems>
            <artifactItem>
                <groupId>org.eclipse.collections</groupId>
                <artifactId>eclipse-collections-api</artifactId>
                <version>${eclipse.collections.version}</version>
                <overWrite>false</overWrite>
                <outputDirectory>${wildflyPatched}/modules/system/layers/base/org/eclipse/collections/api/main</outputDirectory>
            </artifactItem>
            <artifactItem>
                <groupId>org.eclipse.collections</groupId>
                <artifactId>eclipse-collections</artifactId>
                <version>${eclipse.collections.version}</version>
                <overWrite>false</overWrite>
                <outputDirectory>${wildflyPatched}/modules/system/layers/base/org/eclipse/collections/main</outputDirectory>
            </artifactItem>
        </artifactItems>
    </configuration>
</execution>
...

We also need to add the module.xml descriptors so they are located next to the corresponding JARs. The Maven resources plug-in helps with that. It can also be used to replace the placeholders in the patch.xml descriptor. The following two plug-in executions are needed:

pom.xml
...
<plugin>
    <artifactId>maven-resources-plugin</artifactId>
    <executions>
        <execution>
            <id>copy-module-descriptors</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <outputDirectory>${wildflyPatched}/modules</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/main/modules</directory>
                        <filtering>true</filtering>
                    </resource>
                </resources>
            </configuration>
        </execution>
        <execution>
            <id>filter-patch-descriptor</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <outputDirectory>${project.build.directory}/</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/main/patch</directory>
                        <filtering>true</filtering>
                    </resource>
                </resources>
            </configuration>
        </execution>
    </executions>
</plugin>
...

Configuring the Patch-Gen Maven Plug-in

After all these preparations it’s time to configure the patch-gen Maven plug-in which will eventually assemble the patch file:

pom.xml
...
<plugin>
    <groupId>org.jboss.as</groupId>
    <artifactId>patch-gen-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>create-patch-file</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>generate-patch</goal>
            </goals>
            <configuration>
                <appliesToDist>${wildflyOriginal}</appliesToDist>
                <updatedDist>${wildflyPatched}</updatedDist>
                <patchConfig>${project.build.directory}/patch.xml</patchConfig>
                <outputFile>${patchFile}</outputFile>
            </configuration>
        </execution>
    </executions>
</plugin>
...

The plug-in requires the following configuration:

  • The path to the unpatched WildFly directory

  • The path to the patched WildFly directory

  • The path to the patch.xml descriptor

  • The output path for the patch file

As a last step, we need to make sure that the created patch file is added as an artifact to the Maven build. That way, the created ZIP file can be installed into the local Maven repository and deployed to repository servers such as Nexus. The build helper Maven plug-in helps with this last task:

pom.xml
...
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>attach-patch-artifact</id>
            <phase>package</phase>
            <goals>
                <goal>attach-artifact</goal>
            </goals>
            <configuration>
                <artifacts>
                    <artifact>
                        <file>${patchFile}</file>
                        <type>zip</type>
                        <classifier>wildfly-${wildfly.version}-patch</classifier>
                    </artifact>
                </artifacts>
            </configuration>
        </execution>
    </executions>
</plugin>
...

Running the Build

With all the configuration in place, the patch file can be built by running mvn clean install. The created patch file should have a structure like this:

target/eclipse-collections-8.1.0-wildfly-10.1.0.Final-patch.zip
├── META-INF
├── README.txt
├── layer-base-wildfly-10.1.0.Final-eclipse-collections-8.1.0
│   └── modules
│       └── org
│           └── eclipse
│               └── collections
│                   ├── api
│                   │   └── main
│                   │       ├── eclipse-collections-api-8.1.0.jar
│                   │       └── module.xml
│                   └── main
│                       ├── eclipse-collections-8.1.0.jar
│                       └── module.xml
├── misc
└── patch.xml

As we’d expect, the patch contains the Eclipse Collections JARs as well as the corresponding module.xml descriptors. The patch.xml descriptor contains metadata for the patching infrastructure, e.g. the WildFly version to which this patch can be applied, as well as hash checksums for the added modules.

Applying and Using the Patch

Once the patch is created, we can apply it using the jboss-cli tool that comes with WildFly:

<JBOSS_HOME>/bin/jboss-cli.sh "patch apply --path path/to/eclipse-collections-8.1.0-wildfly-10.1.0.Final-patch.zip"

You should see the following output if the patch has been applied:

{
    "outcome" : "success",
    "result" : {}
}
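
Should anything go wrong, the same CLI can inspect and roll back patches; the exact option names may vary across WildFly versions, so verify them with the CLI help:

<JBOSS_HOME>/bin/jboss-cli.sh "patch info"
<JBOSS_HOME>/bin/jboss-cli.sh "patch rollback --patch-id=<id-from-patch-info> --reset-configuration=false"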

And with that you can use the Eclipse Collections API from within your deployed applications. Just make sure to expose the two new modules to your application. To do so, add a descriptor named META-INF/jboss-deployment-structure.xml to your deployment unit:

src/main/resources/META-INF/jboss-deployment-structure.xml
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure
    xmlns="urn:jboss:deployment-structure:1.2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <deployment>
        <dependencies>
            <module name="org.eclipse.collections.api" />
            <module name="org.eclipse.collections" />
        </dependencies>
    </deployment>
</jboss-deployment-structure>

If you’d like to try it out and create your own WildFly patch, check out this example project on GitHub. It contains the complete pom.xml for creating the Eclipse Collections patch. There is also an integration test module, which takes the patch file, applies it to a WildFly instance and runs a small test (using Arquillian) that calls the Eclipse Collections API on the server.

If you have any feedback on this blog post or would like to share your experiences with the WildFly patching infrastructure, let us know in the comments below.

It has just been a couple of weeks since the 6.0.0.Beta1 release but we needed a new version matching Bean Validation 2.0.0.Beta2.

Hibernate Validator 6 is going to be the reference implementation of Bean Validation 2.0 and, as such, we coordinate releases so you can test the latest additions as soon as possible.

Note that Hibernate Validator 6 requires JDK 8 or above.

What’s new since Beta1

Bean Validation 2.0.0.Beta2 support

The main goal of this version is to upgrade to Bean Validation 2.0.0.Beta2.

We fixed a couple of bugs and also introduced a few new features.

Support for non-generic types in value extraction

Our value extraction framework (which is the basis of our new container element validation implementation) only supported generic types until now.

We lifted this restriction by introducing a new type attribute on the ExtractedValue annotation, which allows you to define the type of the extracted value even in the absence of a type argument providing that information.

Thus, it is now possible to define value extractors as below:

@UnwrapByDefault
public class OptionalIntValueExtractor implements ValueExtractor<@ExtractedValue(type = Integer.class) OptionalInt> {

    @Override
    public void extractValues(OptionalInt originalValue, ValueReceiver receiver) {
        receiver.value( null, originalValue.isPresent() ? originalValue.getAsInt() : null );
    }
}

And that’s indeed what we do in Hibernate Validator to support OptionalInt, OptionalLong and OptionalDouble.

Addition of container element information via the node builder API

The node builder API allows you to define a property path for a constraint violation using the ConstraintValidatorContext.

It is now possible to define container element information for existing node types that support them, or to define dedicated container element nodes:

public static class Validator implements ConstraintValidator<MyConstraint, MyBean> {

    @Override
    public boolean isValid(MyBean value, ConstraintValidatorContext context) {
        context.disableDefaultConstraintViolation();

        context.buildConstraintViolationWithTemplate( context.getDefaultConstraintMessageTemplate() )
                .addContainerElementNode( "myNode1", Map.class, 1 )
                        .inIterable()
                        .atKey( "key" )
                .addConstraintViolation();

        context.buildConstraintViolationWithTemplate( context.getDefaultConstraintMessageTemplate() )
                .property( "myNode2" )
                        .inContainer( List.class, 0 )
                        .inIterable()
                        .atKey( "key" )
                .addConstraintViolation();

        return false;
    }
}

CDI improvements

Value extractors defined in the XML configuration are now managed beans.

We also fixed an issue that could occur if you were using a managed ParameterNameProvider.

And a few other things

  • @SafeHtml now supports defining accepted protocols (think allowing the data protocol for images) and has an improved programmatic API.

  • The @Min / @Max, @DecimalMin / @DecimalMax validators were split to have one validator per type and avoid reflection at runtime.

The complete list of fixed issues can be found in the release notes.

Getting 6.0.0.Beta2

To get the release with Maven, Gradle etc. use the GAV coordinates org.hibernate.validator:{hibernate-validator|hibernate-validator-cdi|hibernate-validator-annotation-processor}:6.0.0.Beta2. Note that the group id has changed from org.hibernate (Hibernate Validator 5 and earlier) to org.hibernate.validator (from Hibernate Validator 6 onwards).

Alternatively, a distribution bundle containing all the bits is provided on SourceForge (TAR.GZ, ZIP).

Feedback, issues, ideas?

To get in touch, use the usual channels:

What’s next?

Bean Validation 2.0 is currently in the Public Review phase, and the Public Review Ballot will take place at the beginning of June. The Proposed Final Draft is planned to be released shortly thereafter, so if you spot any remaining issues or shortcomings in the spec draft, please let us know as soon as possible.

Hibernate Validator 6 is still under active development. We’ll keep you posted with our progress.

Hibernate ORM 5.1.7.Final released

Tagged as Hibernate ORM Releases

We decided to do another release of the 5.1 series to fix a number of bugs ahead of its inclusion in an upcoming version of WildFly. This may be the last release of the 5.1 series, so we recommend that you migrate to 5.2 for future bugfixes.

Hibernate ORM 5.1.7.Final:

For information on consuming the release via your favorite dependency-management-capable build tool, see http://hibernate.org/orm/downloads/

Hibernate Community Newsletter 10/2017

Tagged as Discussions Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

Mapping JPA relationships is a trivial thing to do. However, not all associations are equal in terms of performance. Check out this series of articles about the best way to map them; a minimal sketch of the classic parent-child mapping follows:
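
As a refresher, here is a minimal sketch of the commonly recommended bidirectional one-to-many mapping; the Post and PostComment entities are illustrative:

@Entity
public class Post {

    @Id
    @GeneratedValue
    private Long id;

    // mappedBy makes the child side own the association,
    // so only one foreign key column is maintained
    @OneToMany(mappedBy = "post", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<PostComment> comments = new ArrayList<>();
}

@Entity
public class PostComment {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Post post;
}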

If you’re using TomEE 7, you can easily switch to using Hibernate ORM as the JPA provider. Check out this article which shows you how you can do that, and how you can also speed up application server startup time.

Docker is extremely useful for running database containers that you need when doing integration testing. Check out this article about running IBM DB2 Express-C as a Docker container, and how to set up a JDBC connection to DB2.

Although collections like List and Set are more common when using JPA and Hibernate, you can easily use Maps, as explained in this article; a minimal sketch follows.
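
For instance, a Map with basic keys and values can be mapped as an element collection (a minimal sketch; the entity, table, and column names are illustrative):

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    @ElementCollection
    @CollectionTable(name = "phone_register")
    @MapKeyColumn(name = "phone_number")
    @Column(name = "registered_on")
    private Map<String, LocalDate> phoneRegister = new HashMap<>();
}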

Time to upgrade

Hibernate ORM 5.1.6 has been released, as well as Hibernate Search 5.8.0.Beta2.
