Red Hat

In Relation To JPA

Using a different persistence provider with AS 7.0.1

Posted by    |       |    Tagged as JBoss AS JPA

AS 7.0.1 is released!!!

AS 7.0.1 is released, which includes improvements for using different persistence providers (besides the included Hibernate 4.0.0). The framework is in place to plug in Hibernate 3.5 (or greater) persistence providers. This addresses user feedback that it is important to be able to use Hibernate 3.x on AS7, to allow more time before migrating to Hibernate 4.0.0. Still, you should switch your application to Hibernate 4.x as soon as possible: Hibernate 4.0.0 has been modified to support the AS7 modular classloading environment and JBoss Logging (among many other improvements).

Drop in a Hibernate 3.6.6.Final module

By default, JPA applications on AS7 use Hibernate 4.0.0, unless you add the jboss.as.jpa.providerModule property, set to org.hibernate:3, to your persistence.xml properties list. You also need to create an org.hibernate:3 module containing the Hibernate 3.6.6.Final release jars that will be used by your application. I downloaded hibernate-distribution-3.6.6.Final to create the new module.

Steps to create the Hibernate 3.6.6.Final module

  1. Create as/modules/org/hibernate/3 folder
  2. Copy jars from the hibernate-distribution-3.6.6.Final folder into the as/modules/org/hibernate/3 folder
  3. Create as/modules/org/hibernate/3/module.xml file
  4. Update your persistence.xml to use the org.hibernate:3 module
<?xml version="1.0" encoding="UTF-8"?>
<!-- as/modules/org/hibernate/3/module.xml file -->
<module xmlns="urn:jboss:module:1.0" name="org.hibernate" slot="3">
    <resources>
        <resource-root path="hibernate3.jar"/>
        <resource-root path="javassist-3.12.0.GA.jar"/>
        <resource-root path="antlr-2.7.6.jar"/>  
        <resource-root path="commons-collections.jar"/>  
        <resource-root path="dom4j-1.6.1.jar"/>  
        <!-- Insert other Hibernate 3 jars to be used here -->
    </resources>
    <dependencies>
        <module name="org.jboss.as.jpa.hibernate" slot="3"/>
        <module name="asm.asm"/>
        <module name="javax.api"/>
        <module name="javax.persistence.api"/>
        <module name="javax.transaction.api"/>
        <module name="javax.validation.api"/>
        <module name="org.apache.ant"/>
        <module name="org.infinispan"/>
        <module name="org.javassist"/>
        <module name="org.slf4j"/>
    </dependencies>
</module>
<?xml version="1.0" encoding="UTF-8"?>
<!-- persistence.xml using Hibernate 3.6.6.Final -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
<persistence-unit name="GameOfThrones_pu">
    <description>my Hibernate 3 persistence unit.</description>
    <jta-data-source>java:jboss/datasources/gameDS</jta-data-source>
    <properties>
        <property name="jboss.as.jpa.providerModule" value="org.hibernate:3"/>
    </properties>
</persistence-unit>
</persistence>

Example of an AS7 (Arquillian) unit test using the org.hibernate:3 module. Note that this test is @Ignored until we arrange for the org.hibernate:3 module to be created by the test build.

I could have created a standalone application, but I prefer to push the awesome Arquillian project! ;) I hope that you will contribute similar JPA tests to the AS7 project when you can.
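For readers who haven't seen one, here is a rough sketch of what such an Arquillian test can look like (class, entity and resource names here are hypothetical, not the actual test from the AS7 source tree; JUnit, Arquillian and ShrinkWrap imports are elided):

```java
@RunWith(Arquillian.class)
public class Hibernate3ModuleTestCase {

    @Deployment
    public static Archive<?> deployment() {
        return ShrinkWrap.create(WebArchive.class, "hibernate3-test.war")
                .addClasses(Game.class)  // hypothetical entity
                // this persistence.xml sets jboss.as.jpa.providerModule to org.hibernate:3
                .addAsWebInfResource("persistence.xml", "classes/META-INF/persistence.xml");
    }

    @PersistenceContext
    EntityManager em;

    @Inject
    UserTransaction tx;

    @Test
    public void canPersistWithHibernate3() throws Exception {
        tx.begin();
        Game game = new Game("Super Mario Brothers");
        em.persist(game);
        tx.commit();
        // read the entity back through the Hibernate 3 provider module and assert on it
    }
}
```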

Experimental use of OGM on AS 7.0.1

I wanted to see if I could create an AS7 unit test that created a few entities with Hibernate OGM. To accomplish this, I checked out a copy of the OGM sources and built them.

cd work
git clone git://github.com/hibernate/hibernate-ogm.git
Cloning into hibernate-ogm...
  remote: Counting objects: 4371, done.
  remote: Compressing objects: 100% (1357/1357), done.
  remote: Total 4371 (delta 1694), reused 4262 (delta 1599)
  Receiving objects: 100% (4371/4371), 1.17 MiB, done.
  Resolving deltas: 100% (1694/1694), done.

cd hibernate-ogm
mvn clean install
... lots of output from building ogm jars....

Folder hibernate-ogm-core/target will contain hibernate-ogm-core-3.0.0-SNAPSHOT.jar 

Next, you need to build my experimental OGM branch for AS7 (oh yeah, you also need a couple of tools: Maven and Git).

cd ..
git clone git://github.com/scottmarlow/jboss-as.git
cd jboss-as
git checkout ogm
./build.sh clean install
OR build.bat clean install
  1. Follow instructions above for creating the org.hibernate:3 module
  2. Create as/modules/org/hibernate/ogm folder
  3. Copy hibernate-ogm-core-3.0.0-SNAPSHOT.jar that you just built (from OGM build) into the as/modules/org/hibernate/ogm folder
  4. Create as/modules/org/hibernate/ogm/module.xml file
  5. Update your persistence.xml to use the org.hibernate:ogm providerModule (and the org.jboss.as.jpa.hibernate:3 adapterModule)
  6. Create a local-only infinispan.xml for your application.
<?xml version="1.0" encoding="UTF-8"?>
<!-- as/modules/org/hibernate/ogm/module.xml file -->
<module xmlns="urn:jboss:module:1.0" name="org.hibernate" slot="ogm">
    <resources>
        <resource-root path="hibernate-ogm-core-3.0.0-SNAPSHOT.jar"/>
    </resources>

    <dependencies>
        <module name="org.jboss.as.jpa.hibernate" slot="3"/>
        <module name="org.hibernate" slot="3" export="true" />
        <module name="javax.api"/>
        <module name="javax.persistence.api"/>
        <module name="javax.transaction.api"/>
        <module name="javax.validation.api"/>
        <module name="org.apache.ant"/>
        <module name="org.infinispan"/>
        <module name="org.javassist"/>
        <module name="org.slf4j"/>
    </dependencies>
</module>
<?xml version="1.0" encoding="UTF-8"?>
<!-- persistence.xml using OGM -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
<persistence-unit name="OGMExperiment_pu">
    <description>my OGM persistence unit.</description>
    <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
    <properties>
        <property name="jboss.as.jpa.providerModule" value="org.hibernate:ogm"/>
        <property name="jboss.as.jpa.adapterModule" value="org.jboss.as.jpa.hibernate:3"/>
        <property name="hibernate.ogm.infinispan.configuration_resourcename" value="infinispan.xml"/>
    </properties>
</persistence-unit>
</persistence>
<?xml version="1.0" encoding="UTF-8"?>
  
<!-- 
    infinispan.xml
    This is the testing configuration, running in LOCAL clustering mode to speed up tests.
-->
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
    xmlns="urn:infinispan:config:5.0">

    <global>
    </global>

    <!-- *************************** -->
    <!--   Default cache settings    -->
    <!-- *************************** -->

    <default>
    </default>

    <!-- *************************************** -->
    <!--     Cache to store the OGM entities     -->
    <!-- *************************************** -->
    <namedCache
        name="ENTITIES">
    </namedCache>

    <!-- *********************************************** -->
    <!--   Cache to store the relations across entities  -->
    <!-- *********************************************** -->
    <namedCache
        name="ASSOCIATIONS">
    </namedCache>

    <!-- ***************************** -->
    <!--   Cache to store identifiers  -->
    <!-- ***************************** -->
    <namedCache
        name="IDENTIFIERS">
    </namedCache>

</infinispan>

To run the OGM unit test that I added (after manually creating the org.hibernate:ogm module), follow these steps:

  1. cd testsuite
  2. cd compat
  3. mvn clean install -DallTests

The unit test will run one instance of AS7 and use Infinispan in local-only mode. A few entities will be created (under a JTA transaction) and read back. If you see the error org.jboss.as.testsuite.compat.jpa.hibernate.OGMHibernate3SharedModuleProviderTestCase: Could not deploy to container, open ./target/jbossas/standalone/log/server.log to see what went wrong (did you create the as/modules/org/hibernate/3 and as/modules/org/hibernate/ogm modules?).

See OGM documentation for more about OGM.

Like I said before, using OGM on AS7 is experimental at this point. ;)

AS 7.0.1 JPA documentation

The documentation for AS7 JPA is available here.

I started a new git repo for working on EclipseLink integration. This will require coordination with the EclipseLink project, as changes are probably required there as well. EclipseLink has the eclipselink.target.server property that can be set to JBoss, but that doesn't support AS7 yet. Also, we need to fix PersistenceUnitMetadata.getNewTempClassLoader() in AS7. Depending on the level of interest, the EclipseLink integration in AS7 will come together (more of the work may be inside the EclipseLink project itself).

Community contributions are welcome

Please continue to ask/answer questions on the AS7 user discussion forums. I would like to work with a few more community members on further code changes to the JPA integration support in AS7, as well as on integration support for other persistence providers. If you would like to volunteer, find Scott Marlow on IRC (irc://irc.freenode.org/jboss-as7).

A more concise way to generate the JPA 2 metamodel in Maven

Posted by    |       |    Tagged as Discussions JPA

The JPA 2 metamodel is the cornerstone of type-safe criteria queries in JPA 2. The generated classes allow you to refer to entity properties using static field references, instead of strings. (A metamodel class has the same fully-qualified name as the entity class it matches, followed by an underscore (_).)
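For instance (a hypothetical entity, just to show the naming), an Order entity and its generated metamodel class would look like:

```java
@Entity
public class Order {
    @Id Long id;
    String customerName;
    // ...
}

// Generated by the processor into the same package, as Order_:
@StaticMetamodel(Order.class)
public abstract class Order_ {
    public static volatile SingularAttribute<Order, Long> id;
    public static volatile SingularAttribute<Order, String> customerName;
}
```

A criteria query can then use root.get(Order_.customerName), and the compiler will reject a comparison against the wrong type.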

It sounds promising, but many people using Maven are getting tripped up trying to get the metamodel generated and compiled. The Hibernate JPA 2 Metamodel Generator guide offers a couple of solutions. I've figured out another, which seems more elegant.

Just to refresh your memory of the problem:

  1. Maven compiles the classes during the compile phase
  2. The Java 6 compiler allows annotation processors to hook into it
  3. Annotation processors are permitted to generate Java source files (which is the case with the JPA 2 metamodel)
  4. Maven does not execute a secondary compile step to compile Java source files generated by the annotation processor

I figured out that it's possible to use the Maven compiler plugin to run only the annotation processors during the generate-sources phase! This effectively becomes a code generation step. Then comes the only downside. If you can believe it, Maven does not have a built-in way to compile generated sources. So we have to add one more plugin (build-helper-maven-plugin) that simply adds an additional source folder (I really can't believe the compiler plugin doesn't offer this feature). During the compile phase, we can disable the annotation processors to speed up compilation and avoid generating the metamodel a second time.

Here's the configuration for your copy-paste pleasure. Add it to the <plugins> section of your POM.

<!-- Compiler plugin enforces Java 1.6 compatibility and controls execution of annotation processors -->
<plugin>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>2.3.1</version>
   <configuration>
      <source>1.6</source>
      <target>1.6</target>
      <compilerArgument>-proc:none</compilerArgument>
   </configuration>
   <executions>
      <execution>
         <id>run-annotation-processors-only</id>
         <phase>generate-sources</phase>
         <configuration>
            <compilerArgument>-proc:only</compilerArgument>
            <!-- If your app has multiple packages, use this include filter to
                 execute the processor only on the package containing your entities -->
            <!--
            <includes>
               <include>**/model/*.java</include>
            </includes>
            -->
         </configuration>
         <goals>
            <goal>compile</goal>
         </goals>
      </execution>
   </executions>  
</plugin>         
<!-- Build helper plugin adds the sources generated by the JPA 2 annotation processor to the compile path -->
<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>build-helper-maven-plugin</artifactId>
   <version>1.5</version>
   <executions>      
      <execution> 
         <phase>process-sources</phase>
         <configuration>
            <sources>
               <source>${project.build.directory}/generated-sources/annotations</source>
            </sources>
         </configuration>
         <goals>
            <goal>add-source</goal>
         </goals>
      </execution>
   </executions>
</plugin>

The metamodel source files get generated into the target/generated-sources/annotations directory.

Note that if you have references to the metamodel across Java packages, you'll need to filter the annotation processor to only run on the package containing the entity classes.

We'll be prototyping this approach in the 1.0.1.Beta1 release of the Weld archetypes, which should be out soon.

Bonus material: Eclipse configuration

While I'm at it, I might as well show you how I enabled the JPA 2 metamodel generation in Eclipse. (Max may correct me. He's the authority on Eclipse tooling, so listen to what he says).

Start by adding the following dependency to your POM:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-jpamodelgen</artifactId>
   <version>1.0.0.Final</version>
   <scope>provided</scope>
</dependency>

Then, populate the .factorypath file at the root of your project with the following contents:

<factorypath>
    <factorypathentry kind="PLUGIN" id="org.eclipse.jst.ws.annotations.core" enabled="true" runInBatchMode="false"/>
    <factorypathentry kind="VARJAR" id="M2_REPO/org/hibernate/hibernate-jpamodelgen/1.0.0.Final/hibernate-jpamodelgen-1.0.0.Final.jar" enabled="true" runInBatchMode="false"/>
    <factorypathentry kind="VARJAR" id="M2_REPO/org/hibernate/javax/persistence/hibernate-jpa-2.0-api/1.0.0.Final/hibernate-jpa-2.0-api-1.0.0.Final.jar" enabled="true" runInBatchMode="false"/>
</factorypath>

Refresh the project in Eclipse. Now right click on the project and select:

Properties > Java Compiler > Annotation Processing

Enable project specific settings and enable annotation processing. Press OK and OK again when prompted to build the project. Now, Eclipse should also generate your JPA 2 metamodel.

Happy type-safe criteria querying!

Java EE 6 Final Release

Posted by    |       |    Tagged as Bean Validation CDI Java EE JPA Seam

As I'm sure you've all seen, Java EE 6 has gone final. You can now download the Final Release of the Contexts and Dependency Injection, Bean Validation, Java Persistence API 2 and Java Servlet 3 specifications from jcp.org, and read the linked javadoc for the entire platform. It's also a good chance to check out the Java API for RESTful Web Services specification, which now includes CDI integration, if you haven't already.

Sun has also published a three-part overview of the new platform, as well as the Java EE 6 tutorial and sample applications.

It's just fantastic to finally see the fruits of all that work :-)

Now that Bean Validation is officially part of Java EE 6, and Java EE 6 has officially been voted YES, let's see how Bean Validation integrates with the rest of the ecosystem.

What is Bean Validation

Its goal is to let application developers declare their data constraints once, by annotating their model, and have these constraints validated by the different layers of the application in a consistent manner. Without Bean Validation, people have to write their validation rules in their favorite presentation framework, then in their business layer, then in their persistence layer, to some degree in the database schema, and then keep all of them synchronized.

Here is what this centralized constraint declaration looks like:

class User {
  @NotEmpty @Size(max=100)
  String getLogin() { ... }

  @NotEmpty @Size(max=100)
  String getFirstname() { ... }
  
  @Email @Size(max=250)
  String getEmail() { ... }
  ...
}

There are many more features, like constraint composition and grouping, but let's focus on how Bean Validation integrates with the EE 6 stack.

So what do I have to do to make it work in Java EE 6

The short answer is nothing. Not even an XML configuration trick.

Simply add your constraints on your domain model and the platform does the rest for you.

JSF and how to expose constraint violations to the user

In JSF, you bind form inputs to properties of your domain model. JSF 2 and Bean Validation smartly figure out which property you are binding to and execute the constraints associated to it.

<h:form id="register">
    <div style="color: red">
        <h:messages id="messages" globalOnly="true"/>
    </div>

    <div>
        Login:
        <h:inputText id="login" value="#{identifier.user.login}"/>
        <h:message style="color: red" for="login"/>
        <br/>
        Password:
        <h:inputSecret id="password" value="#{identifier.user.password}"/>
        <h:message style="color: red" for="password"/>
        <br/>
        Firstname:
        <h:inputText id="firstname" value="#{identifier.user.firstname}"/>
        <h:message style="color: red" for="firstname"/>
        <br/>
        Email:
        <h:inputText id="email" value="#{identifier.user.email}"/>
        <h:message style="color: red" for="email"/>
        <br/>

        <h:commandButton id="Login" value="Login" action="#{identifier.register}"/>
        <br/>
        <h:button id="cancel" value="Cancel" outcome="/home.xhtml"/>
    </div>
</h:form>

If, for example, the email is malformed and the first name is left empty, Bean Validation will return the constraint violations to JSF 2, which will expose them to the user as localized error messages. By default, it just works and you don't even have to think about it.

For more advanced use cases, like disabling constraint validation for one or several fields, or using a specific group or set of groups instead of the default one, you can use the <f:validateBean/> tag (check the two <f:validateBean> usages in the following example).

<h:form id="register">
    <div style="color: red">
        <h:messages id="messages" globalOnly="true"/>
    </div>

    <div>
        <!-- ***** use a specific group ***** -->
        <f:validateBean validationGroups="${identifier.validationGroups}">
            Login:
            <h:inputText id="login" value="#{identifier.user.login}"/>
            <h:message style="color: red" for="login"/>
            <br/>
            Password:
            <h:inputSecret id="password" value="#{identifier.user.password}"/>
            <h:message style="color: red" for="password"/>
            <br/>
            Firstname:
            <!-- ***** disable validation for firstname ***** -->
            <h:inputText id="firstname" value="#{identifier.user.firstname}">
                <f:validateBean disabled="true"/>
            </h:inputText>
            <h:message style="color: red" for="firstname"/>
            <br/>
            Email:
            <h:inputText id="email" value="#{identifier.user.email}"/>
            <h:message style="color: red" for="email"/>
            <br/>

            <h:commandButton id="Login" value="Login" action="#{identifier.register}"/>
            <br/>
            <h:button id="cancel" value="Cancel" outcome="/home.xhtml"/>
        </f:validateBean>
    </div>
</h:form>

In the future, we want to work with RichFaces so that the constraints declared on the object model are validated in the JSF components on the client side. This is something we had already prototyped, and that Pete, Dan and I initially proposed to the JSF 2 expert group, but we had to scale down our ambitions :) Expect some innovations from us in this area.

But not all your data comes from the presentation layer.

JPA 2: last line of defense

Again, by default, your JPA 2 provider runs Bean Validation on the entities you are about to persist or update. You are then guaranteed not to put invalid data in your database, thus increasing the overall quality of your data. Oh, and these are the same constraints you would have validated in JSF 2.0.
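As a sketch of what that last line of defense looks like in code (entity manager setup and transaction handling elided; this assumes the same User entity shown earlier, plus a hypothetical setEmail setter):

```java
User user = new User();
user.setEmail("not-an-email");       // violates @Email
try {
    entityManager.persist(user);
    entityManager.flush();           // pre-persist validation of the Default group
} catch (javax.validation.ConstraintViolationException e) {
    // the very constraints declared once on the model, reported here too
    for (javax.validation.ConstraintViolation<?> v : e.getConstraintViolations()) {
        System.err.println(v.getPropertyPath() + " " + v.getMessage());
    }
}
```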

You can disable validation in JPA 2 using the validation-mode element in persistence.xml, or the javax.persistence.validation.mode property, set to none. More interestingly, you can choose which group will be validated upon entity persist, update and even delete operations. By default, the Default group is validated when you persist or update entities. Use any of these properties to adjust that:

<property name="javax.persistence.validation.group.pre-persist" 
        value="javax.validation.groups.Default, com.acme.model.Structural"/>
<property name="javax.persistence.validation.group.pre-update" 
        value="javax.validation.groups.Default, com.acme.model.Structural"/>

<property name="javax.persistence.validation.group.pre-delete" 
        value="com.acme.model.SafeDestruct"/>

Hibernate Core and Hibernate Validator go a bit beyond that and propagate the constraints to the database schema (provided that you let Hibernate Core generate or update the schema for you). Simply set the hibernate.hbm2ddl.auto property to create, update or create-drop.
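For example, in persistence.xml (update shown here; create and create-drop work the same way):

```xml
<property name="hibernate.hbm2ddl.auto" value="update"/>
```

With schema generation enabled, a constraint like @Size(max=100) on login is propagated as the column length, and @NotNull becomes a not-null column constraint.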

How about my service layer

You can inject a Validator or ValidatorFactory instance in any injectable POJO in Java EE 6.

class SalesService {
  @Inject Validator validator;
  @Inject @Current User user;

  public boolean canBuyInOneClick() {
    return validator.validate(user, BuyInOneClick.class).isEmpty();
  }
}

Where can I try it?

All of this is now available in JBoss AS 6 M1, which has just been released. Enjoy!

JPA 2.0 typesafe CriteriaQuery support in IntelliJ

Posted by    |       |    Tagged as CDI Hibernate ORM JPA

IntelliJ now has support for the new JPA 2.0 typesafe query facility I've been blogging about. It's very important that this stuff works smoothly with tooling, so it's great to see that the tooling vendors are on this early.

UPDATE: Even better, here and here are some screenshots of the CDI support in IntelliJ. Looks great!

Conspiracy theorists

Posted by    |       |    Tagged as JPA

Haha, just stumbled across this. It's funny to see, a whole three years after the end of the Persistence Wars, and in the face of the incredible success of JPA in almost every corner of Java development, that the conspiracy theorists are still out there, darkly hinting that commercial organisations like Oracle, IBM, RedHat ... have their own vested interest in RDBMS technologies, or in selling application servers.

As if the old JDO vendors weren't commercial organizations, or weren't selling their own technologies in which they had a vested interest.

Well, look here, JPA won. Get over it. It won because it was a better-written specification, with a better feature set. It was written by serious people with an understanding of the marketplace, not by scary Unabomber types with beards. It had a simpler set of APIs and a simpler lifecycle model. It revolutionized O/R mapping by introducing the first annotation-based mapping layer. It was truly integrated with the EE5 environment. It had better support for detached objects, and a more flexible model for handling graphs of persistent objects. It concentrated on defining user-visible semantics, not implementation. But, just as importantly, it left out all kinds of useless junk that JDO threw in. The measure of a good spec is not only what it puts in, but also what it leaves out, and what it leaves for tomorrow.

It doesn't help to screech that your spec has a bigger feature set, not unless you can prove that those features are useful, and well-designed.

Now that JPA2 is introducing great new things like a truly typesafe query API (instead of the totally half-assed fetch profiles stuff in JDO2) and a runtime-accessible metamodel, can't we all just agree that things turned out pretty well, in the end?

Linda blogs the typesafe query API for JPA 2.0

Posted by    |       |    Tagged as Hibernate ORM JPA

Linda has written up the new typesafe query API. I previously blogged the reasoning behind this stuff here and here.

An open issue that Linda doesn't mention is query execution. I'm trying to convince the rest of the group that we should carry the typesafety all the way through to the query result set. Here's what I wrote to the group a few weeks ago:

Folks, I figured out a refactoring that gives us a way to do typesafe result sets, avoiding the use of Result. In this new approach, CriteriaQuery and Query would both have a type parameter. You could write code like this, if you have a single selection:
  CriteriaQuery<Order> q = qb.create(Order.class);
  Root<Order> order = q.from(Order.class);
  q.select(order);

  Query<Order> eq = em.createQuery(q);
  List<Order> res = eq.getTypedResultList();
like this, if you have multiple selections and an object to wrap them in:
  CriteriaQuery<OrderProduct> q = qb.create(OrderProduct.class);
  Root<Order> order = q.from(Order.class);
  Join<Item, Product> product = order.join(Order_.items)
                                     .join(Item_.product);
  q.select( qb.construct(OrderProduct.class, order, product) );

  Query<OrderProduct> eq = em.createQuery(q);
  List<OrderProduct> res = eq.getTypedResultList();
Or, if you don't have a nice wrapper class like OrderProduct, you can fall back to use Result:
  CriteriaQuery<Result> q = qb.create();
  Root<Order> order = q.from(Order.class);
  Join<Item, Product> product = order.join(Order_.items)
                                     .join(Item_.product);
  q.select( qb.result(order, product) );

  Query<Result> eq = em.createQuery(q);
  List<Result> res = eq.getTypedResultList();
This change lets people directly get typesafe lists of entities or wrappers, which is something that many people have asked for!

The big point about this API is that I can't write a query which selects Foo and then try to put it in a List<Bar>. It's truly typesafe, end-to-end.

The sticking point with this is that javax.persistence.Query does not currently have the needed type parameter, and there are millions of queries written to the JPA 1.0 APIs which would suddenly spit compiler warnings if we added a type parameter. So we might have to introduce a new interface like TypesafeQuery or something.

Using an alternate JPA provider with Seam

Posted by    |       |    Tagged as JPA Seam

What, you didn't think it was possible? Of course it is! Although the Seam development team encourages you to use Hibernate as the JPA provider, Seam is capable of working with any JPA provider. This entry will show you how.

Why Hibernate or why not?

Hibernate is recommended for a good reason. It provides several vendor extensions that Seam is able to leverage to your advantage. If you are willing to pass on these enhancements (most notably manual flushing, advanced mappings and Hibernate Search), then you should have no problem swapping out Hibernate for another provider. In fact, I encourage you to try an alternate provider so that you can appreciate the value Hibernate adds ;)

Enough with the advice, let's get to it.

Changing the JPA persistence provider

There are three steps to changing the JPA persistence provider in a Seam application:

  1. Make sure the JPA provider is deployed to the application server (the JARs are available on the classpath)
  2. Declare the SPI class in persistence.xml
  3. Tell Seam to use the generic JPA persistence provider manager component (instead of Hibernate)

After you have the JPA provider configured, you should check your code for uses of Hibernate-specific extensions. This will keep you from bumping into exceptions.

Step 1: Preparing an alternate provider

The first step is usually just deciding to use the JPA provider that comes with the application server. For instance, if you are using GlassFish V2, TopLink Essentials is the bundled provider. Otherwise, you have to add the JAR files to the application server's extension directory; this is the same step you have had to do if you have ever used Hibernate on GlassFish.

Step 2: Attaching the persistence unit to the provider

The second step is how you tell JPA which provider you want to use. You enter the SPI class that extends the javax.persistence.spi.PersistenceProvider interface in the <provider> element under <persistence-unit> in the persistence unit descriptor (persistence.xml).

<persistence xmlns="http://java.sun.com/xml/ns/persistence" 
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" 
   version="1.0">
   <persistence-unit name="pu">
      <provider>oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider</provider>
      ...
   </persistence-unit>
</persistence>

If there is only one provider on the classpath, JPA will automatically detect it and use it. It's when you have more than one JPA provider on the classpath that you have to be explicit.

You may need to specify other settings in persistence.xml that are specific to the provider, just like Hibernate has its own set of properties. For instance, with TopLink, you need to add a property that sets the target database:

<property name="toplink.target-database" value="MySQL4"/>

I also find that when I am using an exploded archive, I have to tell TopLink not to exclude unlisted classes. That's done by adding the following element above the <properties> element:

<exclude-unlisted-classes>false</exclude-unlisted-classes>

Your mileage may vary. I'll admit that, depending on your packaging, getting the persistence unit set up right can be hairy. This has nothing to do with Seam.

Step 3: Breaking the news to Seam that you won't be using Hibernate

The third and final step to configuring the application is to give Seam the hint as to which persistence provider manager component to use. That's how Seam keeps straight which JPA providers support which features. For any non-Hibernate JPA provider, you define the generic JPA persistence provider in the Seam component descriptor (components.xml).

<components xmlns="http://jboss.com/products/seam/components"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   ...
   xsi:schemaLocation="
      ...
      http://jboss.com/products/seam/components http://jboss.com/products/seam/components-2.1.xsd">

   ...
   <component name="org.jboss.seam.persistence.persistenceProvider"
      class="org.jboss.seam.persistence.PersistenceProvider"/>
</components>

Again, this is only necessary if Hibernate is on the classpath, since Seam assumes you want to use Hibernate in that case (see JBSEAM-2785 for plans to fix this feature).

However, you will run into problems if you are using Seam < 2.1.2. In older versions, Seam would throw an exception when it attempted to switch the persistence context to manual flush mode prior to rendering (JBSEAM-3030), a flush mode that is only available in Hibernate. This has now been fixed. Until you have a chance to upgrade, you need to add the following component to your application to suppress this exception:

package com.domain.app.persistence;

import javax.persistence.EntityManager;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.persistence.PersistenceProvider;

@Name("org.jboss.seam.persistence.persistenceProvider")
public class NoManualFlushPersistenceProvider extends PersistenceProvider {

   @Override
   public void setFlushModeManual(EntityManager entityManager) {
      // no-op
   }
}

You should now be able to start your Seam application using the alternate JPA provider! Now it's time to reset your expectations.

Implications

First and foremost, without Hibernate, manual flushing will not be available. Therefore, when using a long-running conversation, you are likely to get flushing of the persistence context after each request. The reason is that Seam wraps a (global) transaction around each non-faces request, and uses two transactions for each faces request. Since a JPA provider flushes prior to the transaction committing, you are guaranteed the flush will happen on every request. You can disable Seam's wrapper transactions by configuring the <core:init> component in the Seam component descriptor as follows:

<core:init transaction-management-enabled="false"/>

Now you can control flushing by avoiding any calls to transactional methods until you are ready to send dirty changes to the database. That's the JPA way, so to speak.

Also be aware that you cannot cast the EntityManager to a FullTextEntityManager (Hibernate Search). And when you retrieve the delegate from the EntityManager, it will be the provider's native delegate, which is an oracle.toplink.essentials.ejb.cmp3.EntityManager in the case of TopLink.
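When you do need provider-specific behavior, it's safer to inspect the class of the delegate than to cast blindly. A minimal sketch, assuming only that getDelegate() hands back the provider's native object (the helper class and method names here are made up for illustration):

```java
// Hypothetical helper: identify the JPA provider behind an
// EntityManager by looking at the class of its delegate, rather
// than casting and catching ClassCastException.
public class DelegateCheck {

    public static String providerOf(Object delegate) {
        String className = delegate.getClass().getName();
        if (className.startsWith("org.hibernate")) {
            return "hibernate";
        }
        if (className.startsWith("oracle.toplink")) {
            return "toplink";
        }
        return "unknown";
    }
}
```

You would pass it entityManager.getDelegate(); any provider not recognized falls through to "unknown", which is exactly the case your code should be prepared for after switching providers.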

Feel free

There you have it, Seam with an alternate JPA provider! That's the beauty of leveraging the Java EE standards. In fact, you can even grab seam-gen and create a Seam project that you can deploy to GlassFish, the ultimate proof that Seam is a Java EE framework. While you may enjoy your freedom, you'll also miss out on the nice features that Hibernate adds. Still, it's good to know you have options.

The official instructions are maintained as an FAQ on the Seam wiki.

Java 6 compiler plugins and typesafe criteria queries

Posted by    |       |    Tagged as Hibernate ORM JPA

There's been plenty of discussion in the JPA group about my typesafe criteria proposal. My new favorite feature of the Java language is javax.annotation.processing.Processor. Java 6 annotation processors are derived from the APT tool that existed in JDK 5, but are now built into javac. Really, the name annotation processor is misleading, since this feature is only incidentally related to annotations. A Processor is really a fairly general-purpose compiler plugin. If, like me, you've never been a fan of code generation, now is the time to reconsider. A Java 6 Processor can:

  • analyze the compiler's metamodel of the Java source code that is being compiled
  • search the source path for other metadata, such as XML
  • generate new types, which will also be compiled, or other files

Best of all, this functionality requires no special tool or command-line options to javac. All you need to do is put the jar containing your Processor on the classpath, and the compiler does the rest!
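To make this concrete, here is a minimal processor skeleton; the class name and the choice to trigger on @Entity are my own illustration, not part of any shipped tool:

```java
import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;

// Skeleton of a compiler plugin: javac discovers it via
// META-INF/services/javax.annotation.processing.Processor and calls
// process() once per round of compilation.
@SupportedAnnotationTypes("javax.persistence.Entity")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class MetamodelProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element entity : roundEnv.getElementsAnnotatedWith(annotation)) {
                // a real implementation would inspect the entity's
                // attributes here and emit an Order_-style class
                // through processingEnv.getFiler()
            }
        }
        // returning false lets other processors also see @Entity
        return false;
    }
}
```

Packaging this class in a jar together with the service file is all it takes; javac picks it up from the classpath automatically.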

In the typesafe query API, I want to use this to work around Java's lack of a typesafe metamodel for fields and methods of a class. The basic idea is that the compiler plugin will generate a metamodel type for each persistent class in the application.

Suppose we have the following persistent class:

import java.util.Date;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Order {
    @Id long id;

    boolean filled;
    Date date;

    @OneToMany Set<Item> items;

    @ManyToOne Shop shop;

    //getters and setters...
}

Then a class named Order_ would be generated, with a static member for each persistent attribute of Order, which the application could use to refer to the attributes in queries.

After several iterations, we've settled on the following format for the generated type:

import javax.jpa.metamodel.Attribute;
import javax.jpa.metamodel.Set;
import javax.jpa.metamodel.Metamodel;

@Metamodel
public abstract class Order_ {
    public static Attribute<Order, Long> id;
    public static Attribute<Order, Boolean> filled;
    public static Attribute<Order, Date> date;
    public static Set<Order, Item> items;
    public static Attribute<Order, Shop> shop;
}

The JPA provider would be responsible for initializing the values of these members when the persistence unit is initialized.
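For illustration only, here is one way that initialization could work, sketched with stand-in types rather than the draft API: the provider reflects over the static fields of the generated class and assigns each one.

```java
import java.lang.reflect.Field;

// Stand-in for the draft Attribute type (not the real API).
class Attribute<X, T> {
    final String name;
    Attribute(String name) { this.name = name; }
}

// Stand-in for a generated metamodel class like Order_.
abstract class Order_ {
    public static Attribute<Object, Long> id;
    public static Attribute<Object, Boolean> filled;
}

public class MetamodelInitializer {

    // Assign every static Attribute field of the metamodel class,
    // keyed by the field name; a real provider would also look up
    // the matching persistent attribute and its type here.
    public static void initialize(Class<?> metamodelClass) throws Exception {
        for (Field field : metamodelClass.getDeclaredFields()) {
            if (Attribute.class.isAssignableFrom(field.getType())) {
                field.set(null, new Attribute<Object, Object>(field.getName()));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        initialize(Order_.class);
        System.out.println(Order_.id.name); // prints "id"
    }
}
```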

Now, criteria queries would look like the following:

Root<Item> item = q.addRoot(Item.class);
Path<String> shopName = item.get(Item_.order)
                            .get(Order_.shop)
                            .get(Shop_.name);
q.select(item)
 .where( qb.equal(shopName, "amazon.com") );

Which is equivalent to:

select item 
from Item item
where item.shop.name = 'amazon.com'

Or like:

Root<Order> order = q.addRoot(Order.class);
Join<Item, Product> product = order.join(Order_.items)
                                   .join(Item_.product);

Path<BigDecimal> price = product.get(Product_.price);
Path<Boolean> filled = order.get(Order_.filled);
Path<Date> date = order.get(Order_.date);

q.select(order, product)
 .where( qb.and( qb.gt(price, 100.00), qb.not(filled) ) )
 .order( qb.ascending(price), qb.descending(date) );

Which is equivalent to:

select order, product 
 from Order order 
    join order.items item
    join item.product product
 where
    product.price > 100 and not order.filled
 order by
    product.price asc, order.date desc

The queries are almost completely typesafe. Because of the generic type parameters of Attribute:

  • I can't pass an attribute of Order to a Join or Path that represents an Item, and
  • I can't try to perform a comparison like gt() on a Path that represents a boolean attribute, or not() on a Path that represents an attribute of type Date.
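The mechanism can be demonstrated with tiny stand-in types (illustrative only, not the proposed API), where get() only accepts attributes whose first type parameter matches the path:

```java
// Stand-ins showing how the generic signatures rule out mismatches.
class Attribute<X, T> {
    final String name;
    Attribute(String name) { this.name = name; }
}

class Path<X> {
    // only attributes declared against X are accepted here
    <T> Path<T> get(Attribute<X, T> attribute) { return new Path<T>(); }
}

class Order {}
class Shop {}

public class TypesafeSketch {
    static Attribute<Order, Shop> shopAttr = new Attribute<Order, Shop>("shop");
    static Attribute<Shop, String> nameAttr = new Attribute<Shop, String>("name");

    public static void main(String[] args) {
        Path<Order> order = new Path<Order>();
        Path<String> shopName = order.get(shopAttr).get(nameAttr); // compiles
        // order.get(nameAttr);  // compile error: Attribute<Shop, String>
        //                       // cannot be passed to Path<Order>.get()
        System.out.println(shopAttr.name + "." + nameAttr.name); // prints "shop.name"
    }
}
```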

There are some skeptics in the expert group, but my feeling is that once people get used to the idea that type generation is no longer something that gets in your way during development, we're going to see a lot more frameworks using this kind of approach. I certainly think this API is a big improvement over the previous proposal:

Root item = q.addRoot(Item.class);
Path shopName = item.get("order")
                    .get("shop")
                    .get("name");
q.select(item)
 .where( qb.equal(shopName, "amazon.com") );

Or:

Root order = q.addRoot(Order.class);
Join product = order.join("items")
                    .join("product");

Path price = product.get("price");
Path filled = order.get("filled");
Path date = order.get("date");

q.select(order, product)
 .where( qb.and( qb.gt(price, 100.00), qb.not(filled) ) )
 .order( qb.ascending(price), qb.descending(date) );

Both these queries are riddled with non-typesafe method invocations which can't be validated without executing the query.

A typesafe criteria query API for JPA

Posted by    |       |    Tagged as Hibernate ORM JPA

The public draft of the JPA 2.0 specification is already out and includes a much-awaited feature: an API that lets you create queries by calling methods of Java objects, instead of by embedding JPA-QL into strings that are parsed by the JPA implementation. You can learn more about the API proposed by the public draft at Linda's blog.

There are several reasons to prefer the API-based approach:

  • It's easier to build queries dynamically, to handle cases where the query structure varies depending upon runtime conditions.
  • Since the query is parsed by the Java compiler, no special tooling is needed in order to get syntactic validation, autocompletion and refactoring support.

(Note that JPA-QL syntax validation and autocompletion are available in some IDEs - in JBoss Tools, for example.)

There are two major problems with criteria query APIs in the Java language:

  • The queries are more verbose and less readable.
  • Attributes must be specified using string-based names.

The first problem isn't really solvable without major new language features (usually described as DSL support). The second problem could easily be solved by adding a typesafe literal syntax for methods and fields to Java. This is now a sorely needed feature of the language; it's especially useful in combination with annotations.

There have been some previous efforts to work around the lack of method and field literals. One recent example is LIQUidFORM. Unfortunately that particular approach forces you to represent every persistent attribute as a public getter method, which is not a restriction that is acceptable in the JPA specification.

I've proposed a different approach to the JPA EG. This approach comes in three layers:

  • A metamodel API for JPA
  • A query API where types and attributes are specified in terms of metamodel API objects
  • Support for third-party tooling which would generate a typesafe metamodel from the entity classes

Let's go layer by layer.

The Metamodel

The metamodel API is a bit like the Java reflection API, except that it is provided by the JPA persistence provider, is aware of the JPA metadata, and uses generics in a clever way. (Also it uses unchecked exceptions.)

For example, to obtain an object that represents an entity, we call the MetaModel object:

import javax.jpa.metamodel.Entity;
...
Entity<Order> order = metaModel.entity(Order.class);
Entity<Item> item = metaModel.entity(Item.class);
Entity<Product> product = metaModel.entity(Product.class);

To obtain attributes of the entity, we need to use string-based names, as usual:

import javax.jpa.metamodel.Attribute;
import javax.jpa.metamodel.Set;
...
Set<Order, Item> orderItems = order.set("items", Item.class);
Attribute<Item, Integer> itemQuantity = item.att("quantity", Integer.class);
Attribute<Item, Product> itemProduct = item.att("product", Product.class);
Attribute<Product, BigDecimal> productPrice = product.att("price", BigDecimal.class);

Notice how the metamodel types which represent attributes are parameterized not only by the type of the attribute they represent, but also by the type that they belong to.

Also notice that this code is non-typesafe and can fail at runtime if no persistent attribute with the given type and name exists in the entity class. This is the only non-typesafe code we'll see - our goal is to keep the rest of the API completely typesafe. How does that help us? Well, the trick here is to notice that the metamodel objects represent completely static information about the persistent classes, state that doesn't change at runtime. So we can:

  • obtain and cache these objects at system initialization time, forcing any errors to occur upfront, or even
  • let a tool that has access to our persistent classes generate the code that obtains and caches metamodel objects.

That's much better than having these errors occur at query execution time, as they do in the previous criteria query proposal.

The metamodel API is generally useful, even independent of the query API. Currently it's very difficult to write generic code that interacts with JPA because JPA metadata may be partitioned between annotations and various XML documents.

But, of course, the most popular use of the metamodel is to build queries.

Queries

To construct a query, we pass metamodel objects to the QueryBuilder API:

Query query = queryBuilder.create();

Root<Order> orderRoot = query.addRoot(order);
Join<Order, Item> orderItemJoin = orderRoot.join(orderItems);
Join<Item, Product> itemProductJoin = orderItemJoin.join(itemProduct);

Expression<Integer> quantity = orderItemJoin.get(itemQuantity);
Expression<BigDecimal> price = itemProductJoin.get(productPrice);

Expression<Number> itemTotal = queryBuilder.prod(quantity, price);
Expression<Boolean> largeItem = queryBuilder.gt(itemTotal, 100);

query.restrict(largeItem)
     .select(order)
     .distinct(true);

For comparison, here is the same query expressed using the API proposed in the public draft:

Query query = queryBuilder.createQueryDefinition();

DomainObject orderRoot = query.addRoot(Order.class);
DomainObject orderItemJoin = orderRoot.join("items");
DomainObject itemProductJoin = orderItemJoin.join("product");

Expression quantity = orderItemJoin.get("quantity");
Expression price = itemProductJoin.get("price");

Expression itemTotal = quantity.times(price);
Predicate largeItem = itemTotal.greaterThan(100);

query.where(largeItem)
     .selectDistinct(order);

Of course, this query could be written more compactly in either API, but I'm trying to draw attention to the generic types of the objects that make up the query. The type parameters prevent me from writing something like this:

orderItemJoin.get(productPrice); //compiler error

The use of generics means the compiler can detect when we try to create a path expression by combining a queried entity of one type and an attribute of some other type. The metamodel object productPrice has type Attribute<Product, BigDecimal> and therefore cannot be passed to the get() method of orderItemJoin. get() only accepts Attribute<Item, ?>, since orderItemJoin is of type Join<Order, Item>.

Expressions are also parameterized by the expression type, so the compiler detects mistakes like:

queryBuilder.gt(stringExpression, numericExpression); //error

Indeed, the API has sufficient type safety that it's more or less impossible to build an unexecutable query.

Generating a typesafe metamodel

It's completely possible to build queries with only the metamodel API and the query API. But to really make the most of these APIs, the final piece of the puzzle is a little code generation tool. This tooling doesn't need to be defined by the JPA specification, and different tools don't need to generate exactly the same code. Nevertheless, the generated code will always be portable between all JPA implementations. All the tool does is reflect upon the persistent entities and create a class or classes that statically cache references to the metamodel Entity and Attribute objects.

Why do we need this code generator? Because writing Attribute<Item, Integer> itemQuantity = item.att("quantity", Integer.class); by hand is tedious and slightly error-prone, and because your refactoring tool probably isn't smart enough to change the string-based name when you refactor the name of the attribute of the persistent class. Code generation tools don't make these kinds of errors, and they don't mind re-doing their work from scratch each time you ask them to.

In a nutshell: the tool uses the non-typesafe metamodel API to build a typesafe metamodel.

The most exciting possibility is that this code generation tool could be an APT plugin for javac. You wouldn't have to run the code generator explicitly, since APT is now fully integrated into the Java compiler. (Or, it could be an IDE plugin.)

But didn't code generation tools go out of fashion recently? Wasn't one of the great features of ORM solutions like Hibernate and JPA that they didn't rely upon code generation? Well, I'm a great believer in using whatever tool is the right solution to the problem at hand. Code generation has certainly been applied to problems where it wasn't the best solution. On the other hand, I don't see anyone bashing ANTLR or JavaCC for their use of code generation to solve the problem they address. In this case, we're working around a specific problem in the Java type system: the lack of a typesafe metamodel (reflection is one of the worst-designed language features). And code generation is simply the only solution that works. Indeed, for this problem it works well.

Don't worry, the generated code won't be hard to understand ... it might look something like this, for example:

public class persistent {
	static Metamodel metaModel;

	public static Entity<model.Order> order = metaModel.entity(model.Order.class);
	public static class Order {
		public static Attribute<model.Order, Long> id = order.id(Long.class);
		public static Set<model.Order, model.Item> items = order.set("items", model.Item.class);
		public static Attribute<model.Order, Boolean> filled = order.att("filled", Boolean.class);
		public static Attribute<model.Order, Date> date = order.att("date", Date.class);
	}

	public static Entity<model.Item> item = metaModel.entity(model.Item.class);
	public static class Item {
		public static Attribute<model.Item, Long> id = item.id(Long.class);
		public static Attribute<model.Item, model.Product> product = item.att("product", model.Product.class);
		public static Attribute<model.Item, model.Order> order = item.att("order", model.Order.class);
		public static Attribute<model.Item, Integer> quantity = item.att("quantity", Integer.class);
	}

	public static Entity<model.Product> product = metaModel.entity(model.Product.class);
	public static class Product {
		public static Attribute<model.Product, Long> id = product.id(Long.class);
		public static Set<model.Product, model.Item> items = product.set("items", model.Item.class);
		public static Attribute<model.Product, String> description = product.att("description", String.class);
		public static Attribute<model.Product, BigDecimal> price = product.att("price", BigDecimal.class);
	}

}

This class just lets us refer to attributes of the entities easily. For example, we could type persistent.Order.id to refer to the id attribute of Order, or persistent.Product.description to refer to the description of the Product.
