In Relation To Hibernate ORM

Web Beans and the EE platform

Tagged as CDI Hibernate ORM

If there's one thing that we really want to get right as Web Beans goes from Public Draft to Proposed Final Draft, it's integration with the Java EE platform. Up to this point, most of the work we've done as an expert group has focused on nailing down the programming model, the semantics of the Web Beans services, and the behavior of the Web Bean manager. We've been writing the spec to assume that Web Beans implementations should be pluggable between different Java EE containers, because this is a direction that a lot of folks in the Java EE community believe the EE platform should move in, and because we would like Web Beans to help show the way for other specifications such as EJB, JTA and perhaps even Servlets (note that JPA and JSF already support this).

Since the release of the Public Draft, our main focus has shifted to the integration problem. Web Beans generated a bunch of controversy when the draft was circulated to the EE platform expert group and proposed by Sun for inclusion in the Web Profile. That's not surprising - Web Beans solves some very important problems that lots of people have strong opinions on. I've also had a number of more technical discussions with the EE and EJB spec leads and with the EJB expert group. A number of concerns have emerged, which we're now attempting to address.

Let me share my take on these issues, and some directions we might pursue. But first...

...some history

In the days of EJB 1.x and 2.x, a major complaint against EJB was that the programming model was too restrictive. The specification imposed many annoying, distracting and unnecessary requirements upon developers. EJB 3.0 changed all that. Gone were most of these annoying coding restrictions. In place of verbose XML deployment descriptors, EJB 3.0 was the first Java component model to be based around the use of annotations. In place of JNDI lookups was a simple dependency injection model, which ended up being generalized at the EE 5 platform level. I was a member of the EJB 3.0 expert group, and remain proud of what we accomplished.

However, the Java EE 5 dependency injection model was criticized from the outset. Many folks, including Rod Johnson of SpringSource, argued that the lack of support for injection of Java classes which were not EJBs was a fatal flaw. (At the time I did not agree, but I've changed my mind since.) On the other hand, I was dissatisfied with the lack of a contextual lifecycle model - though, in fairness, some other DI solutions of the time, including Spring, also lacked a contextual model. I was also unhappy with the mismatch between the JSF (XML-based, contextual) dependency injection model and the platform-level (annotation-based, non-contextual) model. This mismatch made it difficult to integrate EJB components with JSF.

Most of the changes introduced in EJB 3.0 and JPA 1.0 have stood the test of time remarkably well. The EE 5 dependency injection model has not. Contextual lifecycle management is now a standard feature of dependency injection solutions including Seam, Spring and Guice. Most importantly, Guice showed us the folly of using string-based names to identify implementations of an API. And Seam showed how to simplify application development by unifying dependency management in the web and EJB tiers.

Web Beans was initiated to address these problems. The original JSR proposal complained that:

  • EJB components are not aware of the web-tier request, session and application contexts and do not have access to state associated with those contexts. Nor may the lifecycle of a stateful EJB component be scoped to a web-tier context.
  • The JSF component model is not consistent with Java EE ... dependency injection...

The JSR promised to unif[y] the two component models and enabl[e] a considerable simplification to the programming model.

A new dependency injection model was needed: a model that provided contextual lifecycle management to transactional EE components.

Is Web Beans a component model?

In the Web Beans Early Draft, we characterized Web Beans as a component model. The specification defined two things:

  1. a set of restrictions that a Java class or EJB session bean must satisfy in order to be a Web Bean (principally, but not only, that it must explicitly specify @Component or another deployment type), and
  2. the services that would then be available to the Web Bean.

I viewed this as a unifying component model: a unification that would solve a major problem that has existed since J2EE - that the Java EE web tier has a completely different component model (actually, component models plural) to the transactional tier. It promised to tear down the wall between the two tiers, allowing transactional components to be aware of state related to the Web request. It promised a truly uniform programming model for components concerned with orchestration of the Web request, and components concerned with managing data access (at least when JSF was used).

This turned out to be the wrong approach. The strong feedback from the EE group was that Web Beans shouldn't define a new component model. The feedback from the EE spec leads at Sun was that the services defined by Web Beans should be available to all kinds of Java EE components - not just those which satisfied our definition of a Web Bean.

At Sun's urging, we made two changes to the specification. The first was primarily a language change. We repositioned Web Beans as a set of services, provided by a Manager analogous to the JTA TransactionManager, instead of a component model with a Container. But this also implied two more technical changes:

  1. we dramatically loosened the restrictions on what objects are Web Beans, so that every EJB session bean and every JavaBean is now a Web Bean, without any requirement for explicit declaration, and
  2. we provided a set of SPIs to allow objects which are not JavaBeans or EJB session beans to take advantage of the services provided by the Web Bean manager.

In particular, we provided injection into Servlets and message-driven beans (objects which by nature are not injectable), and an SPI that allows other EE technologies and third-party frameworks to offer their components up for injection by Web Beans and take advantage of the Web Beans services.

This was certainly the right thing to do, and the Web Beans specification is now more powerful and less intrusive. It still promises a more uniform programming model, but a side effect of these changes was that the model became even more generally useful, and even less specific to JSF and EJB. This isn't the first time I've seen something like that happen: the EJB 3.0 expert group also produced some technology (JPA, EE 5 dependency injection) that ended up being applied more generally than was originally envisaged.

Is Web Beans still a component model?

Unfortunately, some people are still not satisfied. A number of members of the platform expert group believe that the notion of a simple Web Bean still specifies a new component model. So what, exactly, is a simple Web Bean, and why do we need them?

A simple Web Bean is nothing new. It's just a Java class. You've already written hundreds or even thousands of simple Web Beans. Every JavaBean is a simple Web Bean. All we're trying to do is allow objects which were not specifically written as EJBs to be injectable and take advantage of the Web Beans services.

A simple Web Bean has the following advantages over an EJB session bean:

  • it does not need to be explicitly declared as an EJB using a component-defining annotation or XML
  • its interfaces do not need to be explicitly declared using @Local or @LocalBean
  • it can be final, or have final methods
  • it has a more natural lifecycle and concurrency model - the normal lifecycle for a Java class, instead of the enhanced lifecycle/concurrency semantics defined by EJB
  • if it's a stateful object, it does not require a @Remove method
  • it can take advantage of constructor injection
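To make this concrete, here is a minimal sketch of a "simple Web Bean": nothing but a plain Java class using constructor injection. All class names below are invented for illustration, and the container wiring is simulated by hand, since the whole point is that no special component contract is involved:

```java
// Hypothetical example: a "simple Web Bean" is just a plain Java class.
// Under Web Beans the container would instantiate it and supply the
// constructor arguments; here we wire it by hand.
interface ExchangeRateSource {
    double rate(String from, String to);
}

class FixedRateSource implements ExchangeRateSource {
    public double rate(String from, String to) { return 1.25; }
}

class PriceConverter {
    // constructor injection: the dependency arrives fully initialized,
    // so the field can be final -- something EE 5 field/setter injection
    // cannot offer
    private final ExchangeRateSource rates;

    PriceConverter(ExchangeRateSource rates) {
        this.rates = rates;
    }

    double convert(double amount, String from, String to) {
        return amount * rates.rate(from, to);
    }
}

public class Main {
    public static void main(String[] args) {
        // manual wiring stands in for the container here
        PriceConverter converter = new PriceConverter(new FixedRateSource());
        System.out.println(converter.convert(100.0, "USD", "EUR"));
    }
}
```

Under Web Beans, the container would resolve the ExchangeRateSource dependency and invoke the constructor itself; there is nothing to declare, no business interface to designate, and no @Remove method to write.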

You can use preexisting JavaBeans and many other Java classes as simple Web Beans. As a side effect, simple Web Beans reduce the mental gap for new Java EE users who might be, initially, scared off by the unfamiliar EJB semantics - and the rather painful requirements to explicitly designate business interfaces and add empty @Remove methods - and let these new users get started with container-managed objects more easily.

On the other hand, unlike session beans, simple Web Beans, as currently defined:

  • don't have method-level transaction demarcation or security
  • can't have timeouts or asynchronous methods
  • can't have remote interfaces or be web service endpoints
  • don't have container-managed concurrency (locks)
  • don't support instance-level passivation or pooling

Therefore, in my opinion (an opinion shared by the current and previous EJB spec leads), simple Web Beans don't compete with EJB as a solution for the transactional tier of the application. In fact, simple Web Beans aren't really anything new - Java EE developers already use EJBs and JavaBeans side by side. Web Beans just makes their life a little easier.

But some folks disagree. For example, one expert is worried that by providing something called a simple Web Bean, we're encouraging people to use them instead of EJB. The risk is that developers will be guided away from the use of non-simple EJBs. This has become a real sticking point in the EE group, and threatens to derail the whole Web Beans effort.

Of course, we're trying to work through this. One possibility is that we could move the definition of a simple Web Bean to the EJB specification, as a new EJB component type. Another possibility is to simply drop simple Web Beans from the specification, and ask everyone to write all their objects as EJBs.

Personally, I can't imagine releasing Java EE 6 with a dependency injection solution that doesn't support JavaBeans. All the other solutions in the space support both JavaBeans and EJB. EE 5 missed this boat - I don't want EE 6 to repeat the mistake. And I think that this is a case where users want to make the decision about when to use EJB. They don't want us to force a decision down their throats.

Injection of other EE resource types

A second issue raised by the EE experts is that the Web Beans specification does not currently define support for injection of all the various Java EE resource types. Specifically, Web Beans does not explicitly define injection of JDBC datasources, connections, web service endpoints, remote EJBs or persistence contexts. Of course, you can still use the existing Java EE 5 @Resource and @PersistenceContext annotations to inject Java EE resources, but then you'll miss out on the advantages of the more flexible and more typesafe model defined by Web Beans.

Well, the true situation is that injection of EE resources (or anything else at all) is already supported by Web Beans, assuming that the Java EE container calls an appropriate Web Beans SPI at initialization time. But neither the EE specification nor the Web Beans specification requires it to do so. So this is an issue that should be easy to fix.


We put a fair amount of effort into defining the interaction between the EE container and the Web Bean manager, for the purpose of supporting pluggable Web Bean managers. However, nothing that is currently defined imposes any new requirement upon the EE container. Unfortunately, at this point, there remain a number of open issues in the specification that can only be resolved by either:

  • dropping the requirement for pluggability or
  • imposing new requirements upon the EE container.

(Search for Open issue in the Public Draft if you're interested in knowing exactly what new requirements would be imposed upon the Java EE container.)

Either of these paths is technically feasible, but one of them is a lot more work and involves a lot more coordination between the various expert groups, some of which are already struggling to meet tight deadlines. If Web Beans is required in Java EE 6, it's not clear that pluggability is truly of great value to users. Does anyone really need to switch out their Web Bean manager?

I'd love to hear from the community whether pluggability is really worth pursuing.

Packages and naming

Quite a number of people hate the name Web Beans, which they see as obscuring the generality of the model we've defined. I view the name as more of a catchy brand than a definition of what Web Beans does. Think of Microsoft and .net. I don't mind changing the name, but nobody has yet suggested a great alternative. For one thing, I hate acronyms, and something descriptive like Java Dependency Injection wouldn't really capture all that Web Beans does.

Independently of this objection, quite a number of people think that it would be better to package our annotations by concern rather than putting them all in the package javax.webbeans. For example, the following package layout might make better sense:

  • javax.dependency - DI-related annotations: binding types, deployment types, @Initializer, @Produces, etc.
  • javax.contexts - scope-related annotations, Context interface
  • javax.interceptor (which already exists) - interceptor and decorator related annotations
  • javax.event - @Observes, event bus interfaces

I see repackaging as a great idea.

The good news

The good news is that there's very little debate about the actual technical details of the (Guice and Seam inspired) dependency injection model in Web Beans. The feedback I'm getting from spec reviewers is uniformly positive. This gives me a lot of confidence that the more political issues are also solvable.

What do you think?

The issues I've discussed in this post are important to all members of the Java EE community and to the future direction of the Java EE platform. I don't believe that we should be making these decisions behind closed doors, without the involvement of users. Please speak up and let us know what you want us to do. And please take the time to learn more about Web Beans.

Hibernate Search 3.0.0.Beta2 is out with a bunch of new interesting features:

  • shared Lucene IndexReader, significantly increasing performance, especially in read-mostly applications
  • ability to customize the object graph fetching strategy
  • properties projection from the Lucene index
  • ability to sort queries (thanks to Hardy Ferentschik)
  • expose the total number of matching results regardless of pagination


Hibernate Search can now share IndexReaders across queries and threads, making them much more efficient, especially in applications where the number of reads is much higher than the number of updates. Opening and warming a Lucene IndexReader can be a relatively costly operation compared to a single query. To enable sharing, just set the reader strategy configuration property to shared.

Object loading has been enhanced as well. You can, for example, define the exact fetching strategy used to load the expected object graph, pretty much like you would in a Criteria or HQL query, allowing per-use-case optimization.

Some use cases do not mandate a fully loaded object. Hibernate Search now allows property projection from the Lucene index. At the cost of storing the value in the index, you can now retrieve a specific subset of properties. The behavior is similar to HQL or Criteria query projections.

fullTextQuery.setIndexProjection( "id", "summary", "body" ).list();

Query flexibility

I already talked about the customizable fetching strategy and projection.

The default sorting on Hibernate Search queries is by relevance, but you can now customize this strategy and sort by one or more fields.

Regardless of the pagination process, it is interesting to know the total number of matching elements. While costly in plain SQL, this information is provided out-of-the-box by Apache Lucene. getResultSize() is now exposed by the FullTextQuery. From this information, you can for example:

  • implement the search engine feature '1-10 of about 888,000,000'
  • implement a fast pagination navigation
  • implement a multi-step search engine (gradually enabling approximations if the more restrictive queries return no or too few results)
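As a sketch of the first bullet: the familiar display string can be derived purely from the page window and the total count that getResultSize() reports. The helper below is hypothetical plain Java, not part of Hibernate Search:

```java
import java.text.NumberFormat;
import java.util.Locale;

public class PaginationLabel {
    // Builds the "1-10 of about 888,000,000" label from the page
    // offset/size and the total hit count that a FullTextQuery's
    // getResultSize() would report.
    static String label(int firstResult, int maxResults, int resultSize) {
        int from = firstResult + 1;
        int to = Math.min(firstResult + maxResults, resultSize);
        NumberFormat fmt = NumberFormat.getIntegerInstance(Locale.US);
        return from + "-" + to + " of about " + fmt.format(resultSize);
    }

    public static void main(String[] args) {
        System.out.println(label(0, 10, 888_000_000));   // first page of results
        System.out.println(label(20, 10, 42));           // a later, smaller result set
    }
}
```

Since Lucene computes the total count anyway, this label costs nothing extra, unlike the SELECT COUNT(*) you would need in plain SQL.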

For more information, check the release notes and download the bundle. Hibernate Search 3.0.0.Beta2 is compatible with Hibernate Core 3.2.x (from 3.2.2), Hibernate Annotations 3.3.x and Hibernate EntityManager 3.3.x.

Hibernate: 2 Million Downloads + 5 Years = 25 Books

Tagged as Hibernate ORM

Happy Birthday Hibernate! Now that the first copies of Java Persistence with Hibernate are shipping (still waiting for mine though), the first people who should get one are Hibernate contributors. Manning Publications sponsors 25 copies of the book, and we'll distribute them in exchange for 25 Hibernate forum credits. See this page for details.

Also, I'm not sure whether I promised Gavin never to show this, but this is how the Hibernate website looked five years ago: Old Hibernate Website

Say what you want about the look of the page, some of that content is still here today unchanged. Good stuff.

Hibernate 3.2, Java Persistence provider final released

Tagged as Hibernate ORM

The Hibernate developer team released Hibernate 3.2.0 GA today; this release is now ready for production use. Please read the migration guidelines if you are upgrading from an earlier version.

In addition to Hibernate Core, final releases of the Java Persistence provider are now available with Hibernate Annotations and Hibernate EntityManager. The Hibernate Java Persistence provider has been certified with the Sun TCK. You can use the Java Persistence API and annotation metadata in any Java project, inside or outside any Java EE 5.0 application server. If you are upgrading from an earlier version of the Hibernate Java Persistence provider, please see the migration guidelines.

We'd like to thank all contributors and users for their issue reports!

Download the latest releases...

Hibernate 3.2: Transformers for HQL and SQL

Tagged as Hibernate ORM

People using the Criteria API have either transparently or knowingly used a ResultTransformer. A ResultTransformer is a nice and simple interface that allows you to transform any Criteria result element. For example, you can make any Criteria result be returned as a java.util.Map or as a non-entity bean.

Criteria Transformers

Imagine you have a StudentDTO class:

public class StudentDTO {
  private String studentName;
  private String courseDescription;

  public StudentDTO() { }

  // getters and setters omitted for brevity
}

Then you can make the Criteria return non-entity classes instead of scalars or entities by applying a ResultTransformer:

List resultWithAliasedBean = s.createCriteria(Enrolment.class)
  .createAlias("student", "st").createAlias("course", "co")
  .setProjection( Projections.projectionList()
      .add( Projections.property("st.name"), "studentName" )
      .add( Projections.property("co.description"), "courseDescription" ) )
  .setResultTransformer( Transformers.aliasToBean(StudentDTO.class) )
  .list();

StudentDTO dto = (StudentDTO) resultWithAliasedBean.get(0);

This is how ResultTransformers have been available since we introduced projections in the Criteria API in Hibernate 3.

It is just one example of the built-in transformers, and users can provide their own transformers if they so please.
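For a feel of what writing your own transformer involves, here is a self-contained sketch in the spirit of Transformers.aliasToBean(). The interface is re-declared locally so the example compiles on its own (Hibernate's real ResultTransformer also has a transformList() method), and the reflection logic is simplified to direct field access:

```java
import java.lang.reflect.Field;

// Local stand-in for Hibernate's ResultTransformer interface, so the
// sketch is self-contained.
interface MyResultTransformer {
    Object transformTuple(Object[] tuple, String[] aliases);
}

// Roughly what an aliasToBean-style transformer does: instantiate the
// target class and inject each tuple value into the field matching its
// alias. Hibernate's real implementation also handles setter methods.
class AliasToFieldTransformer implements MyResultTransformer {
    private final Class<?> target;

    AliasToFieldTransformer(Class<?> target) {
        this.target = target;
    }

    public Object transformTuple(Object[] tuple, String[] aliases) {
        try {
            Object bean = target.getDeclaredConstructor().newInstance();
            for (int i = 0; i < aliases.length; i++) {
                Field f = target.getDeclaredField(aliases[i]);
                f.setAccessible(true);
                f.set(bean, tuple[i]);
            }
            return bean;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}

public class TransformerDemo {
    static class StudentDTO {
        String studentName;
        String courseDescription;
    }

    public static void main(String[] args) {
        MyResultTransformer t = new AliasToFieldTransformer(StudentDTO.class);
        StudentDTO dto = (StudentDTO) t.transformTuple(
                new Object[] { "Gavin", "Hibernate" },
                new String[] { "studentName", "courseDescription" });
        System.out.println(dto.studentName + " / " + dto.courseDescription);
    }
}
```

Hibernate's actual aliasToBean() transformer is more capable (it prefers setters and caches the reflection lookups), but the alias-to-property matching idea is the same.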

Jealous programming

Since I am more of an HQL/SQL guy, I have been jealous of Criteria for having this feature, and I have seen many requests for adding it to all our query facilities.

Today I put an end to this jealousy and introduced ResultTransformer for HQL and SQL in Hibernate 3.2.

HQL Transformers

In HQL we already had a kind of result transformer via the select new syntax, but for returning non-entity beans it only provided value injection via their constructors. Thus if you used the same DTO in many different scenarios you could end up having many constructors on this DTO purely to allow the select new functionality to work.

Now you can get the value injected via property methods or fields instead, removing the need for explicit constructors.

List resultWithAliasedBean = s.createQuery(
  "select e.student.name as studentName," +
  "       e.course.description as courseDescription " +
  "from   Enrolment as e")
  .setResultTransformer( Transformers.aliasToBean(StudentDTO.class) )
  .list();

StudentDTO dto = (StudentDTO) resultWithAliasedBean.get(0);

SQL Transformers

With native SQL, returning non-entity beans or Maps is often more useful than basic Object[]s. With result transformers that is now possible.

List resultWithAliasedBean = s.createSQLQuery(
  "SELECT st.name as studentName, co.description as courseDescription " +
  "FROM Enrolment e " +
  "INNER JOIN Student st on e.studentId=st.studentId " +
  "INNER JOIN Course co on e.courseCode=co.courseCode")
  .addScalar("studentName")
  .addScalar("courseDescription")
  .setResultTransformer( Transformers.aliasToBean(StudentDTO.class) )
  .list();

StudentDTO dto = (StudentDTO) resultWithAliasedBean.get(0);

Tip: the addScalar() calls were required on HSQLDB to make it match a property name, since it returns column names in all uppercase (e.g. STUDENTNAME). This could also be solved with a custom transformer that searches the property names instead of using an exact match - maybe we should provide a fuzzyAliasToBean() method ;)
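For illustration, the matching logic such a hypothetical fuzzyAliasToBean() would need can be sketched as a case-insensitive property lookup (plain Java, not part of Hibernate):

```java
import java.lang.reflect.Field;

public class FuzzyAliasMatch {
    // Resolve a column alias such as "STUDENTNAME" (HSQLDB reports
    // aliases in uppercase) to a declared field of the target class by
    // case-insensitive comparison -- the core of the hypothetical
    // fuzzyAliasToBean() mentioned above.
    static Field resolve(Class<?> target, String alias) {
        for (Field f : target.getDeclaredFields()) {
            if (f.getName().equalsIgnoreCase(alias)) {
                return f;
            }
        }
        throw new IllegalArgumentException("no property matches " + alias);
    }

    static class StudentDTO {
        String studentName;
        String courseDescription;
    }

    public static void main(String[] args) {
        System.out.println(resolve(StudentDTO.class, "STUDENTNAME").getName());
    }
}
```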

Map vs. Object[]

Since you can also use a transformer that returns a Map from alias to value/entity (e.g. Transformers.ALIAS_TO_MAP), you are no longer required to mess with index-based Object arrays when working with a result.

List result = s.createQuery(
  "select e.student.name as studentName," +
  "       e.course.description as courseDescription " +
  "from   Enrolment as e")
  .setResultTransformer( Transformers.ALIAS_TO_MAP )
  .list();

String name = (String) ((Map) result.get(0)).get("studentName");

Again, this works equally well for Criteria, HQL and native SQL.

Reaching Nirvana of native sql

We still miss a few things, but with the addition of ResultTransformer support for SQL and the other recent additions to the native sql functionality in Hibernate, we are close to reaching the Nirvana of native sql support.

Combined with StatelessSession, you now have a very flexible and fully powered sql executor which can transparently map to and from objects with native sql, without any ORM overhead.

...and when you get tired of manually managing the sql, object state, lifecycles, caching etc. of your objects and want to benefit from the power of an ORM, then you have it all readily available to you ;)

Pro Hibernate 3 deadlocking example

Tagged as Hibernate ORM

A bug report was recently opened in Hibernate's JIRA stating that Hibernate incorrectly handles deadlock scenarios. The basis for the report was an example in the /Pro Hibernate 3/ book (Chapter 9). For those perhaps not familiar with the term deadlock, the basic gist is that two processes each hold resource locks that the other needs to complete processing. While this phenomenon is not restricted to databases, in database terms the idea is that the first process (P1) holds a write lock on a given row (R1) while the second process (P2) holds a write lock on another row (R2). Now, to complete its processing P1 needs to acquire a write lock on R2, but cannot do so because P2 already holds its write lock. Conversely, P2 needs to acquire a write lock on R1 in order to complete its processing, but cannot because P1 already holds its write lock. So neither P1 nor P2 can complete its processing because each is indefinitely waiting on the other to release the needed lock, which neither can do until its processing is complete. The two processes are said to be deadlocked in this situation.

Almost all databases have support to circumvent this scenario by specifying that locks should be timed-out after a certain period of time; after the time-out period, one of the processes is forced to rollback and release its locks, allowing the other to continue and complete. While this works, it is not ideal as it requires that the processes remain deadlocked until the underlying timeout period is exceeded. A better solution is for the database to actively seek out deadlock situations and immediately force one of the deadlock participants to rollback and release its locks, which most databases do in fact also support.

So now back to the /Pro Hibernate 3/ example. Let me say up front that I have not read the book, and so do not understand the background discussion in the chapter nor the authors' intent/expectations in regards to the particular example code. I only know the expectations of a (quite possibly misguided) reader. So what this example attempts to do is to spawn two threads that each use their own Hibernate Session to load the same two objects in reverse order and then modify their properties. The above mentioned reader expects that this should cause a deadlock scenario to occur. But it does not. Or more correctly, in my running of the example, it typically does not, although the results are inconsistent. Sometimes a deadlock is reported; but the vast majority of runs actually just succeed. Why is that the case?

So here is what really happens in this example code. As I mentioned before, the example attempts to load the same two objects in reverse order. The example uses the entities Publisher and Subscriber. The first thread (T1) loads a given Publisher and modifies its state; it is then forced to wait. The second thread (T2) loads a given Subscriber and modifies its state; it is then forced to wait. Then both threads are released from their waiting state. From there, T1 loads the same Subscriber previously loaded by T2 and modifies its state; T2 loads the same Publisher previously loaded by T1 and modifies its state. The thing you need to keep in mind here is that so far neither of these two Sessions has actually been flushed, thus no UPDATE statements have actually been issued against the database at this point. The flush occurs on each Session after each thread's second load-and-modify sequence. Thus, until that point, neither thread (i.e. the corresponding database process) is actually holding any write locks on the underlying data. Clearly, the outcome here is going to depend upon the manner in which the two threads are actually allowed to re-awaken by the underlying threading model, and in particular whether the UPDATE statements from the two sessions happen to get interleaved. If the two threads happen to interleave their requests to the database (i.e. T1's UPDATE PUBLISHER happens first, T2's UPDATE SUBSCRIBER happens second, etc.) then a deadlock will occur; if not interleaved, then the outcome will be success.
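The P1/P2 scenario described above can be sketched with plain Java locks standing in for the database's row write locks. Everything here is an illustration, not Hibernate or database code; the tryLock timeout plays the role of the database's lock timeout, so the program terminates instead of hanging:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    static final ReentrantLock r1 = new ReentrantLock(); // row R1's write lock
    static final ReentrantLock r2 = new ReentrantLock(); // row R2's write lock

    // Each "process" locks one row, waits until both hold their first
    // lock, then tries to take the other row's lock. Returns whether
    // P1 got R2 and whether P2 got R1.
    static boolean[] run() throws InterruptedException {
        CountDownLatch bothHoldFirst = new CountDownLatch(2);
        CountDownLatch bothDoneTrying = new CountDownLatch(2);
        boolean[] acquired = new boolean[2];

        Thread p1 = worker(r1, r2, bothHoldFirst, bothDoneTrying, acquired, 0);
        Thread p2 = worker(r2, r1, bothHoldFirst, bothDoneTrying, acquired, 1);
        p1.start();
        p2.start();
        p1.join();
        p2.join();
        return acquired;
    }

    static Thread worker(ReentrantLock first, ReentrantLock second,
                         CountDownLatch holding, CountDownLatch done,
                         boolean[] out, int idx) {
        return new Thread(() -> {
            first.lock();              // the first UPDATE takes its write lock
            holding.countDown();
            try {
                holding.await();       // wait until both hold one lock each
                // the second UPDATE: time out instead of waiting forever,
                // like the database's lock timeout
                out[idx] = second.tryLock(200, TimeUnit.MILLISECONDS);
                if (out[idx]) {
                    second.unlock();
                }
                done.countDown();
                done.await();          // keep our lock until both attempts end
            } catch (InterruptedException ignored) {
            } finally {
                first.unlock();
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] acquired = run();
        System.out.println("P1 got R2: " + acquired[0]
                + ", P2 got R1: " + acquired[1]);
    }
}
```

Here both attempts time out, because neither side releases its lock while waiting; a real database would instead pick one transaction as the deadlock victim and roll it back so the other can proceed.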

There are three ways to unequivocally ensure that lock acquisition errors in the database force one of these two transactions to fail in this example:

  • use of SERIALIZABLE transaction isolation in the database
  • flushing the session after each state change (at the end of the example code's step1() and step2() methods)
  • use of locking (either optimistic or pessimistic)

Seems simple enough. Yet apparently not simple enough for the reader of the /Pro Hibernate 3/ book who opened the previously mentioned JIRA case. After all this was explained to him, he wrote me some ill-tempered, misconception-laden replies in private emails. I am not going to go into all the misconceptions here, but one in particular I think needs to be exposed, as many developers without a lot of database background seem to stumble over various concepts relating to transactions. Isolation and locking are not the same thing. In fact, to a large degree, they actually have completely opposite goals and purposes.

Transaction isolation aims to isolate or insulate one transaction from other concurrent transactions, such that operations performed in one transaction do not affect (to varying degrees, based on the exact isolation mode employed) operations performed in others. Locking, on the other hand, has essentially the exact opposite goal; it seeks to ensure that certain operations performed in a transaction do have certain effects on other concurrent transactions. In fact, locking really has nothing to do with transactions at all, except that lock duration is typically scoped to the transaction in which the locks are acquired, and that their presence/absence might affect the outcome of the different transactions. Perhaps, although I cannot say for sure, this confusion comes from the fact that a lot of databases use locking as the basis for their isolation model. But that is just an implementation detail, and some databases such as Oracle, Postgres, and the newest SQL Server have very sophisticated and modern isolation engines not at all based on locking.

Hibernate 3.1.1 released

Tagged as Hibernate ORM

Hibernate 3.1.1 has been released earlier this week. This maintenance release focused on bugfixes and improvements, especially regarding:

  • SQL Server support
  • Native Query support
  • Connection handling

For details check out the release notes.

Hibernate 3.1 introduced non-OLTP features as well as better environment integration:

  • Custom strategy for session handling through CurrentSessionContext including 2 default implementations (JTA based and ThreadLocal based session handling)
  • more natural and aggressive connection handling in J2EE and J2SE environments
  • command-oriented API (StatelessSession API) for operations on huge number of objects
  • bulk UPDATE, DELETE, INSERT INTO ... SELECT for multi table entities
  • extra lazy collections for huge collection handling
  • use of join fetch on collections through scrollable result sets via break processing
  • database generated properties (including database timestamps)
  • additional ON clauses for joins
  • dirty checking short-cuts for instrumented classes

Downloads are available here

Hibernate 3.1: Reduced wiring code needed for native sql

Tagged as Hibernate ORM

Over the past few months we have been adding some simplifications to the way you can use and specify native sql queries in Hibernate. Gavin even blogged about some of them earlier, but I thought it was about time we brought some more news about it on this blog.

Auto detect return types and aliases

The newest feature related to native sql is that Hibernate will now auto detect the return types and even aliases of any scalar value from a sql query.

Before you had to do something like:

List l = s.createSQLQuery("SELECT emp.regionCode as region, emp.empid as id, {employee.*} " +
                          "FROM EMPLOYMENT emp, EMPLOYEE employee ...")
         .addScalar("region", Hibernate.STRING)
         .addScalar("id", Hibernate.LONG)
         .addEntity("employee", Employee.class)
         .list();
Object[] data = (Object[]) l.get(0);

Today you do not need to specify the type of the scalar values; you simply specify the alias name, so Hibernate knows which data you actually want to extract from the result set.

List l = s.createSQLQuery("SELECT emp.regionCode as region, emp.empid as id, {employee.*} " +
                          "FROM EMPLOYMENT emp, EMPLOYEE employee ...")
         .addScalar("region")
         .addScalar("id")
         .addEntity("employee", Employee.class)
         .list();
Object[] data = (Object[]) l.get(0);

If the query is only returning scalar values then addScalar is not needed at all; you just call list() on the query.

List l = s.createSQLQuery("SELECT * FROM EMPLOYMENT emp").list();
Object[] data = (Object[]) l.get(0);

It of course needs some more processing from Hibernate, but it makes experimentation and some data processing problems easier to do.

No redundant column mappings

Previously when you specified native sql in named queries you had to use the return-property element to (redundantly) specify which column aliases you wanted Hibernate to use for your native sql query. It was redundant because in most cases you would simply be specifying the exact same columns as you had just done in the class mapping.

Thus it could get pretty ugly and verbose as soon as you had even mildly complex mappings, such as the following example from our unit tests for a native SQL stored procedure call.

<sql-query name="selectAllEmployees" callable="true">
  <return alias="employement" class="Employment">
    <return-property name="employee" column="EMPLOYEE"/>
    <return-property name="employer" column="EMPLOYER"/>
    <return-property name="startDate" column="STARTDATE"/>
    <return-property name="endDate" column="ENDDATE"/>
    <return-property name="regionCode" column="REGIONCODE"/>
    <return-property name="id" column="EMPID"/>
    <return-property name="salary">
      <return-column name="VALUE"/>
      <return-column name="CURRENCY"/>
    </return-property>
  </return>
  { call selectAllEmployments() }
</sql-query>

In the upcoming Hibernate 3.1 you can do the exact same with a lot less code:

<sql-query name="selectAllEmployees" callable="true">
  <return class="Employment"/>
  { call selectAllEmployments() }
</sql-query>

or in code (for normal sql):

List l = s.createSQLQuery("SELECT * FROM EMPLOYMENT emp").list();
Object[] data = (Object[]) l.get(0);

This also removes the need to always use the curly-bracket syntax (i.e. {...}) to handle the aliasing, as long as you are not returning the same entity type more than once per row.

Hope you like it, Enjoy :-)

Pluggable Session management in Hibernate 3.1

Posted by    |       |    Tagged as Hibernate ORM

Steve just committed a new interface and extension point to Hibernate Core. We can finally plug in custom Session context management into Hibernate. For those of you who already know getCurrentSession() in Hibernate 3.0, this new extension enables the same without a JTA environment.

But how does it work? In a J2EE container we can rely on the scope of the current JTA transaction to bind the Hibernate Session. So whenever you call getCurrentSession() you get exactly that. Outside of a container, however, we don't really know when a Session ends (starting it is easy: the first time you request one).

So, a new interface was needed to allow Hibernate users to provide a current Session. In fact, the interface CurrentSessionContext has exactly this single method you can implement. But you don't have to: there are two default implementations distributed with Hibernate 3.1:

  • The usual behavior for JTA environments, binding the current Session to the current system transaction. This works without any extra configuration: just set up Hibernate for managed J2EE use (transaction manager, etc.) and call sessionFactory.getCurrentSession(). No need to flush or close the Session. If you use CMT, transaction demarcation is declarative; if you use BMT, call the methods beginTransaction(), getTransaction().commit(), and getTransaction().rollback() on the current Session. If you want, you can enable JTA context management explicitly in your Hibernate configuration by setting the new property current_session_context_class to "jta", but again, it is the default if Hibernate is configured for J2EE.
  • An implementation for typical Hibernate deployment in non-managed environments, binding the current Session to the current thread. However, this implementation needs a little help from you to mark the end of a unit of work (we can't wait for the thread to end). You have to call ThreadLocalSessionContext.unbind().close() to remove the current Session from the thread and close it. To enable this context handler, set the current_session_context_class configuration property to "thread".

Of course you can implement your own CurrentSessionContext and name the class in configuration, for example, to implement a Long Session (or in new terms: extended persistence context) pattern with EJBs, storing the disconnected Session in a SFSB between requests and JTA transactions.
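To illustrate the pattern behind the "thread" strategy, here is a plain-Java sketch. The class name, generic parameter, and factory callback are all hypothetical; Hibernate's real ThreadLocalSessionContext implements CurrentSessionContext and opens actual Sessions from the SessionFactory:

```java
import java.util.concurrent.Callable;

// Illustrative sketch of a thread-bound "current session" holder.
// S stands in for org.hibernate.Session; the Callable stands in for
// SessionFactory.openSession().
public class ThreadBoundContext<S> {

    private final ThreadLocal<S> current = new ThreadLocal<S>();
    private final Callable<S> factory;

    public ThreadBoundContext(Callable<S> factory) {
        this.factory = factory;
    }

    // Like getCurrentSession(): lazily open a session the first time
    // it is requested in this thread, then keep returning the same one.
    public S currentSession() {
        S s = current.get();
        if (s == null) {
            try {
                s = factory.call();
            }
            catch (Exception e) {
                throw new RuntimeException(e);
            }
            current.set(s);
        }
        return s;
    }

    // Like ThreadLocalSessionContext.unbind(): detach the session from
    // the thread and return it so the caller can close it.
    public S unbind() {
        S s = current.get();
        current.remove();
        return s;
    }
}
```

Every call to currentSession() within one unit of work returns the same instance, and unbind() marks the end of that unit of work, handing the session back for closing.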

The traditional HibernateUtil class can now finally be reduced to a simple startup helper, used to initialize a SessionFactory. This is how a typical data access routine now looks in CMT, BMT, and non-JTA environments:

Session s = HibernateUtil.getSessionFactory().getCurrentSession();
s.beginTransaction();

s.save(item);        // or...
itemDAO.store(item); // ...a DAO that calls getCurrentSession() internally

s.getTransaction().commit();

ThreadLocalSessionContext.unbind().close(); // Only needed for non-JTA

With HibernateUtil we don't really care where the SessionFactory comes from, either a static singleton or JNDI. The last line is only needed for the built-in thread context handling outside of an application server, as explained above. The rest of the code is the same everywhere, in all deployment situations. Usually you would unbind and close the Session in some kind of interceptor and encapsulate the last line in some system class that knows when request processing completes. The two lines of code saving an item are equivalent; the purpose here is to show that you can call getCurrentSession() as many times and in as many different (DAO) classes as you like.
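The interceptor idea boils down to a try/finally around request processing. A minimal sketch, with a hypothetical helper class; in a web application this would typically live in a servlet filter:

```java
public class RequestWrapper {

    // Runs one request's data access work, guaranteeing that the
    // cleanup step (e.g. ThreadLocalSessionContext.unbind().close()
    // in a non-JTA environment) executes even if the work throws.
    public static void run(Runnable work, Runnable cleanup) {
        try {
            work.run();
        }
        finally {
            cleanup.run();
        }
    }
}
```

The application code then never touches the unbind/close call directly; only the wrapper, which knows when request processing completes, does.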

For all of you who don't want to build Hibernate from CVS to try this new feature: I've prepared an updated CaveatEmptor release that includes a snapshot of Hibernate 3.1 from CVS, updated HibernateUtil, and updated DAO/unit test classes. The other Hibernate documentation (and the popular Wiki pages about Session handling) will be updated later, when an actual 3.1 release is available.

Hibernate 3.0 presentations

Posted by    |       |    Tagged as Hibernate ORM

I recently spoke at TheServerSideJavaSymposium and at the New England JUG. My presentations, which cover some of the new ideas implemented in Hibernate 3.0, are now online:

I find it quite hard to talk about HB3, because there is simply so much new stuff, and a lot of it is somewhat obscure. It's really difficult to communicate what a big step forward this is for us.

back to top