In Relation To Discussions

Hibernate Community Newsletter 2/2016

Tagged as Discussions Hibernate ORM

Starting this year, we are hosting a series of articles focused on the Hibernate community. We share blog posts, forum and StackOverflow questions that are especially relevant to our users.

Articles

A very interesting project is Apache Trafodion, which aims to build a transactional SQL engine on top of Hadoop. We’d like to thank the Trafodion team for writing a Hibernate Dialect, thus making it possible to access Hadoop from a Hibernate data access layer. Give it a try and let us know what you think.

Thorben Janssen wrote an article about calling native SQL queries from JPA.

Romain Manni-Bucau demonstrates how you can integrate JPA pagination with Java 8 Streams.

Michael J. Simons, who gave us the idea of writing the JPA test case templates, describes the EuregJUG site architecture, which, along with other open-source frameworks, makes use of Hibernate for accessing data.

Hibernate Community Newsletter 1/2016

Tagged as Discussions Hibernate ORM

Happy New Year, everyone!

Starting this year, we are going to host a series of articles focused on the Hibernate community. We are going to share blog posts, forum and StackOverflow questions that are especially relevant to our users.

Articles

Thorben Janssen, who regularly writes about JPA and Hibernate topics, has summarized the most typical logging configurations in a Hibernate Logging Guide.

For our Portuguese readers, Rafael Ponte has written a very interesting article about bidirectional relationships and the caveats of not properly synchronizing both ends of a bidirectional association.

One topic I have always wanted to investigate in great detail is the aggressive connection release behavior employed for JTA transactions. For high-performance data access platforms, it’s worth checking whether the Java EE application server can operate successfully even without aggressive connection release.
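For the curious, here is a minimal sketch of what turning aggressive release off looks like (this assumes Hibernate’s standard hibernate.connection.release_mode setting; check the value against your application server’s JTA requirements before changing it):

hibernate.connection.release_mode=after_transaction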

Building the Hibernate blog with Docker on Windows

Tagged as Discussions

For my first post, I’d like to share the experience of running the in.relation.to blog on my Windows machine.

All the blog content is available on GitHub, and you can run practically the whole site in your local environment.

The Hibernate blog is built with awestruct from Asciidoctor files, and getting all the Ruby gems in place is definitely not a walk in the park. To make matters worse, I’m running a Windows machine, and all these Ruby gems are tightly coupled to Linux libraries, as I discovered after several failed attempts with both 64-bit Ruby 2.2.4 and 32-bit Ruby 1.9.3.

Luckily, there is a Dockerfile available, so building a Docker image and running it in a dedicated container can tackle the Ruby gem hell. Building the Docker image was fine, but running it was a three-hour hackathon for both Emmanuel and me.

Docker images are immutable, and all changes are wiped out once the container is terminated. Instead, we want the GitHub repository to be mirrored in the Docker container, so all changes are persisted even after shutting down the Docker machine. This is done by mounting a host folder into the Docker container when running the Docker image. In our case, the mounted directory is the GitHub repository, mirrored inside the currently running Docker container.

Once the image is built, we need to run this command from within the GitHub repository folder:

docker run -t -i -p 4242:4242 -v `pwd`:/home/dev/in.relation.to hibernate/in.relation.to

This doesn’t work on Windows because Docker needs the OS paths to be prefixed with another slash.

So this command must be changed to:

docker run -t -i -p 4242:4242 -v '/'`pwd`:/home/dev/in.relation.to hibernate/in.relation.to

After running it, the mounted folder was just empty. We noticed that the Docker image ran properly without the GitHub folder mounting part, so the mounting process was the culprit.

After all sorts of hacks, Emmanuel had the idea of checking the VirtualBox shared folders, and, by default, only the C:\Users directory is being shared. No wonder it was not working all along.

Since all my repositories are on D:\, we thought that adding a new shared path would fix this issue. Well, it didn’t.

Docker must mount these VirtualBox shared folders too, but it only does so for C:\Users. There’s a GitHub issue detailing this behavior, which you can watch if you are interested in this feature.

After moving the checked-out GitHub repository to /c/Users/Vlad/GitHub/in.relation.to, it all worked fine.

Revamped blog

Tagged as Discussions

Welcome to the newly revamped Hibernate and friends blog.

As you can see, we made it look like hibernate.org and we took the opportunity to clean up the tags to make them more useful. But we had other reasons to migrate.

Blog early, blog often

We have been very happy with the Asciidoctor-based writing experience for both our reference documentation and hibernate.org. We are bringing this ease of writing to our blog, and the not-so-secret objective is for us to blog more. So if you don’t see more content here, you can start yelling at us :)

Write once, display anywhere

There is an added bonus to using Asciidoctor for all of our writing: we can move content easily between systems. What starts as a blog can easily become a how-to page on the site or a section in the reference documentation.

Maintenance

We had a few issues with the old infrastructure that were keeping us away from our IDEs a bit too often for comfort.
This blog is a statically generated site built with Awestruct. This means that we can make it scale very easily and with very low maintenance. Nothing beats plain old HTML :)

We did migrate all the old entries and comments from the old blog: fear not, nothing is lost.

If you see any issue, let us know in the comments or by tweet. Oh, and subscribe to our new feed URL.

Hibernate Branches and Releases

Tagged as Discussions

Hibernate has moved to Git (hosted on GitHub) for source control. That has been well documented. I wanted to describe the approach Hibernate takes to branching, versioning, and releasing, as it is something that has come up a number of times, even prior to the move to Git.

Versioning

For versioning, Hibernate follows the OSGi guidelines that are well described here. Here is how we view the components that make up an OSGi version:

  • major - To Hibernate, this denotes major, disruptive changes. Think back to Hibernate2 and the migration to Hibernate3 and that is the type of disruption I am talking about. The move from 3.x to 4.x will be less disruptive in that sense, but there is still a significant change.
  • minor - We sometimes refer to this as a series, so you might hear Hibernate developers talk about the 3.5 series or 3.6 series. Within a series, Hibernate strives for API stability. A change in this component (3.5 to 3.6) denotes some API change, though it’s usually an addition and therefore less disruptive than discussed above.
  • micro - For Hibernate these denote bugfix releases.
  • qualifier - Hibernate uses this component really only for pre-final releases (alphas, betas, etc). Once we get to Final (x.y.0.Final) we begin to increment the micro but continue to use the Final qualifier (x.y.1.Final) for release sorting (and therefore ranging) purposes.
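To illustrate with a representative version number: in 3.6.2.Final, the major is 3, the minor is 6, the micro is 2, and the qualifier is Final.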

Maintenance Branches

Hibernate maintains branches for each (major.minor) series. During the move to GitHub we took the opportunity to drop off the very old branches. So currently there are bugfix branches for 3.2, 3.3, 3.5 and 3.6 (currently the master branch is ongoing 4.0 work). We branch off each series to make it possible to port fixes back to it. As an example: currently the active maintenance branch is 3.6; master is the ongoing work for 4.0. 4.0 has lots of disruptive changes as discussed above (though the same discussion holds true for series changes due to API differences). Having the two branches allows us to work on both code bases simultaneously. We can fix a bug and apply that change to both the 4.0 and 3.6 code bases, yet 4.0 can evolve independently like it needs to. On master, Hibernate has moved to Gradle for its build tool, and at the same time I decided to move to a more standard module directory naming scheme; unfortunately that causes some interesting challenges in trying to backport fixes, which I'll discuss in a separate blog.
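As a quick sketch, applying the same fix to both code bases can look like this in Git (the commit sha here is hypothetical):

git checkout master
#commit the fix on master, say as commit ae397d
git checkout 3.6
git cherry-pick ae397d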

Maintenance Releases

Hibernate performs maintenance releases from the active maintenance branch. Currently, that means we will do maintenance releases from the 3.6 branch. We will continue to do that until 4.0 becomes Final at which point we will stop 3.6 maintenance, make 4.0 the active maintenance branch and move master to 4.1 development. This is the general outline we follow. If you look back, for example, to 3.6 development you can see that illustrated. We kept doing 3.5 maintenance releases up until the time 3.6 went Final.

Time Boxing

The Hibernate team loosely follows a time boxing schedule for releases. Personally I don't believe in rigidity here, especially for maintenance releases. We have become very good at sticking to time boxing for Alpha, Beta and CR releases which is where I see the benefit the most. We have opted for a 2-week time box for Beta and CR releases, which has worked out really really well for us in my opinion. We have done both 2- and 4-week boxes for Alphas; we will continue to fine tune that (like I think 4.0 dev will use 4-week boxes just because there is so much change going on).

Hibernate + Gradle Pointers

Tagged as Discussions

Hibernate 4 (master branch on GitHub) has been switched to use Gradle for its builds. Mainly I just liked the alliteration of Git and Gradle... :)

The Gradle User Guide is obviously a good reference. Most of what we use gets introduced by the JavaPlugin.

gradle -t is a handy thing to know. It will tell you all the main tasks available in a project. Note, however, that it does not reach into subprojects. For example, assuming you are at the root of the Hibernate project checkout, if you run gradle -t you will see nothing about tasks related to compiling, which is obviously essential :) In Gradle, when you run a task against a project, that task request is also passed along to any subprojects. So I'd suggest cd'ing around a bit and doing gradle -t to get a feel for the tasks available. An alternative is to run gradle -t --all, which shows all tasks for that project; just beware that it can get extremely verbose.

That being said, here is a list of the most common tasks you will use:

  1. clean - Deletes the build directory.
  2. build - Assembles (jars) and tests this project.
  3. buildDependents - Assembles and tests this project and all projects that depend on it. So think of running this in hibernate-entitymanager: Gradle would assemble and test hibernate-entitymanager as well as hibernate-envers, because envers depends on entitymanager. See below.
  4. classes - Compiles the main classes.
  5. testClasses - Compiles the test classes.
  6. test - Runs the tests for this project.
  7. jar - Generates a jar archive with all the compiled classes.
  8. uploadArchives - Think Maven deploy.
  9. install - I have also enabled the MavenPlugin throughout the projects, which adds this task. install installs the project jar into the local Maven repository cache (usually ~/.m2/repository), which is important for inter-operating between projects built with Maven and those built with Gradle (otherwise you'd have to push your artifacts to Nexus to share).

A note about build and buildDependents: Gradle, unlike Maven, acts better in the face of inter-module dependencies. In Gradle, if I cd into hibernate-entitymanager and perform a build, Gradle will automatically build hibernate-core for me if it needs to. For build, it stops there. If you also want to recompile things that depend on your changes, you would instead use buildDependents. So from hibernate-entitymanager again, if this time I run buildDependents, it will first build hibernate-core, then hibernate-entitymanager, and then hibernate-envers.
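To make that concrete, here is a minimal sketch of the two invocations, run from within the hibernate-entitymanager module directory:

#builds hibernate-core if needed, then assembles and tests hibernate-entitymanager
gradle build

#additionally assembles and tests hibernate-envers, which depends on hibernate-entitymanager
gradle buildDependents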

You probably don't know it yet, but the Hibernate Core and Hibernate Validator source code is now on Git, and Hibernate Search and co. will likely follow very soon. There are various reasons for the move, but to summarize it: life was not bad under SVN, but it's really great under Git.

Getting Hibernate sources

The public reference Git repository is hosted at GitHub. We are not married to GitHub, but so far we really like the infrastructure they offer. Kudos to them! In any case, moving is one clone away; that tends to keep services like that on their toes.

Hibernate Core is here while Hibernate Validator is there.

Contributing

If you want to contribute a fix or new feature, either:

  • use the GitHub fork capability: clone, work on a branch, fork the repo on GitHub (fork button), push the work there and trigger a pull request (pull request button). Also see http://help.github.com/forking and http://help.github.com/pull-requests
  • use the pure Git approach: clone, work on a branch, push to a public fork repo hosted somewhere, and trigger a pull request with git pull-request
  • provide a good old patch file: clone the repo, create a patch with git format-patch or diff, and attach the patch file to JIRA (a minimal sketch follows this list)
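For the patch-file route, a minimal sketch (assuming your work sits on a topic branch named HHH-XXX, as recommended further down):

git checkout HHH-XXX
git format-patch master

This writes one patch file per commit made since master, ready to attach to JIRA.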

Our preference is GitHub over pure Git over patches:

  • GitHub lets us comment on your changes in a fine grained way
  • Git generally leads to more small commits that are easier to follow
  • a big fat patch of 100k is always depressing and can't be updated easily

If you still want to do it the old way and provide a patch file, that's OK too.

Various tips on Git

While in no way comprehensive, here is a collection of useful tips for using Git.

Read a book on it

The time will be worth it. I particularly enjoyed Pro Git. It's an awesome book and very practical. It has free HTML and EPUB versions (buying the dead-tree version is recommended to repay the author).

Cloning

Prefer the git protocol when cloning over http (so say the experts); at the very least, it will be much faster. Cloning the repo from GitHub took me less than 3 minutes.

#for people with read/write access
git clone git@github.com:hibernate/hibernate-core.git

#for people with read-only access
git clone git://github.com/hibernate/hibernate-core.git

This will create a remote link named origin. I usually tend to rename it to reflect what it really is.

git remote rename origin core-on-github

Note that you can attach more than one remote repository to your local repository

git clone git://github.com/hibernate/hibernate-core.git
git remote rename origin ref-core-github
git remote add manu-core-github git@github.com:emmanuelbernard/hibernate-core.git
git fetch manu-core-github

Pushing and pulling

To initially get a copy of a remote branch locally, do

git checkout -b 3.6 origin/3.6

When pulling a branch, it's best to keep the same name between the origin repo and your clone. Many syntaxes have shortcuts in this situation (are you listening, Hardy? ;) ). You can then do

git push origin master

If you happen to have renamed the branch master to say local-master locally, do

git push origin local-master:master

To delete a branch on a remote repository, you push an empty one

git push origin :branch-to-remove

Branches

Always work on a topic branch and merge your work when you are done.

git checkout master
git checkout -b HHH-XXX
#hack commit hack commit

Don't forget to pull from your master and rebase this topic branch before proposing a pull request.
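A minimal sketch of that sync (assuming the remote is still named origin):

git checkout master
git pull origin master
git checkout HHH-XXX
git rebase master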

There are many advantages to this, one of them being that you can work on many different things in parallel and sync them with the reference master easily.

Likewise, if you want to share work with somebody from the Hibernate team, push to the public repo and open the pull request for your topic branch. Make sure your topic branch is above the most recent master.

Commits

Prefer small commits; they will be more readable and are much less likely to fail on merge.

Write good comments: one short line including the issue number at stake and a summary, followed by a blank line and a more detailed explanation if needed.

HHH-XXX Fix NPE on persist

Fix stupid bug by Gavin that led to an NPE when persisting objects with components

Merge or rebase

It's a debate, but here is my take on it.

  • Primitive 0: DO NOT rebase a branch that you have shared publicly (unless you know people won't use it or you wish them harm).
  • Primitive 1: Prefer rebase over merge, assuming it does not break Primitive 0.

This is particularly relevant when you trigger a pull request: rebase atop the branch you are branching off of (usually master). Your work will be easier to read and integrate.

Rebase puts changes from the branch you forked from (say master) below the new commits you have made (say on HHH-XXX), and thus keeps the history linear.

git checkout HHH-XXX
git rebase master

While we are at it, rebasing also lets you rewrite your commit history to clean up comments or merge some commits together (aka squashing):

git rebase -i HEAD~6 (go back 6 commits in time)

Backporting: cherry-picking

Git made backporting fun.

Say you have made some commits on master and the SHA-1s of the commits are ae397d... and 87ab342...

You can apply them on an older branch very easily.

git checkout 3.5
git cherry-pick ae397d
git cherry-pick 87ab34

Bam, you're done. Now did I tell you to do small commits? That's also for the backporters.

Aliases and configuration

Once you're fed up with typing longish command lines, use aliases. I've put a copy of my ~/.gitconfig file below, in case people want to copy some settings, including aliases.

~/.gitconfig
[user]
     name = Redacted
     email = redacted@redacted.com
     signingkey = id_key.pub
[core]
     editor = open -nW -a Smultron
[merge]
     tool = opendiff
[color]
     ui = auto
[color "branch"]
     current = yellow reverse
     local = yellow
     remote = green
[color "diff"]
     meta = yellow bold
     frag = magenta bold
     old = red bold
     new = green bold
[color "status"]
     added = yellow
     changed = green
     untracked = cyan
[github]
     user = redacted
     token = redacted
[alias]
     co = checkout
     undo = reset --hard
     cb = checkout -b
     br = branch
     cp = cherry-pick

Tools

If you use Mac OS X, GitX is a fantastic tool. In particular, you can very easily play with interactive staging and commit only some parts of a file in a couple of clicks. A typical use case is to separate the commit of a typo from the commit of a core feature. Some people like the built-in GitK GUI.

Command completion support via git-completion.bash is pretty good if you are a command-shell type of person. The script is available in the Git project sources and in other places (including Fedora). Pro Git has some more information on it.

Various

You can also read this blog entry, which has some more info for people moving from SVN to Git.

This entry is too long already, feel free to add your tips in the comments.

JSF 2.0 CookBook by Anghel Leonard review

Tagged as Discussions JSF

Overview

Actually, I’m a fan of this kind of book, as in my opinion they are useful for both beginner and experienced developers. It’s always good to have one at hand, to be able to review all the possible solutions for a concrete case and start prototyping quickly. So that’s why I accepted with pleasure Packt Publishing’s proposal to review JSF 2.0 CookBook by Anghel Leonard.

I’ve finally finished reading it, and now I want to share my opinion about its advantages and problematic parts. I’ll start with a quick overview of common points and then go through the whole book, highlighting the recipes I really liked and proposing some kind of errata for other ones.

Positives

The book has a really good structure for its recipes! I’m able to open any recipe and just use it, without going through overviews about conventions and environment info first. Every recipe contains a description of the technologies and tools used, points us to an archive with the complete application, and has good explanations of the key points. Many of the JSF world’s FAQs are covered well in the book, and most of them can be handled by just copying the sample code from the book’s applications.

I also need to highlight that, in most cases, all the alternative implementations of the same use case are covered. That allows readers to choose the way most convenient for them!

And I believe that, in general, the book is really worth a place on your shelf, because it can answer many common questions, or at least provide some pointers on where to look for the answer.

Some criticism

Well, there is always some room for critical words. But let that criticism not discourage the author or those who plan to read the book. I just want to highlight some points which I believe will be useful to the author in the future, and which will help readers get through some problems more easily.

Most of the examples are clean and useful, but sometimes the author uses a plain “hello world” style for complex cases, and the developer could miss the point or underrate the feature’s importance. That is the main problem of some recipes. And I’m not sure that recipes with no code samples at all (a few are present) should be in such a book.

The author uses JSP throughout the entire book. However, it’s deprecated in 2.0 and was not really popular in 1.2 after Facelets came onto the scene. It’s not a big problem, for sure, but in some cases important info is missing. For example, the custom component recipes talk about TLD descriptors but say nothing about the taglibs needed for the components.

So let’s go through the book briefly. Again, the critical words that appear are only pointers for readers to pay closer attention, and for the author to consider in the future.

Going through

Now let's look a bit closer at the book's content.

Conversion/Validation chapters

Both chapters are described really well and will be useful for developers who need a quick start with that JSF area. The coolest points, which I like to see included in a JSF cookbook as they are asked about a lot across the community: converters with parameter passing, validation using regular expressions (with a useful list of common expressions), and JSR 303 recipes for validator groups and for calling the validation from your Java code.

Some Problems
  1. For better understanding, the RichFaces bean validator recipe’s @NotEmpty case could be slightly extended with the information that, in general, JSF 1.2 does not validate empty inputs at all. So it needs to be mentioned that this will work only with the RichFaces framework (which provides extensions of the standard input renderers to support null-value validation). Not just to say how cool it is to use RichFaces :) but to make sure developers will realize where the problem is if they try the validation without RichFaces and it stops working.
  2. The author missed highlighting the difference between BeanValidator and AjaxValidator in RichFaces; they are described as completely similar. Most importantly, it should be mentioned that AjaxValidator is a combination of Ajax Support and Bean Validator.

File Management

This chapter provides a good overview of all the component libraries that provide solutions for this topic. All the samples are covered well and could be picked up into a real application as-is. The only minor problem is the lack of samples for solutions that use no additional libraries (download, export).

Security

The recipe contains a good overview of JSF security and Acegi. But in my opinion the chapter lacks some basic information and could be slightly extended with pure JSF solutions.

Custom Components

This chapter is the one that raised the most questions for me. That’s sad, as custom component creation is one of the most important topics in JSF, so I paid a lot of attention to this chapter. The overview is short, really clear, and looks really good as an introduction for the beginner... However, the screenshot of the lifecycle contains a basic mistake: the “encoding” and “decoding” labels were put on the wrong phases, opposite to the places where they should be.

“EmailInput” does not look like the best example of a custom component. As written in all the classical JSF books and resources: “before creating a new component, double-check that you can’t achieve the result via some composition”. Even considering that the author probably wanted to give a really simple overview, the question remains: why did he need to override the renderer methods of the input? The standard input renderer should be enough there, and overriding it is a way to confuse developers, who could decide that extending components always requires them to override base functionality.

“ImageViewer”: again, maybe it would have been better to extend the graphicImage component instead of writing one from scratch. And the code does not look really clean, especially in the “adding Ajax capabilities” section, where the new JS is not listed normally but only included in the renderer as an escaped String.

“ProxyId library”: I have to admit that I had never heard about this interesting extension. But that is probably related to the fact that RichFaces already provides the rich:clientId, rich:element, rich:findComponent, and other functions to perform different operations on components using their short ids. Not a problem with the section, but a note for me to check (the author does not cover it): it would be interesting to know how that proxyId works in iteration components.

“Accessing resources from custom component”: from my point of view, this is the most useless recipe in the chapter... We should not assume that a JSF beginner could understand the difference between the approaches from just brief descriptions. For this section to become a recipe, it should provide the code and explanations based on it.

“RichFaces CDK”: first, some small notes:

  • For those reading the book only now: the Maven settings have changed since we switched to the JBoss Nexus repository.
  • The author should state explicitly that this shows the CDK from RichFaces 3.3.3 in action, as in 4.x it is completely redesigned and not compatible with the old one.

“Getting Ready” in “RichFaces CDK” contains info on manual folder and pom.xml creation; however, this could be done with an archetype, as written in our CDK reference. In “How to do it”, asking the user to type the command from the screenshot does not look good, at least for a big three-row command :) The rest of the recipe is, in general, explained well. A minor note: the project could easily be created from an archetype instead of manual sample creation.

The “Composite Components” section explains the process really well and provides short but good samples.

“Mixing dojo and JSF”: in general, the recipe will work well for somebody who needs to get an idea of how to use dojo widgets in JSF components. But I’m confused by the fact that the author proposes to output just the attributes needed by dojo from the component, while including all the resources on the sample page. That does not look like a very maintainable solution.

AJAX in JSF

“DynaFaces”: a recipe which, unfortunately, is not described well in my opinion. Just “drag and drop some components and then add an Ajax transaction” is not enough to understand who might need this recipe and why they would need DynaFaces. What makes it still useful is the self-descriptive application archive.

The “Tomahawk suggestAjax” and “A4j Ajax” recipes are clean and well described.

The “Writing reusable AJAX components” recipe contains a really weird issue: there is no Ajax, despite it being mentioned in the section name :) It is just a common spinner component optimized for the usage of multiple instances on the page.

The main issue of the chapter, in my opinion, is the lack of recipes for really common tasks... E.g., how to perform optimized Ajax editing of huge tables, how to organize wizards, how to lazy-load data after the page is rendered, and some more... Simple cases are useful for sure, as they show the basics... But developers have to implement much more complex cases in real enterprise applications, and those questions appear on different community forums continuously.

Localization

The chapter covers all the main cases a developer could need, and the recipes mostly look clean.

Let’s highlight some samples there. “Accessing message resources from a class” is a pretty good recipe and looks clean once you find the most important part, the helper class that obtains the localized String. A minor problem: it contains no result screenshots or an actual properties listing, so it’s not really easy to see what the user will get as a result; you need to start the application from the examples archive to check. “Selecting a timezone in JSF 2.0” is a good pointer for the reader to possible problems: even without a recipe, the author at least pointed developers to the fact that they should pay attention and check for additional information. The “Displaying Arabic characters” recipe again uses JSP only. But what about Facelets? There are other tricks out there, and that is the knowledge developers need more.

JSF Images, CSS and JS

“Injection of CSS”: are we really talking about styling here? So why do I see only dry source code and not even a single image of the beautiful page created? :) It may not be easy for a beginner to imagine what the author achieved there (a beginner might not know the rules for applying those attributes to the component) without going to the examples archive again.

“Dynamic CSS”: I believed that here I would see how to use EL in CSS stylesheets :) By the way, a frequently asked use case that could be shown here is highlighting table cells/rows using styles returned according to the row model type (e.g., Important or LowPriority). But the sample there unfortunately looks more static.

“JavaScript injection”: a simple and clean example that gives some ideas of how I could apply my JS knowledge in JSF. Only the “working with JSF hidden fields” section confuses me, as there is nothing about hidden fields (I actually expected h:inputHidden usage). “Passing parameters” could also become a really useful section, but the examples miss some more real usage cases. “Opening popups using JSF and JS” looks good and answers the initial question really well.

“Rss4jsf”: thanks for the pointer :) I had missed that framework and plan to look through it now. It looks clean and easy to use. “Resources in JSF 2.0”: it would be good to have a few words about access from Java objects, but the proposed descriptions and samples of usage in the views look clean and self-explanatory.

JSF Managing and testing

JSFUnit: I want to highlight the great overall descriptions, good samples, and the list of the most-used APIs. Only the charts usage is not really related to testing, and in general contains no recipe, just a pointer to the library site... But that does not really make the whole chapter less useful.

Facelets

Almost nothing to say here, but that's because the chapter has no problems in general and is well written :)

I only want to highlight the “composition component for JSF 2.0” section. It’s a really good sample. The main problem: for some reason (seemingly because of the section name), I expected it to be about composite components, especially considering that the author mentions JSF 2.0, while it’s actually about custom tags from 1.2. But once I realized that it is just a custom tag recipe and not an actual composite component, I re-read it and have to admit that it’s pretty good for that topic.

“Passing action” is really good to see there. It’s the weirdest issue in Facelets 1.x and the most frequent question from those creating custom facelets.

JSF 2.0

A good and useful chapter again, so let’s just look at some concrete samples to which I want to draw readers’ attention.

Why do we need Pretty Faces to create bookmarkable URLs in 2.0? It’s a JSF 2.0 core feature; this recipe just seems placed in the wrong chapter. It was good for 1.2, when GET requests were not supported out of the box. Later we find a section about the JSF 2.0 navigation enhancements and view parameter usage. In general, the overview of them is written fine; just the already-mentioned problem: it misses a real case to show its coolness (it could be put to great use in the common case of product-list to product-details navigation).

Mixing JSF and other technologies

“Seam”: the configuration overview contains all the general pointers to set up Seam. However, in my opinion, to try Seam the reader needs more information or real samples. And wow! It’s the first write-up about Seam (even considering how short it is) that doesn’t talk about Conversations at all! :)

“Mixing JSTL and JSF”: it’s unfortunately not about mixing, but about replacing h:dataTable with c:forEach. And I do not see the advantages; why should I use it?

“Integration with Hibernate” and “Mixing Spring and JSF”: I think this kind of content is just not for a cookbook. Being the guy who needs to learn something new from real kick-start samples, I still don’t see how to do at least something simple with these.

“JPA”: finally I’m seeing a sample, and it’s cool that I could at last check how to actually reach the DB in my JSF action. It’s sad that the login action is not implemented, but in general I understand how to proceed with that. So it seems the most useful recipe in that chapter.

Configuring JSF related technologies

There is some lack of versioning information; e.g., only RichFaces 3.3 and older versions require the Filter. But in general, such a section could be useful for quickly checking the settings required for the libraries’ usage.

Conclusion

Actually, my conclusion was already written at the beginning. I believe that the book will find a place on your shelf, and that you will check it from time to time when you need to implement some concrete case quickly. All the problematic parts listed are just pointers for the author for the future, and I believe he can successfully continue his work creating useful books for JSF community developers.

Get updates of this blog on Twitter.

RFC : XSD validation in both JDK 1.5 and 1.6

Tagged as Discussions

In Hibernate there is a particular branch of logic where we need to parse and validate an org.xml.sax.InputSource that might represent either a Hibernate mapping (hbm.xml) file, a 1.0-compliant orm.xml file, or a 2.0-compliant orm.xml file. Hibernate mapping files are defined by a DTD, and both versions of the orm.xml files by an XSD. Currently the code builds a SAXReader with both DTD and Schema validation enabled and tries to read in the source. It first maps Schema validation to the 2.0 version of the XSD; if an error occurs, it then re-parses, mapping Schema validation to the 1.0 version of the XSD.

Now I am not an XML guru, but this seemed wasteful to me. But try as I might, I could not find a way to say that we need to resolve the XSD to one local file if the root element defines version=2.0 as an attribute, versus another if it defines version=1.0. Really, I guess it's a matter of conditionally resolving the schemaLocation. Anyone know if this is possible?

My next thought was to leverage the javax.xml.validation.Validator contract added in JDK 1.5. So basically, I would enable DTD validation of the document during parse and simply parse the document initially. Then I peek at the root element to see if the document is a Hibernate mapping or an orm.xml. If it is an orm.xml, I then check its version attribute, load a Validator based on the proper XSD, and do a Validator.validate( new DOMSource(...) ). First, due to the separate parse and validate steps, just how much slower will this be?
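Here is a minimal sketch of that parse-then-validate idea (the file name and XSD resource paths are hypothetical, DTD validation is omitted for brevity, and this is not the actual Hibernate code):

import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.w3c.dom.Document;

public class OrmXmlValidation {
    public static void main(String[] args) throws Exception {
        // step 1: parse the document once
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse("orm.xml"); // hypothetical file

        // step 2: peek at the root element's version attribute to pick the right XSD
        String version = doc.getDocumentElement().getAttribute("version");
        String xsd = "2.0".equals(version) ? "/orm_2_0.xsd" : "/orm_1_0.xsd"; // hypothetical local copies

        // step 3: validate the already-parsed DOM against the chosen Schema
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new StreamSource(OrmXmlValidation.class.getResourceAsStream(xsd)));
        Validator validator = schema.newValidator();
        validator.validate(new DOMSource(doc));
    }
}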

Also, I had a very irksome issue with this approach anyway. In my tests I created a document that is valid according to the 2.0 schema, but I identified the version as 1.0 and used the 1.0 version of the Schema. When running in (Sun) JDK 1.6 the test was successful in that the validator did in fact complain; but on (Sun) JDK 1.5 the validator simply did not complain at all. 1.5 and 1.6 appear to be using different internal SchemaFactory implementations. 1.6 used com.sun.org.apache.xerces.internal.jaxp.validation.XMLSchemaFactory, while 1.5 used com.sun.org.apache.xerces.internal.jaxp.validation.xs.SchemaFactoryImpl.

Perhaps I am just doing something wrong? The code can be seen at https://fisheye2.atlassian.com/browse/Hibernate/core/trunk/core/src/main/java/org/hibernate/util/xml/MappingReader.java?r=20321#l118 (it's the commented-out code).

Assuming I did not make a mistake, what is the best way around this?

A more concise way to generate the JPA 2 metamodel in Maven

Tagged as Discussions JPA

The JPA 2 metamodel is the cornerstone of type-safe criteria queries in JPA 2. The generated classes allow you to refer to entity properties using static field references, instead of strings. (A metamodel class's name is the fully-qualified class name of the entity class it matches, followed by an underscore (_).)
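For example, given a hypothetical Customer entity, the annotation processor generates a matching Customer_ class (this is a sketch of the generated shape, not the processor's exact output):

// Customer.java - a hypothetical entity
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Customer {
    @Id Long id;
    String name;
}

// Customer_.java - generated by the annotation processor into the same package
import javax.persistence.metamodel.SingularAttribute;
import javax.persistence.metamodel.StaticMetamodel;

@StaticMetamodel(Customer.class)
public abstract class Customer_ {
    public static volatile SingularAttribute<Customer, Long> id;
    public static volatile SingularAttribute<Customer, String> name;
}

// type-safe usage in a criteria query (cb is a CriteriaBuilder, root a Root<Customer>):
// cb.equal(root.get(Customer_.name), "Acme")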

It sounds promising, but many people using Maven are getting tripped up trying to get the metamodel generated and compiled. The Hibernate JPA 2 Metamodel Generator guide offers a couple of solutions. I've figured out another, which seems more elegant.

Just to refresh your memory of the problem:

  1. Maven compiles the classes during the compile phase
  2. The Java 6 compiler allows annotation processors to hook into it
  3. Annotation processors are permitted to generate Java source files (which is the case with the JPA 2 metamodel)
  4. Maven does not execute a secondary compile step to compile Java source files generated by the annotation processor

I figured out that it's possible to use the Maven compiler plugin to run only the annotation processors during the generate-sources phase! This effectively becomes a code generation step. Then comes the only downside. If you can believe it, Maven does not have a built-in way to compile generated sources. So we have to add one more plugin (build-helper-maven-plugin) that simply adds an additional source folder (I really can't believe the compiler plugin doesn't offer this feature). During the compile phase, we can disable the annotation processors to speed up compilation and avoid generating the metamodel a second time.

Here's the configuration for your copy-paste pleasure. Add it to the <plugins> section of your POM.

<!-- Compiler plugin enforces Java 1.6 compatibility and controls execution of annotation processors -->
<plugin>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>2.3.1</version>
   <configuration>
      <source>1.6</source>
      <target>1.6</target>
      <compilerArgument>-proc:none</compilerArgument>
   </configuration>
   <executions>
      <execution>
         <id>run-annotation-processors-only</id>
         <phase>generate-sources</phase>
         <configuration>
            <compilerArgument>-proc:only</compilerArgument>
            <!-- If your app has multiple packages, use this include filter to
                 execute the processor only on the package containing your entities -->
            <!--
            <includes>
               <include>**/model/*.java</include>
            </includes>
            -->
         </configuration>
         <goals>
            <goal>compile</goal>
         </goals>
      </execution>
   </executions>  
</plugin>         
<!-- Build helper plugin adds the sources generated by the JPA 2 annotation processor to the compile path -->
<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>build-helper-maven-plugin</artifactId>
   <version>1.5</version>
   <executions>      
      <execution> 
         <phase>process-sources</phase>
         <configuration>
            <sources>
               <source>${project.build.directory}/generated-sources/annotations</source>
            </sources>
         </configuration>
         <goals>
            <goal>add-source</goal>
         </goals>
      </execution>
   </executions>
</plugin>

The metamodel source files get generated into the target/generated-sources/annotations directory.

Note that if you have references to the metamodel across Java packages, you'll need to filter the annotation processor to only run on the package containing the entity classes.

We'll be prototyping this approach in the 1.0.1.Beta1 release of the Weld archetypes, which should be out soon.

Bonus material: Eclipse configuration

While I'm at it, I might as well show you how I enabled the JPA 2 metamodel generation in Eclipse. (Max may correct me. He's the authority on Eclipse tooling, so listen to what he says).

Start by adding the following dependency to your POM:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-jpamodelgen</artifactId>
   <version>1.0.0.Final</version>
   <scope>provided</scope>
</dependency>

Then, populate the .factorypath file at the root of your project with the following contents:

<factorypath>
    <factorypathentry kind="PLUGIN" id="org.eclipse.jst.ws.annotations.core" enabled="true" runInBatchMode="false"/>
    <factorypathentry kind="VARJAR" id="M2_REPO/org/hibernate/hibernate-jpamodelgen/1.0.0.Final/hibernate-jpamodelgen-1.0.0.Final.jar" enabled="true" runInBatchMode="false"/>
    <factorypathentry kind="VARJAR" id="M2_REPO/org/hibernate/javax/persistence/hibernate-jpa-2.0-api/1.0.0.Final/hibernate-jpa-2.0-api-1.0.0.Final.jar" enabled="true" runInBatchMode="false"/>
</factorypath>

Refresh the project in Eclipse. Now right click on the project and select:

Properties > Java Compiler > Annotation Processing

Enable project specific settings and enable annotation processing. Press OK and OK again when prompted to build the project. Now, Eclipse should also generate your JPA 2 metamodel.

Happy type-safe criteria querying!
