
JSF 2.0 Cookbook by Anghel Leonard review

Tagged as Discussions JSF

Overview

Actually I’m a fan of this kind of book; in my opinion they are useful for beginners and experienced developers alike. It’s always good to have one at hand so you can review all the possible solutions for a concrete case and start prototyping quickly. That’s why I gladly accepted Packt Publishing’s proposal to review JSF 2.0 Cookbook by Anghel Leonard.

I’ve finally finished reading it and now want to share my opinion about its advantages and problematic parts. I’ll start with a quick overview of general points and then go through the whole book, highlighting the recipes I really liked and proposing a kind of errata for others.

Positives

It has a really good structure for recipes! I can open any recipe and use it immediately, without first wading through overviews of conventions and environment info. Every recipe describes the technologies and tools used, points to an archive with the complete application, and explains the key points well. Many of the JSF world’s FAQs are covered well in the book, and most answers can be picked up by simply copying the sample code from the book’s applications.

I also need to highlight that in most cases all the alternative implementations of the same use case are covered, which lets the reader choose the most convenient approach!

And in general, I believe that the book is really worth a place on your shelf: it can answer many common questions, or at least provide some pointers on where to look for the answer.

Some criticism

Well, there is always room for some critical words. But this criticism should not discourage the author or anyone planning to read the book. I just want to highlight some points which I believe will be useful to the author in the future, and which will help readers get through a few problems more easily.

Most of the examples are clean and useful, but sometimes the author sticks to “hello world” style even for complex cases, so the developer could miss the point or underrate a feature’s importance. That is the main problem of some recipes. And I’m not sure that recipes with no code samples at all (there are a few) belong in such a book.

The author uses JSP throughout the entire book. However, JSP is deprecated in JSF 2.0 and was not really popular in 1.2 after Facelets came onto the scene. That’s not a big problem in itself, but in some cases important information goes missing. For example, the custom component recipes talk about TLD descriptors but say nothing about the Facelets taglibs needed for the components.

So let’s go through the book briefly. Again, any critical words that appear are only pointers for readers to pay closer attention, and for the author to consider in the future.

Going through

Now let's look a bit closer at the book's content.

Conversion/Validation chapters

Both chapters are written really well and will be useful for developers who need a quick start in this JSF area. The coolest points, which I was glad to see in a JSF cookbook since they come up often in the community: converters with parameters, validation using regular expressions (with a useful list of common expressions), and JSR 303 recipes covering validator groups and calling validation from your Java code.

Some Problems
  1. For better understanding, the @NotEmpty case in the RichFaces bean validator recipe could be slightly extended with the information that JSF 1.2 in general does not validate empty inputs at all. So it should be mentioned that the example works only with RichFaces (which provides extensions of the standard input renderers to support null-value validation). The point is not to say how cool it is to use RichFaces :) but to make sure the developer realizes where the problem lies if the validation stops working without RichFaces.
  2. The author missed highlighting the difference between the BeanValidator and the AjaxValidator in RichFaces; they are described as completely similar. Most importantly, it should be mentioned that the ajaxValidator is a combination of Ajax support and the bean validator.

File Management

This chapter provides a good overview of all the component libraries that offer solutions in this area. All the samples are covered well and can be dropped straight into a real application. The only minor problem is the lack of samples showing solutions without any additional libraries (download, export).

Security

The recipe contains a good overview of JSF security and Acegi. But in my opinion the chapter lacks some basic information and could be slightly extended with pure JSF solutions.

Custom Components

This chapter is the one that raised most of my questions. That’s sad, because custom component creation is one of the most important topics in JSF, so I paid close attention to this chapter. The overview is short, really clear, and works well as an introduction for a beginner... However, the screenshot of the lifecycle contains a basic mistake: the “encoding” and “decoding” labels are attached to the wrong phases, opposite to where they should be.

“EmailInput” does not look like the best example of a custom component. As all the classic JSF books and resources put it: before creating a new component, double-check that you can’t achieve the result via composition. Even granting that the author probably wanted to give a really simple overview, the question remains: why did he need to override the renderer methods of the input? The standard input renderer should be enough there, and overriding it may confuse developers into thinking that extending a component always requires re-implementing base functionality.

“ImageViewer” – again, it might have been better to extend the graphicImage component instead of writing one from scratch. And the code doesn’t look really clean, especially in the “adding Ajax capabilities” section, where the new JavaScript is not listed properly but only embedded in the renderer as an escaped String.

“ProxyId library”. I have to admit I had never heard of this interesting extension. But that’s probably because RichFaces already provides rich:clientId, rich:element, rich:findComponent, and other functions for performing operations on components using their short ids. Not a problem with the section, but a note to myself to check: the author doesn’t cover it, but it would be interesting to know how proxyId works in iteration components.

“Accessing resources from custom component”. From my point of view, this is the least useful recipe in the chapter... We should not assume that a JSF beginner can understand the difference between the approaches from brief descriptions alone. To become a real recipe, this section should provide code and explanations based on it.

“RichFaces CDK”. First, a few small notes:

  • For those reading the book only now: the Maven settings changed after we switched to the JBoss Nexus repository.
  • The author should state explicitly that this shows the CDK from RichFaces 3.3.3 in action, as in 4.x the CDK was completely redesigned and is not compatible with the old one.

“Getting Ready” in “RichFaces CDK” describes manual folder and pom.xml creation. However, this can easily be done with an archetype, as described in our CDK reference. In “How to do it”, asking the user to type a command from a screenshot doesn’t look good, at least for a big three-line command :) The rest of the recipe is explained well in general.

The “Composite Components” section explains the process really well and provides short but good samples.

“Mixing Dojo and JSF”. In general, the recipe will work well for somebody who needs to get an idea of how to use Dojo widgets in JSF components. But I’m confused by the fact that the author proposes having the component output only the attributes needed by Dojo while including all the resources on the sample pages themselves. That doesn’t look like a very maintainable solution.

AJAX in JSF

“DynaFaces” – a recipe which unfortunately is not described well, in my opinion. Just “drag and drop some components and then add an Ajax transaction” is not enough to understand who needs this recipe and why they would need DynaFaces. What still makes it useful is the self-descriptive application archive.

The “Tomahawk suggestAjax” and “A4j Ajax” recipes are clean and well described.

The “Writing reusable AJAX components” recipe contains a really weird issue: there is no Ajax, despite the section name :) It’s just a common spinner component optimized for using multiple instances on a page.

The main issue with this chapter, in my opinion, is the lack of recipes for really common tasks: e.g. how to perform optimized Ajax editing of huge tables, how to organize wizards, how to lazy-load data after the page is rendered, and more. Simple cases are useful for sure, as they show the basics, but developers have to implement much more complex scenarios in real enterprise applications, and those questions come up continuously on the various community forums.

Localization

The chapter covers all the main cases a developer could need, and the recipes mostly look clean.

Let’s highlight some samples. “Accessing message resources from a class” is a pretty good recipe and looks clean when you examine the most important part, the helper class that obtains the localized String. A minor problem: it contains no result screenshots or listing of the actual properties, so it’s not easy to see what the user gets as a result; you need to start the application from the examples archive to check. “Selecting a timezone in JSF 2.0” is a good pointer for the reader to possible problems. Even without a full recipe, the author at least pointed the developer to the fact that attention and additional research are needed. The “Displaying Arabic characters” recipe again uses JSP only. But what about Facelets? Other tricks exist there, and that’s the knowledge developers need more.

JSF Images, CSS and JS

“Injection of CSS“ – are we really talking about styling here? Then why do I see only dry source code and not even a single image of the beautiful page created? :) It could be hard for a beginner to imagine what the author achieved there (a beginner may not know the rules for applying those attributes to the component) without going to the examples archive yet again.

“Dynamic CSS“ – I expected to see how to use EL in CSS stylesheets :) By the way, a frequently requested use case that could be shown there: highlighting table cells/rows using styles returned according to the row model type (e.g. Important or LowPriority). Unfortunately, the sample looks rather static.

“JavaScript injection“ – a simple and clean example that gives some ideas on how to apply JS knowledge in JSF. Only the “working with JSF hidden fields” section confuses me, as there is nothing about hidden fields (I actually expected h:inputHidden usage). “Passing parameters“ could also become a really useful section, but the examples lack more realistic usage cases. “Opening popups using JSF and JS“ looks good and answers the initial question really well.

“Rss4jsf“ – thanks for the pointer :) I had missed that framework and plan to look through it now. It looks clean and easy to use. “Resources in JSF 2.0“ – it would be good to have a few words about access from Java objects, but the proposed descriptions and samples of usage in the views look clean and self-explanatory.

JSF Managing and testing

JSFUnit – I want to highlight the great overall descriptions, good samples, and the list of the most-used APIs. Only the charts-usage part is not really related to testing, and in general contains no recipe, just a pointer to the library’s site… But that doesn’t really make the whole chapter less useful.

Facelets

Almost nothing to say – but that’s because the chapter has no problems in general and is well written :)

I only want to highlight the “composition component for JSF 2.0” section. It’s a really good sample. The main problem: for some reason (it seems because of the section name) I expected it to be about composite components, especially considering that the author mentions JSF 2.0, while it’s actually about custom tags from 1.2. But once I realized it’s just a custom tag recipe and not an actual composite component, I re-read it and have to admit it’s pretty good for that topic.

“Passing action” – really good to see covered. It’s the weirdest issue in Facelets 1.x and the most frequent question from people creating custom Facelets components.

JSF 2.0

A good and useful chapter again – so let’s just look at some concrete samples I want to draw readers’ attention to.

Why do we need PrettyFaces to create bookmarkable URLs in 2.0? That’s a JSF 2.0 core feature. The recipe just seems placed in the wrong chapter; it was good for 1.2, when GET requests were not supported out of the box. Later we find a section about the JSF 2.0 navigation enhancements and view parameter usage. In general their overview is written fine; just the already mentioned problem: it lacks a real case to show off its coolness (it could be put to great use in the common product list / product details navigation case).

Mixing JSF and other technologies

“Seam”. The configuration overview contains all the general pointers for setting up Seam. However, in my opinion, to try Seam the reader needs more information or real samples. And wow! It’s the first write-up about Seam (even granting it’s a short one) that doesn’t talk about conversations at all! :)

“Mixing JSTL and JSF”. Unfortunately it’s not about mixing but about replacing h:dataTable with c:forEach. And I don’t see the advantages – why should I use it?

“Integration with Hibernate” and “Mixing Spring and JSF”. I think this kind of content is just not for a cookbook. As a guy who needs to learn something new from real kick-start samples, I still don’t see how to do at least something simple with these.

“JPA”. Finally I see a sample, and it’s cool that I could at last check how to actually reach the database from my JSF action. It’s sad that the login action is not implemented, but in general I understand how to proceed. This seems like the most useful recipe in the chapter.

Configuring JSF related technologies

There is some lack of versioning information: e.g., only RichFaces 3.3 and older versions require the Filter. But in general such a section can be useful for quickly checking the settings required to use the libraries.

Conclusion

Actually, my conclusion was already written at the beginning. I believe the book will find a place on your shelf and that you will check it from time to time when you need to implement some concrete case quickly. All the problematic parts listed are just pointers for the author for the future, and I believe he can successfully continue his work creating useful books for the JSF community.

Get updates of this blog on Twitter

RFC : XSD validation in both JDK 1.5 and 1.6

Tagged as Discussions

In Hibernate there is a particular branch of logic where we need to parse and validate an org.xml.sax.InputSource that might represent either a Hibernate mapping (hbm.xml) file, a 1.0-compliant orm.xml file or a 2.0-compliant orm.xml file. Currently, Hibernate mapping files are defined by a DTD and both versions of the orm.xml files by an XSD. The code builds a SAXReader with DTD and Schema validation enabled and tries to read in the source. It first maps Schema validation to the 2.0 version of the XSD; if an error occurs, it then tries re-parsing, mapping Schema validation to the 1.0 version of the XSD.

Now I am not an XML guru, but this seemed wasteful to me. But try as I might, I could not find a way to say that we need to resolve the XSD to one file (locally) if the root element defines version=2.0 as an attribute versus another if it defines version=1.0. Really, I guess it's a matter of conditionally resolving the schemaLocation. Anyone know if this is possible?

My next thought was to leverage the javax.xml.validation.Validator contract added in JDK 1.5. So basically, I would enable DTD validation of the document during parse and simply parse the document initially. Then I would peek at the root element to see if the document was a Hibernate mapping or an orm.xml. If an orm.xml, I would then check its version attribute, load a Validator based on the proper XSD and do a Validator.validate( new DOMSource(...) ). First, due to the separate parse and then validate steps, just how much slower will this be?
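In code, the idea looks roughly like this (a minimal sketch; the XSD resource names here are illustrative, not Hibernate's actual file layout):

import javax.xml.XMLConstants;
import javax.xml.transform.dom.DOMSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class OrmXmlValidator {
	// assumes 'document' was already parsed once (with DTD validation enabled)
	public void validate(Document document) throws Exception {
		Element root = document.getDocumentElement();
		if ( "entity-mappings".equals( root.getLocalName() ) ) {
			// pick the local XSD based on the version attribute of the root element
			String version = root.getAttribute( "version" );
			String xsd = "2.0".equals( version ) ? "orm_2_0.xsd" : "orm_1_0.xsd";
			SchemaFactory factory = SchemaFactory.newInstance( XMLConstants.W3C_XML_SCHEMA_NS_URI );
			Schema schema = factory.newSchema( getClass().getClassLoader().getResource( xsd ) );
			Validator validator = schema.newValidator();
			validator.validate( new DOMSource( document ) );
		}
	}
}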

Also, I had a very irksome issue with this approach anyway. In my tests I created a document that is valid according to the 2.0 schema, but I identified the version as 1.0 and used the 1.0 version of the Schema. When running in (Sun) JDK 1.6 the test was successful in that the validator did in fact complain; but on (Sun) JDK 1.5 the validator simply did not complain at all. 1.5 and 1.6 appear to be using different internal SchemaFactory implementations. 1.6 used com.sun.org.apache.xerces.internal.jaxp.validation.XMLSchemaFactory, while 1.5 used com.sun.org.apache.xerces.internal.jaxp.validation.xs.SchemaFactoryImpl.

Perhaps I am just doing something wrong? The code can be seen at https://fisheye2.atlassian.com/browse/Hibernate/core/trunk/core/src/main/java/org/hibernate/util/xml/MappingReader.java?r=20321#l118 (it's the commented-out code).

Assuming I did not make a mistake, what is the best way around this?

A more concise way to generate the JPA 2 metamodel in Maven

Tagged as Discussions JPA

The JPA 2 metamodel is the cornerstone of type-safe criteria queries in JPA 2. The generated classes allow you to refer to entity properties using static field references, instead of strings. (A metamodel class's name is the fully-qualified class name of the entity class it matches, followed by an underscore (_).)
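For example, given a hypothetical Order entity (the entity, property and method names here are illustrative), the processor generates an Order_ class that can be used like this:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

@Entity
public class Order {
   @Id Long id;
   String customerName;

   // The generated Order_ metamodel class declares, among others:
   //   public static volatile SingularAttribute<Order, String> customerName;
   // which enables type-safe criteria queries like this one:
   public static CriteriaQuery<Order> byCustomer(EntityManager em, String name) {
      CriteriaBuilder cb = em.getCriteriaBuilder();
      CriteriaQuery<Order> query = cb.createQuery(Order.class);
      Root<Order> order = query.from(Order.class);
      // Order_.customerName is a typed static field; a typo or a type
      // mismatch fails at compile time instead of at runtime
      return query.where(cb.equal(order.get(Order_.customerName), name));
   }
}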

It sounds promising, but many people using Maven are getting tripped up trying to get the metamodel generated and compiled. The Hibernate JPA 2 Metamodel Generator guide offers a couple of solutions. I've figured out another, which seems more elegant.

Just to refresh your memory of the problem:

  1. Maven compiles the classes during the compile phase
  2. The Java 6 compiler allows annotation processors to hook into it
  3. Annotation processors are permitted to generate Java source files (which is the case with the JPA 2 metamodel)
  4. Maven does not execute a secondary compile step to compile Java source files generated by the annotation processor

I figured out that it's possible to use the Maven compiler plugin to run only the annotation processors during the generate-sources phase! This effectively becomes a code generation step. Then comes the only downside. If you can believe it, Maven does not have a built-in way to compile generated sources. So we have to add one more plugin (build-helper-maven-plugin) that simply adds an additional source folder (I really can't believe the compiler plugin doesn't offer this feature). During the compile phase, we can disable the annotation processors to speed up compilation and avoid generating the metamodel a second time.

Here's the configuration for your copy-paste pleasure. Add it to the <plugins> section of your POM.

<!-- Compiler plugin enforces Java 1.6 compatibility and controls execution of annotation processors -->
<plugin>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>2.3.1</version>
   <configuration>
      <source>1.6</source>
      <target>1.6</target>
      <compilerArgument>-proc:none</compilerArgument>
   </configuration>
   <executions>
      <execution>
         <id>run-annotation-processors-only</id>
         <phase>generate-sources</phase>
         <configuration>
            <compilerArgument>-proc:only</compilerArgument>
            <!-- If your app has multiple packages, use this include filter to
                 execute the processor only on the package containing your entities -->
            <!--
            <includes>
               <include>**/model/*.java</include>
            </includes>
            -->
         </configuration>
         <goals>
            <goal>compile</goal>
         </goals>
      </execution>
   </executions>  
</plugin>         
<!-- Build helper plugin adds the sources generated by the JPA 2 annotation processor to the compile path -->
<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>build-helper-maven-plugin</artifactId>
   <version>1.5</version>
   <executions>      
      <execution> 
         <phase>process-sources</phase>
         <configuration>
            <sources>
               <source>${project.build.directory}/generated-sources/annotations</source>
            </sources>
         </configuration>
         <goals>
            <goal>add-source</goal>
         </goals>
      </execution>
   </executions>
</plugin>

The metamodel source files get generated into the target/generated-sources/annotations directory.

Note that if you have references to the metamodel across Java packages, you'll need to filter the annotation processor to only run on the package containing the entity classes.

We'll be prototyping this approach in the 1.0.1.Beta1 release of the Weld archetypes, which should be out soon.

Bonus material: Eclipse configuration

While I'm at it, I might as well show you how I enabled the JPA 2 metamodel generation in Eclipse. (Max may correct me. He's the authority on Eclipse tooling, so listen to what he says).

Start by adding the following dependency to your POM:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-jpamodelgen</artifactId>
   <version>1.0.0.Final</version>
   <scope>provided</scope>
</dependency>

Then, populate the .factorypath file at the root of your project with the following contents:

<factorypath>
    <factorypathentry kind="PLUGIN" id="org.eclipse.jst.ws.annotations.core" enabled="true" runInBatchMode="false"/>
    <factorypathentry kind="VARJAR" id="M2_REPO/org/hibernate/hibernate-jpamodelgen/1.0.0.Final/hibernate-jpamodelgen-1.0.0.Final.jar" enabled="true" runInBatchMode="false"/>
    <factorypathentry kind="VARJAR" id="M2_REPO/org/hibernate/javax/persistence/hibernate-jpa-2.0-api/1.0.0.Final/hibernate-jpa-2.0-api-1.0.0.Final.jar" enabled="true" runInBatchMode="false"/>
</factorypath>

Refresh the project in Eclipse. Now right click on the project and select:

Properties > Java Compiler > Annotation Processing

Enable project specific settings and enable annotation processing. Press OK and OK again when prompted to build the project. Now, Eclipse should also generate your JPA 2 metamodel.

Happy type-safe criteria querying!

The goal of this blog post is to walk you through a Java EE 6 application from a simple, static web page until we have a full-blown stack that consists of the stuff in the list below. I'm calling this stack Summer because after a long, hard winter Spring may be nice but boy, wait until Summer kicks in ;-)

  • CDI (Weld)
  • JSF 2 (facelets, ICEfaces 2)
  • JPA 2 (Hibernate, Envers)
  • EJB 3.1 (no-local-view, asynchronous, singletons, scheduling)
  • Bean Validation (Hibernate Validator)
  • JMS (MDB)
  • JAX-RS (RESTEasy)
  • JAX-WS
  • Arquillian (incontainer-AS6)

We will pack all this in a single WAR. Just because we can (spoiler: in part IV). Notice that apart from the component and testing frameworks, they are all standards? That's a lot of stuff. Fortunately, the appserver already provides most of it, so your app will still be reasonably small.

As for the environment I'm using

  • Eclipse (Galileo SR2)
  • JBoss 6.0 M3
  • Maven 3 (beta1)
  • Sun JDK 6
  • m2eclipse 0.10

This will not be your typical blog post where everything goes well - we will hit bugs. There will be curses, blood and guts and drama, and we will do workarounds and rewrites as we move along. Pretty much what your average day as a software developer probably looks like. I'm also no expert in the technologies I use here, so there are probably things that could be done better. Consider this more of a write-down of my experiences in EE6-land that will probably mirror what others are going through. I will also not point you to links or additional information; I assume that if I say RESTEasy, you can google up more information if you are interested.

And I almost forgot: don't panic.

In the beginning: Project setup

So, let's start things off - go and download the stuff mentioned in the environment if you don't already have it. I'm not going to insult your intelligence by walking you through that (remind me to insult it later). Besides, it's pretty straightforward.

Let's make a new Maven project (File -> New -> Project... -> Maven -> Maven Project). We skip the archetype selection and just make a simple project with group id com.acme, artifact id Greetings, version 1.0.0-SNAPSHOT, packaged as a WAR. Finish the wizard and you should have a nice, perfect project. It will never be this perfect again, as our next step is adding code to it.

Maven tip-of-the-day for Windows users: google up how to change the path to your local repo, as it might end up somewhere under Documents And Settings, which has two effects: the classpath gets huge and there can be problems due to the spaces. Change it to something like c:\java\m2repo
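For example, in the settings.xml that lives in your .m2 directory (create the file if it doesn't exist):

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
	<!-- moves the local repository out of Documents And Settings -->
	<localRepository>c:/java/m2repo</localRepository>
</settings>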

The first thing we notice is that m2eclipse has J2SE-1.4 as the default. How 2002. Besides, that would make using annotations impossible, so let's change it. Edit the pom.xml and throw in

<build>
	<plugins>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-compiler-plugin</artifactId>
			<version>2.3.1</version>
			<configuration>
				<source>1.6</source>
				<target>1.6</target>
			</configuration>
		</plugin>
	</plugins>
</build>

Save and right-click the project root and go Maven -> Update Project Configuration. Aah, that's better.

JSF

Let's wake up JSF. We create a folder WEB-INF in src/main/webapp and throw in a web.xml because no web app is complete without it (enforced by the maven war plugin). OK, actually this can be configured in the plugin but let's keep the web.xml since we'll need it later.

<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"/>

and an empty faces-config.xml next to it

<faces-config xmlns="http://java.sun.com/xml/ns/javaee"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd"
              version="2.0"/>

and in webapp we add a greetings.xhtml like

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
	xmlns:h="http://java.sun.com/jsf/html"
	xmlns:ui="http://java.sun.com/jsf/facelets">
	<h:head>
		<title>
			Greetings
		</title>
	</h:head>

	<h:body>
		<h:outputText value="Hello world"/>
	</h:body>
</html>

Will it blend? I mean, will it deploy? Do a mvn clean package and you should have a Greetings-1.0.0-SNAPSHOT.war in your project's target directory. Throw it into the AS server/default/deploy directory, start up the server and go to

http://localhost:8080/Greetings-1.0.0-SNAPSHOT/faces/greetings.xhtml

The URL is not pretty, but the server port, web context root, welcome files and JSF mappings can all be tuned later; let's focus on technologies and dependencies for now. But wait - at which point did we define the JSF servlet and mappings in web.xml? We didn't. It's automagic for JSF-enabled applications.

EJB and CDI

The next step is bringing in some backing beans - let's outsource our greeting. We make a stateful EJB and use it in CDI

package com.acme.greetings;

@Stateful
@Model
public class GreetingBean 
{
	public String getGreeting()
	{
		return "Hello world";
	}
}

The @Stateful defines a stateful session EJB (3.1 since it's a POJO) and the @Model is a CDI stereotype that is @RequestScoped and @Named (which means the lifecycle is bound to a single HTTP request and it has a name that can be referenced in EL and defaults to greetingBean in this case). But we have a problem - the annotations don't resolve to anything. So we need to pick them up from somewhere(tm). Fortunately we can have all the APIs picked up for us by adding the following to our pom.xml

<dependencies>
	<dependency>
		<groupId>org.jboss.spec</groupId>
		<artifactId>jboss-javaee-6.0</artifactId>
		<version>1.0.0.Beta4</version>
		<type>pom</type>
		<scope>provided</scope>
	</dependency>
</dependencies>

Sun Java API artifacts are a bit amusing since getting hold of them can be a bit tricky. First they publish them in the JSRs and then they treat them like they're top secret. Fortunately Glassfish and now JBoss have started making them available in their repositories (although under their own artifact names, but still)...

We also need to make sure we have set up the JBoss repositories for this according to http://community.jboss.org/wiki/MavenGettingStarted-Users. Have a look at what happened in the project's Maven Dependencies. Good. Now close it and back away. It's getting hairy in there, so better trust Maven to keep track of the deps from now on.

The imports should now be available in our bean so we import

import javax.ejb.Stateful;
import javax.enterprise.inject.Model;

and EL-hook the bean up with

<h:body>
	<h:outputText value="#{greetingBean.greeting}"/>
</h:body>

in greetings.xhtml.

Just as no web application is complete without web.xml, no CDI application is complete without beans.xml. Let's add it to WEB-INF

<?xml version="1.0" encoding="ISO-8859-1"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="
      http://java.sun.com/xml/ns/javaee 
      http://java.sun.com/xml/ns/javaee/beans_1_0.xsd" />

Package and redeploy. We get a warning about encoding when compiling so lets add this to our pom.xml

<properties>
	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

Back to http://localhost:8080/Greetings-1.0.0-SNAPSHOT/faces/greetings.xhtml SUCCESS! No. Wait. A huge stack trace hits you for 300 points of damage. Let's back off on our EJB; there are still some issues with 3.1-style EJBs in WAR-only packaging on AS 6 M3. Remove the @Stateful annotation and it becomes a normal CDI managed POJO. Repackage. Redeploy. Rejoice.

Testing

Testing is hip nowadays, so let's bring in Arquillian. Arquillian is the latest and greatest in EE testing (embedded or in-container). Start using it now. In a year or so when everyone else catches up you can go "I've been using it since Alpha". Add the following property to pom.xml:

<arquillian.version>1.0.0.Alpha2</arquillian.version>

and these deps

<dependency>
	<groupId>org.jboss.arquillian</groupId>
	<artifactId>arquillian-junit</artifactId>
	<version>${arquillian.version}</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>junit</groupId>
	<artifactId>junit</artifactId>
	<version>4.8.1</version>
	<scope>test</scope>
</dependency>

and this profile

<profiles>	
	<profile>
		<id>jbossas-local-60</id>
		<dependencies>
			<dependency>
				<groupId>org.jboss.arquillian.container</groupId>
				<artifactId>arquillian-jbossas-local-60</artifactId>
				<version>1.0.0.Alpha2</version>
			</dependency>
			<dependency>
				<groupId>org.jboss.jbossas</groupId>
				<artifactId>jboss-server-manager</artifactId>
				<version>1.0.3.GA</version>
			</dependency>
			<dependency>
				<groupId>org.jboss.jbossas</groupId>
				<artifactId>jboss-as-client</artifactId>
				<version>6.0.0.20100429-M3</version>
				<type>pom</type>
			</dependency>
		</dependencies>
	</profile>
</profiles>

Maven will probably now download the entire internet for you.

Let's write our first test and place it in the test source folder:

package com.acme.greetings.test;

import javax.inject.Inject;

import org.jboss.arquillian.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ArchivePaths;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.jboss.shrinkwrap.impl.base.asset.ByteArrayAsset;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

import com.acme.greetings.GreetingBean;

@RunWith(Arquillian.class)
public class GreetingTest 
{
	@Inject
	GreetingBean greetingBean;

	@Deployment
	public static JavaArchive createTestArchive() 
	{
		return ShrinkWrap.create("test.jar", JavaArchive.class).addClass(
				GreetingBean.class).addManifestResource(
				new ByteArrayAsset("<beans/>".getBytes()),
				ArchivePaths.create("beans.xml"));
	}

	@Test
	public void testInjection() 
	{
		Assert.assertEquals("Hello world", greetingBean.getGreeting());
	}

}

and then we try it out with mvn test -Pjbossas-local-60. If we have the AS running we can save some time; otherwise the manager will start it automagically. Setting the JBOSS_HOME env variable helps. What happens here is that we use ShrinkWrap to create a deployment which consists of our GreetingBean and an empty beans.xml file (for CDI), and the bean is then injected for use in our tests.

This concludes Part I. In part II we will set up ICEfaces and expand our application and in part III we'll set up JPA. Part IV is for MDB and EJB and part V for adding JAX-RS and JAX-WS for importing and exporting stuff.

Simultaneously Supporting JDBC3 And JDBC4 With Maven : II

Tagged as Discussions

As a follow up to http://in.relation.to/Bloggers/SimultaneouslySupportingJDBC3AndJDBC4WithMaven, I wanted to point out that I uploaded some example projects to the design discussion. There are 2 different approaches as Maven projects and one Gradle project.

Simultaneously Supporting JDBC3 and JDBC4 with Maven

Tagged as Discussions

Per HHH-2412 and its design discussion I have been working on supporting JDBC4 in Hibernate. Initially I had planned on adding this in 3.6, but I am now leaning toward 3.5. Anyway, as outlined on the design wiki, I initially thought to add this support as separate modules. However, I quickly came to the realization that using separate modules would actually require users to make an explicit choice about an extra jar. Especially considering that I could make Hibernate code make this determination programmatically, I really did not like that aspect of using modules here. So I started looking for alternatives.

As I normally do, I started by thinking of just the ideal outside of any build structure or tool. So to my mind the best option here seemed to include compiling JDBC4 support targeting JDK 1.6 and JDBC3 support targeting JDK 1.4 and then bundling all that code into a single jar. The code which decides which feature-set to use uses reflection anyway, so there is no issue with having both in the same jar as long as only the correct classes actually get loaded (and anything else would be a bug in this determination code).
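The determination code itself can stay trivial. A minimal sketch of the kind of reflective check I mean (the class and method names are illustrative, not Hibernate's actual code):

public class JdbcSupportChecker {
	/**
	 * Probes for a type that only exists as of JDK 1.6;
	 * java.sql.Wrapper was introduced with JDBC4.
	 */
	public static boolean isJdbc4Available() {
		try {
			Class.forName( "java.sql.Wrapper" );
			return true;
		}
		catch ( ClassNotFoundException e ) {
			return false;
		}
	}
}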

Well unfortunately it turns out that Maven does not really support such a setup. In the common case I think you'd set up multiple source directories within that main module and run different compilation executions against them with different compiler settings (source/target). This Maven explicitly disallows (see MCOMPILER-91 and its discussion for details).

Two workable solutions were presented to me (workable in that Maven could handle these):

  1. Use a single source directory, but configure 2 executions of the compiler plugin taking advantage of includes/excludes to define what to compile targeting 1.6 and 1.4. The issue here is that this is not a good fit for IDEs.
  2. Split this into 3 modules (one each for JDBC3 and JDBC4 support plus another for commonality to avoid circular dependencies with core). Core would then pull the compiled classes from these 3 modules into its jar. Seems overly complicated to me just to work around a self-imposed limitation of Maven. And sorry, I just don't see how this constitutes a messed up system as indicated in that Maven discussion.

Anyway, I really hope someone has other suggestions on better ways to handle this with Maven. Anyone?

Maven, 2 years after

Tagged as Discussions

Let me preface this by saying up front that, yes, I am fed up with Maven. In the 2 years since I decided to switch Hibernate over to use Maven for its build tool, I don't think Maven has lived up to making builds any easier. Quite the contrary, in fact. Back then, though, there was not really any other option; at that time the ant/ivy combo was just starting to gain traction. Not to mention that I do like the notion of build-by-convention, which Ant just does not provide.

Bugs aside, most of the problems I have with Maven in dealing with Hibernate stem from Maven's assumption that all projects are independently releasable. The problem here is that this extends to sub-projects or modules as well. With Hibernate, the decision to utilize sub-projects was made solely for the purpose of isolating dependencies to various pieces of optional Hibernate functionality. For example, we have a sub-project specific to dealing with c3p0 as a provider of JDBC connections. The dependency in Hibernate on the c3p0 artifact is isolated completely in this module. So as a user you can simply tell Maven that you are using hibernate-core and hibernate-c3p0 and have the transitive dependencies set up correctly. I did not set up these sub-projects so that I could release hibernate-c3p0 independently at some point. I have no desire to go that route, as it causes nothing but confusion IMO. Even if nothing were to change in the classes in hibernate-c3p0, I would much rather have it get bumped to the new standardized version.

So anyway, what are the issues with this assumption? Well, the first is that I cannot simply 'change directory' into a sub-project directory and build it and expect Maven to build the other projects upon which this sub-project depends for me automatically. For example, I cannot 'cd' into the hibernate-c3p0 project directory and run 'mvn compile' and expect Maven to first compile the hibernate-core project for me, even though hibernate-c3p0 depends upon it. This may not sound like a big deal, but Hibernate is a big project with 14 jar-producing projects (not to mention numerous support projects like documentation and distribution-builders). Having to 'cd' around all those different projects just to compile a dependency graph that Maven already knows about seems silly. But again, Maven must do this because of its assumption of project independence.

The other issue has to do with the notoriously bad support for performing releases. This has bitten me every time I have tried to do a Hibernate release with Maven using its release plugin. In fact it just caused a 3-day delay in the Hibernate 3.5.0.Beta-1 release. I know the release plugin is just a wrapper around a few mvn commands you can run yourself (in fact that's how I finally ended the 3-day 3.5.0.Beta-1 release marathon); the issue is again back to Maven's support for multi-project builds. The release plugin is the only current plugin which offers support for managing the numerous version declarations that are required throughout all the sub-projects. So when releasing a project with multiple sub-projects you have 2 options: (1) use the release plugin or (2) (a) update the pom versions manually (b) commit (c) tag (d) update the pom versions manually (e) commit (f) checkout the tag (g) perform the mvn deploy. Sure would be nice if the version management stuff from the release plugin were usable separately.

The other build tools I have been looking at briefly are (1) gradle, (2) buildr and (3) simple-build-tool. Anyone have practical experience with any of these, or suggestions on others we should look at?

jDocBook and resource+content staging

Tagged as Discussions

<queue-voice-of-god>In the beginning there was jdocbook-styles...</queue-voice-of-god>

jDocBook styles were meant to represent common styling elements that could be developed on their own, built into bundles and used as dependencies to DocBook builds using the jDocBook plugin. The contents of the styles (css and images) were staged to a special directory within the DocBook project's target directory for use within the XSLT formatting.

Recently I have been working on extending that concept with an idea from the world of publican. For those not familiar with it, publican is the DocBook build system used by Fedora and many Red Hat related projects. The idea I was looking at bringing into the world of jDocBook is what publican terms brands. These brands really provide two somewhat complementary things:

  1. styling
  2. fallback content

Much like jdocbook-styles, publican brands allow providing css and images that should be used in the output formatting. The big difference is that brands allow for them to be localized to the particular translations. The notion of 'fallback content' allows common pieces of content (DocBook XML) to be developed and then used throughout various documentation projects. For example, the publican brands generally define fallback content for things like legal notices and document conventions. Again, this allows for localization to the various translations in terms of the appropriate POT/PO files.

The localization aspect brings up a question wrt jdocbook-styles and the staging of those resources. Currently the resources within a jdocbook-styles are staged into the target/docbook/staging directory by the jdocbook plugin itself (the later formatting phase simply points to them from there for each translation since they are implicitly shared for all translations). Staging of the brands, however, would require that each translation have its own staging directory (ala target/docbook/staging/en-US, etc). So either the jdocbook-styles could be repeatedly exploded into each translation's staging directory or we could make the move to have localized jdocbook-styles as well.

Initially the idea was to have the jDocBook plugin itself perform all these stagings (especially since it does so already for jdocbook-styles). But today another thought crossed my mind: separate out the notions of staging. The basic principle would be to simplify jDocBook to focus on a core competency, namely taking some DocBook XML, some resources, some XSLT and generating some formatted output. This would allow projects not using styles and not using brands to have a very simple jDocBook setup; essentially it would build directly from the controlled sources. For projects with styles and/or brands we would stage everything to the staging directories and configure jDocBook itself to use the staged directories as its input.

Resources:

  • GNU gettext if you are not familiar with POT/PO files
  • poxml for the tools used to handle gettext in regards to XML files

It's time for Facelets to start using XSD

Tagged as Discussions JSF

I've had this idea for a while now about using XSDs in Facelets templates. I believe we should stop pretending that Facelets templates are XHTML documents and start treating them as unrestricted XML. This would give us the freedom to extend the XML dialect with XML Schema and take full advantage of the type enforcement, syntax recognition, and tooling support that XML Schema provides.

I tried to communicate this idea to the JSF EG at JSFOne, but I fear that I did not articulate it well enough as it never took hold. I want to explain the approach here in detail to see if others agree it has merit. In this entry, I outline a way to take better advantage of Facelets' strengths today and propose a change in JSF 2.1 to support this usage. Right now, all I'm doing is putting the idea out there in the open.

A brief intro to Facelets

Anyone familiar with JSF must have heard about Facelets by now. Facelets is the alternative view handler and view definition language to JSP for JSF, and a far superior one at that. But Facelets is more than just an alternative. I truly believe that Facelets saved JSF, for if it weren't for something elegant like Facelets to come along and give developers a reason to give JSF a chance, they would have abandoned the framework long ago. Facelets really is that significant. As a recognition of its influence, Facelets is set to become the standard page declaration language in JSF 2.0, so if you have been living under a rock, you'll have to come out of hiding soon enough.

What makes Facelets successful

There are two features of Facelets that make it a more palatable choice than JSP for UI development:

  1. The XML markup gets translated directly into JSF UI components
  2. The XML markup must be well-formed

The first point is important because it means that the middleman in the view template to component tree build process has been ousted. JSF was originally designed so that it could leverage JSP pages and in turn JSP tag handlers. Each JSF component was mapped one-to-one with a JSP tag handler. That meant JSF didn't have to be in the business of consuming and parsing XML (or pseudo-XML in the case of classic JSP syntax). Instead, JSP would feed UI components to the JSF component tree build process.

While this sounded good from a reuse perspective, it turned out to be a terrible nightmare for developers. The JSP tag handlers duplicated the properties of the combined UIComponent and component renderer classes for each component. This meant one more class for the developer to write and one more class to maintain. Not to mention the burden of having to keep the tag library descriptor (TLD) metadata in sync with the class to make the JSP parser happy. To minimize the effort, many developers turned towards code generation, in this case an indication of a badly broken design.

Enter Facelets. Facelets offers its own XML compiler and tag handlers. Only in this case, the tag handlers are generic and require minimal configuration. Facelets can introspect the UIComponent and component renderer classes to determine how to map the information from the XML document into objects in the JSF component tree, both of which the Facelets builder creates. Facelets effectively cuts out the middleman. What's more, by having control over the compile process, Facelets was able to introduce its own advanced templating mechanism (called compositions) that allow one Facelets template to become the template of another, almost resembling a LISP dialect.
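For instance, a minimal composition looks something like this (template.xhtml and its 'content' insertion point are hypothetical names):

<!-- page.xhtml: its content is plugged into the referenced template -->
<ui:composition xmlns="http://www.w3.org/1999/xhtml"
    xmlns:ui="http://java.sun.com/jsf/facelets"
    template="/template.xhtml">
    <ui:define name="content">
        <p>This fragment replaces the template's 'content' insertion
           point, declared there with &lt;ui:insert name="content"/&gt;.</p>
    </ui:define>
</ui:composition>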

While building direct is an important reason for Facelets' success, the second point listed above is the one most relevant to this entry. Facelets put its foot down by enforcing valid XML syntax, finally doing away with that ridiculous pseudo-XML syntax that classic JSP supported. Even the JSP taglib directives were removed in favor of using XML namespace declarations to import a component set.

Facelets == XML

If you have so much as an undeclared entity in your template, Facelets will abort processing and throw up a pretty (no, really!) error page. And when Facelets reports the error, it can do so with precision (e.g., line number and column) since it uses a SAX compiler to parse the template prior to building the component tree.

Now I'll convince you why using valid XML is a good thing.

Leveraging the XML ecosystem

XML is ubiquitous. As a result, there are probably as many XML editors available as there are e-mail clients. Because Facelets enforces valid XML syntax, I don't have to wait around for a vendor to create an editor to support a proprietary syntax, as was the case with JSP. Instead, Facelets opens the door to using any one of the available XML tools to create/modify/translate a Facelets view template. That's a very important benefit.

But there's a catch...well, really just a snag. In an attempt to support an experimental design-time technique, Facelets veered off course and did not take full advantage of the tooling benefit that XML has to offer. Let me explain what Facelets' creator was going for and then show you how to take advantage of what he missed.

By parsing the template directly, Facelets is able to work around one of the most horrible monstrosities of the early JSF-JSP integration, the <f:verbatim> tag. You see, every character in a JSF response must be yielded by a JSF component. That means that all static HTML markup must somehow get attached to a component. The only way to accomplish this in JSP was to introduce a wrapper tag, <f:verbatim>. However, this tag not only makes the template look ugly and verbose, it also makes creating valid XML nearly impossible because it ignores the HTML structure.

Once again, Facelets saved the day. Any time Facelets comes across static markup, it automatically creates a UIComponent to wrap the markup and output it during encoding. This also means that Facelets templates can match the recipe of JSP pages: static HTML markup with dynamic parts intermixed.

But Facelets goes a little too far by assuming (in the general case) that the document itself is an HTML template interspersed with JSF components. It became conventional to include an XHTML DOCTYPE at the top of the page (at least the top-level template), which Facelets then passes on to the response when the component tree is encoded. Facelets even takes measures to hide the JSF components by allowing them to be defined on a standard XHTML tag using the proprietary jsfc attribute (the name of the JSF component tag is defined as the value of the attribute). The theory here is that when the template is opened in an HTML viewer, it renders just like any other static HTML document, thus allowing the designer to preview the page. In theory.

<ul jsfc="ui:repeat" var="_name" value="#{bean.names}">
    <li>#{_name}</li>
</ul>

In reality, this theory simply doesn't scale. Unless all you are doing is developing a mostly static website with a handful of dynamic parts scattered across a subset of pages, you eventually get to the point where the pretend HTML documents become a handicap for both the page designer and the developer. It especially hurts the developer because it limits his/her ability to reuse template logic or roll it up into a custom component. I have done a lot of web development and the only way that this relationship works is if you are using a tool, such as the JBoss Tools visual designer, which can provide a runtime view of the page at design time. Otherwise, the designer should simply work on his/her own mockups and deliver them to the developer when they are ready to be integrated into the application.

Cutting the XHTML shackles

While Facelets templates are valid XML, most developers treat them as XHTML documents. The problem is, the XHTML vocabulary is not extensible, particularly when the document is tied to a DOCTYPE. That means all those JSF component tags in the document are seen as foreign bodies and cause the XHTML parser to complain loudly. Once again, you have to wait on that vendor to create an editor which supports this proprietary syntax. And what does that support look like? Well, likely the editor is going to have to parse the TLD files and provide some sort of code assistance (i.e., tag auto-completion) based on that metadata. Proprietary, proprietary, proprietary. Yuck.

But wait, didn't we just finish emphasizing that a Facelets template is valid XML? Of course! Well, there's a perfectly good vocabulary extension system for XML called XML Schema. XML Schema associates an XML dialect with an XML namespace. Unlike DTD, this contract permits multiple vocabularies to be imported into a single document, thus allowing it to be extended infinitely. In addition, the specificity of XML Schema goes way beyond what DTD allows. In fact, XML Schema is so detailed that with its support, you could call XML a type-safe language. This type system translates into code assistance and validation, drastically reducing the pain involved with using XML.

And guess what? Many of the configuration files for the Java frameworks you use today rely on XML Schema Documents (XSDs) to describe, extend, modularize, and document the permissible elements and attributes. Here are a couple of examples:

  • Seam's component descriptor (components.xml, .component.xml)
  • Seam's page descriptor (pages.xml, .page.xml)
  • Faces configuration resources (faces-config.xml)
  • Web application descriptor (web.xml)
  • Spring configuration file

So why not add Facelets to this list? Nothing is stopping us! All we have to do is author an XSD for each JSF component library we use (e.g., Facelets UI, JSF core, HTML basic, etc) and import the types defined in the XSD by associating it with the component library's XML namespace.
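To give an idea of the target, here is a minimal hand-written sketch of what such an XSD might declare for a single tag (the attribute list is abbreviated):

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
    targetNamespace="http://java.sun.com/jsf/html"
    elementFormDefault="qualified">
    <xs:element name="outputText">
        <xs:complexType>
            <xs:attribute name="id" type="xs:string"/>
            <xs:attribute name="value" type="xs:string"/>
            <xs:attribute name="escape" type="xs:boolean"/>
            <!-- ...the remaining component and renderer attributes... -->
        </xs:complexType>
    </xs:element>
</xs:schema>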

Creating XSDs for each component library sure would be a tedious task. You'd have to define an element for each component tag and then list all of the component-specific and renderer-specific attributes. Well, don't throw away those TLD files just yet! As Mark Ziesemer points out, a TLD file is XML too. So you can work smarter, not harder, by writing an XSLT document to transpose the TLD into an XSD. And fortunately for us, Mark did that already! I used his XSLT script to create XSDs for the standard JSF component sets. Once created, I matched them up with namespaces in the xsi:schemaLocation attribute as shown below. Note that to use this attribute, it's necessary to declare the namespace for XML Schema.

<?xml version="1.0" encoding="UTF-8"?>
<f:view xmlns="http://www.w3.org/1999/xhtml"
    xmlns:ui="http://java.sun.com/jsf/facelets"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://java.sun.com/jsf/facelets facelets-ui-2.0.xsd
        http://java.sun.com/jsf/core jsf-core-2.0.xsd
        http://java.sun.com/jsf/html html-basic-2.0.xsd">
    <html>
        <body>
            <h2>My name is Duke. What's yours?</h2>
            <h:form id="helloForm" prependId="false">
                <h:graphicImage url="/wave.png"/>
                <h:inputText id="name" value="#{HelloBean.name}"/>
                <h:commandButton id="submit" action="success" value="Submit"/>
            </h:form>
            <h2>Folks who stopped by to say "Hi!"</h2>
            <ul>
                <ui:repeat var="_name" value="#{GuestBook.names}">
                    <li>#{_name}</li>
                </ui:repeat>
            </ul>
        </body>
    </html>
</f:view>

If you were to open this document in any XML editor that is capable of digesting XSD and providing code assistance (which is pretty much any XML editor) you now get tag completion for standard JSF component elements and attributes in addition to standard HTML attributes! Try it out for yourself by downloading the sample code[1] attached.

The only slight drawback is that the root tag is not <html>, so you don't get the out of the box preview with a standard HTML viewer. Assuming that matters to you, the problem with making <html> the root tag is that HTML does not accommodate other vocabularies in the document. The schema for HTML lacks <xs:any> declarations that would allow foreign tags to be present. And unfortunately, the HTML schema is pretty much fixed in stone. There is an effort for modularizing XHTML, but it still isn't widely adopted. For now, it's easier just to interweave HTML tags into the <f:view> document root than it is the other way around.

Now, you might be thinking that creating the XSDs is just one more step in a long process to create a JSF component library. I'm here to say that the transformation from TLD to XSD should be a migration away from TLD in favor of XSD. XML Schema is incredibly comprehensive at defining XML dialects, and there is absolutely nothing that TLD can describe that XML Schema cannot. Unlike TLD, XSD even has the ability to declare which facet names a component supports. Scrap TLDs!

What's your type, Doc?

There's one shortcoming in this setup. We had to remove the DOCTYPE definition at the top of the document so that the XML editor doesn't complain about the presence of the JSF component tags (which are obviously not declared in the XHTML DOCTYPE). XML Schema is a replacement for DOCTYPE and as such, the two are mutually exclusive. But if you think about it, the DOCTYPE is not there to validate the syntax of the template. It's really part of the output, meant to instruct the browser how to render the page. Therefore, the DOCTYPE should be rendered by a component tag just like any other markup-producing area of the template.

Unfortunately, the standard HTML component library does not include a tag that outputs a DOCTYPE. But you can quickly define a prototype of this tag using a JSF 2.0 composite component (or you can go all the way and create a real UIComponent). The following fragment from the file resources/meta/doctype.xhtml defines the <meta:doctype> tag as a composite component:

<comp:interface name="doctype"
    displayName="XHTML DOCTYPE"
    shortDescription="Produces an XHTML DOCTYPE declaration">
    <comp:attribute name="type" required="true"/>
</comp:interface>
<comp:implementation>
    <h:outputText value='&lt;!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 #{
        fn:toUpperCase(fn:substring(cc.attrs.type, 0, 1))}#{fn:substring(cc.attrs.type, 1, -1)
        }//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-#{cc.attrs.type}.dtd"&gt;' escape="false"/>
</comp:implementation>

Then you simply add this tag directly inside of <f:view>.

<?xml version="1.0" encoding="UTF-8"?>
<f:view xmlns="http://www.w3.org/1999/xhtml"
    xmlns:ui="http://java.sun.com/jsf/facelets"
    xmlns:meta="http://java.sun.com/jsf/composition/meta"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://java.sun.com/jsf/facelets facelets-ui-2.0.xsd
        http://java.sun.com/jsf/core jsf-core-2.0.xsd
        http://java.sun.com/jsf/html html-basic-2.0.xsd">
    <meta:doctype type="strict"/>
    <html>
        ...
    </html>
</f:view>

It's pretty clear that the standard HTML component library should include a component tag capable of producing all of the valid HTML and XHTML doctypes. There's no reason a developer should have to assume the burden of something that is so fundamental to web page development.
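
For completeness, here's what the "real UIComponent" alternative mentioned earlier might look like. This is only a bare-bones sketch: the component type is made up, there's no renderer, and error handling is omitted.

import java.io.IOException;
import javax.faces.component.FacesComponent;
import javax.faces.component.UIComponentBase;
import javax.faces.context.FacesContext;

// Illustrative only; "example.Doctype" is a made-up component type and
// this class is not part of any released component library.
@FacesComponent("example.Doctype")
public class DoctypeComponent extends UIComponentBase {

    @Override
    public String getFamily() {
        return "example.Doctype";
    }

    @Override
    public void encodeBegin(FacesContext context) throws IOException {
        String type = (String) getAttributes().get("type"); // e.g., "strict"
        String name = Character.toUpperCase(type.charAt(0)) + type.substring(1);
        context.getResponseWriter().write(
            "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 " + name +
            "//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-" + type + ".dtd\">\n");
    }
}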

Breakout

Tell me what you think of this idea. Do you think it will help development? Do you think it will lead to better tooling? If it's worth pursuing, I think the XSDs should be built automatically from the metadata in the standard faces-config.xml and through reflection of the component classes/renderers so that you, the developer, don't have to worry about creating them and making them available on the file system. And of course, the tooling should help you get the XSDs added to the template.

Attachments

  1. facelets-xsd.zip

Step right up and select your time zone

Posted by    |       |    Tagged as Discussions JSF

Prior to revision 2.0, the JavaServer Faces specification stated that all dates and times should be treated as UTC, and rendered as UTC, unless a time zone is explicitly specified in the timeZone attribute of the <f:convertDateTime> converter tag. This is an extremely inconvenient default behavior. This open proposal, targeting the 2.1 release, extends the Locale configuration to accommodate a default time zone preference that is applied by default when a date is rendered.

Note: Dates are always assumed to be defined using UTC, so this proposal merely addresses the display offset.

Making the system time zone the default

The JavaServer Faces 2.0 (proposed) specification offers a way to override the standard time zone setting of the JavaServer Faces application so that it uses the time zone where the application server is running (which happens to be the default behavior for Java SE according to java.util.TimeZone) rather than UTC. This override is activated using the following web.xml context parameter declaration, documented in section 11.1.3:

<context-param>
    <param-name>javax.faces.DATETIMECONVERTER_DEFAULT_TIMEZONE_IS_SYSTEM_TIMEZONE</param-name>
    <param-value>true</param-value>
</context-param>

Before diving into the implications of using the system time zone, I want to first suggest either renaming this parameter or removing it in favor of a comparable setting defined in faces-config.xml. Assuming the context parameter remains the preferred way of making the system time zone the default, there's no way anyone except JSF experts will be able to commit this parameter to memory due to its unnecessarily long name. I would like to put forth an alternate, simpler name:

javax.faces.USE_SYSTEM_TIMEZONE

In addition to being shorter, the name is much less specific, opening the door for other areas of JSF to take advantage of this setting. (An alternative configuration defined in faces-config.xml is proposed further down).

Naming aside, this setting is important because it allows the developer to restore the default time zone behavior of the JVM. However, it does not address the real problem that JSF was attempting to solve, which is that the time zone used by the application should not be affected by the geographic location of the deployment. Optimally, the default time zone should reflect the time zone preference of the user (i.e., it should be a user time zone).

Setting the default time zone in JSF

Time zones are extremely important in data-driven web applications. That's why the JSF runtime should make it easy for the developer to get the time zone setting correct. Both the UTC and the system default time zone choices are inadequate. The UTC default is annoying because it forces every developer to deal with the time zone issue on day one (no good default). The system default is risky because it's dependent on the geographic location where the application is deployed.

Let's consider a scenario where the system default is inadequate.

If I am deploying my application to a server in California and servicing New York customers, the default time zone is not what the customers expect. Now, I could start my application server instance with the time zone of New York. But what if I have a separate application on that server that services customers in California? Then I have a major conflict.
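
To make the conflict concrete, here's a trivial (non-JSF) illustration of how the same instant reads differently depending on which zone happens to be in effect:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimeZoneDemo {
    public static void main(String[] args) {
        Date now = new Date();
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm zzz");

        // The same instant, rendered under two different default zones:
        fmt.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));
        System.out.println("Server in California:        " + fmt.format(now));

        fmt.setTimeZone(TimeZone.getTimeZone("America/New_York"));
        System.out.println("What New York users expect:  " + fmt.format(now));
    }
}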

The correct solution is to allow the time zone to be set per JSF application, configured in faces-config.xml. In fact, there is already an ideal place to define this configuration: within the <locale-config> element, complementing the default locale. The developer would specify a time zone ID in the new <default-time-zone-id> element and JSF would resolve a TimeZone object from this ID. Here's an example showing the default time zone set to New York.

<application>
    <locale-config>
        <default-time-zone-id>America/New_York</default-time-zone-id>
    </locale-config>
</application>

However, a static value is not always sufficient, since the users of an application may reside in different time zones. Therefore, while a static value is permitted, this element should also accept a ValueExpression that is resolved when the time zone value is needed (e.g., during encoding). The permitted use of a ValueExpression allows the default time zone to be dynamic, yet still a default, since it can be overridden by the <f:convertDateTime> component tag.

<application>
    <locale-config>
        <default-time-zone-id>#{userPreferences.timeZoneId}</default-time-zone-id>
    </locale-config>
</application>
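
The expression above would typically resolve against a session-scoped bean along these lines. The bean and its property are hypothetical; in a real application the value would be loaded from the database or a cookie.

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;

// Hypothetical backing bean for #{userPreferences.timeZoneId}.
@ManagedBean(name = "userPreferences")
@SessionScoped
public class UserPreferences {

    private String timeZoneId = "America/New_York"; // loaded from storage

    public String getTimeZoneId() {
        return timeZoneId;
    }

    public void setTimeZoneId(String timeZoneId) {
        this.timeZoneId = timeZoneId;
    }
}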

This configuration setting provides three important features:

  1. Provides a good application default (which may be customized to the user's preference using a ValueExpression)
  2. Remains stable across deployment locations (doesn't change just because the application is deployed to a different time zone, as is the case with the system default)
  3. Remains isolated per application running on the same server

With this feature available, JSF will perform the following time zone resolution algorithm.

If a default time zone ID is specified in a configuration resource (i.e., faces-config.xml), then the TimeZone resolved from the specified ID is used as the default time zone in the JSF application. If one is not specified, JSF will use the system default time zone if the javax.faces.USE_SYSTEM_TIMEZONE context parameter value is true, or UTC otherwise.
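
In rough Java terms, the proposed lookup order looks like this. This is only a sketch of the algorithm described above; resolveTimeZoneId() is a hypothetical helper standing in for the evaluation of <default-time-zone-id>.

import java.util.TimeZone;
import javax.faces.context.FacesContext;

public class DefaultTimeZoneResolver {

    // Sketch of the proposed resolution order; not actual implementation code.
    public TimeZone resolveDefaultTimeZone(FacesContext context) {
        String id = resolveTimeZoneId(context);
        if (id != null) {
            return TimeZone.getTimeZone(id);
        }
        String useSystem = context.getExternalContext()
                .getInitParameter("javax.faces.USE_SYSTEM_TIMEZONE");
        return Boolean.parseBoolean(useSystem)
                ? TimeZone.getDefault()
                : TimeZone.getTimeZone("UTC");
    }

    private String resolveTimeZoneId(FacesContext context) {
        // Placeholder: would evaluate the literal or ValueExpression
        // configured in <default-time-zone-id>.
        return null;
    }
}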

But instead of relying on a context parameter to set the time zone to the system default, it may be better to introduce the <system/> element as a possible value for <default-time-zone-id> to keep the time zone setting contained within the JSF configuration. (Context parameters in the web.xml configuration should only be used for defining implementation-specific settings).

The addition of the <default-time-zone-id> child element in <locale-config> requires the following API additions to javax.faces.application.Application.

/**
 * <p>Return the default <code>TimeZone</code> for this application. This
 * method resolves the value expression returned by {@link Application#getDefaultTimeZoneId}
 * and uses that value to look up a <code>TimeZone</code> instance. If the
 * resolved time zone id is null, the <code>TimeZone</code> for UTC is returned.</p>
 */
public abstract TimeZone getDefaultTimeZone();

/**
 * <p>Set the default time zone for this application as a value expression that
 * is expected to resolve to a time zone id.</p>
 *
 * @param timeZoneId The new default time zone
 *
 * @throws NullPointerException if <code>timeZoneId</code>
 *  is <code>null</code>
 */
public abstract void setDefaultTimeZoneId(ValueExpression timeZoneId);

/**
 * <p>Return the default time zone for this application as a value expression
 * that is expected to resolve to a time zone id.</p>
 */
public abstract ValueExpression getDefaultTimeZoneId();
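
To see how a converter (or any application code) would consume the new method, here's a sketch written against the proposed API. Keep in mind that getDefaultTimeZone() exists only in this proposal, not in the released JSF 2.0 API.

import java.text.DateFormat;
import java.util.Date;
import java.util.TimeZone;
import javax.faces.application.Application;
import javax.faces.context.FacesContext;

public class DefaultZoneFormatter {

    // Sketch only: Application.getDefaultTimeZone() is proposed, not released.
    public String format(Date date) {
        Application app = FacesContext.getCurrentInstance().getApplication();
        TimeZone zone = app.getDefaultTimeZone();
        DateFormat fmt = DateFormat.getDateTimeInstance();
        fmt.setTimeZone(zone);
        return fmt.format(date);
    }
}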

What about auto-detecting the user's time zone?

The main challenge is that browsers don't send a preferred time zone header the way they send a preferred language header. And you can't rely on geo-IP address resolution, since VPNs and other network configurations can move the source address across time zones. For now, it's the application's responsibility to collect the user's preference and store it in either the database or a browser cookie. The ValueExpression defined in the <default-time-zone-id> element is expected to return this stored value.

Provide your feedback

Let me know what you think about this solution. Does it seem adequate? Do you think the configuration is simple enough? Am I missing something? (Please see the JSF 2.1 page on the Seam Wiki for the latest revision of this proposal).

I encourage you to start participating now in these and other issues so that JSF can continue to improve as a result of community participation and feedback. JSF 2 is a big step in the evolution of JSF, but it is by no means the last.
