In Relation To Christian Bauer

New edition of Java Persistence with Hibernate

Posted by Christian Bauer    |    Mar 23, 2013    |    Tagged as

The second edition of Java Persistence with Hibernate is now available on the Manning Early Access Program. We have three chapters ready for you, and we'll add more chapters soon.

Until March 27th, get 50% off with promotion code jpwh2au!

Some highlights of the new edition:

  • Coverage of the latest JPA 2.1 specification version
  • All example code available as unit tests, ready to run
  • Many new illustrations, hundreds of examples
  • Application design recommendations for Java EE 7
  • Condensed and more focused than the previous edition: 200 fewer pages, but more content

We will update the book as soon as Hibernate 4.3 (with JPA 2.1 support), and then later this year Hibernate 5, become available. Early access subscribers will be notified of any updates. This is a great opportunity to catch up with the latest Hibernate releases, and to learn the new features of JPA 2.1 and Java EE 7.

The example code for this early access version, based on Hibernate 4.1 and JPA 2.0, can be found here.

Talk to us on the author online forum if you have any questions.

First release of AuthorDoclet

Posted by Christian Bauer    |    Jan 25, 2010    |    Tagged as

At the end of last year I wrote about my first proof-of-concept with AuthorDoclet, a documentation tool I've been working on. It took me a while to get it into shape for a first release, and I also had to make some conceptual changes once it was mature enough to compile its own manual.

What surprised me was how well that worked and how handy it is to have tested documentation examples. I frequently had to go back and make changes to a code snippet, and it was a real timesaver to edit it in one place only: the actual runnable unit test. I also tried to implement and use features such as inline images, tables, automatic chapter/section numbering, automatic generation of a table of contents, etc. No problems so far, and except for code example callouts (the numbered bullets inside code snippets you often see in books), I now have all the features my older Docbook XML-based toolchain provided.

So if you have to document some testable Java software, try the attached alpha release[1]. The code is still quite raw but I'm happy with the overall design of pipelines, processors, readers, etc. There is almost no documentation inside the core code though, I'll add that next. If you'd like to write an improved TOC generator or anything else that fits, you are more than welcome.

Introducing AuthorDoclet

Posted by Christian Bauer    |    Nov 21, 2009    |    Tagged as

I'm on my way home from the Seam community meeting in Antwerp this Friday, where I managed to talk to two or three people about the Javadoc-based documentation toolset I've been working on, but there was no opportunity to talk about it in more detail or to look at some actual examples.

The preliminary name is AuthorDoclet and I'm not attached to it - it's the first thing I entered into the project name box in IntelliJ. So I've been using AuthorDoclet to write the AuthorDoclet manual, which was a bit recursive and confusing. Especially because I'm still redesigning certain aspects of the software while I'm writing the manual.

And this is actually the most important idea behind AuthorDoclet: Stop writing anemic and synthetic unit test code. Write unit tests that people want to read, to learn your software. Write lots of quality Javadoc for your unit test classes and methods. Then press a button in AuthorDoclet and you'll get documentation automatically generated, with validated and tested examples.

This kind of approach has already been evaluated and implemented by other people, for example, there is JCite. However, the tools I've found all assume that you write documentation text in some text editor and include the code examples by putting placeholders in the text. These placeholders are then replaced with code from your Java classes by the documentation processing tool.

Validating the source code used in examples is not the only problem you'll have to deal with when you work on software documentation. The other major issue is maintaining the code while you go through many iterations of the same text, and of course many versions of the example code. If you manually copy and paste code lines from your working application into the document you are writing, you'll at some point no longer know which code needs to be updated and where, after some refactoring.

The solution offered by JCite is simple: The second time it is processing your documentation, it is going to find all the citations (included code snippets) that changed and it will show you a list of items to approve or decline.

With AuthorDoclet, you should not even have this problem because instead of referencing the code snippets from the text for inclusion, you write the text into the same file as the code, as Javadoc comments. So when the code changes, you immediately see the text that describes that code example. When you change the name of a class, method, or field, any references from your Javadoc comments (in all source files!) will be updated automatically as well. (I'm assuming that your IDE supports Javadoc refactoring.)

You still need an external master template file in AuthorDoclet which describes your documentation structure. The following example will make this easier to understand. Create an XHTML file as your master template:

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Hello World Source</title>
</head>
<body>
    <div>
        <a class="citation"
           href="example/helloworld/HelloWorld.java"/>
    </div>
</body>
</html>

When processed by AuthorDoclet, this XHTML file will be transformed into the result XHTML file, and all tags that are known to the AuthorDoclet processing pipeline (that's an implementation detail) are going to be handled. The citation anchor triggers the inclusion of the source of HelloWorld.java within that <div>. You can organize your <div>s into chapters and (sub-)sections.

Now, this was not an example of Javadoc inclusion, just the simpler case of Java source inclusion. This is Javadoc citation:

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Hello World Documentation</title>
</head>
<body>
    <div>
        <a class="citation"
           href="javadoc://example.helloworld.HelloWorldTest#testHello()"/>
    </div>
</body>
</html>

You'll probably recognize the syntax of this javadoc:// reference: It's the same you use in Javadoc with @link and @see tags. Your IDE can detect these strings and refactor the href value when the class or method name changes.

The Javadoc, which is also XHTML, is going to be included inside the <div> element. If there are any <a class="citation"/> anchors within that Javadoc - again, we are talking about the Javadoc of the testHello() method - they are going to be processed as well:

/**
* Testing Hello World
* <p>
* Let's assume you want to say "Hello World". The following <em>example</em> shows you how
* to do that:
* </p>
* <a class="citation" href="javacode://example.helloworld.HelloWorldTest#testHello()"/>
* <p>
* This was easy, right?
* </p>
*/
@Test
public void testHello() {
    System.out.println("Hello World!");
}

Note the scheme of that URI: the javacode:// prefix enables the dot notation for package, class, and method names (Javadoc @see syntax). Without this scheme, you'd reference the .java file directly, as shown earlier.

Processing this Javadoc comment is a recursive operation: if it contains an anchor that cites another Javadoc comment - be it at package, class, or method level - that Javadoc comment is also transformed, and so on.

AuthorDoclet currently also supports various syntaxes for inclusion/exclusion of source lines, and a few special options I'm going to show some other time.

The output document is XHTML markup, and styling, or printing this document is outside of the scope of AuthorDoclet. You can get an easy preview if you write a CSS file and open it in the browser - remember that you control the XHTML element identifiers and CSS classes directly:

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Hello World Documentation</title>
</head>
<body>
<div>
    <div class="citation javadoc" id="javadoc.example.helloworld.HelloWorldTest-testHello__">
        <div class="title">Testing Hello World</div>
        <div class="content">
            <p>
                Let's assume you want to say "Hello World". The following <em>example</em> shows you how
                to do that:
            </p>

            <div class="citation javacode" id="javacode.example.helloworld.HelloWorldTest-testHello__">
                <div class="content"><![CDATA[
                @Test
                public void testHello() {
                    System.out.println("Hello World!");
                }
                ]]></div>
            </div>
            <p>
                This was easy, right?
            </p>
        </div>
    </div>
</div>
</body>
</html>
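For example, a minimal preview stylesheet for that output might look like this (the selectors match the classes and structure in the generated markup above; the styling choices themselves are just an illustration):

```css
/* Any citation container, Javadoc or code */
div.citation { margin: 1em 0; }

/* Title line of a cited Javadoc comment */
div.citation div.title { font-weight: bold; margin-bottom: 0.5em; }

/* Included source code snippets */
div.javacode div.content {
    font-family: monospace;
    white-space: pre;
    background-color: #f0f0f0;
    border: 1px solid #ccc;
    padding: 0.5em;
}
```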

AuthorDoclet also supports linking, e.g. you can write {@link example.helloworld.HelloWorldTest#testHello()} in any of your Javadoc comments, and a cross-reference (a regular link...) to the citation will be generated in the output.

Writing an XSL-FO utility that converts XHTML to nice-looking PDFs is going to be easy. I guess iText would work as well, although I've not spent any time so far on the output transformation for consumption.

Syntax highlighting now available

Posted by Christian Bauer    |    Nov 17, 2009    |    Tagged as

A much-requested feature on this website has been automatic syntax highlighting of code snippets. It is now available and I thought I'd document it here instead of sending everyone an e-mail. The reason why it took so long to implement is that I didn't know how to best integrate it with the Seam wiki text syntax. As you know, we wrap a code block in backticks - that doesn't leave any room for syntax highlighting options. Other wiki text parsers use something like <code syntax="java">...</code> but I wanted to keep the superfast backtick syntax and have it highlighted.

So the way it works now is with an optional line of parameters that follows the opening backtick, for example [brush: java; gutter: true].

The options are enclosed in square brackets and the closing square bracket needs to be followed by a newline. See http://alexgorbatchev.com/wiki/SyntaxHighlighter:Configuration for a list of options. I've installed all brushes, though most of you will probably use 'java', 'xml', and 'sql'.

The example will then render like this:

String text = "Hello World"; // Some comment...
System.out.println(text);

If you do not specify a brush or if you do not include an option line, the code block will be rendered as a grey box, as before. Oh, and the preview when you edit a document will always show the non-highlighted version, it's only applied when you save or update the document.

(No, it will not work on the seamframework.org forums. This has to be tested here first.)

Testing Java doclets

Posted by Christian Bauer    |    Nov 15, 2009    |    Tagged as

Following up on my last blog entry about the next edition of the Hibernate bible, in the comments, Will Iverson (sorry Will, I hope that is really you, first Google hit) said that he would write ALL the code examples as JUnit test cases. Well, AFAIR that is what Will was trying with his Hibernate book a few years ago.

For my own writing, from the first day, I wasn't sure how to treat code examples. It doesn't really matter if you are writing a tutorial or a reference book with 1000 pages, the question simply is: Did you verify that all code examples really work?

On the other side of the equation is your publisher and their procedures and formats. The publisher I worked with, for example, required that authors submit their text in some MSFT Word template. I've heard from other authors that some publishers are happy with a Docbook XML or SGML file. Well, the only advice I can give you is that you'd best ignore what the publisher wants and use what works best for you. (Seriously, you are doing all the work; all they do is import it into Framemaker and pay a typesetter by the hour. If they can't import what you produce, find another publisher.)

So what you have to do is find a toolset that delivers what the publisher wants, but also allows you to verify code examples automatically.

For the first two Hibernate books I had my own toolset based on Docbook XML, with XML, PDF, and HTML output. This toolchain has been re-used by a few open source projects for documentation. I wrote all of the text in XML in IntelliJ IDEA. Unfortunately, all the code examples were copy/pasted lines from real working code. So when the code had to be changed because it had a bug, the book text wasn't updated automatically.

For the next edition of Java Persistence with Hibernate I don't want to bother with this anymore: I want most of the code examples to be verified automatically, and I want the text to reference the executable, tested source without duplicating it.

I've been prototyping my new toolchain for a few weeks now and it's almost ready for a wider audience. It's based on Javadoc and XHTML and I'm going to blog about it soon.

Well, all I really wanted to show you today is a single class that helped me with unit testing and running my prototype. If you have ever written a Javadoc doclet you'll probably understand why this is useful:

import com.sun.javadoc.RootDoc;
import com.sun.javadoc.ClassDoc;
import com.sun.tools.javac.util.Context;
import com.sun.tools.javac.util.ListBuffer;
import com.sun.tools.javac.util.Options;
import com.sun.tools.javadoc.JavadocTool;
import com.sun.tools.javadoc.ModifierFilter;
import com.sun.tools.javadoc.PublicMessager;

import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.Writer;
import java.util.Arrays;
import java.util.logging.Level;
import java.util.logging.Logger;

public class EasyDoclet {

    final private Logger log = Logger.getLogger(EasyDoclet.class.getName());

    final private File sourceDirectory;
    final private String[] packageNames;
    final private File[] fileNames;
    final private RootDoc rootDoc;

    public EasyDoclet(File sourceDirectory, String... packageNames) {
        this(sourceDirectory, packageNames, new File[0]);
    }

    public EasyDoclet(File sourceDirectory, File... fileNames) {
        this(sourceDirectory, new String[0], fileNames);
    }

    protected EasyDoclet(File sourceDirectory, String[] packageNames, File[] fileNames) {
        this.sourceDirectory = sourceDirectory;
        this.packageNames = packageNames;
        this.fileNames = fileNames;

        Context context = new Context();
        Options compOpts = Options.instance(context);

        if (getSourceDirectory().exists()) {
            log.fine("Using source path: " + getSourceDirectory().getAbsolutePath());
            compOpts.put("-sourcepath", getSourceDirectory().getAbsolutePath());
        } else {
            log.info("Ignoring non-existent source path, check your source directory argument");
        }

        ListBuffer<String> javaNames = new ListBuffer<String>();
        for (File fileName : fileNames) {
            log.fine("Adding file to documentation path: " + fileName.getAbsolutePath());
            javaNames.append(fileName.getPath());
        }

        ListBuffer<String> subPackages = new ListBuffer<String>();
        for (String packageName : packageNames) {
            log.fine("Adding sub-packages to documentation path: " + packageName);
            subPackages.append(packageName);
        }

        new PublicMessager(
                context,
                getApplicationName(),
                new PrintWriter(new LogWriter(Level.SEVERE), true),
                new PrintWriter(new LogWriter(Level.WARNING), true),
                new PrintWriter(new LogWriter(Level.FINE), true)
        );

        JavadocTool javadocTool = JavadocTool.make0(context);

        try {
            rootDoc = javadocTool.getRootDocImpl(
                    "",
                    null,
                    new ModifierFilter(ModifierFilter.ALL_ACCESS),
                    javaNames.toList(),
                    new ListBuffer<String[]>().toList(),
                    false,
                    subPackages.toList(),
                    new ListBuffer<String>().toList(),
                    false,
                    false,
                    false);
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }

        if (log.isLoggable(Level.FINEST)) {
            for (ClassDoc classDoc : getRootDoc().classes()) {
                log.finest("Parsed Javadoc class source: " + classDoc.position() + " with inline tags: " + classDoc.inlineTags().length );
            }
        }
    }

    public File getSourceDirectory() {
        return sourceDirectory;
    }

    public String[] getPackageNames() {
        return packageNames;
    }

    public File[] getFileNames() {
        return fileNames;
    }

    public RootDoc getRootDoc() {
        return rootDoc;
    }

    protected class LogWriter extends Writer {

        Level level;

        public LogWriter(Level level) {
            this.level = level;
        }

    public void write(char[] chars, int offset, int length) throws IOException {
        // Respect the offset; Arrays.copyOf(chars, length) would always start at index 0
        String s = new String(chars, offset, length);
        if (!s.equals("\n"))
            log.log(level, s);
    }

        public void flush() throws IOException {}
        public void close() throws IOException {}
    }

    protected String getApplicationName() {
        return getClass().getSimpleName() + " Application";
    }

}

If you want to test your doclet or run it programmatically, you have to use the javadoc command line tool or the evil Main class provided with tools.jar - evil because it calls System.exit() when it is done, which makes it unusable in unit tests. So what I did here is dig through the JDK source code to figure out how to start a doclet programmatically without all the baggage. (Again, you most likely won't understand what this is about until you try to write your own doclet. The API is very old and very bad.)

Oh, and you also need this:

package com.sun.tools.javadoc;

import com.sun.tools.javac.util.Context;

import java.io.PrintWriter;

/**
 * Protected constructors prevent the world from exploding!
 */
public class PublicMessager extends Messager {

    public PublicMessager(Context context, String s) {
        super(context, s);
    }

    public PublicMessager(Context context, String s, PrintWriter printWriter, PrintWriter printWriter1, PrintWriter printWriter2) {
        super(context, s, printWriter, printWriter1, printWriter2);
    }
}

This is how you use it in your unit test (or whatever code):

EasyDoclet doclet = new EasyDoclet(new File("/my/source"), "some.package", "another.package");
RootDoc doc = doclet.getRootDoc();
...

I'm going to write more about my prototype toolset next week and I hope that it's going to be useful not only for myself and the next book but also for other projects, like the last toolset.

Are books tutorials?

Posted by Christian Bauer    |    Sep 26, 2009    |    Tagged as

I picked up Wicket in Action last week and I've been reading it without interruption so far. So now I'm on chapter 6 and I haven't written a single line of Wicket code. It's not the first time this has happened: most of my books I've read once and never tried any of the code samples.

Last week Gavin called me and we talked about the next edition of Hibernate in Action, which actually would be the second edition of Java Persistence with Hibernate. Now that JPA2 is almost done, and the first beta release with JPA2 features of Hibernate is out, updating the text is inevitable in the near future.

What I need to decide soon is whether this update is going to emphasize the tutorial aspect of the book, or whether I'm going to add more reference material. I don't think that decision has much to do with the length of the book (JPwH is >900 pages). It's actually all about the code examples. Of course you cannot write a 1000-page tutorial; when you pass the 200-page mark, you have to switch from tutorial mode into reference mode.

Well, because that doesn't happen automatically, you constantly ask yourself the same question: Do readers expect code that works out-of-the-book? Are they going to write that code or copy/paste it, and then expect that it will run? Will it run within the project/product setup I've explained step-by-step up to this point?

For the first two Hibernate books I always considered the answer to that question to be: yes, the readers probably want to try most code examples immediately; they will have the book open on their desk while reading, next to the keyboard, and they will try the product you are describing in Action. That was actually what the publisher expected from an in Action series book, and we had endless and exhausting discussions about it. In the end, it was a lot of work and I'm sure it's not quite perfect. At some point a tutorial approach just doesn't make sense anymore and you have to break the flow and continue with point-by-point reference material. Some readers will not be able to make that jump. The reviews of the books show it: a few people haven't been able to follow the text and examples and got lost at some point. They probably expected the tutorial to continue for another 800 pages.

And here I am, asking myself if I would ever do this again and why I had so much trouble doing it before. I just realized that when I read a book, I don't try the code. I'm not a newbie, I have some Java and JEE specs/framework experience, and I think it's a waste of my time to try the Hello World example in a framework book I'm reading. I'll continue reading until I hit that barrier where it's obvious to me that I need to try the code I'm reading. I'll actually stop reading a book when all the practical details get in my way and I have to skip pages because they are full of trivial copy this JAR here, then edit the properties file there explanations. So I'm obviously not the target audience of my own books, because they start with: This is how you create your working directory, and here is how you do that on your Windows computing machine. :)

So why can't you have both in one book? I've been paying extra attention to how other writers resolve that issue. In Wicket in Action, for example, the writers obviously do not expect the reader to stop and try the examples immediately. They do not even include the product configuration and initial setup steps in the main text and instead refer to the appendix. I'm somewhat surprised they got this past the Manning in Action guidelines, btw. ;)

I'd considered this for JPwH: moving all of the setup stuff into an appendix. Don't waste 50 pages on basic setup instructions (especially JEE vs. non-JEE container) but cater to those readers who have some experience and expect to pick up new stuff quickly in a day or two, without the interruption of real-world problems. As the title and subtitle are probably going to be Java Persistence with Hibernate, Second Edition, I'm not really worried about what the publisher has to say.

Still, I'm afraid we're going to have many angry newbies who expect all the setup/configuration steps in chapter 1 or 2, and if one little detail is missing, they are not going to continue reading. On the other hand, what's so bad about If you don't know how to create a directory and copy a JAR file, you need to take a break and read this appendix?

So should the next edition be more like Teach yourself Hibernate in 24 hours although your shoes have 'L' and 'R' on them or should it be The Hibernate Bible, Next Edition?

P.S. Whatever happens, the next edition of JPwH will not be 900 pages. As far as I can see, the .hbm.xml and org.hibernate.Session examples will be removed whenever they duplicate JPA functionality, so without any other changes, that's going to be 150 pages gone already.

Seam Recipes Part 1

Posted by Christian Bauer    |    Sep 8, 2009    |    Tagged as Seam

Finally had time to clean up and write down a few knowledge base articles for Seam. Some of these tricks have been very useful for building and running the Seam website. I still have leftovers, hopefully I can post another round of articles next week.

Here is the list for today, there should be something in there for anyone using Seam:

  • Extending DBUnitSeamTest shows how you can extend the unit testing feature of Seam for mock data import. Examples are: adding support for PostgreSQL, writing custom DBUnit dataset operations (like calling a stored procedure before a test method runs) and configuration of DBUnit.
  • Importing DBUnit datasets for development deployments explains how to use DBUnit during actual development, not only for unit testing. I found it very convenient to have the same mock data as I had available in unit testing imported automatically when deploying an application on my development machine. It helps you keep interactive and automated testing in sync.
  • Using MySQL in production with UTF8 wasn't as straightforward as I hoped it would be, I had to customize the Hibernate dialect, the JBoss AS datasource, and Tomcat. This recipe summarizes all the changes.
  • Removing JSESSIONID from your URLs (and fixing s:cache) was a problem that actually only hit me in staging and I almost rolled it out without noticing. If you use the Seam HTML fragment cache, you need to read this. Unfortunately, there is nothing we can do in Seam to fix this.
  • Drop-down boxes with entities and page scope describes my favorite JSF misfeature, or how the good intentions of the specification have been destroyed by a bad reference implementation. It's very hard to have a drop-down box in JSF with a list of products or customers. Seam makes it easy out of the box if you can also use a long-running conversation context. My solution is a bit of a hack, but it makes it easy without the conversation context, just the page context.
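For context, the out-of-the-box Seam approach with a conversation context looks roughly like this (a sketch using Seam's s:selectItems and s:convertEntity tags; the component and property names here are invented for illustration):

```xhtml
<h:selectOneMenu value="#{orderHome.instance.customer}">
    <!-- customerList is a hypothetical component exposing List<Customer> -->
    <s:selectItems value="#{customerList}" var="c"
                   label="#{c.name}" noSelectionLabel="Please select..."/>
    <!-- Converts between the entity instance and its identifier -->
    <s:convertEntity/>
</h:selectOneMenu>
```

The catch described in the article is that s:convertEntity needs to look the entity up again when the form is submitted, which is why it normally relies on a long-running conversation.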

Post improvements directly on the pages please, they are editable for all Seam community members.

REST support in latest Seam 2.1

Posted by Christian Bauer    |    Jun 9, 2009    |    Tagged as Seam

Norman released Seam 2.1.2 yesterday and it comes with much improved support for REST processing, compared to previous 2.1.x versions. We started integrating RESTEasy - an implementation of JAX-RS (JSR 311) - with Seam almost a year ago in a first prototype. We then waited for the JAX-RS spec to be finalized and for RESTEasy to be GA, which happened a few months ago. So based on that stable foundation we were able to finish the integration with Seam.

I'm going to demonstrate some of the unique features of that integration here, how you can create a RESTful Seam application or simply add an HTTP web service interface to an existing one.

Deploying resources and providers

With JAX-RS you write a plain Java class and put @javax.ws.rs.Path("/customer") on it to make it available under the HTTP base URI path /customer. You then map methods of that class to particular sub-paths and HTTP methods with @javax.ws.rs.GET, @POST, @DELETE, and so on. These classes are called Resource classes. The default life cycle of an instance is per-HTTP-request, an instance is created for a request and destroyed when processing completes and the response has been sent.

Converting HTTP entities (the body of an HTTP request) is the job of Provider classes, annotated with @javax.ws.rs.ext.Provider and usually stateless or singleton. They transform content between HTTP and Java types, say a my.Customer entity to and from XML with JAXB. Providers are also the extension point in JAX-RS for custom exception converters, etc.

RESTEasy has its own classpath scanning routine that detects all resources and providers by looking for annotations. That requires a servlet context listener configured in web.xml. You'd also have to configure a request dispatcher servlet. Finally, if you'd like to make your resource classes EJBs, for automatic transaction demarcation and persistence context handling, you'd have to list these EJBs in web.xml as well. This last feature is a RESTEasy enhancement and not part of the JAX-RS specification.
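For comparison, a standalone (non-Seam) RESTEasy deployment would need roughly this in web.xml (class names as shipped with RESTEasy at the time; treat this as a sketch, not a definitive configuration):

```xml
<!-- Scans the classpath for @Path resources and @Provider classes -->
<listener>
    <listener-class>
        org.jboss.resteasy.plugins.server.servlet.ResteasyBootstrap
    </listener-class>
</listener>

<!-- Dispatches HTTP requests to JAX-RS resource methods -->
<servlet>
    <servlet-name>Resteasy</servlet-name>
    <servlet-class>
        org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher
    </servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Resteasy</servlet-name>
    <url-pattern>/rest/*</url-pattern>
</servlet-mapping>
```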

If you use Seam with RESTEasy, none of this extra work is necessary. Of course it still needs to be done but you most likely have already configured the basic Seam listener and resource servlet in web.xml - almost all Seam applications have.

You do not have to configure RESTEasy at all. Just drop in the right JAR files (see the reference docs) and your Seam application will automatically find all @Path resources and @Provider classes. Your stateless EJBs still need to be listed to be found, but that can be done in Seam's components.xml or programmatically through the usual Seam APIs. All the other RESTEasy configuration options and some other useful configuration features are available as well.

So without changing any code, you get easier deployment and integrated configuration of JAX-RS artifacts in your Seam application.

Utilizing Seam components

Resources and providers can be made Seam components, with bijection, life cycle management, authorization, interception, etc. Just put an @Name on your resource class:

@Name("customerResource")
@Scope(ScopeType.EVENT) // Default
@Path("/customer")
public class MyCustomerResource {

    @In
    CustomerDAO customerDAO;

    @GET
    @Path("/{customerId}")
    @Produces("text/xml")
    @Restrict("#{s:hasRole('admin')}")
    public Customer getCustomer(@PathParam("customerId") int id) {
         return customerDAO.find(id);
    }
}

Naturally, a REST-oriented architecture assumes that clients maintain application state, so your resource components would be EVENT or APPLICATION scoped, or STATELESS. Although SESSION scope is available, by default a session only spans a single HTTP request and is automatically destroyed afterwards. How to configure this behavior if you really want to transmit a session identifier between the REST client and server, and utilize server-side SESSION scope across requests, is explained in more detail in the reference docs. We already have some ideas for CONVERSATION scope integration; follow this design document for more info.

Of course your resource Seam component doesn't have to be a POJO, you can also use @Stateless and turn it into an EJB. Another advantage here is that you do not have to list that EJB in components.xml or web.xml anymore as all Seam components are automatically found and registered according to their type.

The @Restrict annotation is just a regular Seam authorization check; currently you can configure Basic or Digest authentication, as you would for any other Seam application.

CRUD framework integration

Seam has a framework for building basic CRUD database applications quickly; you have probably already seen EntityHome and EntityQuery in other Seam examples. Jozef Hartinger built an extension that allows you to create a basic CRUD application with full HTTP/REST support in minutes. You can declare it through components.xml:

<framework:entity-home name="customerHome"
                       entity-class="my.Customer"
                       auto-create="true"/>

<framework:entity-query name="customerQuery"
                        ejbql="select c from Customer c" order="lastname"/>

<resteasy:resource-home path="/customer" name="resourceCustomerHome"
                        entity-home="#{customerHome}" entity-id-class="java.lang.Long"
                        media-types="application/xml application/json"/>

<resteasy:resource-query path="/customer" name="resourceCustomerQuery"
                         entity-query="#{customerQuery}" entity-class="my.Customer"
                         media-types="application/xml application/json"/>

You only have to create the my.Customer entity and you are ready to read from and write to the database through HTTP.

  • A GET request to /customer?start=30&show=10 will execute the resourceCustomerQuery component and return a list of all customers with pagination, starting at row 30 with 10 rows in the result.
  • You can GET, PUT, and DELETE a particular customer instance by sending HTTP requests to /customer/<customerId>.
  • Sending a POST request to /customer creates a new customer entity instance and persists it.
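
The start and show parameters simply select a window of the query result. As a rough, stdlib-only illustration of those semantics (the PaginationDemo class and paginate method are invented for this sketch, not Seam API):

```java
import java.util.Arrays;
import java.util.List;

public class PaginationDemo {

    // Select the window [start, start + show) of a result list,
    // clamped to the list bounds - the semantics of ?start=X&show=Y
    public static <T> List<T> paginate(List<T> results, int start, int show) {
        int from = Math.min(start, results.size());
        int to = Math.min(from + show, results.size());
        return results.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> customers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);
        // Analogous to GET /customer?start=2&show=3
        System.out.println(paginate(customers, 2, 3)); // prints [3, 4, 5]
    }
}
```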

Note that the <framework:...> mappings are part of the regular Seam CRUD framework, with all the usual options such as query customization. The content is transformed by the built-in RESTEasy providers, for example for XML and JSON; the XML transformation will use any JAXB bindings on your entity class.

You do not have to use XML configuration: just as with the Seam CRUD superclasses, you can instead write subclasses of ResourceHome and ResourceQuery and configure the mapping with annotations.

There is a reason this CRUD framework feature is not documented in the current release: we are not sure the API will stay as it is. Consider this release our proposal; we really need feedback on what works and what can be improved. Jozef also wrote a full-featured RESTful application with a jQuery-based client for regular web browsers to demonstrate the CRUD framework. Have a look at the Tasks example in the Seam distribution. You can find more demo code and tests in the Restbay example, which we use for general RESTEasy integration testing and demonstration.

Feature shortlist

I've only highlighted three of the main features of Seam and RESTEasy but there is more available and more to come:

In JAX-RS applications, exceptions are mapped to HTTP responses for clients with provider classes called ExceptionMapper. That can be more work than it should be, so you can also map exceptions declaratively in Seam's pages.xml; see the docs.

You can write unit tests that pass mock HTTP requests and responses through Seam and RESTEasy, all with local calls instead of TCP sockets. We use them in our integration tests, and so can you to test your application; see the reference docs.

There is already talk about MVC and REST. What this all comes down to, at least from my standpoint, is that hypertext should drive the application state through linked resources (HATEOAS, Hypertext As The Engine Of Application State). From a technical perspective, it simply means that we need more control over how the view is rendered, instead of just marshaling dumb XML documents from Customer entities with JAXB defaults. We should render XHTML representations - which of course may include JAXB-rendered XML blobs in addition to links and forms - and be able to customize them with templates.
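
To make that concrete, here is a minimal, stdlib-only Java sketch - the class name, method, and URL paths are invented for illustration - of a representation that carries links alongside the data, so a client can discover the next possible actions from the hypertext itself:

```java
public class XhtmlRepresentation {

    // Render a customer as an XHTML fragment: the data plus the links
    // a client needs to navigate to follow-up states (HATEOAS)
    public static String renderCustomer(long id, String name) {
        return "<div class=\"customer\" id=\"customer-" + id + "\">\n" +
               "  <span class=\"name\">" + name + "</span>\n" +
               "  <a rel=\"self\" href=\"/customer/" + id + "\">Details</a>\n" +
               "  <a rel=\"orders\" href=\"/customer/" + id + "/orders\">Orders</a>\n" +
               "</div>";
    }

    public static void main(String[] args) {
        System.out.println(renderCustomer(42, "John Doe"));
    }
}
```

A templating engine such as Facelets would of course replace the string concatenation, but the essential point is the same: the response embeds the links, not just the entity data.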

Facelets seems like a natural fit for this and we have a prototype for sending templated XHTML responses:

@GET
@Path("/customer/{id}")
@Produces("application/xhtml+xml")
@FaceletsXhtmlResponse(
    template = "/some/path/to/template/#{thisCanEvenBeEL}/foo.xhtml"
)
@Out(value = "currentCustomer", scope = ScopeType.EVENT)
public Customer getCustomer(@PathParam("id") String id) { ... }

This is just pseudo-code; the feature is not available in the release. It wouldn't be very useful as it is, because we don't yet know how to transform an incoming HTTP request with an XHTML payload back into a Facelet view. It's not trivial to implement either, and we'll probably wait for JSF 2 before we finalize this. But it shows that providing a JSF-based human client interface and a RESTful HTTP web service interface in the same application might be a natural fit with the given technologies.

Next version?

The currently available RESTEasy version is still not GA, although it is a release candidate. There are also a few open issues with the integration code that we'd like to close, and we have to finalize the CRUD framework interface. This is all expected to happen in the Seam 2.2 releases.

More elaborate additional features, such as conversation integration, representation templating, or additional authentication schemes, are probably reserved for Seam 3, as we might want to build on the new JSF 2/JCDI standards as much as possible. Follow this wiki page for updates.

P.S. This book is an excellent starting point if you are wondering what this stuff is all about.

Update: I forgot to mention one important feature that some of you might like. You can annotate the interface of your Seam component (POJO or EJB) instead of the bean class. For EJB Seam components, you actually have to annotate the local business interface.

@Path("/customer")
public interface MyCustomerResource {

    @GET
    @Path("/{customerId}")
    @Produces("text/xml")
    public Customer getCustomer(@PathParam("customerId") int id);
}

@Name("customerResource")
public class MyCustomerResourceBean implements MyCustomerResource {

    @In
    CustomerDAO customerDAO;

    @Restrict("#{s:hasRole('admin')}")
    public Customer getCustomer(int id) {
         return customerDAO.find(id);
    }
}

Fresh 1.0.0.Alpha1 - CLI for JBossAS

Posted by Christian Bauer    |    May 15, 2009    |    Tagged as JBoss AS

I'm proud to present the initial version of Fresh, a CLI for JBoss AS.

This is definitely not the only CLI developed under our roof (see the adminclient repository), but it brings a ton of interesting stuff developed by the people who donated it: my friends from the Parsek company, Tomaz Cerar and Marko Strukelj (yes, the one responsible for vfszip :-)).

This is the JIRA release issue, where you can learn more about it, download the deployable artifact, etc.

A simple user guide can be found here.

Any additional ideas can be published here.

Basically, the idea is to drop jboss-fresh.jar into deploy/ and then connect to it via SSH on port 2022. Currently there is no username/password; simply press Enter.

Fresh is installed on demand: only the SSH server daemon is fully installed at the beginning; the rest kicks in when you first connect (thanks, Bela, for the tip).

Marko and Tomaz already have plans to make this even better, and I'll be there to help with some fancy MC tricks. :-)

Enjoy, any feedback is welcome.

P.S.: This was also posted on the jboss-dev mailing list, so we can easily continue the discussion there.

Migrating databases with DBUnit

Posted by Christian Bauer    |    Apr 15, 2009    |    Tagged as Hibernate ORM

In the last few weeks I had to migrate a MySQL database and it turned out to be more difficult than I thought. In the past I've used the tools that ship with MySQL, such as mysqldump and its various options. For the recent migrations that was surprisingly... impossible.

The first migration was from a Latin-1 encoded MySQL database to a UTF-8 encoded database. By default, MySQL and the JDBC driver both use Latin-1 encoding (or derive it from the system character set), so you'd better make sure your database is using UTF-8 if you want, for example, Chinese users to be able to store their data. I recommend doing this on a per-table basis, which is easy if you export the schema with Hibernate: just add an extension to your dialect. Also make sure you set characterEncoding=UTF-8 on your JDBC connection string to initialize the SQL session properly. Note that the useUnicode=true switch is not necessary for MySQL 5.x.
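
The following stdlib-only snippet (an illustration, not part of the migration code) shows why the session character set matters: UTF-8 bytes decoded with the wrong Latin-1 charset turn into mojibake, although the raw bytes survive because Latin-1 maps all 256 byte values one-to-one:

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {

    public static void main(String[] args) {
        String original = "Grüße 中文";

        // A UTF-8 byte stream read back with the wrong (Latin-1) charset
        // produces mojibake instead of the original characters
        byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
        String mangled = new String(utf8Bytes, StandardCharsets.ISO_8859_1);
        System.out.println(mangled); // mojibake, starts with "GrÃ¼"

        // Latin-1 round-trips every byte value, so the data can still be
        // recovered by re-encoding and decoding with the right charset
        String recovered = new String(
            mangled.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
        System.out.println(original.equals(recovered)); // prints true
    }
}
```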

The problem I had was the seamframework.org production database, which was Latin-1 encoded when it was created a year ago. I'd been pushing the migration back because we never had any issues with it, and the manual migration with mysqldump and recode turned out not to work for me (there are some instructions for this if you want to try).

The second migration I was looking at was a migration from MySQL to PostgreSQL, for development and testing purposes. Now, many people use mysqldump for this, then fiddle about with its many command-line options and switches (make it ANSI-compatible SQL, damn you!) and then close their eyes and pray when they import the dump into Postgres. Well, that didn't work in my case because mysqldump exports bit-typed columns as raw binary. You can't make it export something like true or false, or anything else you can import into a Postgres boolean type. The actual problem here is that MySQL (just like the mighty Oracle) doesn't support a true boolean datatype, and that Hibernate defaults to creating a bit column for a java.lang.Boolean mapping. In retrospect, Hibernate should probably not do this on MySQL, for portability reasons, and use a tinyint(1) mapping instead; on the other hand, it's fine if you always stay on MySQL.
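
To illustrate the mismatch, here's a trivial, hypothetical conversion step (not something mysqldump offers): the raw byte MySQL stores for a bit(1) column has to become a boolean literal before Postgres will accept it:

```java
public class BitToBoolean {

    // MySQL dumps a bit(1) column as a raw byte; Postgres expects a
    // boolean literal. The class and method here are illustrative only.
    public static String toBooleanLiteral(byte bitValue) {
        return bitValue != 0 ? "true" : "false";
    }

    public static void main(String[] args) {
        System.out.println(toBooleanLiteral((byte) 1)); // prints true
        System.out.println(toBooleanLiteral((byte) 0)); // prints false
    }
}
```

DBUnit sidesteps this entirely because it reads column values through JDBC as typed data instead of copying raw bytes from a dump file.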

So mysqldump didn't work in either case, and I had to find another solution. I solved it with DBUnit and a simple class:

import org.dbunit.database.IDatabaseConnection;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.QueryDataSet;
import org.dbunit.operation.DatabaseOperation;

import java.sql.Connection;
import java.sql.DriverManager;

public class Migration {

    public static final String[] TABLES = new String[]{ "FOO", "BAR", "BAZ" };

    public static void main(String[] args) throws Exception {
        System.out.println("Running Migration...");

        Class.forName("com.mysql.jdbc.Driver");
        //Class.forName("org.postgresql.Driver");

        Connection exportConnection = DriverManager.getConnection(
            "jdbc:mysql://localhost/mydb", 
            "johndoe", 
            "secret"
        );
        IDatabaseConnection exportDatabaseConnection = new DatabaseConnection(exportConnection);

        Connection importConnection = DriverManager.getConnection(
            "jdbc:mysql://localhost/mytarget?characterEncoding=UTF-8&sessionVariables=FOREIGN_KEY_CHECKS=0", 
            "johndoe", 
            "secret"
        );
        IDatabaseConnection importDatabaseConnection = new DatabaseConnection(importConnection);

        for (String table : TABLES) {
            System.out.println("Migrating table: " + table);
            QueryDataSet exportDataSet = new QueryDataSet(exportDatabaseConnection);
            exportDataSet.addTable(table, "SELECT * FROM " + table);
            DatabaseOperation.INSERT.execute(importDatabaseConnection, exportDataSet);
        }

        exportDatabaseConnection.close();
        importDatabaseConnection.close();
        System.out.println("Migration complete");

    }
}

This is the code I used to migrate from MySQL Latin-1 to MySQL UTF-8 encoding. For the PostgreSQL migration, uncomment the driver and use a different import JDBC URL. Make sure you disable foreign key checks for the importing SQL session, as you don't control the order in which tables and rows are exported and imported.

back to top