In Relation To Hardy Ferentschik

It has been a while since the first alpha release of Validator 5.1, but as the saying goes: haste makes waste :-) There is a lot going on in the Hibernate universe and over the last few months we have been especially busy with projects like Search and OGM. Not to mention the new Hibernate website. Sigh, if only we had more contributors lending a hand (hint, hint).

Nevertheless, we found time to also work on Validator and the result is Hibernate Validator 5.1.0.Beta1 with some nice new features and bug fixes. The most notable feature is the introduction of @UnwrapValidatedValue and the corresponding ValidatedValueUnwrapper SPI (see HV-819). The idea is that in some cases the value to validate is contained in a wrapper type, and without unwrapping support one would have to add custom ConstraintValidator implementations for each combination of wrapper type and constraint. Think Java 8 Optional or JavaFX Property types. For example, in JavaFX you could have:

@NotBlank
@UnwrapValidatedValue
private Property<String> name = new SimpleStringProperty( "Foo" );
Here the intention is to apply the @NotBlank constraint to the string value, not the Property instance. By annotating name with @UnwrapValidatedValue the Validator engine knows that it has to unwrap the value prior to validation. To be able to do this, you also need to register an implementation of ValidatedValueUnwrapper which specifies how this unwrapping has to happen. You can do this in validation.xml via the provider-specific property hibernate.validator.validated_value_handlers, or programmatically via:
Validator validator = Validation.byProvider( HibernateValidator.class )
        .configure()
        .addValidatedValueHandler( new PropertyValueUnwrapper() )
        .buildValidatorFactory()
        .getValidator();
PropertyValueUnwrapper is in this case an implementation of ValidatedValueUnwrapper which tells the Validator engine how to unwrap a Property type and of which type the validated value is. The latter is needed for constraint validator resolution. For more information refer to the online documentation.
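
For illustration, here is a minimal sketch of what such a PropertyValueUnwrapper could look like. It assumes the two methods of the ValidatedValueUnwrapper contract - one unwrapping the value, one reporting the wrapped type - so double-check against the Javadoc of your Validator version:

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

import javafx.beans.property.Property;

import org.hibernate.validator.spi.valuehandling.ValidatedValueUnwrapper;

public class PropertyValueUnwrapper extends ValidatedValueUnwrapper<Property<?>> {

	@Override
	public Object handleValidatedValue(Property<?> value) {
		// return the wrapped value the constraint is actually applied to
		return value == null ? null : value.getValue();
	}

	@Override
	public Type getValidatedValueType(Type valueType) {
		// Property<String> -> String; used for constraint validator resolution
		if ( valueType instanceof ParameterizedType ) {
			return ( (ParameterizedType) valueType ).getActualTypeArguments()[0];
		}
		return Object.class;
	}
}

When registering the handler via validation.xml instead, the provider-specific property mentioned above takes the fully qualified class name, along these lines (com.acme is a placeholder package):

<property name="hibernate.validator.validated_value_handlers">com.acme.PropertyValueUnwrapper</property>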

What else is worth mentioning? Thanks to Brent Douglas for profiling Hibernate Validator and detecting some potential memory leaks - see HV-838 and HV-842. They are fixed now.

Thanks also to Victor Rezende dos Santos and Shin Sang-Jae. Victor found a problem with the Brazilian CPF constraint which led to its refactoring (HV-808), as well as to the deprecation of @ModCheck and the introduction of dedicated @Mod10 and @Mod11 constraints as replacements. Shin, on the other hand, provided a Korean translation of the ValidationMessages properties bundle.

Last but not least, we also improved the memory footprint by reducing the amount of memory consumed by metadata (HV-589).

In total 29 issues were resolved. Not bad in my opinion, and definitely worth upgrading to Validator 5.1.

Maven artefacts are on the JBoss Maven repository under the GAV org.hibernate:hibernate-validator:5.1.0.Beta1 and distribution bundles are available on SourceForge. Send us feedback either via the Hibernate Validator forum or on stackoverflow using the hibernate-validator tag.

Enjoy!

Hibernate Metamodel Generator 1.3.0.Final


Guillaume pointed out recently on the hibernate-dev mailing list that it has been a very long time since the last Hibernate JPA Metamodel Generator release and that it is time to do another one. And we listened. We just released 1.3.0.Final. As usual, you can get the artefacts from the JBoss Maven repo or from SourceForge.

Don't say that we are not listening to our users and contributors. There might have been, however, some self-interest in Guillaume's request. After all, he reported three of the 10 resolved issues and even contributed patches to fix them. Thanks for your help! To single out one of the 10 issues: METAGEN-92. Its fix makes sure that the annotation processor now also works with JPA 2.1 configuration files (persistence.xml and orm.xml).

In case you forgot how to set up the annotation processor in Maven, have a look at this post and don't forget to report any issues in the Hibernate JIRA project METAGEN.
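
As a minimal sketch: with a Java 6+ compiler it is enough to put the processor jar on the compile classpath, since javac discovers annotation processors automatically via the service loader. In Maven that boils down to a single dependency:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-jpamodelgen</artifactId>
    <version>1.3.0.Final</version>
    <scope>provided</scope>
</dependency>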

Enjoy!

With the release of Hibernate Validator 5.1.0.Alpha1 just out, it is time to pick up on the Bean Validation 1.1 spotlight series and cast a light onto method validation. Long time Hibernate Validator users know that method validation has been part of Validator since version 4.2, however only as a provider-specific feature. Bean Validation 1.1 now makes method validation part of the specification.

But what is method validation? Method validation is the ability to place constraint annotations onto methods or constructors and/or their parameters. Here is an example:

public class User {

    public User(@NotNull @Size(max = 40) String firstName, @NotNull @Size(max = 40) String lastName) {
        // ...
    }

    @NotNull
    @Valid
    public Order placeOrder(@NotNull @Valid Item item) {
        // ...
    }

    @PasswordsMatch
    public void resetPassword(@NotNull String password, @NotNull String passwordConfirmation) {
        // ...
    }

    // ...
}

The constraints describe that whenever the constructor of User is called, firstName and lastName cannot be null and neither can exceed 40 characters. Also, whenever the user places an order, the provided Item instance cannot be null and must be valid. Valid in this context means that the instance itself passes the validation of all its property constraints (constraints placed on fields and getters) and class level constraints. The returned Order instance of the placeOrder() call is a non-null instance which also passes all bean constraints for this type. Last but not least, if the user resets his password, the new password and the password confirmation cannot be null and they have to match. The latter is expressed via a so-called cross parameter constraint - @PasswordsMatch.

Overall, the shown constraints define the pre- and postconditions for a given method or constructor call. This is commonly known as programming by contract and makes the specification of these constraints part of the method. It also avoids code duplication, since these checks don't have to be implemented redundantly in the body of the methods.

But how do these constraints get validated then? For this purpose Bean Validation offers the ExecutableValidator, which can be retrieved from a Validator instance via Validator.forExecutables(). This ExecutableValidator offers the methods to validate method/constructor parameters and return values. This in itself, however, is not very helpful for an application developer. She would have to write the required code to handle the validation herself, which is not a trivial task. Instead the intention is that frameworks and libraries provide the integration for the application developer via some sort of AOP, interceptor or proxy based approach. That way, method validation occurs transparently upon invocation of constrained methods or constructors, throwing a ConstraintViolationException whenever one or more constraints are violated. An example of this type of integration is the Java EE 7 platform.
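
To give an idea of the API, here is a minimal sketch validating the parameters of the placeOrder() method from the User example above, the way a framework integrator would (passing null for the @NotNull item parameter):

import java.lang.reflect.Method;
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.executable.ExecutableValidator;

// ...
Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
ExecutableValidator executableValidator = validator.forExecutables();

User user = new User( "Jane", "Doe" );
Method placeOrder = User.class.getMethod( "placeOrder", Item.class );

// one violation is reported, for the @NotNull constraint on the item parameter
Set<ConstraintViolation<User>> violations =
		executableValidator.validateParameters( user, placeOrder, new Object[] { null } );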

In Java EE 7 all CDI managed beans are automatically method validated. All you have to do is to place constraints on methods or constructors and their parameters. CDI will then, via its interceptor capabilities, evaluate all method validation constraints. In this context it is important to know that Bean Validation does not validate getters by default (a getter being a method whose name starts with get, has a return type and no parameters, or starts with is, returns boolean and has no parameters). This is to avoid conflicts with the validation of bean constraints. This behaviour can be configured via @ValidateOnExecution or in validation.xml using default-validated-executable-types. The specification contains all the details and also gives some examples.
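
As an example, a single getter can be opted into method validation like this (SomeBean and getState() are made up for illustration):

import javax.validation.constraints.NotNull;
import javax.validation.executable.ExecutableType;
import javax.validation.executable.ValidateOnExecution;

public class SomeBean {

	// getters are not validated by default; this annotation opts the getter in
	@NotNull
	@ValidateOnExecution(type = ExecutableType.GETTER_METHODS)
	public String getState() {
		// ...
		return null;
	}
}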

Last but not least, another caveat. When using method validation constraints within class hierarchies, the specification demands that the Liskov substitution principle holds. Formally, this means that a method's preconditions (as represented by parameter constraints) must not be strengthened and its postconditions (as represented by return value constraints) must not be weakened in sub-types. Concretely, it implies that in sub-types no parameter constraints can be declared on overridden or implemented methods. Parameters cannot be marked for cascaded validation either. Instead, all parameter constraints must be defined on the root method of the hierarchy. The Bean Validation implementation will throw a ConstraintDeclarationException if the Liskov substitution principle is violated. The complete set of rules can be found in Method constraints in inheritance hierarchies of the specification.
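
A minimal sketch of a declaration that would be rejected (the service types are made up for illustration):

public interface OrderService {
	void placeOrder(String customerCode, Item item);
}

public class DefaultOrderService implements OrderService {

	// illegal: the @NotNull parameter constraint strengthens the precondition
	// of the implemented method, so a ConstraintDeclarationException is thrown
	@Override
	public void placeOrder(@NotNull String customerCode, Item item) {
		// ...
	}
}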

I hope this post helped to shine some light on the biggest new feature of Bean Validation 1.1 - method validation. If you have questions, leave a comment, contact us via the forum or chat with us on IRC.

Happy validating!

P.S. We will talk about the changes to the metadata and javax.validation.Path API related to method validation in another post. We will also have a closer look at the details of implementing cross parameter constraints. Stay tuned.

Hibernate Search 4.4.0.Alpha1

Tagged as Hibernate Search

It has been quite some time since I have done my last Hibernate Search release announcement, so I am especially happy to get the 4.4 release train rolling. Hibernate Search 4.4.0.Alpha1 is now out and ready for download, either from the JBoss Maven Repository under the GAV org.hibernate:hibernate-search:4.4.0.Alpha1 or via SourceForge. The release contains fixes for 16 issues which you can inspect more closely in the Jira release notes.

Besides the usual bug fixes and internal changes, I want to highlight the latest new feature of Hibernate Search, the Metadata API (HSEARCH-436). Looking at SearchFactory you will find two new methods:

public interface SearchFactory {

    // ...

    /**
     * Returns the set of currently indexed types.
     *
     * @return the set of currently indexed types. If no types are indexed the empty set is returned.
     */
    Set<Class<?>> getIndexedTypes();

    /**
     * Returns a descriptor for the specified entity type describing its indexed state.
     *
     * @param entityType the entity for which to retrieve the descriptor
     *
     * @return a non {@code null} {@code IndexedTypeDescriptor}. This method can also be called for non indexed types.
     *         To determine whether the entity is actually indexed {@link org.hibernate.search.metadata.IndexedTypeDescriptor#isIndexed()} can be used.
     *
     * @throws IllegalArgumentException in case {@code entityType} is {@code null}
     */
    IndexedTypeDescriptor getIndexedTypeDescriptor(Class<?> entityType);
}

The former, getIndexedTypes(), allows you to determine which indexed types the SearchFactory knows about. The latter, getIndexedTypeDescriptor(Class<?>), is the entry point into a descriptor based meta model API. It allows you to determine the configuration aspects of a given type. Amongst other things, the type descriptor contains information about the static and dynamic class boost, whether the index is sharded and, of course, which properties of the type are indexed and which Lucene fields they produce. The property and field information is contained - surprise, surprise - in PropertyDescriptor and FieldDescriptor instances. Access to these descriptors is via getters in IndexedTypeDescriptor.
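
A short sketch of how the two methods play together, using only the methods shown above (plus the getIndexedField() lookup discussed below):

// enumerate all indexed types known to the SearchFactory
for ( Class<?> indexedType : searchFactory.getIndexedTypes() ) {
	IndexedTypeDescriptor typeDescriptor = searchFactory.getIndexedTypeDescriptor( indexedType );
	if ( typeDescriptor.isIndexed() ) {
		// look up the descriptor of a single Lucene field by name
		FieldDescriptor fieldDescriptor = typeDescriptor.getIndexedField( "snafu" );
		// ...
	}
}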

Last but not least, there are a couple of things worth noticing around FieldDescriptor. First, there are really two field related descriptors, namely FieldSettingsDescriptor and FieldDescriptor (extending FieldSettingsDescriptor). The split might not seem obvious at first glance, but it makes it possible to separate the actual Lucene field information from additional Search specific field configuration options (for example whether null values should be indexed or not).

Secondly, there are multiple subtypes of FieldSettingsDescriptor, for example NumericFieldSettingsDescriptor. This keeps information specific to a given field type in a single class; in the case of NumericFieldSettingsDescriptor it is the precisionStep. To access sub-type specific information, a type check followed by an unwrap is needed. Something like:

// ...
IndexedTypeDescriptor typeDescriptor = searchFactory.getIndexedTypeDescriptor( Foo.class );
FieldDescriptor fieldDescriptor = typeDescriptor.getIndexedField( "snafu" );
if ( FieldSettingsDescriptor.Type.NUMERIC.equals( fieldDescriptor.getType() ) ) {
    NumericFieldSettingsDescriptor numericFieldSettingsDescriptor = fieldDescriptor.as( NumericFieldSettingsDescriptor.class );
    int precisionStep = numericFieldSettingsDescriptor.precisionStep();
    // ...
}
// ...

Feedback related to this API is very welcome. It helps us fine-tune the API prior to the final Search 4.4 release. Just contact us via the mailing list or on IRC.

Enjoy!

Today we are happy to announce the 5.0.1.Final release of Hibernate Validator. In case you are wondering what happened with 5.0.0.Final - it has not gone missing. In fact it was released on the 11th of April.

The long story is that we had to release 5.0.0.Final to meet the Java EE 7 release schedule. At the time the functionality was complete, but the documentation was not. Given the amount of changes introduced by Bean Validation 1.1, we felt it was important to hold off announcing Hibernate Validator 5 until the documentation was up to scratch. That's the case with 5.0.1.Final. Not only does this release offer a complete Bean Validation 1.1 implementation, it also includes updated online documentation.

The highlights of Hibernate Validator 5 are (with pointers into the freshly baked documentation):

  • Standardized method validation of parameters and return values. This was Hibernate Validator 4 specific functionality, but it has now been standardized as part of Bean Validation 1.1.
  • Integration with Contexts and Dependency Injection (CDI). There are default ValidatorFactory and Validator instances available for injection and you can now use @Inject in ConstraintValidator implementations out of the box (see the sketch after this list). Requested custom implementations (via validation.xml) of resources like ConstraintValidatorFactory, MessageInterpolator, ParameterNameProvider or TraversableResolver are also provided as managed beans. Last but not least, the CDI integration offers transparent method validation for CDI beans.
  • Group conversion
  • Error message interpolation using EL expressions
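
For instance, obtaining a Validator in a CDI bean boils down to a simple injection; a minimal sketch (OrderService and Order are made-up application classes):

import java.util.Set;

import javax.inject.Inject;
import javax.validation.ConstraintViolation;
import javax.validation.Validator;

public class OrderService {

	// the container exposes a default Validator instance for injection
	@Inject
	private Validator validator;

	public void checkOrder(Order order) {
		Set<ConstraintViolation<Order>> violations = validator.validate( order );
		// ...
	}
}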

We are also planning to create a little blog series introducing these new features in more detail. Stay tuned!

For now have a look at the Getting Started section of the documentation to see what you need to use Hibernate Validator 5. Naturally you will need the new Bean Validation 1.1 dependency, but you will also need an EL implementation - either provided by a container or added to your Java SE environment. Additional migration pointers can also be found in the Hibernate Validator migration guide.
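
For Java SE, that could for instance be the GlassFish EL artifacts; a sketch of the Maven dependencies (the version is indicative - check the Getting Started section for the currently recommended one):

<dependency>
    <groupId>javax.el</groupId>
    <artifactId>javax.el-api</artifactId>
    <version>2.2.4</version>
</dependency>
<dependency>
    <groupId>org.glassfish.web</groupId>
    <artifactId>javax.el</artifactId>
    <version>2.2.4</version>
</dependency>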

You find the full release notes as usual on Jira. Maven artefacts are on the JBoss Maven repository under the GAV org.hibernate:hibernate-validator:5.0.1.Final and distribution bundles are available on SourceForge.

We are looking forward to getting some feedback either on the Hibernate Validator forum or on stackoverflow using the hibernate-validator tag.

Enjoy!

With the endgame for Bean Validation 1.1 in full swing, we want to make sure that everyone has the chance to test and integrate early. So without further ado:

Bean Validation TCK 1.1.0.CR2

Hibernate Validator 5.0.0.CR2

The TCK work has been about increasing test coverage - see BVTCK-42 - whereas the Hibernate Validator work was focused on these 17 issues. Most notable is the rework of the CDI integration via a portable extension (HV-723, HV-740).

Enjoy!

Playing catch-up with last week's Bean Validation specification release (1.1.0.Beta3), we are happy to make the following releases available as well:

Bean Validation TCK 1.1.0.Beta3

  • Maven artefacts on the JBoss Maven repository under the GAV org.hibernate.beanvalidation.tck:beanvalidation-tck-tests:1.1.0.Beta3
  • Zip and tar bundles on SourceForge

Hibernate Validator 5.0.0.Beta1

Given that the specification addressed 38 issues in its latest release, we were not quite able to sync the TCK and RI completely. There are still some missing TCK tests and the RI has missing features as well. For example, the XML configuration of method validation is still work in progress (see HV-373).

The TCK addresses 6 issues and Hibernate Validator 36. In case you are wondering, the main work in the TCK was the addition of tests for new Bean Validation 1.1 features. This was addressed in BVTCK-32, which alone has 12 pull requests. Feel free to review the tests to verify that we are on the right track ;-)

Most notably, we are behind on documentation though. Both the TCK docs and the Hibernate Validator docs could use some love. Ever wanted to contribute, but did not know how? Here is your chance! Contact us via email or use the issue tracker.

Last but not least, here is a little teaser about what is new in this release. As of HV-676 it is now possible to use the Unified Expression Language (UEL) in constraint message descriptors. We offered something in this direction before with ValueFormatterMessageInterpolator (which is, by the way, deprecated now), but it only offered a way to format the validated value. Now you can interpolate the constraint annotation parameters as well, navigate objects via the dot notation, and even have conditional messages. Here are a couple of examples:


    # ternary operator for conditional DecimalMax constraints - see also HV-256
    javax.validation.constraints.DecimalMax.message = must be less than ${inclusive == true ? 'or equal to ' : ''}{value}

    # a custom date range annotation with custom formatted validated date
    com.acme.DateRange.message = the specified date ${formatter.format('%1$tY-%1$tm-%1$td', validatedValue)} lies not between ${formatter.format('%1$tY-%1$tm-%1$td', min)} and ${formatter.format('%1$tY-%1$tm-%1$td', max)}

At least one more Hibernate Validator release is planned for next week. So stay tuned.

Enjoy!

Inspired by these questions on the Search forum and stackoverflow, I decided to blog about different solutions to the problem using only the tools available in Search right now (4.1.1.Final). But let's start with the problem.

The Problem

How can I define a custom analyzer for a field added in a custom class bridge? Let's look at an example. Below, the class Foo defines a custom bridge, FooBridge. How can I specify a custom analyzer for the field added by this bridge?

@Entity
@Indexed
@ClassBridge(impl = FooBridge.class)
public static class Foo {
	@Id
	@GeneratedValue
	private Integer id;
}

Solution 1

The straightforward approach looks something like this.

@Entity
@Indexed
@ClassBridge(name = "myCustomField", impl = FooBridge.class, analyzer = @Analyzer(impl = MyCustomAnalyzer.class))
public static class Foo {
	@Id
	@GeneratedValue
	private Integer id;
}

This works fine, provided the field you are adding in FooBridge has the name myCustomField. In case you are adding a field (or even multiple fields) with a different name, this approach does not work anymore. In Lucene, analyzers are applied per field, identified by field name. Since we cannot tell from your @ClassBridge definition which fields you are adding, there is no way of registering and applying the right analyzers. See also the related issue HSEARCH-904.

Solution 2

In the second solution you manage the analyzers on your own. The relevant part is in the bridge implementation:

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

import org.hibernate.search.bridge.FieldBridge;
import org.hibernate.search.bridge.LuceneOptions;

public class FooBridge implements FieldBridge {

	@Override
	public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
		// create the field with an empty value; the actual content is
		// provided via the token stream below
		Field field = new Field(
				name,
				"",
				luceneOptions.getStore(),
				luceneOptions.getIndex(),
				luceneOptions.getTermVector()
		);
		try {
			String text = "whatever you want to index";
			// instantiate and apply the custom analyzer manually
			MyCustomAnalyzer analyzer = new MyCustomAnalyzer();
			field.setTokenStream( analyzer.reusableTokenStream( name, new StringReader( text ) ) );
			document.add( field );
		}
		catch ( IOException e ) {
			// error handling
		}
	}
}

As you can see, you need to instantiate your analyzer yourself and then set the token stream for the field you want to add. This will work, but it does not integrate with @AnalyzerDef, which is often used to define and reuse analyzers globally in your application. Let's have a look at such a solution next.

Solution 3

Let's start directly with the code:

	@Entity
	@Indexed
	@AnalyzerDefs({
			@AnalyzerDef(name = "analyzer1",
					tokenizer = @TokenizerDef(factory = MyFirstTokenizer.class),
					filters = {
							@TokenFilterDef(factory = MyFirstFilter.class)
					}),
			@AnalyzerDef(name = "analyzer2",
					tokenizer = @TokenizerDef(factory = MySecondTokenizer.class),
					filters = {
							@TokenFilterDef(factory = MySecondFilter.class)
					}),
			@AnalyzerDef(name = "analyzer3",
					tokenizer = @TokenizerDef(factory = MyThirdTokenizer.class),
					filters = {
							@TokenFilterDef(factory = MyThirdFilter.class)
					})
	})
	@ClassBridge(impl = FooBridge.class)
	@AnalyzerDiscriminator(impl = FooBridge.class)
	public static class Foo {
		@Id
		@GeneratedValue
		private Integer id;
	}

	public static class FooBridge implements Discriminator, FieldBridge {

		public static final String[] fieldNames = new String[] { "field1", "field2", "field3" };
		public static final String[] analyzerNames = new String[] { "analyzer1", "analyzer2", "analyzer3" };

		@Override
		public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
			for ( String fieldName : fieldNames ) {
				Fieldable field = new Field(
						fieldName,
						"Your text to analyze and index",
						luceneOptions.getStore(),
						luceneOptions.getIndex(),
						luceneOptions.getTermVector()
				);
				field.setBoost( luceneOptions.getBoost() );
				document.add( field );
			}
		}

		public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
			for ( int i = 0; i < fieldNames.length; i++ ) {
				if ( fieldNames[i].equals( field ) ) {
					return analyzerNames[i];
				}
			}
			return null;
		}
	}

A lot is going on here and the example shows many useful features of Search. First, @AnalyzerDefs is used to declaratively define and build your analyzers. These analyzers are globally available under their given names and can be reused across the application (see also SearchFactory#getAnalyzer(String)). You build an analyzer by first specifying its tokenizer and then a list of filters to apply. Have a look at named analyzers in the online documentation for more information.
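
As a quick illustration of the reuse aspect, a named analyzer can be retrieved at runtime and, for example, be handed to a query parser. A minimal sketch (fullTextSession is assumed to be an open FullTextSession; the Version constant matches a Lucene 3.x based Search version):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.util.Version;

// ...
Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer( "analyzer1" );
QueryParser parser = new QueryParser( Version.LUCENE_35, "field1", analyzer );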

Next the example uses @ClassBridge(impl = FooBridge.class) to define the custom class bridge. Nothing special there. Which fields you are adding in the implementation is up to you.

Last but not least, we have @AnalyzerDiscriminator(impl = FooBridge.class). This annotation is normally used for dynamic analyzer selection based on the entity state. However, it can easily be used in this context as well. To make things easy, I let the field bridge directly implement the required Discriminator interface. Discriminator#getAnalyzerDefinitionName will now be called for each field being added to the index. You also get passed the entity itself and the value (in case the field bridge is defined on a property), but these are not important in this case. All that remains to be done is to return the right analyzer name based on the passed field name.

Solution 3 may seem longer, but it is a declarative approach and it allows you to reuse analyzer configurations. It also shows that the current API might still have its shortcomings (e.g. HSEARCH-904), but using the tools already available there are often workarounds.

Happy analyzing,

Hardy

First Alpha release of Hibernate Validator 5

Tagged as Hibernate Validator

Good news for all of you waiting for an early version of Hibernate Validator 5 in order to experiment with the new Bean Validation 1.1 (JSR 349) features. Hibernate Validator 5.0.0.Alpha1 is now available for download in the JBoss Maven Repository under the GAV org.hibernate:hibernate-validator:5.0.0.Alpha1 or via SourceForge.

The focus of the release was the alignment of Hibernate Validator with the first early draft of Bean Validation 1.1 (1.1.0.Alpha1). For this reason the Hibernate Validator changelog circles around HV-571, which served as a placeholder for those specification changes. Of course the biggest change was the formalization of method validation, but there are other interesting new features as well. For example:

  • In conjunction with method validation it is worth having a look at the ParameterNameProvider interface, which helps to identify the parameter of a failing parameter constraint in the resulting ConstraintViolation. The Path API was also extended to provide additional meta information about a failing constraint. You can get the element descriptor via Path.Node#getElementDescriptor (of course you need to iterate to the leaf node first). The element descriptors themselves got extended by ConstructorDescriptor, MethodDescriptor, ParameterDescriptor and ReturnValueDescriptor.
  • Support for container injection in ConstraintValidator. Check out the new life cycle methods ValidatorFactory#close and ConstraintValidatorFactory#releaseInstance in this context (a short sketch follows after this list).
  • Expose settings defined in XML in the Configuration API. Refer to the new interface javax.validation.ConfigurationSource, but be aware that there is a follow up issue (BVAL-292) as well.
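
To illustrate the new life cycle methods mentioned above, a minimal sketch of closing a factory once it is no longer needed:

import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;

// ...
ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
try {
	Validator validator = factory.getValidator();
	// ... validate as usual
}
finally {
	// new life cycle method: lets the provider release resources,
	// e.g. ConstraintValidator instances created via the factory
	factory.close();
}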

The easiest way to find out more about these new interfaces and classes is to have a look at the Hibernate Validator test suite where you will find tests for all the mentioned new features.

There are more proposals which are still under active discussion. Feel free to contribute :-)

Given that this is an early draft there will be further changes to the API. For this reason Hibernate Validator 5.0.0.Alpha1 is not a production release. On the other hand, for integrators and tool developers it makes a lot of sense to have an early evaluation to see whether we are on the right track.

Enjoy!

Updated OGM kitchensink example

Tagged as Hibernate OGM

Last year I gave an introduction to OGM and OpenShift at JUDCon London using a modified version of the kitchensink quickstart. Time has passed and it was time to give the demo a revamp.

Thanks to Sanne, the code got updated to use the latest app server (AS 7.1.1.Final Brontes) and naturally the latest OGM and Search versions. Another change is that Infinispan is now not only used to persist the entity data via OGM, but also to store the Lucene indexes via the Infinispan directory provider (see the persistence.xml of the example project). Sanne also added a bunch of new tests showcasing Arquillian and different ways to bundle the application. Definitely worth checking out!

Personally, I had a look at the project setup and made some changes there. The original demo assumed you had a local app server installation as a prerequisite on your machine. It then used the jboss-as-maven-plugin to deploy the webapp. Unfortunately, this plugin does not allow me to start and stop the server, and it seems redundant to require a local install if the Arquillian tests already download an AS instance (yes, I could run the tests against the local instance as well, but think, for example, of continuous integration where I want to manage/control the WHOLE ENVIRONMENT).

In the end I decided to give the cargo plugin another go. A lot has happened there; it not only supports JBoss 7.x, it also offers a so-called artifact installer which allows downloading the app server as a managed Maven dependency. The relevant pom.xml settings look like this:

          ...
          <plugin>
                <groupId>org.codehaus.cargo</groupId>
                <artifactId>cargo-maven2-plugin</artifactId>
                <version>1.2.1</version>
                <configuration>
                    <container>
                        <containerId>jboss71x</containerId>
                        <artifactInstaller>
                            <groupId>org.jboss.as</groupId>
                            <artifactId>jboss-as-dist</artifactId>
                            <version>7.1.1.Final</version>
                        </artifactInstaller>
                    </container>
                </configuration>
                <executions>
                    <execution>
                        <id>install-container</id>
                        <phase>initialize</phase>
                        <goals>
                            <goal>install</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.codehaus.gmaven</groupId>
                <artifactId>gmaven-plugin</artifactId>
                <version>1.3</version>
                <executions>
                    <execution>
                        <id>copy-modules</id>
                        <phase>initialize</phase>
                        <goals>
                            <goal>execute</goal>
                        </goals>
                        <configuration>
                            <source>
                                def toDir = new File(project.properties['jbossTargetDir'], 'modules')
                                def fromDir = new File(project.basedir, '.openshift/config/modules')
                                log.info('Copying OGM module from ' + fromDir + ' to ' + toDir)
                                ant.copy(todir: toDir) {
                                    fileset(dir: fromDir) {
                                        exclude(name: 'README')
                                    }
                                }
                            </source>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            ...

I am using cargo:install in the initialize phase to install the app server into the target directory. This way I can install a custom module (via the gmaven plugin) before the tests get executed and/or before I start the application via a simple:

    $ mvn package cargo:run

Neat, right?

You can find all the code on GitHub in the ogm-kitchensink repository. The README gives more information about the main Maven goals.

Enjoy, Hardy
