In Relation To Java EE

Let's celebrate open source and sharing.

Devoxx France pass up for grabs

Devoxx France is offering a pass: I am only the messenger :) One of you will win it. The rule is simple.

Contribute to an open source project (code, doc, etc.) between now and Sunday 29 March 2015 and tweet the link to the pull request or patch to @Hibernate.

  • Which project? Any project, as long as the license is open source. So it is not limited to Hibernate.
  • How will the winner be chosen? The contribution I like best will be chosen. Big bonus if it is your first contribution to that project: let's mix things up :)
  • But if I contribute to a Red Hat project, do I have a better chance? No, all open source projects are (free and) equal.
  • I am a Red Hat employee, can I win? No. Contact me directly.
  • Anything else? That's all.

Devoxx France takes place from 8 to 10 April in Paris (Palais des Congrès). Among other things, I will be speaking there about Object Mappers in NoSQL and hosting a Hibernate BoF.

Now get to work!

Free pass for Devoxx France

Devoxx France is giving away a free pass through me. I am just the messenger :) One of you will win it. The rule is simple.

Contribute to an open source project (code, doc, etc) between now and Sunday evening 29th of March 2015 and tweet the link to the pull request or the patch to @Hibernate.

  • Which project? Any project released under an open source license. Not limited to Hibernate.
  • How will you choose the winner? The contribution I like best will be chosen. Extra bonus if it's your first contribution to that project: let's mix things up!
  • But if I contribute to a Red Hat project, do I have a better chance? No, all open source projects are equal.
  • I am a Red Hat employee, can I win? No. Contact me directly instead.

Devoxx France happens in Paris from the 8th to the 10th of April. I will be speaking about a few topics including Object Mappers in NoSQL, and hosting a Hibernate BoF.

Now go contribute!

Legal PS: this pass is given away by Devoxx France and by Emmanuel Bernard as an individual and free human being, not by Red Hat. In short, I do what I want, with my hair.

PicketLink Deep Dive - Part 2

Posted by Shane Bryzak    |    May 9, 2014    |    Tagged as Java EE PicketLink

Welcome to part 2 of the PicketLink Deep Dive series. In case you missed part 1, you can find it here. In part 1 you’ll also find links to helpful resources, such as the PicketLink documentation, distribution binaries, source code and more.

In this issue, we’re going to be taking a closer look at PicketLink’s support for partitions. Let’s start though by establishing what exactly partitions are in the scope of PicketLink and how they are used.

What is a Partition?

In the simplest definition, partitions in PicketLink are used to segregate identities (such as users, groups and roles) from each other. Why would you want to do this, you wonder? Well one common use case for such a feature would be for an application that serves multiple clients/companies, with each client having its own distinct set of user accounts. Another use case might be when you wish to use PicketLink as an IDP (Identity Provider) that services multiple applications, with each application having a distinct set of identities. Whenever you need to support the creation of distinct “sets” of users or other identity types, partitions are there to help you.
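The segregation described above can be pictured as keying each identity set by its partition. The following toy sketch (plain Java, no PicketLink types; all names are illustrative) shows two tenants each owning their own, independent "jsmith" account:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of partition-based segregation (illustration only, not the
// PicketLink API): the same login name can exist independently in two partitions.
public class PartitionSketch {

    // partition name -> set of login names belonging to that partition
    private final Map<String, Set<String>> users = new HashMap<>();

    public void addUser(String partition, String loginName) {
        users.computeIfAbsent(partition, p -> new HashSet<>()).add(loginName);
    }

    public boolean userExists(String partition, String loginName) {
        return users.getOrDefault(partition, Collections.emptySet()).contains(loginName);
    }

    public static void main(String[] args) {
        PartitionSketch sketch = new PartitionSketch();
        // Two tenants, each with their own distinct "jsmith" account:
        sketch.addUser("acme", "jsmith");
        sketch.addUser("globex", "jsmith");
        System.out.println(sketch.userExists("acme", "jsmith"));   // true
        System.out.println(sketch.userExists("globex", "jsmith")); // true
    }
}
```

In PicketLink itself the partition is carried by the identity object (via getPartition()) rather than by an external map, but the effect is the same: identities only exist, and are only unique, within their own partition.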

Do I need to use them?

If your application has simple security requirements, such as a single set of user accounts and perhaps a few groups and roles, then you probably don’t need PicketLink’s extended support for partitions. The good news is that you don’t need to do anything special: simply ignore the partition aspect of PicketLink and it shouldn’t ever bother you! (You can probably also skip the rest of this article if you like and just come back for part 3.)

Partitions and the Identity Model

If you read part 1, you may remember how the identity model was described and how each identity object implements the IdentityType interface, as shown in this class diagram:

What we didn’t explain in part 1 was how every IdentityType object must belong to a Partition. This means that every user, group, role, agent, account (or any other identity type) always has a partition object, and its getPartition() method will always return this object (and never null). If you’ve worked with the PicketLink API already you may have noticed these overloaded methods on the PartitionManager interface:

    IdentityManager createIdentityManager() throws IdentityManagementException;
    IdentityManager createIdentityManager(Partition partition) throws IdentityManagementException;

At first glance it might seem that the first of these methods might return a “partitionless” IdentityManager instance, however behind the scenes PicketLink actually returns an IdentityManager for the default partition. We’ll examine this in more detail shortly, but first let’s look at how PicketLink supports different partition types.

Creating a custom partition type

Since the Partition interface cannot be instantiated itself, it is generally up to the developer to provide a concrete partition implementation. PicketLink provides two built-in partition types which you are free to use (see the next section for details), but otherwise it is very simple to create your own partition type. The AbstractPartition abstract base class makes this easy by doing most of the work for us, and may be easily extended to create a custom partition type. Let’s take a look at a really simple example:

@IdentityPartition(supportedTypes = {IdentityType.class})
public class Organization extends AbstractPartition {

    public Organization() {
        this(null);
    }

    public Organization(String name) {
        super(name);
    }
}
That’s all the code we need! We can use this new Organization partition type to create identities within separate Organizations. The only really significant feature worth a special mention in the above code is the @IdentityPartition annotation. This annotation tells PicketLink that the class is a partition class, and also that it allows the storing of all IdentityType types (including any subclasses of IdentityType). If we only wanted to store User objects in the partition then we could instead annotate the class like this:

@IdentityPartition(supportedTypes = {User.class})

It’s also possible to “trim” support for certain identity classes off the hierarchy tree, by specifying them in the unsupportedTypes member. For example, let’s say that we wish to be able to store all identity types in an Organization, except for Roles. The annotation would now look like this:

@IdentityPartition(supportedTypes = {IdentityType.class}, unsupportedTypes = {Role.class})
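The rule these two annotation members imply can be sketched as follows. Note that the method name and the exact semantics here are assumptions for illustration, not PicketLink internals: a type is storable when it is assignable to some supported type and not assignable to any unsupported type.

```java
// Illustration of the supportedTypes/unsupportedTypes filtering rule
// (assumed semantics, not actual PicketLink internals).
public class PartitionTypeFilter {

    public static boolean isTypeSupported(Class<?> type,
                                          Class<?>[] supported,
                                          Class<?>[] unsupported) {
        for (Class<?> u : unsupported) {
            if (u.isAssignableFrom(type)) {
                return false; // explicitly trimmed off the hierarchy tree
            }
        }
        for (Class<?> s : supported) {
            if (s.isAssignableFrom(type)) {
                return true;  // matches a supported branch of the hierarchy
            }
        }
        return false;
    }

    // Stand-ins for the identity hierarchy used in the article's examples:
    interface IdentityType {}
    static class User implements IdentityType {}
    static class Role implements IdentityType {}

    public static void main(String[] args) {
        Class<?>[] supported = { IdentityType.class };
        Class<?>[] unsupported = { Role.class };
        System.out.println(isTypeSupported(User.class, supported, unsupported)); // true
        System.out.println(isTypeSupported(Role.class, supported, unsupported)); // false
    }
}
```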

Finally, since the Partition interface extends AttributedType, we know that it will have a unique identifier and that we are able to also assign it arbitrary attribute values, so with our new Organization partition we can do stuff like this:

Organization org = new Organization("acme");
org.setAttribute(new Attribute<String>("description", "Manufactures anvils and other failure-prone devices"));
partitionManager.add(org);
System.out.println("Created new Organization partition with id: " + org.getId());

Built In Partition Types

PicketLink provides two built-in, optional partition types: Realm and Tier. Both of these classes can be found in the org.picketlink.idm.model.basic package along with the other classes that form the basic identity model. Both partition types are provided for convenience only; there is absolutely no requirement that you use either of them. If you do wish to use them, then here are our guidelines (which you may choose to ignore if you wish):


Realm

The Realm partition type is analogous to the commonly accepted definition of a security realm and is recommended when you need to create a distinct set of users, groups and roles (or other identity types) for restricting access to an application. It supports all identity types.


Tier

A Tier is designed to be used in conjunction with a Realm and is intended for storing only roles or groups (or any other non-Account identity type, i.e. an identity type that, unlike a User, is not capable of authenticating), while the Realm stores the Users. It is intended for storing tier-specific identities; for example, if your application consists of multiple tiers, then each tier can define its own set of roles, which in turn may be assigned certain tier-specific privileges. This way, a user in a separate Realm can easily be assigned one or more Tier-specific roles to give them access to the services provided by that tier.

The Default Partition

As mentioned above, identity management operations performed on an IdentityManager returned by the PartitionManager.createIdentityManager() method are actually performed against the default partition. This default partition is a Realm object named “DEFAULT”. If your PicketLink environment isn’t configured to support partitions, it doesn’t matter: PicketLink transparently handles the default partition without you having to do anything special.

Partitions and Relationships

So far we haven’t mentioned where identity relationships fit into the picture in regards to partitions. By their very nature, relationships differ from identities in that they don’t belong to a specific partition. If you think about this, it makes quite a lot of sense as a relationship is a typed association between two or more identities and those identities may not all exist within the same partition. For example, based on the description of the Realm and Tier partitions above we know that it is possible to have a Role that exists in a Tier, granted (via a Grant relationship) to a User in a Realm.

It is outside the scope of this article to go into detail about how PicketLink determines where to store relationship state, however this might be a good topic for a future deep dive.

Partitions and Multiple Configurations

PicketLink allows multiple configurations to be simultaneously declared, each with its own distinct name. A single configuration may have one or more identity stores configured for storing identity state. By supporting multiple configurations, PicketLink provides control over which backend identity store is used to store the identity state for each partition. This all might sound a little confusing, so let’s illustrate it with a diagram that describes a possible use case for a fictional paper company:

In this example our paper company has two configurations, “Public” and “Internal”. The “Public” configuration has been configured to use a JPA-based identity store which uses a database to store its identity state. The “Internal” configuration uses an LDAP-based identity store which is backed by a corporate LDAP directory. In addition to these two configurations, we also have two realm partitions - “Users” and “Employees”.
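The mapping in this example can be sketched as a simple routing table. This is illustrative only; in PicketLink the association is established through its configuration API rather than a literal map. Each realm resolves to a named configuration, and therefore to that configuration's backing identity store:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the partition -> configuration mapping from the paper-company
// example above (the routing table itself is an illustration, not a
// PicketLink API).
public class ConfigurationRouting {

    private final Map<String, String> partitionToConfig = new HashMap<>();

    public ConfigurationRouting() {
        partitionToConfig.put("Users", "Public");       // JPA-based store (database)
        partitionToConfig.put("Employees", "Internal"); // LDAP-based store (corporate directory)
    }

    // Resolve which configuration (and hence which identity store)
    // handles a given partition.
    public String configurationFor(String partitionName) {
        return partitionToConfig.get(partitionName);
    }
}
```

With this routing in place, an identity stored in the "Users" realm ends up in the database, while an "Employees" identity is read from and written to LDAP, without the application code having to care which store is behind each realm.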

Let’s also assume that our paper company runs an ecommerce web site where anyone can log in and place an order for their products. The login page for public users might look something like this:

When logging in through this login form (which is easily found via a “Sign In” link on the main page of the site), authentication will be performed using the “Users” realm. The site may also provide an employee portal for managing orders and possibly performing other back-office tasks. Employees wishing to access this portal would use a different web address (or maybe an entirely different application, possibly even available only on the company private network) and would authenticate with an “employee-only” login form backed by the “Employees” realm. We can represent the distinct login pages and their associated realms using the following diagram:

An application may be modelled to support this multi-faceted scenario in a number of ways. It may be structured as a number of separate Tiers, with each tier providing a limited set of functions (possibly implemented as separate applications and/or services) and a set of tier-specific roles that control how privileges to access those functions are assigned. It could also be structured as a single monolithic application that “does everything”(™), restricting access to certain areas depending on the access level of the current user. In either case, application-wide privileges can be easily assigned to individual users from either realm. For example, if the privilege takes the form of a role or group membership, it’s possible for that role or group to exist in one realm and the user to which it is assigned to exist in another realm.

Let’s say for example that an “admin” role for our paper company’s web portal is defined in the “Users” realm, and that this role is required to access the “Review Orders” page of the site:

As we can see, it doesn’t matter that the user to which this role is assigned exists in a different realm, because as we mentioned previously, relationships by their very nature are “cross-partitioned” and so can be used to assign privileges between distinct realms.

Partition Management API

PicketLink provides a simple PartitionManager API for managing partitions. It can be easily injected into your application like so:

import org.picketlink.idm.PartitionManager;
import javax.inject.Inject;

public class AdminService {
    @Inject PartitionManager partitionManager;
}

Once you have the PartitionManager instance, you can retrieve an existing Partition like so:

    Realm defaultRealm = partitionManager.<Realm>getPartition(Realm.class, "default");

Or you can retrieve all partitions of a certain type:

    List<Realm> realms = partitionManager.<Realm>getPartitions(Realm.class);

Creating a new Partition is also simple:

    Tier orderServices = new Tier("orderServices");
    partitionManager.add(orderServices);

As is removing a Partition:

    Realm tempUsers = partitionManager.<Realm>getPartition(Realm.class, "temp");
    partitionManager.remove(tempUsers);

To create users and other identity objects within a Partition, get a reference to its IdentityManager via the createIdentityManager() method:

    Realm defaultRealm = partitionManager.<Realm>getPartition(Realm.class, "default");
    IdentityManager im = partitionManager.createIdentityManager(defaultRealm);

    User jsmith = new User("jsmith");
    im.add(jsmith);

To grant permissions to users within a Partition, get a reference to its PermissionManager via the createPermissionManager() method:

    Realm defaultRealm = partitionManager.<Realm>getPartition(Realm.class, "default");
    User jsmith = new User("jsmith");

    PermissionManager pm = partitionManager.createPermissionManager(defaultRealm);
    pm.grantPermission(jsmith, Order.class, "CREATE");

To create relationships, get a reference to a partitionless RelationshipManager via the createRelationshipManager() method:

    RelationshipManager rm = partitionManager.createRelationshipManager();

Once you have the RelationshipManager you can use it to create relationships either between identities in the same partition, like so:

    Realm defaultRealm = partitionManager.<Realm>getPartition(Realm.class, "default");
    IdentityManager im = partitionManager.createIdentityManager(defaultRealm);
    User jsmith = new User("jsmith");
    im.add(jsmith);
    Role admin = new Role("admin");
    im.add(admin);

    rm.add(new Grant(jsmith, admin));

Or between identities in different partitions:

    Realm defaultRealm = partitionManager.<Realm>getPartition(Realm.class, "default");
    IdentityManager im = partitionManager.createIdentityManager(defaultRealm);
    User jsmith = new User("jsmith");
    im.add(jsmith);

    Tier serviceTier = partitionManager.<Tier>getPartition(Tier.class, "service");
    IdentityManager tim = partitionManager.createIdentityManager(serviceTier);
    Role admin = new Role("admin");
    tim.add(admin);

    rm.add(new Grant(jsmith, admin));


PicketLink’s advanced support for partitions allows us to create security architectures suitable for simple, single-purpose applications ranging through to complex, multi-tier enterprise platforms. In this article we examined how partitions can be used to create distinct sets of identities, and we explored how to manage partitions using the PartitionManager API.

Once again, thanks for reading!

PicketLink Deep Dive - Part 1

Posted by Shane Bryzak    |    Nov 13, 2013    |    Tagged as Java EE PicketLink

In this series we’ll be taking a magnifying glass to PicketLink, a security framework for Java EE. Each article in the series will examine a single aspect or feature of PicketLink in detail and also illuminate some of the design decisions made during the feature’s development.

Hopefully by the end of the series we’ll have helped you to gain a greater understanding of PicketLink, and how best to use it to address the security requirements of your own Java EE application.


At the time of writing the latest stable version of PicketLink is 2.5.2.Final. The latest version of both the PicketLink Reference Documentation and API Documentation can always be found at

The PicketLink binary distribution can be downloaded from the following page:

If your build is Maven-based then please refer to the PicketLink Reference Documentation for information about how to include the PicketLink dependencies in your project.

You can find the latest version of the PicketLink source code at GitHub:

Identity Model

In this issue we’ll be looking at PicketLink’s Identity Model feature, a fundamental part of the PicketLink IDM (Identity Management) submodule that forms the foundation on which the majority of PicketLink’s other features are built.

During the design process for PicketLink IDM the development team spent a lot of time trying to work out the best way for modelling common security concepts such as roles and groups, etc. Faced with challenges relating to the lack of a common (or even de facto) standard for these core concepts it was decided that PicketLink required a certain level of abstraction to allow it to be customized per-deployment depending on the application’s security needs. Striking a balance between usability (for those that just wished to use PicketLink out of the box) and customizability (for applications that had more complex, or non-standard security requirements) posed a challenge that required an elegant solution.

The answer was the Identity Model - a set of interfaces and classes that provide a basic implementation so that PicketLink would just work(™), built on top of an abstraction layer that allows for a completely custom implementation.

The Identity Model defines the identity objects (such as Users and Roles) for your project, and the relationships between those objects. This is one of the major differences between PicketLink and other security frameworks - PicketLink supports a dynamic Identity Model, which means that there isn’t a hard-coded set of User, Role or Group classes that you are forced to use in your own application (PicketLink actually does provide these classes but you may choose not to use them). Why is it so important that PicketLink supports such a high level of customization? There are actually quite a number of reasons, so let’s take the time to explore a few of them in more detail.

Region Specific Attributes

Depending on where your project is deployed, you may have region-specific requirements for User state. For example, an application in the United States might require a User to have an SSN property, while an application in Russia might require a property to store a user’s patronym. In western culture a person typically has a title, first name, middle name/s and last name, however this is certainly not the rule in many other cultures. It may be argued that these property values could simply be stored as arbitrary attribute values, like so:

user.setAttribute(new Attribute("joinDate", new Date()));

However, storing these property values as ad-hoc attributes means we give up important things like type safety and bean validation. If the property is one that’s required for all of your users, it’s much more desirable to define it as a first-class property of your bean, like so:

user.setJoinDate(new Date());
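The trade-off can be seen in a small self-contained sketch. The RegionalUser class here is hypothetical, not part of PicketLink's basic model: the typed property is checked at compile time, while the ad-hoc attribute comes back as a bare Serializable that the caller must cast.

```java
import java.io.Serializable;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

// Hypothetical custom user type illustrating typed properties versus
// ad-hoc attributes (not part of PicketLink's basic identity model).
public class RegionalUser {

    // First-class, type-safe property: the compiler rejects a wrong type.
    private Date joinDate;

    // Ad-hoc attribute storage, in the spirit of AttributedType.
    private final Map<String, Serializable> attributes = new HashMap<>();

    public void setJoinDate(Date joinDate) {
        this.joinDate = joinDate;
    }

    public Date getJoinDate() {
        return joinDate;
    }

    public void setAttribute(String name, Serializable value) {
        attributes.put(name, value);
    }

    public Serializable getAttribute(String name) {
        return attributes.get(name); // caller must cast; no compile-time check
    }
}
```

A bean-validation annotation such as @NotNull could be placed on the typed property, something that is simply not possible for a value buried in an attribute map.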

Custom Security Requirements

Security requirements may change drastically between applications, and a security architecture that may be sufficient for one application might be unsuitable for another. Let’s take roles for example - depending on who you ask, a role within an application might mean any number of things.

In some cases, a role equates to a simple, application-wide privilege that may be assigned directly to users. In Java EE 5, for example, the @RolesAllowed annotation may be used to restrict access to a class or method, granting invocation rights only to users that hold that specific role:

@RolesAllowed("admin")
public void setBaseInterestRate(float value);

In other cases, a role might be something that may only be assigned to a group or mapped to a principal. Or it might only be valid within the context of a group (i.e. user jsmith has the logistics_support role for the retail_sales group). Or there may even be no such thing as roles at all, with all application privileges perhaps modeled as group memberships or using some alternative design. The point is there’s no single correct way to do security.

Because everyone has a different idea of what constitutes a security API, it is important that PicketLink provides the flexibility to support custom scenarios.

Backend Identity Store Compatibility

In some environments your application may be required to conform to a pre-existing security model: for example, a corporate LDAP directory with heavily customised user attributes, a central SSO provider over which you have no direct control, or a legacy database containing user records. In these scenarios it's important that PicketLink is able to work within the bounds of an existing security architecture and accurately model its security objects.

Identity Model Building Blocks

Let’s get more technical now about what actually constitutes an Identity Model. A core set of interfaces from the org.picketlink.idm.model package define the foundation on which an Identity Model is built. Let’s take a look at these in the following class diagram:

We’ll be covering partitions in a future article so we can ignore the Partition interface for now. That leaves us with the following four core interfaces:

AttributedType - This is the root interface of the Identity Model. It basically provides methods for defining only two simple pieces of state - a globally unique identifier value assigned by PicketLink when an identity object is created, and a set of arbitrary attribute values. This means that every identity object (including users, roles, groups, and any other identity types you might create) at the very least has a unique identifier value, and is capable of storing a custom set of named attribute values of any Serializable type.

IdentityType - This is the parent interface for all identity types. In addition to the identifier and attribute properties inherited from the AttributedType interface, it also defines properties that reflect whether the identity object is enabled or not, the partition that it belongs to and the created date and optional expiration date for the identity.

Account - This is a special marker interface, which represents an identity type that is capable of authenticating. It is typically used as the superinterface for user identity types, or identity types that represent a third party process or agent that may interact with your application. Identity types such as roles or groups do not implement the Account interface as these types are not capable of authentication (i.e. they cannot log in), instead they implement the IdentityType interface directly.

Relationship - This is the super-interface for all relationship implementations. Like the Account interface it is also a marker interface, with no additional state (beyond that declared by its parent interface IdentityType) required by its contract.
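The layering of these four interfaces can be sketched in a few lines. This is a simplified mock for illustration, not the real contracts from org.picketlink.idm.model, and SimpleUser is a hypothetical implementation:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Simplified mock of the core Identity Model layering (illustration only;
// the real interfaces live in org.picketlink.idm.model).
public class IdentityModelSketch {

    // Root: a unique identifier plus arbitrary named attribute values.
    interface AttributedType {
        String getId();
        void setAttribute(String name, Serializable value);
        Serializable getAttribute(String name);
    }

    // All identity types: adds enabled state (partition and created/expiry
    // dates are elided here for brevity).
    interface IdentityType extends AttributedType {
        boolean isEnabled();
    }

    // Marker for identity types that can authenticate (users, agents).
    interface Account extends IdentityType {}

    // Marker for typed associations between two or more identities.
    interface Relationship extends AttributedType {}

    // A minimal Account implementation for demonstration purposes.
    static class SimpleUser implements Account {
        private final String id = UUID.randomUUID().toString();
        private final Map<String, Serializable> attributes = new HashMap<>();

        public String getId() { return id; }
        public void setAttribute(String name, Serializable value) { attributes.put(name, value); }
        public Serializable getAttribute(String name) { return attributes.get(name); }
        public boolean isEnabled() { return true; }
    }
}
```

Note how Account and Relationship add no methods of their own: they exist purely to classify types, exactly as the marker interfaces described above.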

There are a couple more important ingredients that we must know about besides the core interfaces described above when creating an identity model - the @AttributeProperty annotation and the @Unique annotation. We’ll cover these in more detail in the next section.

What does an actual Identity Model look like?

Good question! To answer this, we’ll take a look at the basic Identity Model provided out of the box by PicketLink. This model defines a number of commonly used identity objects and relationships, and in many cases will be sufficient for a typical application’s security requirements. It can be easily extended, or even copied, for convenient customization if desired.


Agent

The Agent class represents an external non-human identity that is capable of authenticating, hence it implements the Account interface. It defines getter and setter methods for a loginName property, which is used during authentication to identify the Account. Let’s take a look at the source code for this class (for brevity, some non-essential code has been trimmed):

public class Agent extends AbstractIdentityType implements Account {

    @AttributeProperty
    @Unique
    private String loginName;

    public String getLoginName() {
        return loginName;
    }

    public void setLoginName(String loginName) {
        this.loginName = loginName;
    }
}
There are a few significant things to note about the above code. First of all, we can see that the Agent class extends AbstractIdentityType. This abstract class provides default implementations for all the methods of the parent IdentityType interface, allowing us to concentrate on the specific business logic for this identity class. If you wish to implement your own Identity Model then this can be a great time-saver.

Secondly, the @AttributeProperty annotation indicates that this is a property of the identity object that should be automatically mapped to the backend identity store. Each identity property that must be persisted should be annotated with @AttributeProperty. An example of a property that might not be annotated would be one with a calculated value, for instance a getFullName() method that simply concatenates the user’s first and last name together, yet doesn’t persist the calculated value itself in the backend identity store.

Thirdly, the @Unique annotation is used to indicate to PicketLink that the property must contain unique values. In this example it is used to ensure that the loginName property is unique (it should be obvious why it’s important that more than one account does not have the same login name).

Finally, we can observe that creating identity properties is as simple as defining a getter and setter method, following the JavaBean standard. Property values may be any serializable type.


User

The User class extends Agent to add some human-specific properties, such as first name, last name and so forth.

public class User extends Agent {

    @AttributeProperty
    private String firstName;

    @AttributeProperty
    private String lastName;

    @AttributeProperty
    private String email;

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) { = email;
    }
}

In the above code, you may notice that the @AttributeProperty annotation is on the actual field declaration itself, instead of the getter method. PicketLink allows the annotation on either the field or the getter method, similar in fashion to JPA and its annotations.


Group

The Group class is an identity type that may be used for modelling basic group-related identity privileges, such as group membership and group roles. It is intended to be used in a hierarchical structure, with each group capable of having multiple subgroups and so forth. The Group class defines three properties: name, which represents the group’s unique name within the same branch of its hierarchical structure; parentGroup, an optional property which may be used to specify the group’s parent group (or null if the group has no parent, i.e. it is a root group); and path, which represents the fully qualified path of the group, i.e. the complete hierarchical group name structure starting from the root, delimited by a separator.

public class Group extends AbstractIdentityType {

    public static final String PATH_SEPARATOR = "/";

    @AttributeProperty
    private String name;

    @AttributeProperty
    private Group parentGroup;

    private String path;

    public String getName() {
        return name;
    }

    public void setName(String name) { = name;
    }

    @AttributeProperty
    @Unique
    public String getPath() {
        this.path = buildPath(this);
        return this.path;
    }

    public void setPath(String path) {
        this.path = path;
    }

    public Group getParentGroup() {
        return this.parentGroup;
    }

    public void setParentGroup(Group group) {
        this.parentGroup = group;
    }

    private String buildPath(Group group) {
        String name = PATH_SEPARATOR + group.getName();

        if (group.getParentGroup() != null) {
            name = buildPath(group.getParentGroup()) + name;
        }

        return name;
    }
}
You may notice in the above code that the parentGroup property has a type of Group. This is possible because PicketLink allows identity definitions to themselves directly reference other identities via their properties. We can also notice that the getPath() method is annotated with @Unique - this is to ensure that group names are unique within their same group branch (i.e. no other group with the same parent group may have the same name).
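As a usage sketch, the path logic yields hierarchical names like "/sales/retail". This standalone mock (a hypothetical class, not PicketLink code) mirrors just the buildPath() recursion without the rest of the identity model:

```java
// Standalone mock of Group's path building (mirrors the recursion shown
// above, detached from the rest of the identity model).
public class GroupPathSketch {

    static final String PATH_SEPARATOR = "/";

    final String name;
    final GroupPathSketch parent;

    GroupPathSketch(String name, GroupPathSketch parent) { = name;
        this.parent = parent;
    }

    // Walk up the parent chain, prepending each ancestor's name.
    String getPath() {
        String path = PATH_SEPARATOR + name;
        return parent != null ? parent.getPath() + path : path;
    }

    public static void main(String[] args) {
        GroupPathSketch sales = new GroupPathSketch("sales", null);
        GroupPathSketch retail = new GroupPathSketch("retail", sales);
        System.out.println(retail.getPath()); // /sales/retail
    }
}
```

Because @Unique applies to the path rather than the bare name, two groups called "retail" can coexist as long as they live under different parents, e.g. "/sales/retail" and "/support/retail".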


Role

The Role class is quite similar to the Group class except that it doesn’t support a hierarchical structure, so its code is significantly simpler:

public class Role extends AbstractIdentityType {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) { = name;
    }
}


Relationships

Relationships are the constructs that help us define privileges for our identity objects by describing how they relate to each other. Relationships are typed associations between two or more identity objects.

A relationship class must implement the Relationship interface, which itself extends AttributedType - this means that every concrete relationship instance is assigned a unique identifier value, and is also capable of storing an arbitrary set of attribute values.

The basic identity model defines only three relationship types - Grant, which associates an identity with a role; GroupMembership, which similarly associates an identity with a group; and lastly GroupRole, which is used to assign a group-specific role to an identity. Let’s take a look at them in closer detail:


Grant

The Grant relationship is a simple association between a role and an assignee, and is generally used to assign coarse-grained application-wide privileges to a set of users.

public class Grant extends AbstractAttributedType implements Relationship {

    private IdentityType assignee;
    private Role role;

    public IdentityType getAssignee() {
        return assignee;
    }

    public void setAssignee(IdentityType assignee) {
        this.assignee = assignee;
    }

    public Role getRole() {
        return role;
    }

    public void setRole(Role role) {
        this.role = role;
    }
}
You may notice that there is nothing special about the above code: the Grant class extends AbstractAttributedType to provide the basic framework methods, while the relationship itself is simply a couple of fields with getter/setter methods. This simplicity is intentional. As long as your property values implement the IdentityType interface, PicketLink is perfectly capable of working with your relationship class without any further effort required. This makes it extremely easy to define new custom relationship types yourself.

If you do wish to define a non-identity attribute property for your relationship, then you may do so by annotating the property with @AttributeProperty, the same way as is done when creating an identity type. For example, let’s say we would like to store the grant date for the above relationship - we could simply add a property like this:

    @AttributeProperty
    private Date grantDate;

    public Date getGrantDate() {
        return grantDate;
    }

    public void setGrantDate(Date grantDate) {
        this.grantDate = grantDate;
    }

Then we would be able to use it like so:

Grant grant = new Grant();
grant.setGrantDate(new Date());

Alternatively, you could of course just set an arbitrary attribute value:

Grant grant = new Grant();
grant.setAttribute(new Attribute("grantDate", new Date()));

Which option you would choose is totally up to you.


The GroupMembership relationship is very similar to Grant, except that it is used to associate an account with a Group.

public class GroupMembership extends AbstractAttributedType implements Relationship {
    private Account member;
    private Group group;

    public Account getMember() {
        return member;
    }

    public void setMember(Account member) {
        this.member = member;
    }

    public Group getGroup() {
        return group;
    }

    public void setGroup(Group group) {
        this.group = group;
    }
}

Once again, there is absolutely nothing special about this implementation - it simply defines two property values with getter/setter methods.


The GroupRole relationship is used to represent a group-specific role, similar to Grant however restricted to the scope of a single group. To simplify the implementation of this relationship it extends the Grant relationship (which already provides the assignee and role properties), and just adds a group property:

public class GroupRole extends Grant implements Relationship {
    private Group group;

    public Group getGroup() {
        return group;
    }

    public void setGroup(Group group) {
        this.group = group;
    }
}

Like the above examples, the implementation is extremely simple - a single property with getter/setter methods.
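Since all three built-in relationships are just plain classes, defining a custom relationship of your own follows the same pattern. The sketch below is self-contained for illustration only: ProjectAssignment and User are hypothetical names, and the Relationship and IdentityType interfaces are minimal local stand-ins rather than the real PicketLink API (in a real project you would extend AbstractAttributedType and let PicketLink persist the relationship for you).

```java
// Minimal local stand-ins for the PicketLink types, so this sketch compiles
// on its own; in a real project you would use PicketLink's own interfaces
// and extend AbstractAttributedType.
interface Relationship {}
interface IdentityType {}

class User implements IdentityType {
    private final String loginName;
    User(String loginName) { this.loginName = loginName; }
    String getLoginName() { return loginName; }
}

// A hypothetical custom relationship: associates an identity with a project.
// As with Grant and GroupMembership, it is just fields plus getters/setters.
class ProjectAssignment implements Relationship {
    private IdentityType assignee;
    private String projectName; // non-identity data; would carry @AttributeProperty in PicketLink

    public IdentityType getAssignee() { return assignee; }
    public void setAssignee(IdentityType assignee) { this.assignee = assignee; }

    public String getProjectName() { return projectName; }
    public void setProjectName(String projectName) { this.projectName = projectName; }
}

public class CustomRelationshipSketch {
    public static void main(String[] args) {
        ProjectAssignment assignment = new ProjectAssignment();
        assignment.setAssignee(new User("jsmith"));
        assignment.setProjectName("Project X");
        System.out.println(assignment.getProjectName()); // prints "Project X"
    }
}
```

The point is that nothing beyond property conventions is required: the framework discovers the identity-typed properties by reflection.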


Hopefully this article has been helpful in explaining how to work with PicketLink’s identity model, both in using the provided basic model and in describing how to create your own custom model, complete with identity and relationship classes.

If you have any questions or comments please post them below, and if you have any special requests for future PicketLink Deep Dive topics please let us know also.

Thanks for reading!

London: Meet an Open Source Project

Posted by Sanne Grinovero    |    Jul 23, 2013    |    Tagged as Events Hibernate Search Java EE

Tomorrow the London Java Community (LJC) meets for the monthly Meet a Project night. I'll be there again, this time to explain how to get started contributing to Hibernate Search and answer all the questions you might have about the mysterious world of open source professionals.

Contributing to Hibernate

I don't expect to need to explain what Hibernate Search is to the readers of this blog, but to clarify what it means in terms of CV and career opportunities, you get to learn and contribute to two very popular open source projects in one shot: Apache Lucene and Hibernate. Technically it's maintained under the Hibernate umbrella but you get very close to Lucene/Solr as well. And there are other cool projects being presented too: we do quick round-tables, so you'll get to talk face to face with many developers.

Free as in Beer

Anyone is welcome and Red Hat kindly sponsors with some refreshments, so you have an excuse to come and say hi even if you think you're not going to contribute any code: your suggestions and complaints over a drink are very welcome too. Free as in Beer is getting a new meaning :-)

Where and When

To register and find more details look at the Meetup event page.

Migrating Spring Applications to Java EE 6

Posted by Strong Liu    |    Aug 18, 2012    |    Tagged as Java EE

Translator's note

This is a translation of this post, so if you're an English speaker, please read the original post.

This is the first time I have seriously translated a full English article; it honestly feels more tiring than writing one myself... :(


Bert Ertman and Paul Bakker of Luminis have written a series of articles on migrating applications from Spring to Java EE 6, and they also provide a simple example project.

In this series, Paul and Bert start from the underlying principles and walk through the whole Spring-to-Java EE 6 migration, using concrete examples to show how to upgrade the web UI layer, replace the data access layer, switch from AOP to CDI interceptors, migrate JMX, deal with JDBC Templates, and more. As a bonus, they also demonstrate how to use Arquillian to run integration tests against a Java EE 6 application.


Over the past few years, more and more people have become interested in Java EE again. Java EE 5 was a breath of fresh air, and Java EE 6 has rebuilt developers' confidence in the Java EE stack. Java EE is now a hot topic again, and many developers are re-evaluating it and revisiting their old assumptions. Back in 2003/2004, J2EE was at its lowest point: it was so academic and detached from solving real problems that developers turned their noses up at it. It was in that period that a book appeared which fundamentally changed how enterprise Java was done for the following five years. That book was J2EE Design and Development, soon followed by J2EE without EJB, both by Rod Johnson (translator's note: Rod Johnson has just recently left SpringSource).

Developers loved those two books and the solutions they offered for the problems they were hitting every day. In fact, developers still love those books, but they often fail to realize that the world has fundamentally changed since then. What we should be asking now is whether the premises Rod Johnson set out in his 2003/2004 books still hold. In our view, every problem those books raised can now be solved with lightweight, POJO-based Java EE. So if you still care about those two books, perhaps the best way to honour them is to use them to prop up a desk or a monitor.

There are two situations in which the Spring vs. Java EE question most often comes up: building a brand new enterprise application from scratch, or upgrading a legacy one. Upgrading a six- or seven-year-old Java application is a serious undertaking whether you use Spring or Java EE. For a greenfield project, however, we don't think the choice is even a question any more: Java EE is the industry standard, it is lightweight, and it has a suitable answer to every mainstream enterprise development problem, so it should be our first choice. Unless you have a specific reason, you should not pick something outside the standard.

This article is not primarily about which technology is better, Spring or Java EE; what we want to show is how to solve the problems you will run into while upgrading an old Spring project. We are also not insisting that you follow the exact path described here: in practice there may be all kinds of reasons - time, money, experience - why you cannot follow every migration step. What we do promise is that if your legacy Spring project is in trouble and you want to move it to a modern enterprise architecture that will still hold up five years from now, this article demonstrates a viable way to do it.


Before we discuss how to migrate a legacy Spring project, the first question to ask ourselves is: why do it at all? That is a perfectly fair question, and it has many answers. Spring is an open source project born in an era when the mainstream technology could not meet developers' needs, and it has been successful ever since; it attracted venture capital early on and was eventually acquired by VMware, a company focused on virtualization - the folks known in the Java world for letting us run Windows in a VM on our Macs. Yet neither Spring nor any of its many subprojects has ever become a Java standard, although some companies and individuals have pushed for its standardization. As far as we can tell, standardizing Spring would bring no obvious benefit either to the project itself or to SpringSource behind it. But all of that is corporate politics, which most developers could not care less about, so let's look at the technical reasons why migrating to Java EE is the better choice.

From a technical point of view, even upgrading an old Spring project to the latest Spring version is a lot of work, because many of the Spring technologies it uses are, or soon will be, unsupported.

When we say an old Spring project, what comes to mind is piles of complex XML configuration, outdated DAO-layer technologies such as JDBC Template, Kodo or TopLink, and long-deprecated Web MVC extensions such as SimpleFormController.

Are there still developers who know these technologies well? And even if there are, will there be in five years? In our own experience these old projects need maintenance every year, swapping in a different web or ORM framework to keep up with whatever that framework's latest version is. All of this requires large-scale changes to the project; parts of the application have to be fundamentally reworked, sometimes from scratch. If you can stomach all that, why not go all the way and move the project onto the Java EE standard?


If you are still skeptical that Java EE can meet your needs, we suggest you read on; hopefully we can clear up a few misconceptions. Everyone knows the old J2EE shortcomings well, but some developers gave up on Java EE entirely at that point and stopped following its progress, so they may not know much about Java EE as it is today.

The goal of this article is not to give you a complete introduction to Java EE 6, but to compare some key technologies in Spring and Java EE 6 and explain why we think you should choose Java EE 6.


One of the classic complaints about Java EE was that it is too heavyweight - some people even read Java EE as Java Evil Edition. Is that still true? Don't just take our word for it; look at the startup time of a modern application server. On a not-particularly-new PC, the latest JBoss AS 7 can start up and deploy a complete Java EE 6 project in 2 to 5 seconds, roughly the same as Tomcat plus Spring. The size of your war/ear is where the difference really shows: a typical Java EE 6 deployment archive weighs in at around 100K, while a Spring one is closer to 30M. This is because in a Java EE project all of the standard Java EE services are provided by the application server, so you don't have to bundle the dependency jars into your own archive.

One of our favourite observations comes from JBoss engineer Andrew Lee Rubinger, who points out that nothing in the Java EE specifications requires an application server to be bloated and slow - exactly right!


Purely from a technical angle, it is worth looking at what old J2EE was missing, because that is exactly where Spring came in. One major reason people chose Spring was that it provided Inversion of Control (IoC), or, under its now more popular name, Dependency Injection (DI). According to Wikipedia, Inversion of Control is a development pattern whose core idea is to let reusable structural code control the business logic code. An implicit premise of the pattern is that the structural code and the business code are developed independently and then combined into one application. IoC is sometimes described as the Hollywood Principle - don't call us, we'll call you - because implementations typically rely on callbacks.

Although Inversion of Control is often credited to the authors of the Spring framework, the pattern had in fact been described in publications as far back as the 1980s. What Spring did was apply the pattern successfully to enterprise Java development and use it to dramatically decouple code, which in turn produced much more testable code. Java EE could not match this for a long time, but as of Java EE 6, Contexts and Dependency Injection (CDI) is part of the standard: it provides powerful, context-aware dependency injection and can flexibly inject the services provided by the various specifications. If you don't know it yet, you are missing out.
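To make the wiring idea concrete, here is a minimal dependency-injection sketch in plain Java. The PaymentProcessor, CreditCardProcessor and CheckoutService names are made up for illustration; with CDI the hand-wiring in main() disappears, because the container selects an implementation and injects it for you via @Inject.

```java
// The business class declares what it needs; the "structural" code hands
// the collaborator in from outside instead of the class looking it up.
interface PaymentProcessor {               // hypothetical example service
    String process(double amount);
}

class CreditCardProcessor implements PaymentProcessor {
    public String process(double amount) {
        return "charged " + amount;
    }
}

class CheckoutService {
    private final PaymentProcessor processor;

    // With CDI this constructor would be annotated @Inject and the
    // container would do the wiring; here we wire it by hand.
    CheckoutService(PaymentProcessor processor) {
        this.processor = processor;
    }

    String checkout(double amount) {
        return processor.process(amount);
    }
}

public class DiSketch {
    public static void main(String[] args) {
        CheckoutService service = new CheckoutService(new CreditCardProcessor());
        System.out.println(service.checkout(10.0)); // prints "charged 10.0"
    }
}
```

Because CheckoutService never instantiates its collaborator, a test can pass in a fake PaymentProcessor - exactly the testability benefit the article describes.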


Another technique that Spring made popular is Aspect Oriented Programming (AOP). With AOP, cross-cutting concerns can be woven into object-oriented code at compile time or at runtime. The classic examples are logging and security, though AOP's capabilities are by no means limited to those two areas. Overusing AOP, however, is a common trap that makes code harder to read, because the aspect and the code implementing the aspect's logic do not live in the same place, so it is no longer obvious what a piece of code actually does at runtime. That is not to say AOP is worthless: used judiciously, it is a great tool for separation of concerns and code reuse.

Using AOP only in a limited set of places is often called lightweight AOP, and that is exactly what interceptors provide in Java EE. The interceptor model, introduced in Java EE 5, goes some way toward curbing AOP abuse, and combined with CDI (introduced in Java EE 6) it allows a clean separation of concerns while greatly reducing duplicated code.
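The around-invoke idea behind interceptors can be sketched in self-contained plain Java with a JDK dynamic proxy. This is only a stand-in: real CDI interceptors use @Interceptor, @AroundInvoke and an interceptor-binding annotation rather than java.lang.reflect.Proxy, and the Greeter/withLogging names below are hypothetical.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// A business interface; the logging concern stays out of its implementations.
interface Greeter {
    String greet(String name);
}

public class InterceptorSketch {
    // Wraps any implementation of an interface with a logging "interceptor".
    @SuppressWarnings("unchecked")
    static <T> T withLogging(T target, Class<T> type) {
        InvocationHandler handler = (proxy, method, args) -> {
            // cross-cutting concern, applied around the real invocation
            System.out.println("entering " + method.getName());
            Object result = method.invoke(target, args);
            System.out.println("leaving " + method.getName());
            return result;
        };
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[] { type }, handler);
    }

    public static void main(String[] args) {
        Greeter greeter = withLogging(name -> "Hello, " + name, Greeter.class);
        // prints the two log lines, then "Hello, Duke"
        System.out.println(greeter.greet("Duke"));
    }
}
```

The business code never mentions logging; the concern is attached from the outside, which is the same separation the CDI interceptor model gives you declaratively.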


We already touched on this when discussing dependency injection: with old J2EE it was practically impossible to write real functional tests for your code, mainly because most J2EE components depended on container services to work at all - the death blow for testable design. Modern Java EE has returned to POJO-based components, which improves testability a lot, but the components still depend on runtime services. To really test Java EE components as they run in an application server, we would need to create a mocking framework. Or would we?

Not long ago, the Arquillian framework burst onto the scene, making real functional testing possible in the Java EE world.

With Arquillian we can throw away the mocking frameworks and test Java EE components directly in the container. Before a test is executed, Arquillian builds a so-called micro deployment containing the Java EE components under test and the services they directly depend on, packages it into a deployable archive, and deploys it to a running Java EE container (or starts one up on the fly).

Arquillian does not force a new testing technology on you either; it integrates non-invasively with your favourite test framework, such as JUnit. When it comes to testing, Arquillian is arguably Java EE's best companion. The Arquillian website provides excellent documentation and plenty of examples, and later in this series we will also show you how to write basic functional tests.


Finally, we have all been through this: development tools that were heavyweight and slow, that made you draw a pile of UML first and then click through an elaborate wizard which, if you were lucky, eventually produced something trivial - while your computer howled like a small car being crushed by a big truck. Happily, those days are over. All three mainstream Java IDEs now have excellent support for developing Java EE components, and without eating all your resources. Java EE development is no longer tied to a single vendor's tools; you can pick your IDE, and your application server, to taste. And with tools such as JBoss Forge, creating a Maven-based Java EE project is trivial.

A feature comparison of Spring and Java EE

Feature               Spring                            Java EE
Dependency injection  Spring container                  CDI
Transactions          AOP / annotations                 EJB
Web framework         Spring Web MVC                    JSF
AOP                   AspectJ (Spring beans only)       Interceptors
Data access           JDBC Template / other ORM / JPA   JPA
REST                  Spring Web MVC (3.x)              JAX-RS
Integration testing   Spring Test Framework             Arquillian

To sum up, we can say with confidence that lightweight Java EE provides everything that Spring provides.


So what is the best way to start the migration? Some people would simply throw the legacy project away and start over - tempting, and entirely feasible if time and money were unlimited, but for most real-world situations out of the question. We need an approach of gradual replacement, one that guarantees nothing catastrophic happens along the way. We need a migration plan that moves us forward step by step, does not constrain our own creativity, and lets us stop after any step. That leads us to the following plan:

  1. Upgrade Spring to the latest version
  2. Replace the outdated frameworks used alongside Spring (ORM, web framework, and so on)
  3. Run the Spring container and an application server side by side
  4. Replace Spring entirely
  5. Remove the Spring container

In the next installment of this series we will focus on the migration itself and walk you through its different stages with a real example project. Sample code and a GitHub project will be provided so that you can try the whole migration yourself.


Authors: Bert Ertman, Paul Bakker

Bert Ertman (@BertErtman) is a Fellow at Luminis, a company based in the Netherlands, and the leader of the Dutch Java User Group (around 3500 members).

Paul Bakker (@pbakker) is a senior engineer at Luminis and a contributor to the JBoss Seam, Arquillian and Forge projects.

Both authors have extensive experience building enterprise software - with pre-J2EE technology, with J2EE, with Spring, and with modern Java EE. They have had many discussions about modern enterprise development, including the Spring vs. Java EE comparison, and are now convinced that Java EE 6 offers the best technology both for brand new enterprise applications and for migrating widely deployed legacy projects. They have presented and promoted Java EE 6 at conferences around the world, including J-Fall, Jfokus, Devoxx and JavaOne. This series is based on their well-received JavaOne 2011 talk, Best Practices for Migrating Spring to Java EE 6.

Without further ado, Hibernate Validator 4.3.0.Final is available for download in the JBoss Maven Repository under the GAV org.hibernate:hibernate-validator:4.3.0.Final or via SourceForge.

The changelog for this version does not contain much, so let me summarize the most important changes of Hibernate Validator 4.3

  • The package structure got refactored to separate clearly between API, SPI and internal classes. In doing so we deprecated some classes. Make sure to migrate to the new types when upgrading to 4.3 read more
  • slf4j got replaced by JBoss Logging as the main logging framework read more
  • A bunch of new and improved constraints, e.g. MOD11, CNPJ, CPF and TituloEleitoral read more
  • A bunch of performance and quality improvements, in particular we addressed issues around metadata caching read more
  • The Hibernate Validator Annotation Processor can now be used without any additional dependencies, which makes its setup easier read more
  • Hibernate Validator 4.3 now requires a Java 6 or 7 runtime

Please check also the Hibernate Validator Migration Guide.

Last but not least, thanks to everyone who lent a helping hand during the development of Hibernate Validator 4.3 (in case you wanted to help but missed out - why not help by translating the documentation? Join the Hibernate Validator project on Zanata and get started...).


P.S. In case you are waiting for a Validator release which aligns with the first draft of Bean Validation 1.1, have an eye on the JBoss snapshot repository. An initial hibernate-validator-5.0.0-SNAPSHOT won't be far off.

Neuchatel JBoss User Group starting next week

Posted by Max Andersen    |    Sep 15, 2011    |    Tagged as Events Java EE

If you are near Neuchatel, Switzerland next week then we are having our first meeting in JBUG: Neuchatel at the Red Hat/JBoss offices in Neuchatel, Wednesday 21st September 2011.

The topic of the first meeting is JBUG Neuchatel Intro and AS 7.

You can see the schedule here and leave a comment if you plan on showing up or are interested in future meetups at Neuchatel and surrounding area.

Java EE 6 on OpenShift

Posted by Pete Muir    |    Aug 12, 2011    |    Tagged as CDI Java EE JBoss AS Seam

Unless you've been living under a rock for the past week, you'll have seen that Red Hat just added support for JBoss AS 7 on OpenShift (both Express and Flex).

Why should you care?

  • OpenShift is the first PaaS to offer Java EE 6 support
  • OpenShift Express is 100% free, and allows you to run as many non-clustered applications as you want on JBoss AS 7
  • OpenShift Express offers neat management of your apps via Git, including a source compilation mode
  • OpenShift Flex gives you much more freedom, including the ability to run clustered applications, and offers monitoring and automatic scaling.
  • OpenShift Flex is free, but you need to provide the EC2 instances. However Red Hat is offering a free 1 month/30 hour trial, so there is no reason not to check it out right now
  • JBoss AS 7 implements the Java EE 6 web profile, with all the benefits of the excellent CDI-based programming model it offers. It's very snappy to use, so deploying apps is quick

How can you learn more?

This week we've been pushing out material like crazy. If you want to give Java EE 6 in the cloud a try check out these resources:

To find out more, check out

We're going to dive down into the rabbit hole, and follow up on Wesley Hales' great video on deploying the mobile web optimized RichFaces TweetStream application to Red Hat's new, free PaaS offering OpenShift Express, complete with JBoss AS 7 under the hood!

What's Been Covered

There has already been a lot of coverage on OpenShift, and the mobile web optimized TweetStream app. So I'm not going to cover old ground. Check out these blogs and videos:

OpenShift Express Updates

The RichFaces team is in the process of migrating our RichFaces Showcase application from Google App Engine to OpenShift Express; we'll have it ready soon. OpenShift offers a number of benefits over GAE: it is a real Java EE container, supports RichFaces Push, and has a much less restrictive API.

Like many other free PaaS offerings, OpenShift Express does have a few limitations that you need to consider. The most important ones for our application are the limited number of threads and the lack of JMS support. Note that all of these go away when you move up to OpenShift Flex!

RichFaces Push streamlined

When RichFaces 4.0.0.Final was released, our push component was tied to JMS. This provides excellent enterprise-level messaging capabilities, but unfortunately requires some setup to use. Since JMS is not provided by Express out of the box, we needed to make some changes. So for 4.1.0 we are adding in a few options!

Starting with RichFaces 4.1.0.M1, RichFaces Push can be decoupled from JMS. All that is needed is to set a context param:
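The context-param snippet did not survive in this copy of the post. For reference, RichFaces 4.1 exposed a servlet context parameter in web.xml to turn off the JMS integration for push; the parameter name below is my best recollection and should be verified against the RichFaces 4.1 documentation:

```xml
<context-param>
    <param-name>org.richfaces.push.jms.disable</param-name>
    <param-value>true</param-value>
</context-param>
```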


This will switch push to using an internal message queue. Look for further blogs, and documentation on this in the future. This is also one step closer to our plan to support CDI events as part of RichFaces Push.

Atmosphere updates on the way

Another change that was needed was moving to a snapshot version of Atmosphere. Atmosphere had a bug where it was creating a new thread for each request - ouch! Since OpenShift Express has limited threads available, we needed a way around this.

Thankfully this issue was fixed in the Atmosphere 0.8-SNAPSHOT branch. This version of Atmosphere is due to be released in August, and RichFaces will use it by default once it is (likely in the 4.1.0.M2 release).

For now - if you are working on your own RichFaces push application and deploying to Express, you'll need to override the Atmosphere version. This is simple enough with Maven; just add the following to your pom:
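The pom snippet is likewise missing from this copy. Overriding a transitive dependency version in Maven is done by declaring the dependency directly in your own pom (and, for a snapshot, adding the repository that hosts it); the Atmosphere coordinates below are an assumption and should be checked against what your RichFaces version actually pulls in:

```xml
<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-runtime</artifactId>
    <version>0.8-SNAPSHOT</version>
</dependency>
```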


Infinispan cache LOCAL support

As was discussed in some of the linked blogs, the TweetStream application uses Infinispan under the covers to provide caching for the tweet data that we process. Infinispan in cluster mode uses JGroups to provide advanced communication support.

The problem here is the threading that accompanies this. For TweetStream it is important to make sure that you are using Infinispan in LOCAL mode. The latest TweetStream source has been updated to use the LOCAL cache.

Where to go from here

Now that we've gone over the updates that are needed to take advantage of OpenShift Express, I encourage you to give it a shot on your own. The source code is in the TweetStream git repo. Just follow the readme to set it up and build it. Then deploy following the instructions in Wesley's video.

There will be more JBoss and OpenShift blogs and videos coming out, so stay tuned and check out JBoss and OpenShift page for the latest news.

[OpenShift Express] [JBoss OpenShift News] [OpenShift Twitter] [TweetStream git Repo] [RichFaces Twitter]

Union types and covariance, or why we need intersections

Posted by Gavin King    |    Aug 7, 2011    |    Tagged as Ceylon Java EE

A couple of days ago I spotted an interesting hole in Ceylon's type checker. I started out with something like this:

Sequence<Float>|Sequence<Integer> numbers = .... ;
Sequence<Number> numbers2 = numbers;

The Sequence interface is covariant in its type parameter. So, since Float and Integer are both subtypes of Number, both Sequence<Float> and Sequence<Integer> are subtypes of Sequence<Number>. Then the compiler correctly reasons that the union of the two types is also a subtype of Sequence<Number>. Fine. Clever compiler.

Now, here's the hole:

value first = numbers.first; //compiler error: member does not exist
value first2 = numbers2.first; //ok, infers type Number

When it encountered the union type, the member resolution algorithm was looking for a common produced type of the type constructor Sequence in the hierarchies of each member of the union type. But since Sequence<Float> isn't a subtype of Sequence<Integer> and Sequence<Integer> isn't a subtype of Sequence<Float>, it simply wasn't finding a common supertype. This resulted in the totally counterintuitive (and, in my view, pathological) result that the member resolution algorithm could not assign a type to the member first of the type Sequence<Float>|Sequence<Integer>, but it could assign a type after widening to the supertype Sequence<Number>.

Of course, there might be many potential common supertypes of a union type. There's no justification for the member resolution algorithm to pick Sequence<Number> in preference to a sequence of any other common supertype of Float and Integer. We've got an ambiguity.

So I quickly realized that I had an example which was breaking two of the basic principles of Ceylon's type system:

  • It is possible to assign a unique type to any expression without cheating and looking at where the expression appears and how it is used.
  • All types used or inferred internally by the compiler are denoteable. That is, they can all be expressed within the language itself.

These principles are specific to Ceylon, and other languages with generics and type inference don't follow them. But failing to adhere to them - and especially to the second principle - results in extremely confusing error messages (Java's horrid capture-of type errors, for example). These two principles are a major reason why I say that Ceylon's type system is simpler than some other languages with sophisticated static type systems: when you get a typing error, we promise you that it's an error that humans can understand.

Fortunately I was able to discover a useful relationship between union types and covariance.

If T is covariant in its type parameter, then T<U>|T<V> is a subtype of T<U|V> for any types U and V.

But furthermore, T<U|V> is also a subtype of any common supertype of T<U> and T<V> produced from the type constructor T. We've successfully eliminated the ambiguity!

So I adjusted the member resolution algorithm to make use of this relationship. Now the problematic code compiles correctly, and infers the correct type for first:

value first = numbers.first; //ok: infers type Float|Integer

Well, that's great! Oh, but what about types with contravariant type parameters? What type should be inferred for the parameter of the consume() method of Consumer<Float>|Consumer<Integer>? Well, I quickly realized that the corresponding relationship for contravariant type parameters is this one:

If T is contravariant in its type parameter, then T<U>|T<V> is a subtype of T<U&V> for any types U and V.

Where U&V is the intersection of the two types. So the type of the parameter of consume() would be Float&Integer, which is intuitively correct. (Of course, since it is impossible for any object to be assignable to both Float and Integer, the compiler could go even further and reduce this type to the bottom type.)
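In Ceylon-like pseudocode (the Consumer interface here is hypothetical, written in the post's own notation), the contravariant case plays out like this:

```
interface Consumer<in Element> {
    shared formal void consume(Element element);
}

Consumer<Float>|Consumer<Integer> consumer = .... ;
// member resolution assigns consume() the parameter type Float&Integer,
// so only a value assignable to both Float and Integer would be accepted
```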

But, ooops, Ceylon doesn't yet have first-class intersection types, except as a todo in the language specification. And our second principle states that the compiler isn't allowed to infer or even think about types which can't be expressed in the language!

Well, really, I was just waiting for the excuse to justify introducing intersection types, and this gave me the ammunition I was waiting for. So yesterday I found a couple of free hours to implement experimental support for intersection types in the typechecker, and, hey, it turned out to be much easier than I expected. It's also a practically useful feature. I've often wanted to write a method which accepts any value which is assignable to two different types, without introducing a new type just to represent the intersection of the types.

I'll leave you with two more interesting relationships, applying to intersection types:

If T is covariant in its type parameter, then T<U>&T<V> is a supertype of T<U&V> for any types U and V.
If T is contravariant in its type parameter, then T<U>&T<V> is a supertype of T<U|V> for any types U and V.

Nice symmetries here.

back to top