
In Relation To Vlad Mihalcea

Hibernate Community Newsletter 15/2016

Tagged as: Discussions, Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

Null and not-null @DiscriminatorValue options

Tagged as: Discussions, Hibernate ORM

Inheritance and discriminator columns

Although it can be used for JOINED table inheritance, the @DiscriminatorValue is more common for SINGLE_TABLE inheritance. For SINGLE_TABLE, the discriminator column tells Hibernate the subclass entity type associated with each particular database row.

Without specifying a discriminator column, Hibernate is going to use the default DTYPE column. To visualize how it works, consider the following Domain Model inheritance hierarchy:

@Entity(name = "Account")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public static class Account {

    @Id
    private Long id;

    private String owner;

    private BigDecimal balance;

    private BigDecimal interestRate;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getOwner() {
        return owner;
    }

    public void setOwner(String owner) {
        this.owner = owner;
    }

    public BigDecimal getBalance() {
        return balance;
    }

    public void setBalance(BigDecimal balance) {
        this.balance = balance;
    }

    public BigDecimal getInterestRate() {
        return interestRate;
    }

    public void setInterestRate(BigDecimal interestRate) {
        this.interestRate = interestRate;
    }
}

@Entity(name = "DebitAccount")
public static class DebitAccount extends Account {

    private BigDecimal overdraftFee;

    public BigDecimal getOverdraftFee() {
        return overdraftFee;
    }

    public void setOverdraftFee(BigDecimal overdraftFee) {
        this.overdraftFee = overdraftFee;
    }
}

@Entity(name = "CreditAccount")
public static class CreditAccount extends Account {

    private BigDecimal creditLimit;

    public BigDecimal getCreditLimit() {
        return creditLimit;
    }

    public void setCreditLimit(BigDecimal creditLimit) {
        this.creditLimit = creditLimit;
    }
}

For this inheritance model, Hibernate generates the following database table:

create table Account (
    DTYPE varchar(31) not null,
    id bigint not null,
    balance decimal(19,2),
    interestRate decimal(19,2),
    owner varchar(255),
    overdraftFee decimal(19,2),
    creditLimit decimal(19,2),
    primary key (id)
)

So when inserting two subclass entities:

DebitAccount debitAccount = new DebitAccount();
debitAccount.setId( 1L );
debitAccount.setOwner( "John Doe" );
debitAccount.setBalance( BigDecimal.valueOf( 100 ) );
debitAccount.setInterestRate( BigDecimal.valueOf( 1.5d ) );
debitAccount.setOverdraftFee( BigDecimal.valueOf( 25 ) );

CreditAccount creditAccount = new CreditAccount();
creditAccount.setId( 2L );
creditAccount.setOwner( "John Doe" );
creditAccount.setBalance( BigDecimal.valueOf( 1000 ) );
creditAccount.setInterestRate( BigDecimal.valueOf( 1.9d ) );
creditAccount.setCreditLimit( BigDecimal.valueOf( 5000 ) );

Hibernate will populate the DTYPE column with the subclass entity name:

INSERT INTO Account (balance, interestRate, owner, overdraftFee, DTYPE, id)
VALUES (100, 1.5, 'John Doe', 25, 'DebitAccount', 1)

INSERT INTO Account (balance, interestRate, owner, creditLimit, DTYPE, id)
VALUES (1000, 1.9, 'John Doe', 5000, 'CreditAccount', 2)

While this is rather straightforward for most use cases, when you have to integrate a legacy database schema, the discriminator column might contain NULL(s) or values that are not associated with any entity subclass.

Consider that our database contains records like these:

INSERT INTO Account (DTYPE, balance, interestRate, owner, id)
VALUES (NULL, 300, 0.9, 'John Doe', 3)

INSERT INTO Account (DTYPE, active, balance, interestRate, owner, id)
VALUES ('Other', true, 25, 0.5, 'Johnny Doe', 4)

INSERT INTO Account (DTYPE, active, balance, interestRate, owner, id)
VALUES ('Unsupported', false, 35, 0.6, 'John Doe Jr.', 5)

With the previous mappings, when trying to fetch all Account(s):

Map<Long, Account> accounts = entityManager.createQuery(
        "select a from Account a", Account.class )
.getResultList()
.stream()
.collect( Collectors.toMap( Account::getId, Function.identity()));

We’d bump into the following kinds of issues:

org.hibernate.WrongClassException: Object [id=3] was not of the specified subclass
[org.hibernate.userguide.inheritance.Account] : Discriminator: null

org.hibernate.WrongClassException: Object [id=4] was not of the specified subclass
[org.hibernate.userguide.inheritance.Account] : Discriminator: Other

org.hibernate.WrongClassException: Object [id=5] was not of the specified subclass
[org.hibernate.userguide.inheritance.Account] : Discriminator: Unsupported

Fortunately, Hibernate allows us to handle these mappings by using NULL and NOT NULL discriminator value mapping.

For the NULL values, we can annotate the base class Account entity as follows:

@Entity(name = "Account")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorValue( "null" )
public static class Account {

    @Id
    private Long id;

    private String owner;

    private BigDecimal balance;

    private BigDecimal interestRate;

    // Getter and setter omitted for brevity
}

For the Other and Unsupported discriminator values, we can have a miscellaneous entity that handles all values that were not explicitly mapped:

@Entity(name = "MiscAccount")
@DiscriminatorValue( "not null" )
public static class MiscAccount extends Account {

    private boolean active;

    public boolean isActive() {
        return active;
    }

    public void setActive(boolean active) {
        this.active = active;
    }
}

This way, the aforementioned polymorphic query works and we can even validate the results:

assertEquals(5, accounts.size());
assertEquals( DebitAccount.class, accounts.get( 1L ).getClass() );
assertEquals( CreditAccount.class, accounts.get( 2L ).getClass() );
assertEquals( Account.class, accounts.get( 3L ).getClass() );
assertEquals( MiscAccount.class, accounts.get( 4L ).getClass() );
assertEquals( MiscAccount.class, accounts.get( 5L ).getClass() );

I have also updated the Hibernate 5.0, 5.1, and 5.2 documentation with these two very useful mapping options.

How we fixed all database connection leaks

Tagged as: Discussions, Hibernate ORM

The context

By default, all Hibernate tests run on H2. However, we have lots of database-specific tests as well, so we should also be testing on Oracle, PostgreSQL, MySQL, and possibly SQL Server.

When we tried to set up a Jenkins job that uses PostgreSQL, we realized that the job failed because we were running out of connections. Knowing that the PostgreSQL server has a max_connections setting of 30, we knew the connection leak issue was significant.

Needle in a haystack

Just the hibernate-core module alone has over 5000 tests, and hibernate-envers has around 2500 tests as well. But there are many more modules: hibernate-c3p0, hibernate-ehcache, hibernate-jcache, etc. All in all, we couldn’t just browse the code and spot issues. We needed an automated connection leak detector.

That being said, I came up with a solution that works on H2, Oracle, PostgreSQL, and MySQL. Luckily, no problem was spotted in the actual framework code base: all issues were caused by unit tests that did not handle database resources properly.
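
The detector itself isn’t reproduced here, but the underlying idea is simple: wrap every Connection handed out to a test and compare acquisitions against closes when the test finishes. A minimal, hypothetical sketch using a plain JDK dynamic proxy (not the actual Hibernate test infrastructure; the class and method names are illustrative):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, not the actual Hibernate leak detector: every wrapped
// Connection bumps a counter on acquisition and decrements it on close(),
// so a non-zero count after a test run points to a leaked connection.
class ConnectionLeakTracker {

    private final AtomicInteger openCount = new AtomicInteger();

    Connection wrap(Connection target) {
        openCount.incrementAndGet();
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            (proxy, method, args) -> {
                if ("close".equals(method.getName())) {
                    openCount.decrementAndGet();
                }
                // Forward every call to the real connection
                return method.invoke(target, args);
            });
    }

    int openCount() {
        return openCount.get();
    }
}
```

A test harness could route connection acquisition through wrap() and fail the build whenever openCount() is non-zero after a test class finishes.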

The most common issues

One of the most widespread issues was caused by improper bootstrapping logic:

@Test
public void testInvalidMapping() {
    try {
        new MetadataSources( )
                .addAnnotatedClass( TheEntity.class )
                .buildMetadata();
        fail( "Was expecting failure" );
    }
    catch (AnnotationException ignore) {
    }
}

The issue here is that MetadataSources creates a BootstrapServiceRegistry behind the scenes, which in turn triggers the initialization of the underlying ConnectionProvider. Without closing the BootstrapServiceRegistry explicitly, the ConnectionProvider will not get a chance to close all the currently pooled JDBC Connection(s).

The fix is as simple as that:

@Test
public void testInvalidMapping() {
    MetadataSources metadataSources = new MetadataSources( )
        .addAnnotatedClass( TheEntity.class );
    try {
        metadataSources.buildMetadata();
        fail( "Was expecting failure" );
    }
    catch (AnnotationException ignore) {
    }
    finally {
        ServiceRegistry metaServiceRegistry = metadataSources.getServiceRegistry();
        if(metaServiceRegistry instanceof BootstrapServiceRegistry ) {
            BootstrapServiceRegistryBuilder.destroy( metaServiceRegistry );
        }
    }
}

Another recurring issue was improper transaction handling such as in the following example:

protected void cleanup() {
    Session s = getFactory().openSession();
    s.beginTransaction();

    TestEntity testEntity = s.get( TestEntity.class, "foo" );
    Assert.assertTrue( testEntity.getParams().isEmpty() );

    TestOtherEntity testOtherEntity = s.get( TestOtherEntity.class, "foo" );
    Assert.assertTrue( testOtherEntity.getParams().isEmpty() );

    s.getTransaction().commit();
    s.clear();
    s.close();
}

The first thing to notice is the lack of a try/finally block, which should close the session even if an exception is thrown. But that’s not all.

Not long ago, I fixed HHH-7412, meaning that, for RESOURCE_LOCAL transactions (e.g. JDBC Connection-bound transactions), the logical or physical Connection is closed only when the transaction ends (either by commit or rollback).

Before HHH-7412 was fixed, the Connection was closed automatically when the Hibernate Session was closed, but this behavior is not supported anymore. Nowadays, aside from closing the underlying Session, you have to commit or roll back the currently running Transaction as well:

protected void cleanup() {
    Session s = getFactory().openSession();
    s.beginTransaction();

    try {
        TestEntity testEntity = s.get( TestEntity.class, "foo" );
        Assert.assertTrue( testEntity.getParams().isEmpty() );

        TestOtherEntity testOtherEntity = s.get( TestOtherEntity.class, "foo" );
        Assert.assertTrue( testOtherEntity.getParams().isEmpty() );

        s.getTransaction().commit();
    }
    catch ( RuntimeException e ) {
        s.getTransaction().rollback();
        throw e;
    }
    finally {
        s.close();
    }
}

If you are curious about all the changes that were required, you can check the following two commits: da9c6e1 and f5e10c2. The good news is that the PostgreSQL job is running fine now, and soon we will add Oracle and MySQL jobs too.

Hibernate Community Newsletter 13/2016

Tagged as: Discussions, Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

Introduction

When I started writing High-Performance Java Persistence, I decided to install four database systems on my current machine:

  • Oracle XE

  • SQL Server Express Edition

  • PostgreSQL

  • MySQL

These four relational databases are the ones most commonly referred to on our forum, on StackOverflow, and in most JIRA issues. However, these four top-ranked databases are not enough because, from time to time, we need to integrate Pull Requests for other database systems, like Informix or DB2.

Since installing a plethora of databases on a single machine is not very practical, we can do better than that. Many database providers have published Docker images for their products, and this post is going to show you how easily we can start an Informix database.

Running Informix on Docker

IBM offers Docker images for both Informix Innovator-C and DB2 Express-C.

As explained on Docker Hub, you have to start the container using the following command:

docker run -it --name iif_innovator_c --privileged -p 9088:9088 -p 27017:27017 -p 27018:27018 -p 27883:27883 -e LICENSE=accept ibmcom/informix-innovator-c:latest

Once the container has been created, you can start it again later with the following command:

docker start iif_innovator_c

After the Docker container is started, we can attach a new shell to it:

docker exec -it iif_innovator_c bash

We have a databases.gradle configuration file which contains the connection properties for all databases we use for testing, and, for Informix, we have the following entry:

informix : [
    'db.dialect' : 'org.hibernate.dialect.InformixDialect',
    'jdbc.driver': 'com.informix.jdbc.IfxDriver',
    'jdbc.user'  : 'informix',
    'jdbc.pass'  : 'in4mix',
    'jdbc.url'   : 'jdbc:informix-sqli://192.168.99.100:9088/sysuser:INFORMIXSERVER=dev;user=informix;password=in4mix'
]

With this configuration in place, I only need to set up the current hibernate.properties configuration file to use Informix:

gradle clean testClasses -Pdb=informix

Now I can run any Informix integration test right from my IDE.

When I’m done, I stop the Docker container with the following command:

docker stop iif_innovator_c

As simple as that!

Introduction

When you’re using JDBC or generating SQL statements by hand, you always know what statements are sent to the database server. Although there are situations when a native query is the most obvious solution to a given business use case, most statements are simple enough to be generated automatically. That’s exactly what JPA and Hibernate do, so the application developer can focus on entity state transitions instead.

Nevertheless, the application developer must always assert that Hibernate generates the expected statements, as well as the number of statements being generated (to avoid N+1 query issues).
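
Such an assertion can be as small as a helper that compares the number of recorded statements against the expectation. The helper below is a hypothetical illustration (its name and signature are not part of Hibernate or any library):

```java
import java.util.List;

// Hypothetical helper, not a Hibernate API: fails a test when the number of
// recorded SQL statements differs from the expected count, which is the
// typical way N+1 query regressions are caught in a test suite.
final class SqlStatementCountAssertion {

    private SqlStatementCountAssertion() {
    }

    static void assertStatementCount(List<String> executedStatements, int expectedCount) {
        if (executedStatements.size() != expectedCount) {
            throw new AssertionError(
                "Expected " + expectedCount + " SQL statement(s) but "
                + executedStatements.size() + " were executed:\n"
                + String.join("\n", executedStatements));
        }
    }
}
```

The sections below show two ways of recording the statements such a helper would inspect.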

Proxying the underlying JDBC Driver or DataSource

In production, it’s very common to proxy the underlying Driver or Connection-providing mechanism so that the application can benefit from connection pooling, or so that connection pool usage can be monitored. For this purpose, the underlying JDBC Driver or DataSource can be proxied using tools such as P6spy or datasource-proxy. In fact, this is also a very convenient way of logging JDBC statements along with their bind parameters.

While for many applications it’s not an issue to add yet another dependency, when you are developing an open source framework you strive to minimize the number of dependencies your project needs. Luckily, for Hibernate, we don’t even need an external dependency to intercept JDBC statements, and this post is going to show you how easily you can tackle this requirement.

StatementInspector

For many use cases, the StatementInspector is the only thing you need to capture all SQL statements that are executed by Hibernate. The StatementInspector must be provided during SessionFactory bootstrapping as follows:

public class SQLStatementInterceptor {

    private final LinkedList<String> sqlQueries = new LinkedList<>();

    public SQLStatementInterceptor(SessionFactoryBuilder sessionFactoryBuilder) {
        sessionFactoryBuilder.applyStatementInspector(
        (StatementInspector) sql -> {
            sqlQueries.add( sql );
            return sql;
        } );
    }

    public LinkedList<String> getSqlQueries() {
        return sqlQueries;
    }
}

With this utility, we can easily verify the Oracle follow-on locking mechanism, which is caused by the FOR UPDATE clause restrictions imposed by the database engine:

sqlStatementInterceptor.getSqlQueries().clear();

List<Product> products = session.createQuery(
    "select p from Product p", Product.class )
.setLockOptions( new LockOptions( LockMode.PESSIMISTIC_WRITE ) )
.setFirstResult( 40 )
.setMaxResults( 10 )
.getResultList();

assertEquals( 10, products.size() );
assertEquals( 11, sqlStatementInterceptor.getSqlQueries().size() );

So far, so good. But as simple as the StatementInspector may be, it does not mix well with JDBC batching. StatementInspector intercepts the prepare phase, whereas for batching we need to intercept the addBatch and executeBatch method calls.
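
To see the interception principle in isolation, here is a stripped-down, hypothetical sketch using only a JDK dynamic proxy around the java.sql.PreparedStatement interface (an illustration of the idea; the actual solution that follows uses Mockito spies and plugs into Hibernate’s ConnectionProvider instead):

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: a JDK dynamic proxy that records
// addBatch()/executeBatch() invocations while forwarding every call
// to the real PreparedStatement.
class BatchCountingStatementWrapper {

    final AtomicInteger addBatchCount = new AtomicInteger();
    final AtomicInteger executeBatchCount = new AtomicInteger();

    PreparedStatement wrap(PreparedStatement target) {
        return (PreparedStatement) Proxy.newProxyInstance(
            PreparedStatement.class.getClassLoader(),
            new Class<?>[] { PreparedStatement.class },
            (proxy, method, args) -> {
                if ("addBatch".equals(method.getName())) {
                    addBatchCount.incrementAndGet();
                }
                else if ("executeBatch".equals(method.getName())) {
                    executeBatchCount.incrementAndGet();
                }
                // Forward the call to the real statement
                return method.invoke(target, args);
            });
    }
}
```

The same counting could be done for any other PreparedStatement method; the hard part is getting such a wrapper in between Hibernate and the JDBC Driver, which is what the ConnectionProvider below takes care of.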

Even without native support for such a feature, we can easily design a custom ConnectionProvider that can intercept all PreparedStatement method calls.

First, we start with the ConnectionProviderDelegate which is capable of substituting any other ConnectionProvider that would otherwise be picked by Hibernate (e.g. DatasourceConnectionProviderImpl, DriverManagerConnectionProviderImpl, HikariCPConnectionProvider) for the current configuration properties.

public class ConnectionProviderDelegate implements
        ConnectionProvider,
        Configurable,
        ServiceRegistryAwareService {

    private ServiceRegistryImplementor serviceRegistry;

    private ConnectionProvider connectionProvider;

    @Override
    public void injectServices(ServiceRegistryImplementor serviceRegistry) {
        this.serviceRegistry = serviceRegistry;
    }

    @Override
    public void configure(Map configurationValues) {
        @SuppressWarnings("unchecked")
        Map<String, Object> settings = new HashMap<>( configurationValues );
        settings.remove( AvailableSettings.CONNECTION_PROVIDER );
        connectionProvider = ConnectionProviderInitiator.INSTANCE.initiateService(
                settings,
                serviceRegistry
        );
        if ( connectionProvider instanceof Configurable ) {
            Configurable configurableConnectionProvider = (Configurable) connectionProvider;
            configurableConnectionProvider.configure( settings );
        }
    }

    @Override
    public Connection getConnection() throws SQLException {
        return connectionProvider.getConnection();
    }

    @Override
    public void closeConnection(Connection conn) throws SQLException {
        connectionProvider.closeConnection( conn );
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return connectionProvider.supportsAggressiveRelease();
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return connectionProvider.isUnwrappableAs( unwrapType );
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return connectionProvider.unwrap( unwrapType );
    }
}

With the ConnectionProviderDelegate in place, we can now implement the PreparedStatementSpyConnectionProvider which, using Mockito, returns a Connection spy instead of an actual JDBC Driver Connection object:

public class PreparedStatementSpyConnectionProvider
        extends ConnectionProviderDelegate {

    private final Map<PreparedStatement, String> preparedStatementMap = new LinkedHashMap<>();

    @Override
    public Connection getConnection() throws SQLException {
        Connection connection = super.getConnection();
        return spy( connection );
    }

    private Connection spy(Connection connection) {
        if ( new MockUtil().isMock( connection ) ) {
            return connection;
        }
        Connection connectionSpy = Mockito.spy( connection );
        try {
            doAnswer( invocation -> {
                PreparedStatement statement = (PreparedStatement) invocation.callRealMethod();
                PreparedStatement statementSpy = Mockito.spy( statement );
                String sql = (String) invocation.getArguments()[0];
                preparedStatementMap.put( statementSpy, sql );
                return statementSpy;
            } ).when( connectionSpy ).prepareStatement( anyString() );
        }
        catch ( SQLException e ) {
            throw new IllegalArgumentException( e );
        }
        return connectionSpy;
    }

    /**
     * Clears the recorded PreparedStatements and resets the associated Mocks.
     */
    public void clear() {
        preparedStatementMap.keySet().forEach( Mockito::reset );
        preparedStatementMap.clear();
    }

    /**
     * Get one and only one PreparedStatement associated to the given SQL statement.
     *
     * @param sql SQL statement.
     *
     * @return matching PreparedStatement.
     *
     * @throws IllegalArgumentException if there is no matching PreparedStatement or there are multiple matching instances.
     */
    public PreparedStatement getPreparedStatement(String sql) {
        List<PreparedStatement> preparedStatements = getPreparedStatements( sql );
        if ( preparedStatements.isEmpty() ) {
            throw new IllegalArgumentException(
                    "There is no PreparedStatement for this SQL statement " + sql );
        }
        else if ( preparedStatements.size() > 1 ) {
            throw new IllegalArgumentException( "There are " + preparedStatements
                    .size() + " PreparedStatements for this SQL statement " + sql );
        }
        return preparedStatements.get( 0 );
    }

    /**
     * Get the PreparedStatements that are associated with the given SQL statement.
     *
     * @param sql SQL statement.
     *
     * @return list of recorded PreparedStatements matching the SQL statement.
     */
    public List<PreparedStatement> getPreparedStatements(String sql) {
        return preparedStatementMap.entrySet()
                .stream()
                .filter( entry -> entry.getValue().equals( sql ) )
                .map( Map.Entry::getKey )
                .collect( Collectors.toList() );
    }

    /**
     * Get the PreparedStatements that were executed since the last clear operation.
     *
     * @return list of recorded PreparedStatements.
     */
    public List<PreparedStatement> getPreparedStatements() {
        return new ArrayList<>( preparedStatementMap.keySet() );
    }
}

To use this custom provider, we just need to provide an instance via the hibernate.connection.provider_class configuration property:

private PreparedStatementSpyConnectionProvider connectionProvider =
    new PreparedStatementSpyConnectionProvider();

@Override
protected void addSettings(Map settings) {
    settings.put(
            AvailableSettings.CONNECTION_PROVIDER,
            connectionProvider
    );
}

Now, we can assert that the underlying PreparedStatement is batching statements according to our expectations:

Session session = sessionFactory().openSession();
session.setJdbcBatchSize( 3 );

session.beginTransaction();
try {
    for ( long i = 0; i < 5; i++ ) {
        Event event = new Event();
        event.id = i;
        event.name = "Event " + i;
        session.persist( event );
    }
}
finally {
    connectionProvider.clear();
    session.getTransaction().commit();
    session.close();
}

PreparedStatement preparedStatement = connectionProvider.getPreparedStatement(
    "insert into Event (name, id) values (?, ?)" );

verify(preparedStatement, times( 5 )).addBatch();
verify(preparedStatement, times( 2 )).executeBatch();

The PreparedStatement is not a mock but a real object spy, which can intercept method calls while also propagating them to the underlying actual JDBC Driver PreparedStatement object.

Although getting the PreparedStatement by its associated SQL String is useful for the aforementioned test case, we can also get all executed PreparedStatements like this:

List<PreparedStatement> preparedStatements = connectionProvider.getPreparedStatements();
assertEquals(1, preparedStatements.size());
preparedStatement = preparedStatements.get( 0 );

verify(preparedStatement, times( 5 )).addBatch();
verify(preparedStatement, times( 2 )).executeBatch();

Hibernate Community Newsletter 12/2016

Tagged as: Discussions, Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

Hibernate Community Newsletter 11/2016

Tagged as: Discussions, Hibernate ORM

Welcome to the Hibernate community newsletter in which we share blog posts, forum, and StackOverflow questions that are especially relevant to our users.

Articles

In this post, I’d like you to meet Mark Paluch, who, among other projects, is one of our Hibernate OGM project contributors.

  1. Hi, Mark. Would you like to introduce yourself and tell us what you are currently working on?

    I am Mark Paluch, and I am working for Pivotal Software as Spring Data Engineer. I am a member of the JSR 365 EG (CDI 2.0), project lead of the lettuce Redis driver, and I run a couple of other open source projects. I enjoy tinkering on Internet of Things projects in my spare time. Before I joined Pivotal, I worked since the early 2000’s as a freelancer in a variety of projects using Java SE/EE and web technologies. My focus lies now on Spring Data with Redis, Cassandra, and MongoDB in particular.

  2. You have contributed a lot to the Hibernate OGM Redis module. Can you please tell us a little bit about Redis?

    I was not the first one bringing up the idea of Redis support in Hibernate OGM. In fact, Seiya Kawashima did a pretty decent job with his pull-request but at some point, Hibernate OGM development and the PR diverged. I came across the pull request and picked it up from there.

    Redis is an in-memory data structure store, used as a database, cache, and message broker. It originated as a key-value store but evolved by supporting various data structures like lists, sets, hashes, and much more. Redis is blazing fast, although it runs mostly single-threaded. Its performance comes from a concise implementation and from the fact that all operations are performed in-memory. This does not mean that Redis has no persistence: Redis is configured by default to store data on disk, and disk I/O is asynchronous. Through its versatile nature, Redis facilitates an enormous number of use cases, such as caching, queues, remote locking, just storing data, and much more. An important fact to me is that I’d never use Redis for data I cannot recover, as wiping data from Redis is just too easy, but using it as a semi-persistent store is the perfect use.

  3. You are also the author of the Lettuce open source project. How does it compare to Hibernate OGM?

    Hibernate OGM and lettuce are projects with different aims. Lettuce is a driver/Java-binding for Redis. It gives Java developers access to the Redis API using synchronous, asynchronous and reactive API bindings. You can invoke the Redis API with lettuce directly and get the most out of Redis if you need it. Any JDBC driver is situated on a similar abstraction level as lettuce except for some specific features. lettuce does not require connection-pooling and dealing with broken connections as it allows users to benefit from auto-reconnection and thread-safe connections. Hibernate OGM Redis uses this infrastructure and provides its data mapping features on top of lettuce.

  4. What benefit do you think Hibernate OGM offers to application developers compared to using the NoSQL API directly?

    Each NoSQL data store has its own, very specific API. Native APIs require developers to get familiar not only with the data store's traits but also with its API. The Redis API comes with over 150 commands that translate to 650+ commands with sub-command permutations.

    Every Redis command is very specific and behaves in its own way. The Redis command documentation provides detailed insight into the commands, but users are required to spend a fair amount of their time getting along with the native API.

    Hibernate OGM applies elements from the JPA spec to NoSQL data stores and comes up with an API that Java EE/JPA developers are familiar with. Hibernate OGM lowers barriers to entry. Hibernate OGM's purpose is mapping data into a NoSQL data store. Mapping simple JPA entities to the underlying data store works fine, but some concepts, like associations or transactions, do not map well to MongoDB and Redis. Users of Hibernate OGM need to be aware of the underlying persistence technology to get familiar with its concepts and strengths, as well as with their limitations.

    I also see a great advantage in the uniform configuration mechanism of Hibernate OGM. Every individual datastore driver comes with its own configuration method; Hibernate OGM unifies these styles into a common approach. One item on my wish list for Hibernate OGM is JNDI/WildFly configuration support, to achieve flexibility similar to what is possible with JDBC data sources.

  5. Do you plan on supporting Hibernate OGM in Spring Data as well?

    Hibernate OGM and Spring Data both follow the idea of supporting NoSQL data stores. Hibernate OGM employs several features from NoSQL data stores to enhance its data mapping, centering around JPA. JPA is an inherently relational API, which talks about concepts that are not necessarily transferable to the NoSQL world. Spring Data comes with modules for various popular data stores and takes a different approach: providing a consistent programming model for the supported stores without trying to force everything into a single abstracting API. Spring Data modules provide multiple levels of abstraction on top of the NoSQL data store APIs. Core concepts of NoSQL data stores are exposed through an API that commonly looks and feels like Spring endpoints. Hibernate OGM can already be used together with Spring Data JPA. A good use case is the Spring Data Repositories abstraction, which provides a uniform interface to access data from various data stores, does not require users to write a query language, and leverages store-specific features.

Thank you, Mark, for taking the time. It is a great honor to have you here. To reach Mark, you can follow him on Twitter.
