Edit: I no longer use this approach. I use pax-jdbc-config instead. See the article Rewriting applications to use pax-jdbc-config and liquibase for details
When creating databases for my apache karaf based web applications, I want the following things:
- A database-independent schema creation using liquibase (i.e. supporting multiple relational database systems)
- Pluggable databases (each web application typically has an in-memory derby database for testing and evaluation and a PostgreSQL database for production use)
- The ability to run multiple applications in the same apache karaf instance without the application databases colliding
- The possibility to configure the JDBC connection information using the karaf configuration
The way I accomplish this is:
- Create an application specific OSGi service for connecting to the database
- Create declarative services (DS) components implementing the service for each database system I wish to support
- Create a bundle implementing the calls to liquibase (and containing the liquibase scripts as resources) and use this bundle from all DS components implementing the database service
- Use high level karaf features to compose my webapps, with variants for each supported database system
OSGi 101: when I talk about “OSGi service” I mean “Java interface”. Java interfaces form the basis of the OSGi plug-in model. When an OSGi bundle (a “bundle” is a JAR file with some OSGi specific headers added to its MANIFEST.MF) is loaded, it can have dependencies on services provided by other bundles. When those dependencies are satisfied, the bundle can go active and provide its own services to other bundles that may be listening.
I have created a Java interface called DatabaseService that defines the operations I need against a JDBC database, and they aren’t many:
- getDatasource() that will return a DataSource connected to a relational database with the expected schema (Note: if connecting to the database fails, the DS component providing the service never goes active, so the only way code ever gets to call this method is if the connection was successful. This simplifies the code using the method)
- getConnection() that returns a Connection to the database. This is a convenience method, because what the implementations actually do is call DataSource.getConnection(). It still makes sense to have this method in the interface, because:
- Fetching a connection is the only thing done with the DataSource, so having the method makes all usages a little simpler and shorter
- The DS components providing the service have all ended up implementing this method anyway, for the calls to liquibase
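A minimal sketch of what the DatabaseService interface could look like; the exact signatures (e.g. whether getConnection() declares SQLException) are assumptions here, not the verbatim interface:
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public interface DatabaseService {
    // A DataSource that is already connected to a database with the expected schema
    DataSource getDatasource();

    // Convenience method, equivalent to getDatasource().getConnection()
    Connection getConnection() throws SQLException;
}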
The application specific DatabaseService definitions are mostly without methods of their own, like e.g.
public interface AuthserviceDatabaseService extends DatabaseService {
}
However, I’ve had one case where I needed to have very different syntax for the different database systems supported, and then I added methods returning database specific versions of the problematic SQL queries to the application specific database service definition:
public interface UkelonnDatabase extends DatabaseService {
    String sumOverYearQuery();
    String sumOverMonthQuery();
}
For the curious: the SQL with different syntax had to do with aggregation over months and years, where derby and PostgreSQL need the queries written differently.
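A sketch of how such database specific query methods might be used from the business logic; the method shape, parameter binding and result mapping are assumptions for illustration, not the actual ukelonn code:
    double sumOverYear(String username) {
        // The injected UkelonnDatabase service supplies SQL text matching the database
        // it is connected to, so this code is the same for derby and PostgreSQL
        try(Connection connection = database.getConnection()) {
            try(PreparedStatement statement = connection.prepareStatement(database.sumOverYearQuery())) {
                statement.setString(1, username); // assumes the query takes a username parameter
                try(ResultSet results = statement.executeQuery()) {
                    return results.next() ? results.getDouble(1) : 0.0; // assumed result shape
                }
            }
        } catch (SQLException e) {
            throw new UkelonnException("Failed to sum over year for user " + username, e);
        }
    }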
The next step is to create a bundle holding the liquibase schema definition of the database. This bundle neither expects OSGi service injection, nor exposes any services. It’s an OSGi library bundle, which is like using a regular jar, except that the exports and imports of packages are controlled by OSGi and OSGi-headers in the bundle’s MANIFEST.MF.
More OSGi terminology: “bundle” means a jar-file with an OSGi-compliant MANIFEST.MF.
The liquibase bundles contain a single class and liquibase scripts as resources. In OSGi the resources are bundle local, so other bundles can’t load them directly. But they can instantiate a class from the liquibase bundle, and that class can in turn load the resources that reside in the same bundle as the class.
The classes in the liquibase bundles typically contain a method for initial schema setup, and a method for schema modifications (a sketch of such a class follows the examples below). In addition to the scripts in the liquibase bundle, the database plugins contain their own liquibase scripts (the liquibase scripts of the in-memory derby test database plugins contain sets of hopefully realistic test data, and the liquibase scripts of the PostgreSQL plugins contain initial data such as e.g. a single initial user).
- Examples of liquibase bundle classes:
- Examples of liquibase bundle schema setup scripts
- Examples of initial data:
  - ukelonn
  - authservice
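A sketch of what a liquibase bundle class can look like, assuming the plain liquibase Java API; the changelog file names below are made up for illustration and are not the actual script names:
import java.sql.Connection;

import liquibase.Liquibase;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class UkelonnLiquibase {

    public void createInitialSchema(Connection connection) throws Exception {
        applyChangelog(connection, "ukelonn-db-changelog/db-changelog-initial.xml"); // assumed file name
    }

    public void updateSchema(Connection connection) throws Exception {
        applyChangelog(connection, "ukelonn-db-changelog/db-changelog-updates.xml"); // assumed file name
    }

    private void applyChangelog(Connection connection, String changelog) throws Exception {
        // The ClassLoaderResourceAccessor uses this class' classloader, so liquibase
        // finds the changelog resources inside this bundle even though OSGi keeps
        // bundle resources private to the bundle
        Liquibase liquibase = new Liquibase(
            changelog,
            new ClassLoaderResourceAccessor(getClass().getClassLoader()),
            new JdbcConnection(connection));
        liquibase.update("");
    }
}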
The declarative services (DS) components of the database plugins expect an injection of the DataSourceFactory OSGi service. The DataSourceFactory interface isn’t part of JDBC (like DataSource itself is), but is part of the OSGi service platform.
To ensure that the correct DataSourceFactory is injected, it is possible to filter on the osgi.jdbc.driver.name service property (service properties are key/value pairs associated with an OSGi service), like e.g.
@Component(service=UkelonnDatabase.class, immediate=true)
public class PGUkelonnDatabaseProvider implements UkelonnDatabase {
    private DataSourceFactory dataSourceFactory;

    @Reference(target="(osgi.jdbc.driver.name=PostgreSQL JDBC Driver)")
    public void setDataSourceFactory(DataSourceFactory dataSourceFactory) {
        this.dataSourceFactory = dataSourceFactory;
    }
}
and
@Component(service=UkelonnDatabase.class, immediate=true)
public class UkelonnDatabaseProvider implements UkelonnDatabase {
    private DataSourceFactory dataSourceFactory;

    @Reference(target="(osgi.jdbc.driver.name=derby)")
    public void setDataSourceFactory(DataSourceFactory dataSourceFactory) {
        this.dataSourceFactory = dataSourceFactory;
    }
}
Once all @Reference service injections are satisfied, the method annotated with @Activate is called. This method needs to create a successful connection to the database and run liquibase scripts. A method for a derby memory database can look like this (the full source of the UkelonnDatabaseProvider class is here):
@Component(service=UkelonnDatabase.class, immediate=true)
public class UkelonnDatabaseProvider implements UkelonnDatabase {
    @Activate
    public void activate() {
        createDatasource();
        UkelonnLiquibase liquibase = new UkelonnLiquibase();
        try(Connection connect = getConnection()) {
            try {
                liquibase.createInitialSchema(connect);
                insertMockData();
                liquibase.updateSchema(connect);
            } finally {
                // Liquibase sets Connection.autoCommit to false, set it back to true
                connect.setAutoCommit(true);
            }
        } catch (Exception e) {
            logError("Failed to create derby test database schema", e);
        }
    }
}
What the method does is first create a DataSource object (stored in a field and accessible through the DatabaseService interface), then create a connection in a try-with-resources and run the liquibase scripts using that connection before the try-with-resources releases the connection. The scripts are typically an initial schema, followed by initial data for the database, followed by modifications to the schema.
It is possible to create configuration for a DS component using the karaf console command line, and creating JDBC connection info for a PostgreSQL database can be done with the following commands:
config:edit no.priv.bang.ukelonn.db.postgresql.PGUkelonnDatabaseProvider
config:property-set ukelonn.db.jdbc.url "jdbc:postgresql:///ukelonn"
config:property-set ukelonn.db.jdbc.user "karaf"
config:property-set ukelonn.db.jdbc.password "karaf"
config:update
The resulting configuration will be injected into the @Activate method like e.g. so (the full source of the PGUkelonnDatabaseProvider class is here):
@Component(service=UkelonnDatabase.class, immediate=true)
public class PGUkelonnDatabaseProvider implements UkelonnDatabase {
    private DataSourceFactory dataSourceFactory; // injected, see the @Reference setter above
    private DataSource datasource;

    @Activate
    public void activate(Map<String, Object> config) {
        createDatasource(config);
        try(Connection connect = getConnection()) {
            // ... (create schema and insert initial data)
        } catch (Exception e) {
            logError("Failed to create ukelonn database schema in the PostgreSQL ukelonn database", e);
        }
    }

    void createDatasource(Map<String, Object> config) {
        Properties properties = createDatabaseConnectionProperties(config);
        try {
            datasource = dataSourceFactory.createDataSource(properties);
        } catch (Exception e) {
            logError("PostgreSQL database service failed to create connection to local DB server", e);
        }
    }

    Properties createDatabaseConnectionProperties(Map<String, Object> config) {
        String jdbcUrl = (String) config.getOrDefault(UKELONN_JDBC_URL, "jdbc:postgresql:///ukelonn");
        String jdbcUser = (String) config.get(UKELONN_JDBC_USER);
        String jdbcPassword = (String) config.get(UKELONN_JDBC_PASSWORD);
        Properties properties = new Properties();
        properties.setProperty(DataSourceFactory.JDBC_URL, jdbcUrl);
        if (jdbcUser != null) {
            properties.setProperty(DataSourceFactory.JDBC_USER, jdbcUser);
        }
        if (jdbcPassword != null) {
            properties.setProperty(DataSourceFactory.JDBC_PASSWORD, jdbcPassword);
        }
        return properties;
    }
}
The injected config is sent into the method creating the DataSource instance.
Important: the DataSource instance is cached and is used to represent the connected database. The Connection instances are not cached. They are created on demand in try-with-resources, to ensure that they are closed and all transactions are completed.
The method creating the DataSource instance passes the config on to a method that picks the JDBC connection info out of the injected config and puts it into a Properties instance that can be used to connect to a JDBC database.
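For completeness, a sketch of how the cached DataSource and the on-demand connections can be exposed through the DatabaseService methods; this follows the snippets above but is not the verbatim source:
@Component(service=UkelonnDatabase.class, immediate=true)
public class PGUkelonnDatabaseProvider implements UkelonnDatabase {
    private DataSource datasource; // cached: represents the connected database

    @Override
    public DataSource getDatasource() {
        return datasource;
    }

    @Override
    public Connection getConnection() throws SQLException {
        // A new Connection on each call, closed by the caller's try-with-resources
        return datasource.getConnection();
    }
}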
Using the pluggable database is done by injecting the application specific database service into the DS component implementing the application’s business logic, e.g. like so (full UkelonnServiceProvider source here):
@Component(service=UkelonnService.class, immediate=true)
public class UkelonnServiceProvider extends UkelonnServiceBase {
    private UkelonnDatabase database;

    @Activate
    public void activate() {
        // Nothing to do here
    }

    @Reference
    public void setUkelonnDatabase(UkelonnDatabase database) {
        this.database = database;
    }
}
Using the injected service means calling the getConnection() method in a try-with-resources, doing the SQL operations, and then letting the end of the try-with-resources block release the connection, e.g. like so
@Component(service=UkelonnService.class, immediate=true)
public class UkelonnServiceProvider extends UkelonnServiceBase {
    ...

    @Override
    public Account getAccount(String username) {
        try(Connection connection = database.getConnection()) {
            try(PreparedStatement statement = connection.prepareStatement("select * from accounts_view where username=?")) {
                statement.setString(1, username);
                try(ResultSet resultset = statement.executeQuery()) {
                    if (resultset.next()) {
                        return mapAccount(resultset);
                    }
                    throw new UkelonnException(String.format("Got an empty ResultSet while fetching account from the database for user \"%s\"", username));
                }
            }
        } catch (SQLException e) {
            throw new UkelonnException(String.format("Caught SQLException while fetching account from the database for user \"%s\"", username), e);
        }
    }

    public Account mapAccount(ResultSet results) throws SQLException {
        String username = results.getString(UkelonnServiceProvider.USERNAME);
        no.priv.bang.osgiservice.users.User user = useradmin.getUser(username);
        return new Account(
            results.getInt("account_id"),
            username,
            user.getFirstname(),
            user.getLastname(),
            results.getDouble("balance"));
    }
}
It probably helps this approach to pluggable databases that I’m using JDBC prepared statements directly instead of an ORM like e.g. hibernate (but that’s a story/rant for another day…).
Picking what database to use is done using karaf features.
The karaf-maven-plugin is used on all maven modules that create an OSGi bundle, to create a karaf feature repository file and attach it to the OSGi bundle artifact.
In addition, I create a handwritten feature repository file where I include all of the feature files of the OSGi bundle artifacts using their maven coordinates, and then, in the same file, create high level features that start the application with either database.
Examples of hand-written feature repositories:
- aggregate feature repository for ukelonn (maven URL mvn:no.priv.bang.ukelonn/karaf/LATEST/xml/features )
- aggregate feature repository for authservice (maven URL mvn:no.priv.bang.authservice/authservice/1.6.0/xml/features )
Using the authservice as an example:
- First define a feature that loads the features required for the application, except for the database
<feature name="authservice-with-dbrealm-and-session">
  <feature>authservice-web-security-dbrealm</feature>
  <feature>authservice-web-security-memorysession</feature>
  <feature>authservice-users</feature>
  <feature>authservice-web-security</feature>
</feature>
- Then define a feature that combines that feature with the feature created when building the OSGi bundle for the derby test database
<feature name="authservice-with-derby-dbrealm-and-session">
  <feature>authservice-db-derby-test</feature>
  <feature>authservice-with-dbrealm-and-session</feature>
</feature>
- Finally define a feature that combines the first feature with the feature created when building the OSGi bundle for the PostgreSQL database
<feature name="authservice-with-postgresql-dbrealm-and-session">
  <feature>authservice-db-postgresql</feature>
  <feature>authservice-with-dbrealm-and-session</feature>
</feature>
This means that trying out the authservice application can be done by:
- Download and start apache karaf using the quick start guide
- Install authservice with derby by loading the feature repository using the maven coordinates, and then loading the feature that pulls in the application and the derby database
karaf@root()> feature:repo-add mvn:no.priv.bang.authservice/authservice/LATEST/xml/features
Adding feature url mvn:no.priv.bang.authservice/authservice/LATEST/xml/features
karaf@root()> feature:install authservice-with-derby-dbrealm-and-session
karaf@root()>
- All bundles required by the features are pulled in from maven central and the application is started and becomes available on http://localhost:8181/authservice
Installing and starting the PostgreSQL variant takes a little more work (but not much):
- Install and start PostgreSQL (left as an exercise for the reader)
- Create a PostgreSQL user named karaf with a password (in the examples “karaf”)
- Create a blank database named “authservice” owned by user karaf
- Download and start apache karaf using the quick start guide
- Create database connection configuration from the karaf console
karaf@root()> config:edit no.priv.bang.ukelonn.db.postgresql.PGUkelonnDatabaseProvider
karaf@root()> config:property-set ukelonn.db.jdbc.url "jdbc:postgresql:///ukelonn"
karaf@root()> config:property-set ukelonn.db.jdbc.user "karaf"
karaf@root()> config:property-set ukelonn.db.jdbc.password "karaf"
karaf@root()> config:update
karaf@root()>
- Install authservice with PostgreSQL by loading the feature repository using the maven coordinates, and then loading the feature that pulls in the application and the PostgreSQL database
karaf@root()> feature:repo-add mvn:no.priv.bang.authservice/authservice/LATEST/xml/features
Adding feature url mvn:no.priv.bang.authservice/authservice/LATEST/xml/features
karaf@root()> feature:install authservice-with-postgresql-dbrealm-and-session
karaf@root()>
- All bundles required by the features are pulled in from maven central and the application is started and becomes available on http://localhost:8181/authservice