Composing applications with karaf features

I create web applications by first creating a set of OSGi bundles that form the building blocks of the application, and then using karaf features to pull the building blocks together into complete applications that run inside apache karaf.

The bundles are (in order of initial creation, and (more or less) order of maven reactor build):

  1. A bundle defining the liquibase schema for the application’s database
  2. A services bundle defining the OSGi service for the business logic layer
  3. A bundle defining the in-memory test database, with dummy data, used for unit tests and demo. I use apache derby for the in-memory test database
  4. A bundle defining the business logic and exposing it as an OSGi service
  5. A bundle defining a webcontext in the OSGi web whiteboard and an Apache Shiro Filter connecting to the webcontext and getting authentication and authorization info from authservice
  6. A bundle implementing the application’s web REST API, using the webcontext of the above bundle and connecting to the OSGi web whiteboard, with operations provided by an OSGi service provided by the backend bundle
  7. A bundle implementing the application’s web frontend, connecting to the above webcontext, and communicating with the application’s web REST API
  8. A bundle defining the production database. I use PostgreSQL for the production databases

Creating karaf features using maven

OSGi bundles are jar files with some extra fields added to the MANIFEST.MF, as outlined by the OSGi spec. The maven build of my projects uses the maven-bundle-plugin to create jar files that are also OSGi bundles.
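As a sketch, the maven-bundle-plugin setup in a bundle module's pom.xml can be as minimal as this (the Export-Package value is just an illustration, not from an actual project; the module also needs <packaging>bundle</packaging>):

```xml
<plugin>
 <groupId>org.apache.felix</groupId>
 <artifactId>maven-bundle-plugin</artifactId>
 <extensions>true</extensions>
 <configuration>
  <instructions>
   <!-- packages the bundle should export (illustrative value) -->
   <Export-Package>no.priv.bang.handlereg.services</Export-Package>
  </instructions>
 </configuration>
</plugin>
```

The plugin generates the OSGi manifest headers (Bundle-SymbolicName, Import-Package, Export-Package) at build time.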

“Feature” is, strictly speaking, not an OSGi concept. It’s a mechanism used by apache karaf to robustly load OSGi runtime dependencies in a version and release independent manner.

Apache karaf has many features built-in. Basically everything from apache servicemix and everything from OPS4J (aka “the pax stuff”) can be loaded from built-in features.

Karaf “feature repositories” are XML files that contain feature definitions. A feature definition has a name and can start OSGi bundles, e.g.:

Listing 1.

<features xmlns="http://karaf.apache.org/xmlns/features/v1.4.0" name="handlereg">
 <feature name="handlereg-services" version="1.0.0.SNAPSHOT">
  <bundle start-level="80">mvn:no.priv.bang.handlereg/handlereg.services/1.0.0-SNAPSHOT</bundle>
 </feature>
</features>

The above example is a feature repository, containing a feature named “handlereg-services”.

When the feature handlereg-services is installed, it will start the OSGi bundle in the <bundle> element, referenced with maven coordinates consisting of groupId, artifactId and version.

The karaf-maven-plugin can be used in a bundle maven module to create a feature repository containing a feature matching the bundle built by the maven module, and attach the feature repository to the resulting maven artifact.
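A minimal sketch of that karaf-maven-plugin setup in a bundle module's pom.xml (the plugin coordinates and goal are real, the execution id and phase are illustrative choices):

```xml
<plugin>
 <groupId>org.apache.karaf.tooling</groupId>
 <artifactId>karaf-maven-plugin</artifactId>
 <executions>
  <execution>
   <id>generate-feature-repository</id>
   <phase>generate-resources</phase>
   <goals>
    <goal>features-generate-descriptor</goal>
   </goals>
  </execution>
 </executions>
</plugin>
```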

In addition to starting bundles, features can depend on other features, which will cause those features to be loaded.
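A feature depends on another feature with a nested <feature> element, e.g. like this (hypothetical feature names, following the naming of the example above):

```xml
<feature name="handlereg-backend" version="${project.version}">
 <feature>handlereg-services</feature>
 <bundle>mvn:no.priv.bang.handlereg/handlereg.backend/${project.version}</bundle>
</feature>
```

Installing handlereg-backend would first install the handlereg-services feature, then start the backend bundle.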

The bundle feature repositories can be included into a master feature repository and used to compose features that make up complete applications, which is what this article is about. See the section Composing features to create an application at the end of this blog post.

Defining the database schema

I use liquibase to create the schemas, and treat schema creation as code.

Liquibase has multiple syntaxes: XML, JSON, YAML and SQL. Using the SQL syntax is similar to e.g. using Flyway. Using the non-SQL syntaxes gives you a benefit that Flyway doesn’t have: cross-DBMS support.

I mainly use the XML syntax, because the Liquibase XML schema gives me good editor support in my XML editor when editing changesets.

I also use the SQL syntax, but only for data, either initial data for the production database or dummy data for the test database. I don’t use the SQL syntax for actual database schema changes, because that would quickly end up not being cross-DBMS compatible.

The ER models of my applications are normalized and contain the entities the application is about. At the ER modeling stage, I don’t think about Java objects, I just try to make the ER model fit my mental picture of the problem space.

I start by listing the entities, e.g. for the weekly allowance app

  1. accounts
  2. transactions (i.e. jobs or payments)
  3. transaction types (i.e. something describing the job or payment)

Then I list the connections, e.g. like so

  1. One account may have many transactions, while each transaction belongs to only one account (1-n)
  2. Each transaction must have a type, while each transaction type can belong to multiple transactions (1-n)

Then I start coding:

  1. Create a standard OSGi bundle maven project
  2. Import the bundle into the IDE
  3. Create a JUnit test, where I fire up a derby in-memory database
  4. Let the IDE create a class for applying liquibase scripts to a JDBC DataSource
  5. Create a maven jar resource containing the liquibase XML changelog (I create an application specific directory inside src/main/resources/, not because it’s needed at runtime, since resources are bundle local, but because I’ve found the need to use liquibase schemas from different applications in JUnit tests, and then it makes things simpler if the liquibase script directories don’t overlap)
  6. Create a method in the JUnit test to insert data in the first table the way the schema is supposed to look; the insert is expected to fail (since the table doesn’t exist yet)
  7. Create a changeset for the first table, e.g. like so

    Listing 2.

    <changeSet author="sb" id="ukelonn-1.0.0-accounts">
     <preConditions onFail="CONTINUE" >
      <not>
       <tableExists tableName="accounts" />
      </not>
     </preConditions>
     <createTable tableName="accounts">
      <column autoIncrement="true" name="account_id" type="INTEGER">
       <constraints primaryKey="true" primaryKeyName="account_primary_key"/>
      </column>
      <column name="username" type="VARCHAR(64)">
       <constraints nullable="false" unique="true"/>
      </column>
     </createTable>
    </changeSet>

    Some points to note, both are “lessons learned”:

    1. The <preConditions> element will skip the changeSet without failing if the table already exists
    2. The <changeSet> is just for a single table
  8. After the test runs green, add a select to fetch back the inserted data and assert on the results
  9. Loop from 6 until all tables and indexes and constraints are in place and tested

Note: All of my webapps so far have the logged-in user as a participant in the database. I don’t put most of the user information into the database. I use a webapp called authservice to handle authentication and authorization, and also to provide user information (e.g. full name and email address). What I need to put into the database is some kind of link to authservice.

The username column is used to look up the account_id, which is what is used in the ER model; e.g. a transactions table could have an account_id column that is indexed and can be joined with the accounts table in a select.

Some examples of liquibase schema definitions

  1. The sonar-collector database schema, a very simple schema for storing sonarqube key metrics
  2. The authservice database schema
  3. The ukelonn database schema, a database schema for a weekly allowance app. This is the first one created and it has several mistakes:
    1. The entire schema is in a single changeSet, rather than having a changeSet for each table and/or view (the reason is that this liquibase file was initially created by dumping an existing database schema, and the result was one big changeSet)
    2. No preConditions guard around the creation of each table, which meant that moving the users table out of the original schema and into the authservice schema became a really tricky operation
  4. The handlereg database schema (a database schema for a groceries registration app)

Some examples of unit tests for testing database schemas:

  1. AuthserviceLiquibaseTest
  2. UkelonnLiquibaseTest
  3. HandleregLiquibaseTest

Defining the business logic OSGi service

Once a datamodel is in place I start on the business logic service interface.

This is the service that will be exposed by the business logic bundle and that the web API will listen for.

Creating the interface, I have the following rough plan:

  1. Arguments to the methods will be either beans or lists of beans (this maps to JSON objects and arrays of JSON objects transferred in the REST API)
  2. Beans used by the business logic service interface are defined in the same bundle as the service interface, with the following rules:
    1. All data members are private
    2. All data members have a public getter but no setter (i.e. the beans are immutable)
    3. There is a no-args constructor for use by jackson (jackson creates beans and sets the values using reflection)
    4. There is a constructor initializing all data members, for use in unit tests and when returning bean values
  3. Matching the beans with the ER datamodel isn’t a consideration:
    1. Beans may be used by a single method in the service interface
    2. Beans may be denormalized in structure compared to the entities in the ER model (beans typically contain rows from the result of a join in the datamodel, rather than individual entities)
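A minimal bean following the rules above could look like this (the class and member names are made up for illustration, not taken from an actual application):

```java
public class Butikk {
    // All data members are private, with public getters but no setters
    private int storeId;
    private String storename;

    // No-args constructor, used by jackson when deserializing JSON
    public Butikk() {}

    // Constructor initializing all data members, used in unit tests
    // and when returning bean values from the business logic
    public Butikk(int storeId, String storename) {
        this.storeId = storeId;
        this.storename = storename;
    }

    public int getStoreId() { return storeId; }
    public String getStorename() { return storename; }
}
```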

Some examples of business logic service interfaces:

  1. UserManagementService (user administration operations used by the web API of the authservice authentication and authorization (and user management) app)
  2. UkelonnService (the web API operations of a weekly allowance app)
  3. HandleregService (the web API operations of groceries registrations and statistics app)

Note: Creating the business logic service interface is an iterative process. I add methods while working on the implementation of the business logic and move them up to the service interface when I’m satisfied with them.

Creating a test database

The test database bundle has a DS component that exposes the PreHook OSGi service. PreHook has a single method “prepare” that takes a DataSource parameter. An example is the HandleregTestDbLiquibaseRunner DS component from the handlereg.db.liquibase.test bundle in the handlereg groceries shopping registration application:

Listing 3.

@Component(immediate=true, property = "name=handleregdb")
public class HandleregTestDbLiquibaseRunner implements PreHook {
    @Override
    public void prepare(DataSource datasource) throws SQLException {
        try (Connection connect = datasource.getConnection()) {
            HandleregLiquibase handleregLiquibase = new HandleregLiquibase();
            handleregLiquibase.createInitialSchema(connect);
            insertMockData(datasource);
        } catch (LiquibaseException e) {
            logservice.log(LogService.LOG_ERROR, "Error creating handlereg test database schema", e);
        }
    }
}

In the implementation of the “prepare” method, the class containing the schema is instantiated, and run to create the schema. Then Liquibase is used directly on files residing in the test database bundle, to fill the database with test data.

To ensure that the correct PreHook will be called for a given datasource, the DS component is given a name, “name=handleregdb” in the above example.

The same name is used in the pax-jdbc-config configuration that performs the magic of creating a DataSource from a DataSourceFactory. The pax-jdbc-config configuration resides in the template feature.xml file of the bundle project, i.e. in the handlereg.db.liquibase.test/src/main/feature/feature.xml file. The pax-jdbc-config configuration in that template feature.xml, looks like this:

Listing 4.

<feature name="handlereg-db-test" description="handlereg test DataSource" version="${project.version}">
 <feature>handlereg-db-liquibase-test</feature><!-- the feature generated from this bundle project -->
 <feature>pax-jdbc-config</feature>
 <config name="org.ops4j.datasource-handlereg-test">
  osgi.jdbc.driver.name=derby
  ops4j.preHook=handleregdb
  url=jdbc:derby:memory:handlereg;create=true
  dataSourceName=jdbc/handlereg
 </config>
</feature>

The XML example above defines a feature that:

  1. Depends on the feature created by the bundle project
  2. Depends on the pax-jdbc-config feature (built-in in karaf)
  3. Creates the following configuration (will end up in the file etc/org.ops4j.datasource-handlereg-test.cfg in the karaf installation):

    Listing 5.

    osgi.jdbc.driver.name=derby
    ops4j.preHook=handleregdb
    url=jdbc:derby:memory:handlereg;create=true
    dataSourceName=jdbc/handlereg

    Explanation of the configuration:

    1. osgi.jdbc.driver.name=derby will make pax-jdbc-config use the DataSourceFactory that has the name “derby”, if there are multiple DataSourceFactory services in the OSGi service registry
    2. ops4j.preHook=handleregdb makes pax-jdbc-config look for a PreHook service named “handleregdb” and call its “prepare” method (i.e. the liquibase script runnner defined at the start of this section)
    3. url=jdbc:derby:memory:handlereg;create=true is the JDBC URL, which is one third of the connection properties needed to create a DataSource from a DataSourceFactory (the other two thirds are username and password, but they aren’t needed for an in-memory test database)
    4. dataSourceName=jdbc/handlereg gives the name “jdbc/handlereg” to the DataSource OSGi service, so that components that wait for a DataSource OSGi service can qualify what service they are listening for

Implementing the business logic

The business logic OSGi bundle defines a DS component accepting a DataSource with a particular name and exposing the business logic service interface:

Listing 5.

@Component(service=HandleregService.class, immediate=true)
public class HandleregServiceProvider implements HandleregService {

    private DataSource datasource;

    @Reference(target = "(osgi.jndi.service.name=jdbc/handlereg)")
    public void setDatasource(DataSource datasource) {
        this.datasource = datasource;
    }

    ... // Implementing the methods of the HandleregService interface
}

The target argument with the value “jdbc/handlereg”, matching the dataSourceName config value, ensures that only the correct DataSource service will be injected.

The implementations of the methods in the business logic service interface all follow the same pattern:

  1. The first thing that happens is that a connection is created in a try-with-resource. This ensures that the database server doesn’t suffer resource exhaustion
  2. The outermost try-with-resource is followed by a catch clause that will catch anything, log the catch and re-throw inside an application specific runtime exception (I really don’t like checked exceptions)
  3. A new try-with-resource is used to create a PreparedStatement.
  4. Inside the try, parameters are added to the PreparedStatement. Note: Parameter replacements in PreparedStatements are safe with respect to SQL injection (parameters are added after the SQL has been parsed)
  5. Then, if it’s a query, the returned ResultSet is handled in another try-with-resource and then the result set is looped over to create a java bean or a collection of beans to be returned

I.e. a typical business logic service method looks like this:

Listing 6.

public List<Transaction> findLastTransactions(int userId) {
    List<Transaction> handlinger = new ArrayList<>();
    String sql = "select t.transaction_id, t.transaction_time, s.store_name, s.store_id, t.transaction_amount from transactions t join stores s on s.store_id=t.store_id where t.transaction_id in (select transaction_id from transactions where account_id=? order by transaction_time desc fetch next 5 rows only) order by t.transaction_time asc";
    try(Connection connection = datasource.getConnection()) {
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setInt(1, userId);
            try (ResultSet results = statement.executeQuery()) {
                while ( {
                    int transactionId = results.getInt(1);
                    Date transactionTime = new Date(results.getTimestamp(2).getTime());
                    String butikk = results.getString(3);
                    int storeId = results.getInt(4);
                    double belop = results.getDouble(5);
                    Transaction transaction = new Transaction(transactionId, transactionTime, butikk, storeId, belop);
                    handlinger.add(transaction);
                }
            }
        }
    } catch (SQLException e) {
        String message = String.format("Failed to retrieve a list of transactions for user %d", userId);
        logError(message, e);
        throw new HandleregException(message, e);
    }
    return handlinger;
}

To someone familiar with spring and spring boot this may seem like a lot of boilerplate, but I rather like it. I’ve had the misfortune to have to debug into spring applications created by others, and to make reports from relational databases with schemas created by spring repositories.

Compared to my bad spring experience:

  1. This is very easy to debug: you can step and/or breakpoint straight into the code handling the JDBC query and unpacking the results
  2. If the returned ResultSet is empty, it’s easy to just paste the SQL query from a string in the Java code into an SQL tool (e.g. Oracle SQL Developer, MS SQL Server Management Studio, or PostgreSQL pgadmin) and figure out why the returned result set is empty
  3. Going the other way, it’s very simple to use the database’s SQL tool to figure out a query that becomes the heart of a method
  4. Since the ER diagram is manually created for ease of query, rather than autogenerated by spring, it’s easy to make reports and aggregations in the database

Defining a webcontext and hooking into Apache Shiro

This bundle contains a lot of boilerplate that will be basically the same from webapp to webapp, except for the actual path of the webcontext. I have created an authservice sample application that is as simple as I could make it, to copy paste into a bundle like this.

As mentioned in the sample application, I use a webapp called “authservice” to provide both apache shiro based authentication and authorization, and a simple user management GUI.

Authservice has been released to maven central and can be used in any apache karaf application by loading authservice’s feature repository from maven central and then installing the appropriate feature.

All of my web applications have an OSGi web whiteboard webcontext that provides the application with a local path, and is hooked into Apache Shiro for authorization and authentication.

The bundle contains one DS component exposing the ServletContextHelper OSGi service that is used to create the webcontext, e.g. like so:

Listing 7.

@Component(
    service=ServletContextHelper.class,
    property= {
        HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_NAME + "=sampleauthserviceclient",
        HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_PATH + "=/sampleauthserviceclient"})
public class AuthserviceSampleClientServletContextHelper extends ServletContextHelper { }

The bundle will also contain a DS component exposing a servlet Filter as an OSGi service and hooking into the OSGi web whiteboard and into the webcontext, e.g. like so:

Listing 8.

@Component(
    service=Filter.class,
    property= {
        HttpWhiteboardConstants.HTTP_WHITEBOARD_FILTER_PATTERN + "=/*",
        HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_SELECT + "=(" + HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_NAME +"=sampleauthserviceclient)"})
public class AuthserviceSampleClientShiroFilter extends AbstractShiroFilter { // NOSONAR

    private Realm realm;
    private SessionDAO session;
    private static final Ini INI_FILE = new Ini();
    static {
        // Can't use the Ini.fromResourcePath(String) method because it can't find "shiro.ini" on the classpath in an OSGi context
        INI_FILE.load(AuthserviceSampleClientShiroFilter.class.getClassLoader().getResourceAsStream("shiro.ini"));
    }

    @Reference
    public void setRealm(Realm realm) {
        this.realm = realm;
    }

    @Reference
    public void setSession(SessionDAO session) {
        this.session = session;
    }

    @Activate
    public void activate() {
        IniWebEnvironment environment = new IniWebEnvironment();
        environment.setIni(INI_FILE);
        environment.init();

        DefaultWebSessionManager sessionmanager = new DefaultWebSessionManager();
        sessionmanager.setSessionDAO(session);

        DefaultWebSecurityManager securityManager = DefaultWebSecurityManager.class.cast(environment.getWebSecurityManager());
        securityManager.setSessionManager(sessionmanager);
        securityManager.setRealm(realm);

        setSecurityManager(securityManager);
    }
}

I hope to make the definition and use of the webcontext simpler when moving to OSGi 7, because the web whiteboard of OSGi 7 will be able to use Servlet 3.0 annotations to specify the webcontexts, servlets and filters.

I also hope to be able to remove a lot of boilerplate from the shiro filter when moving to the more OSGi friendly Shiro 1.5.

Implementing a REST API

The REST API for one of my webapps is a thin shim over the application’s business logic service interface:

  1. I create a DS component that subclasses the Jersey ServletContainer and exposes Servlet as an OSGi service, hooking into the OSGi web whiteboard and the webcontext created by the web security bundle (I have created a ServletContainer subclass that simplifies this process)
  2. The component gets an injection of the application’s business logic OSGi service
  3. The DS component adds the injected OSGi service as a service to be injected into Jersey resources implementing REST endpoints
  4. I create a set of stateless Jersey resources implementing the REST endpoints, which get injected with the application’s business logic OSGi service
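A rough sketch of the steps above, as such a DS component might look (the whiteboard property values, package name, and the reload-based registration are illustrative assumptions, not the actual implementation):

```java
@Component(service=Servlet.class, immediate=true,
    property= {
        HttpWhiteboardConstants.HTTP_WHITEBOARD_SERVLET_PATTERN + "=/api/*",
        HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_SELECT + "=(" + HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_NAME + "=handlereg)"})
public class HandleregWebApiServlet extends ServletContainer {

    private HandleregService handlereg;

    @Reference
    public void setHandleregService(HandleregService handlereg) {
        this.handlereg = handlereg;
    }

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        // Copy the existing Jersey config, scan the resource package,
        // and bind the injected OSGi service so that Jersey can inject
        // it into the stateless resources
        ResourceConfig copy = new ResourceConfig(getConfiguration());
        copy.packages("no.priv.bang.handlereg.web.api.resources");
        copy.register(new AbstractBinder() {
            @Override
            protected void configure() {
                bind(handlereg).to(HandleregService.class);
            }
        });
        reload(copy);
    }
}
```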

Some examples of web APIs:

  1. A user management REST API wrapping the UserManagement OSGi service
  2. The REST API of the weekly allowance app, wrapping the UkelonnService OSGi service
  3. The REST API of the groceries registration app, wrapping the HandleregService OSGi service

I have also created a sample application demonstrating how to add OSGi services to services injected into stateless Jersey resources implementing REST endpoints.

Implementing a web frontend

Composing features to create an application

At this point there are a lot of building blocks but no application.

Each of the building blocks has its own feature repository file attached to the maven artifact.

What I do is to manually create a feature repository that imports all of the generated feature repositories, and then hand-write application features that depend on a set of the building block features. I don’t involve the karaf-maven-plugin in this because I only want to load the other feature repositories, not inline their contents. I use the maven-resources-plugin resource filtering to expand all of the maven properties, and then use the build-helper-maven-plugin to attach the filtered feature repository to a pom maven artifact.
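The filtering-and-attaching can be sketched like this in the pom of the master feature repository project (the execution ids, paths and file names are illustrative; the plugin goals are real):

```xml
<plugin>
 <groupId>org.apache.maven.plugins</groupId>
 <artifactId>maven-resources-plugin</artifactId>
 <executions>
  <execution>
   <id>filter-feature-repository</id>
   <phase>generate-resources</phase>
   <goals><goal>copy-resources</goal></goals>
   <configuration>
    <outputDirectory>${project.build.directory}</outputDirectory>
    <resources>
     <resource>
      <directory>src/main/feature</directory>
      <filtering>true</filtering><!-- expands ${project.version} etc. -->
     </resource>
    </resources>
   </configuration>
  </execution>
 </executions>
</plugin>
<plugin>
 <groupId>org.codehaus.mojo</groupId>
 <artifactId>build-helper-maven-plugin</artifactId>
 <executions>
  <execution>
   <id>attach-feature-repository</id>
   <phase>package</phase>
   <goals><goal>attach-artifact</goal></goals>
   <configuration>
    <artifacts>
     <artifact>
      <file>${project.build.directory}/feature.xml</file>
      <type>xml</type>
      <classifier>features</classifier>
     </artifact>
    </artifacts>
   </configuration>
  </execution>
 </executions>
</plugin>
```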

Some examples of manually created feature repositories:

  1. The authservice authentication and authorization and user management application master feature repository, where the handwritten features are:
    1. authservice-with-dbrealm-and-session which pulls in everything needed for karaf authentication and authorization against a JDBC realm, except for the actual database connection. This feature pulls in none of the user administration support of authservice
    2. authservice-with-testdb-dbrealm-and-session which builds on authservice-with-dbrealm-and-session and adds a derby test database with mock data
    3. authservice-with-productiondb-dbrealm-and-session which builds on authservice-with-dbrealm-and-session and adds a PostgreSQL database connection
    4. authservice-user-admin which builds on authservice-with-dbrealm-and-session and adds user administration, but pulls in no actual JDBC database
    5. user-admin-with-testdb which builds on authservice-user-admin and adds a derby test database with mock data
    6. user-admin-with-productiondb which builds on authservice-user-admin and adds a PostgreSQL database connection
  2. The ukelonn weekly allowance application master feature repository, where the handwritten features are:
    1. ukelonn-with-derby which pulls in all bundles needed to start the weekly allowance app with a database with mock data, and also pulls in the authentication and authorization app, also with an in-memory database with mock data (no user administration UI pulled in, since the weekly allowance app has its own user administration)
    2. ukelonn-with-postgresql which pulls in all bundles needed to start the weekly allowance app with a JDBC connection to a PostgreSQL database, and also pulls in the authentication and authorization app connected to a PostgreSQL database
    3. ukelonn-with-postgresql-and-provided-authservice which pulls in the weekly allowance app with a PostgreSQL JDBC connection and no authorization and authentication stuff. This feature won’t load if the authservice application hasn’t already been loaded
  3. The handlereg groceries registration application master feature, where the handwritten features are:
    1. handlereg-with-derby starts the application with a test database and also pulls in authservice (that’s the <feature>user-admin-with-testdb</feature>, which actually pulls in the full user administration application (with a derby test database))
    2. handlereg-with-derby-and-provided-authservice is the same as handlereg-with-derby except for not pulling in authservice. This requires the authservice to already be installed before this feature is installed, but has the advantage of not uninstalling authservice when this feature is uninstalled
    3. handlereg-with-postgresql starts the application with a PostgreSQL database connection and authservice
    4. handlereg-with-postgresql-and-provided-authservice starts the application with a PostgreSQL database and no authservice. This is actually the feature used to load handlereg in the production system (since it means the feature can be uninstalled and reinstalled without affecting other applications)

As an example, the handlereg-with-derby feature mentioned above looks like this:

Listing 9.

<feature name="handlereg-with-derby" description="handlereg webapp with derby database" version="${project.version}">
 <feature>user-admin-with-testdb</feature>
 <feature>handlereg-db-test</feature>
 <!-- ...followed by the features for the handlereg services, backend, web security, web API and web frontend bundles... -->
</feature>

To start the composed application, install and start apache karaf, and from the karaf console, first load the master feature repository and then install the manually composed feature:

Listing 10.

feature:repo-add mvn:no.priv.bang.handlereg/handlereg/LATEST/xml/features
feature:install handlereg-with-derby
