I create web applications by first creating a set of OSGi bundles that form the building blocks of the application, and then using karaf features to pull the building blocks together into complete applications that run inside apache karaf.
The bundles are (in order of initial creation, and (more or less) order of maven reactor build):
- A bundle defining the liquibase schema for the application’s database
- A services bundle defining the OSGi service for the business logic layer
- A bundle defining the in-memory test database, with dummy data, used for unit tests and demo. I use apache derby for the in-memory test database
- A bundle defining the business logic and exposing it as an OSGi service
- A bundle defining a webcontext in the OSGi web whiteboard and an Apache Shiro Filter connecting to the webcontext and getting authentication and authorization info from authservice
- A bundle implementing the application’s web REST API, using the webcontext of the above bundle and connecting to the OSGi web whiteboard, with operations provided by an OSGi service provided by the backend bundle
- A bundle implementing the application’s web frontend, connecting to the above webcontext, and communicating with the application’s web REST API
- A bundle defining the production database. I use PostgreSQL for the production databases
Creating karaf features using maven
OSGi bundles are jar files with some extra fields added to the MANIFEST.MF, as outlined by the OSGi spec. The maven build of my projects uses the maven-bundle-plugin to create jar files that are also OSGi bundles.
“Feature” is, strictly speaking, not an OSGi concept. It’s a mechanism used by apache karaf to robustly load OSGi runtime dependencies in a version and release independent manner.
Apache karaf has many features built-in. Basically everything from apache servicemix and everything from OPS4J (aka “the pax stuff”) can be loaded from built-in features.
Karaf “feature repositories” are XML files that contain feature definitions. A feature definition has a name and can start OSGi bundles, e.g.:
<features xmlns="http://karaf.apache.org/xmlns/features/v1.5.0" name="handlereg.services">
    <feature name="handlereg-services" version="1.0.0.SNAPSHOT">
        <bundle start-level="80">mvn:no.priv.bang.handlereg/handlereg.services/1.0.0-SNAPSHOT</bundle>
    </feature>
</features>
The above example is a feature repository, containing a feature named “handlereg-services”.
When the feature handlereg-services is installed, it will start the OSGi bundle in the <bundle> element, referenced with maven coordinates consisting of groupId, artifactId and version.
The karaf-maven-plugin can be used in a bundle maven module to create a feature repository containing a feature matching the bundle built by the maven module, and attach the feature repository to the resulting maven artifact.
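For reference, a minimal sketch of what that karaf-maven-plugin setup might look like in the bundle project’s pom.xml (the execution id is an assumption; the features-generate-descriptor goal reads an optional template in src/main/feature/feature.xml, generates target/feature/feature.xml, and attaches it to the artifact with the “features” classifier):

<plugin>
    <groupId>org.apache.karaf.tooling</groupId>
    <artifactId>karaf-maven-plugin</artifactId>
    <executions>
        <execution>
            <!-- Generate a feature repository for this bundle and attach it to the maven artifact -->
            <id>generate-feature-repository</id>
            <goals>
                <goal>features-generate-descriptor</goal>
            </goals>
        </execution>
    </executions>
</plugin>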
In addition to starting bundles, features can depend on other features, which will cause those features to be loaded.
The bundle feature repositories can be included into a master feature repository and used to compose features that make up complete applications, which is what this article is about. See the section Composing features to create an application at the end of this blog post.
Defining the database schema
I use liquibase to create the schemas, and treat schema creation as code.
Liquibase has multiple syntaxes: XML, JSON, YAML and SQL. Using the SQL syntax is similar to e.g. using Flyway. Using the non-SQL syntaxes gives you a benefit that Flyway doesn’t have: cross-DBMS support.
I mainly use the XML syntax, because using the Liquibase XML schema in my XML editor gives me good editor support for editing changelogs.
I also use the SQL syntax, but only for data, either initial data for the production database or dummy data for the test database. I don’t use the SQL syntax for actual database schema changes, because that would quickly end up not being cross-DBMS compatible.
The ER models of my applications are normalized and contain the entities the application is about. At the ER modeling stage, I don’t think about Java objects, I just try to make the ER model fit my mental picture of the problem space.
I start by listing the entities, e.g. for the weekly allowance app
- accounts
- transactions (i.e. jobs or payments)
- transaction types (i.e. something describing the job or payment)
Then I list the connections, e.g. like so
- One account may have many transactions, while each transaction belongs to only one account (1-n)
- Each transaction must have a type, while each transaction type can belong to multiple transactions (1-n)
Then I start coding:
1. Create a standard OSGi bundle maven project
2. Import the bundle into the IDE
3. Create a JUnit test, where I fire up a derby in-memory database
4. Let the IDE create a class for applying liquibase scripts to a JDBC DataSource
5. Create a maven jar resource containing the liquibase XML changelog (I create an application specific directory inside src/main/resources/, not because it’s needed at runtime, since resources are bundle local, but because I’ve found the need to use liquibase schemas from different applications in JUnit tests, and then it makes things simpler if the liquibase script directories don’t overlap)
6. Create a method in the JUnit test to insert data in the first table the way the schema is supposed to look; the insert is expected to fail (since there is no table yet)
7. Create a changeset for the first table, e.g. like so:
<changeSet author="sb" id="ukelonn-1.0.0-accounts">
    <preConditions onFail="CONTINUE" >
        <not>
            <tableExists tableName="accounts" />
        </not>
    </preConditions>
    <createTable tableName="accounts">
        <column autoIncrement="true" name="account_id" type="INTEGER">
            <constraints primaryKey="true" primaryKeyName="account_primary_key"/>
        </column>
        <column name="username" type="VARCHAR(64)">
            <constraints nullable="false" unique="true"/>
        </column>
    </createTable>
</changeSet>
Some points to note, both are “lessons learned”:
- The <preConditions> element will skip the changeSet without failing if the table already exists
- The <changeSet> is just for a single table
8. After the test runs green, add a select to fetch back the inserted data and assert on the results
9. Loop from 6 until all tables and indexes and constraints are in place and tested (a sketch of such a test follows this list)
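To make the loop concrete, here is a minimal sketch of what such a JUnit test might look like for the weekly allowance app, assuming JUnit 4 and a hypothetical UkelonnLiquibase class created in step 4 (analogous to the HandleregLiquibase class shown later in this article):

import static org.junit.Assert.*;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.derby.jdbc.EmbeddedDataSource;
import org.junit.Test;

public class UkelonnLiquibaseTest {

    @Test
    public void testCreateSchemaAndInsertAccount() throws Exception {
        // Step 3: fire up a derby in-memory database
        EmbeddedDataSource datasource = new EmbeddedDataSource();
        datasource.setDatabaseName("memory:ukelonn");
        datasource.setCreateDatabase("create");

        try (Connection connection = datasource.getConnection()) {
            // Step 4: apply the liquibase changelog to the database
            // (UkelonnLiquibase is an assumed wrapper around the changelog)
            UkelonnLiquibase liquibase = new UkelonnLiquibase();
            liquibase.createInitialSchema(connection);

            // Step 6: insert data the way the schema is supposed to look
            // (this fails until the changeset for the table is in place)
            try (PreparedStatement statement = connection.prepareStatement("insert into accounts (username) values (?)")) {
                statement.setString(1, "jad");
                statement.executeUpdate();
            }

            // Step 8: fetch the inserted data back and assert on the results
            try (PreparedStatement statement = connection.prepareStatement("select * from accounts where username=?")) {
                statement.setString(1, "jad");
                try (ResultSet results = statement.executeQuery()) {
                    assertTrue(results.next());
                    assertEquals("jad", results.getString("username"));
                }
            }
        }
    }
}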
Note: All of my webapps so far have the logged-in user as a participant in the database. I don’t put most of the user information into the database. I use a webapp called authservice to handle authentication and authorization, and also to provide user information (e.g. full name and email address). What I need to put into the database is some kind of link to authservice.
The username column is used to look up the account_id, which is what is used in the ER model; e.g. a transactions table could have a column that is indexed and can be joined with the accounts table in a select (a sketch of such changesets follows below).
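A minimal sketch of what the liquibase changesets for such a link might look like, following the pattern of the accounts changeset above (the table, column and constraint names are assumptions):

<changeSet author="sb" id="ukelonn-1.0.0-transactions-account-fk">
    <!-- Link each transaction to the account it belongs to -->
    <addForeignKeyConstraint
        baseColumnNames="account_id"
        baseTableName="transactions"
        constraintName="fk_transactions_account"
        referencedColumnNames="account_id"
        referencedTableName="accounts"/>
</changeSet>
<changeSet author="sb" id="ukelonn-1.0.0-transactions-account-index">
    <!-- Index the join column so the select can use it -->
    <createIndex indexName="ix_transactions_account_id" tableName="transactions">
        <column name="account_id"/>
    </createIndex>
</changeSet>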
Some examples of liquibase schema definitions
- The sonar-collector database schema, a very simple schema for storing sonarqube key metrics
- The authservice database schema
- The ukelonn database schema, a database schema for a weekly allowance app; this is the first one created and it has several mistakes:
- The entire schema is in a single changeset, rather than having a changeSet for each table and/or view (the reason for this is that this liquibase file was initially created by dumping an existing database schema and the result was a big single changeset)
- No preConditions guard around the creation of each table meant that moving the users table out of the original schema and into the authservice schema became a real tricky operation
- The handlereg database schema (a database schema for a groceries registration app)
Some examples of unit tests for testing database schemas:
Defining the business logic OSGi service
Once a datamodel is in place I start on the business logic service interface.
This is the service that will be exposed by the business logic bundle and that the web API will listen for.
Creating the interface, I have the following rough plan:
- Arguments to the methods will be either beans or lists of beans (this maps to JSON objects and arrays of JSON objects transferred in the REST API)
- Beans used by the business logic service interface are defined in the same bundle as the service interface, with the following rules (a bean sketch follows this list):
- All data members are private
- All data members have a public getter but no setter (i.e. the beans are immutable)
- There is a no-args constructor for use by jackson (jackson creates beans and sets the values using reflection)
- There is a constructor initializing all data members, for use in unit tests and when returning bean values
- Matching the beans with the ER datamodel isn’t a consideration:
- Beans may be used by a single method in the service interface
- Beans may be denormalized in structure compared to the entities in the ER model (beans typically contain rows from the result of a join in the datamodel, rather than individual entities)
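As a concrete illustration, here is a minimal sketch of a bean following these rules, modeled on the Transaction bean used in the business logic example later in this article (the field names are assumptions):

import java.util.Date;

public class Transaction {
    // All data members are private
    private int transactionId;
    private Date transactionTime;
    private String storeName;
    private int storeId;
    private double transactionAmount;

    // No-args constructor for use by jackson, which sets the fields using reflection
    public Transaction() { }

    // Constructor initializing all data members, for unit tests and when returning bean values
    public Transaction(int transactionId, Date transactionTime, String storeName, int storeId, double transactionAmount) {
        this.transactionId = transactionId;
        this.transactionTime = transactionTime;
        this.storeName = storeName;
        this.storeId = storeId;
        this.transactionAmount = transactionAmount;
    }

    // Public getters but no setters, i.e. the bean is immutable
    public int getTransactionId() { return transactionId; }
    public Date getTransactionTime() { return transactionTime; }
    public String getStoreName() { return storeName; }
    public int getStoreId() { return storeId; }
    public double getTransactionAmount() { return transactionAmount; }
}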
Some examples of business logic service interfaces:
- UserManagementService (user administration operations used by the web API of the authservice authentication and authorization (and user management) app)
- UkelonnService (the web API operations of a weekly allowance app)
- HandleregService (the web API operations of a groceries registration and statistics app)
Note: Creating the business logic service interface is an iterative process. I add methods while working on the implementation of the business logic and move them up to the service interface when I’m satisfied with them.
Creating a test database
The test database bundle has a DS component that exposes the PreHook OSGi service. PreHook has a single method “prepare” that takes a DataSource parameter. An example is the HandleregTestDbLiquibaseRunner DS component from the handlereg.db.liquibase.test bundle in the handlereg groceries shopping registration application:
@Component(immediate=true, property = "name=handleregdb")
public class HandleregTestDbLiquibaseRunner implements PreHook {
    ...
    @Override
    public void prepare(DataSource datasource) throws SQLException {
        try (Connection connect = datasource.getConnection()) {
            HandleregLiquibase handleregLiquibase = new HandleregLiquibase();
            handleregLiquibase.createInitialSchema(connect);
            insertMockData(connect);
            handleregLiquibase.updateSchema(connect);
        } catch (LiquibaseException e) {
            logservice.log(LogService.LOG_ERROR, "Error creating handlereg test database schema", e);
        }
    }
}
In the implementation of the “prepare” method, the class containing the schema is instantiated, and run to create the schema. Then Liquibase is used directly on files residing in the test database bundle, to fill the database with test data.
To ensure that the correct PreHook will be called for a given datasource, the DS component is given a name, “name=handleregdb” in the above example.
The same name is used in the pax-jdbc-config configuration that performs the magic of creating a DataSource from a DataSourceFactory. The pax-jdbc-config configuration resides in the template feature.xml file of the bundle project, i.e. in the handlereg.db.liquibase.test/src/main/feature/feature.xml file. The pax-jdbc-config configuration in that template feature.xml, looks like this:
<feature name="handlereg-db-test" description="handlereg test DataSource" version="${project.version}">
    <config name="org.ops4j.datasource-handlereg-test">
        osgi.jdbc.driver.name=derby
        dataSourceName=jdbc/handlereg
        url=jdbc:derby:memory:handlereg;create=true
        ops4j.preHook=handleregdb
    </config>
    <capability>
        osgi.service;objectClass=javax.sql.DataSource;effective:=active;osgi.jndi.service.name=jdbc/handlereg
    </capability>
    <feature>${karaf-feature-name}</feature>
    <feature>pax-jdbc-config</feature>
</feature>
The XML example above defines a feature that:
- Depends on the feature created by the bundle project
- Depends on the pax-jdbc-config feature (built-in in karaf)
- Creates the following configuration (which will end up in the file etc/org.ops4j.datasource-handlereg-test.cfg in the karaf installation):
osgi.jdbc.driver.name=derby
dataSourceName=jdbc/handlereg
url=jdbc:derby:memory:handlereg;create=true
ops4j.preHook=handleregdb
Explanation of the configuration:
- osgi.jdbc.driver.name=derby will make pax-jdbc-config use the DataSourceFactory that has the name “derby”, if there are multiple DataSourceFactory services in the OSGi service registry
- ops4j.preHook=handleregdb makes pax-jdbc-config look for a PreHook service named “handleregdb” and call its “prepare” method (i.e. the liquibase script runner defined at the start of this section)
- url=jdbc:derby:memory:handlereg;create=true is the JDBC URL, which is one third of the connection properties needed to create a DataSource from a DataSourceFactory (the other two parts are username and password, but they aren’t needed for an in-memory test database)
- dataSourceName=jdbc/handlereg gives the name “jdbc/handlereg” to the DataSource OSGi service, so that components that wait for a DataSource OSGi service can qualify which service they are listening for
Implementing the business logic
The business logic OSGi bundle defines a DS component accepting a DataSource with a particular name and exposing the business logic service interface:
@Component(service=HandleregService.class, immediate=true)
public class HandleregServiceProvider implements HandleregService {
    private DataSource datasource;

    @Reference(target = "(osgi.jndi.service.name=jdbc/handlereg)")
    public void setDatasource(DataSource datasource) {
        this.datasource = datasource;
    }

    ... // Implementing the methods of the HandleregService interface
}
The target argument with the value “jdbc/handlereg”, matching the dataSourceName config value, ensures that only the correct DataSource service will be injected.
The implementations of the methods in the business logic service interface all follow the same pattern:
- The first thing that happens is that a connection is created in a try-with-resources. This ensures that the database server doesn’t suffer resource exhaustion
- The outermost try-with-resources is followed by a catch clause that will catch anything, log the exception and re-throw it wrapped in an application specific runtime exception (I really don’t like checked exceptions)
- A new try-with-resources is used to create a PreparedStatement
- Inside the try, parameters are added to the PreparedStatement. Note: Parameter replacements in PreparedStatements are safe with respect to SQL injection (parameters are added after the SQL has been parsed)
- Then, if it’s a query, the returned ResultSet is handled in another try-with-resources, and the result set is looped over to create a java bean or a collection of beans to be returned
I.e. a typical business logic service method looks like this:
public List<Transaction> findLastTransactions(int userId) {
    List<Transaction> handlinger = new ArrayList<>();
    String sql = "select t.transaction_id, t.transaction_time, s.store_name, s.store_id, t.transaction_amount from transactions t join stores s on s.store_id=t.store_id where t.transaction_id in (select transaction_id from transactions where account_id=? order by transaction_time desc fetch next 5 rows only) order by t.transaction_time asc";
    try (Connection connection = datasource.getConnection()) {
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setInt(1, userId);
            try (ResultSet results = statement.executeQuery()) {
                while (results.next()) {
                    int transactionId = results.getInt(1);
                    Date transactionTime = new Date(results.getTimestamp(2).getTime());
                    String butikk = results.getString(3);
                    int storeId = results.getInt(4);
                    double belop = results.getDouble(5);
                    Transaction transaction = new Transaction(transactionId, transactionTime, butikk, storeId, belop);
                    handlinger.add(transaction);
                }
            }
        }
    } catch (SQLException e) {
        String message = String.format("Failed to retrieve a list of transactions for user %d", userId);
        logError(message, e);
        throw new HandleregException(message, e);
    }
    return handlinger;
}
To someone familiar with spring and spring boot this may seem like a lot of boilerplate, but I rather like it. I’ve had the misfortune to have to debug into spring applications created by others, and to make reports from relational databases with schemas created by spring repositories.
Compared to my bad spring experience:
- This is very easy to debug: you can step and/or breakpoint straight into the code handling the JDBC query and the unpacking of the results
- If the returned ResultSet is empty, it’s easy to just paste the SQL query from a string in the Java code into an SQL tool (e.g. Oracle SQL Developer, MS SQL Server Management Studio, or PostgreSQL pgadmin) and figure out why the returned result set is empty
- Going the other way, it’s very simple to use the database’s SQL tool to figure out a query that becomes the heart of a method
- Since the ER diagram is manually created for ease of query, rather than autogenerated by spring, it’s easy to make reports and aggregations in the database
Defining a webcontext and hooking into Apache Shiro
This bundle contains a lot of boilerplate that will be basically the same from webapp to webapp, except for the actual path of the webcontext. I have created an authservice sample application that is as simple as I could make it, to copy-paste into a bundle like this.
As mentioned in the sample application, I use a webapp called “authservice” to provide both apache shiro based authentication and authorization, and a simple user management GUI.
Authservice has been released to maven central and can be used in any apache karaf application by loading authservice’s feature repository from maven central and then installing the appropriate feature.
All of my web applications have an OSGi web whiteboard webcontext that provides the application with a local path, and is hooked into Apache Shiro for authorization and authentication.
The bundle contains one DS component exposing the ServletContextHelper OSGi service that is used to create the webcontext, e.g. like so:
@Component(
    property= {
        HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_NAME+"=sampleauthserviceclient",
        HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_PATH+"=/sampleauthserviceclient"},
    service=ServletContextHelper.class,
    immediate=true
)
public class AuthserviceSampleClientServletContextHelper extends ServletContextHelper { }
The bundle will also contain a DS component exposing a servlet Filter as an OSGi service and hooking into the OSGi web whiteboard and into the webcontext, e.g. like so:
@Component(
    property= {
        HttpWhiteboardConstants.HTTP_WHITEBOARD_FILTER_PATTERN+"=/*",
        HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_SELECT + "=(" + HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_NAME +"=sampleauthserviceclient)",
        "servletNames=sampleauthserviceclient"},
    service=Filter.class,
    immediate=true
)
public class AuthserviceSampleClientShiroFilter extends AbstractShiroFilter { // NOSONAR
    private Realm realm;
    private SessionDAO session;
    private static final Ini INI_FILE = new Ini();
    static {
        // Can't use the Ini.fromResourcePath(String) method because it can't find "shiro.ini" on the classpath in an OSGi context
        INI_FILE.load(AuthserviceSampleClientShiroFilter.class.getClassLoader().getResourceAsStream("shiro.ini"));
    }

    @Reference
    public void setRealm(Realm realm) {
        this.realm = realm;
    }

    @Reference
    public void setSession(SessionDAO session) {
        this.session = session;
    }

    @Activate
    public void activate() {
        IniWebEnvironment environment = new IniWebEnvironment();
        environment.setIni(INI_FILE);
        environment.setServletContext(getServletContext());
        environment.init();
        DefaultWebSessionManager sessionmanager = new DefaultWebSessionManager();
        sessionmanager.setSessionDAO(session);
        sessionmanager.setSessionIdUrlRewritingEnabled(false);
        DefaultWebSecurityManager securityManager = DefaultWebSecurityManager.class.cast(environment.getWebSecurityManager());
        securityManager.setSessionManager(sessionmanager);
        securityManager.setRealm(realm);
        setSecurityManager(securityManager);
        setFilterChainResolver(environment.getFilterChainResolver());
    }
}
I hope to make the definition and use of the webcontext simpler when moving to OSGi 7, because the web whiteboard of OSGi 7 will be able to use Servlet 3.0 annotations to specify the webcontexts, servlets and filters.
I also hope to be able to remove a lot of boilerplate from the shiro filter when moving to the more OSGi friendly Shiro 1.5.
Implementing a REST API
The REST API for one of my webapps is a thin shim over the application’s business logic service interface:
- I create a DS component that subclasses the Jersey ServletContainer and exposes Servlet as an OSGi service, hooking into the OSGi web whiteboard and the webcontext created by the web security bundle (I have created a ServletContainer subclass that simplifies this process)
- The component gets an injection of the application’s business logic OSGi service
- The DS component adds the injected OSGi service as a service to be injected into Jersey resources implementing REST endpoints
- I create a set of stateless Jersey resources implementing the REST endpoints, which get injected with the application’s business logic OSGi service (e.g. like the sketch following this list)
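A minimal sketch of what such a stateless Jersey resource might look like, wrapping the findLastTransactions method of the HandleregService shown earlier (the resource paths are assumptions):

import java.util.List;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource paths; Jersey creates a new instance per request,
// so the resource itself holds no state
@Path("/transactions")
@Produces(MediaType.APPLICATION_JSON)
public class TransactionsResource {

    // Injected by Jersey from the services the DS component subclassing
    // ServletContainer has registered for injection
    @Inject
    HandleregService handlereg;

    @GET
    @Path("/{userId}")
    public List<Transaction> findLastTransactions(@PathParam("userId") int userId) {
        return handlereg.findLastTransactions(userId);
    }
}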
Some examples of web APIs:
- A user management REST API wrapping the UserManagement OSGi service
- The REST API of the weekly allowance app, wrapping the UkelonnService OSGi service
- The REST API of the groceries registration app, wrapping the HandleregService OSGi service
I have also created a sample application demonstrating how to add OSGi services to services injected into stateless Jersey resources implementing REST endpoints.
Implementing a web frontend
I have a maven-centric approach to web frontends, instead of the more common node-centric approach. I won’t detail the approach here, since I’ve already done so in Deliver react.js from apache karaf and Simplified delivery of react.js from apache karaf.
The frontends plug into the OSGi web whiteboard and the webcontext, and from the webcontext they get both a local path matching the REST API, and authentication and authorization.
Some examples of web frontends:
Composing features to create an application
At this point there are a lot of building blocks but no application.
Each of the building blocks has its own feature repository file attached to the maven artifact.
What I do is to manually create a feature repository that imports all of the generated feature repositories, and then hand-write application features that depend on a set of the building block features. I don’t involve the karaf-maven-plugin in this, because I only want to load the other feature repositories, not inline their contents. I use maven-resources-plugin resource filtering to expand all of the maven properties, and then use the build-helper-maven-plugin to attach the filtered feature repository to a pom maven artifact (e.g. like the sketch below).
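A minimal sketch of the pom configuration for this, using the standard copy-resources and attach-artifact goals (the directory layout and execution ids are assumptions):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <executions>
        <execution>
            <id>filter-feature-repository</id>
            <phase>generate-resources</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <!-- Expand ${project.version} and other maven properties in the handwritten feature.xml -->
                <outputDirectory>${project.build.directory}/feature</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/main/feature</directory>
                        <filtering>true</filtering>
                    </resource>
                </resources>
            </configuration>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>attach-feature-repository</id>
            <phase>package</phase>
            <goals>
                <goal>attach-artifact</goal>
            </goals>
            <configuration>
                <!-- Attach the filtered feature repository to the pom artifact with the "features" classifier -->
                <artifacts>
                    <artifact>
                        <file>${project.build.directory}/feature/feature.xml</file>
                        <type>xml</type>
                        <classifier>features</classifier>
                    </artifact>
                </artifacts>
            </configuration>
        </execution>
    </executions>
</plugin>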
Some examples of manually created feature repositories:
- The authservice authentication and authorization and user management application master feature repository, where the handwritten features are:
- authservice-with-dbrealm-and-session which pulls in everything needed for karaf authentication and authorization against a JDBC realm, except for the actual database connection. This feature pulls in none of the user administration support of authservice
- authservice-with-testdb-dbrealm-and-session which builds on authservice-with-dbrealm-and-session and adds a derby test database with mock data
- authservice-with-productiondb-dbrealm-and-session which builds on authservice-with-dbrealm-and-session and adds a PostgreSQL database connection
- authservice-user-admin which builds on authservice-with-dbrealm-and-session and adds user administration, but pulls in no actual JDBC database
- user-admin-with-testdb which builds on authservice-user-admin and adds a derby test database with mock data
- user-admin-with-productiondb which builds on authservice-user-admin and adds a PostgreSQL database connection
- The ukelonn weekly allowance application master feature repository, where the handwritten features are:
- ukelonn-with-derby which pulls in all bundles needed to start the weekly allowance app with a database with mock data, and also pulls in the authentication and authorization app, also with an in-memory database with mock data (no user administration UI pulled in, since the weekly allowance app has its own user administration)
- ukelonn-with-postgresql which pulls in all bundles needed to start the weekly allowance app with a JDBC connection to a PostgreSQL database, and also pulls in the authentication and authorization app connected to a PostgreSQL database
- ukelonn-with-postgresql-and-provided-authservice which pulls in the weekly allowance app with a PostgreSQL JDBC connection and no authorization and authentication stuff. This feature won’t load if the authservice application hasn’t already been loaded
- The handlereg groceries registration application master feature repository, where the handwritten features are:
- handlereg-with-derby starts the application with a test database and also pulls in authservice (that’s the <feature>user-admin-with-testdb</feature>, which actually pulls in the full user administration application (with a derby test database))
- handlereg-with-derby-and-provided-authservice is the same as handlereg-with-derby except for not pulling in authservice. This requires authservice to already be installed before this feature is installed, but has the advantage of not uninstalling authservice when this feature is uninstalled
- handlereg-with-postgresql starts the application with a PostgreSQL database connection and authservice
- handlereg-with-postgresql-and-provided-authservice starts the application with a PostgreSQL database and no authservice. This is actually the feature used to load handlereg in the production system (since it means the feature can be uninstalled and reinstalled without affecting other applications)
As an example, the handlereg-with-derby feature mentioned above looks like this:
<feature name="handlereg-with-derby" description="handlereg webapp with derby database" version="${project.version}">
    <feature>handlereg-db-test</feature>
    <feature>handlereg-web-frontend</feature>
    <feature>user-admin-with-testdb</feature>
    <feature>handlereg-backend-testdata</feature>
</feature>
To start the composed application, install and start apache karaf, and from the karaf console, first load the master feature repository and then install the manually composed feature:
feature:repo-add mvn:no.priv.bang.handlereg/handlereg/LATEST/xml/features
feature:install handlereg-with-derby