
Thursday, December 4, 2014

OSGi doesn't suck - You're just using it wrong

James Ward recently published a nice post, "Java Doesn't Suck - You're Just Using It Wrong". If you haven't read it yet, please do so first! I completely agree with James, and thought it would be fun to see how this applies to the Amdatu stack and how we have used it on different projects over the past few years.

10 Page Wikis to Setup Dev Environments Suck

Even on our largest projects the dev environment setup is just a git clone away. All our projects are based on Bndtools, and a cloned workspace just needs to be imported into Eclipse. Headless builds based on Gradle are supported out of the box for each Bndtools workspace without any additional setup. A freshly cloned workspace contains all the configuration needed to run on the developer's machine.

Most developers install Mongo manually on their machines, but there's a hosted Mongo cluster for development as well.

For UI development we use tools from the JavaScript ecosystem such as Grunt, and those are installed automatically by the Gradle build.

Incongruent Deployment Environments Suck

Our build server does a fully automated deployment on each merge to master in git. In general master is pretty stable, because all work is done and reviewed on feature branches. Promoting a build to the more stable test server is done by starting a build on Bamboo. This creates a tag in git, so the release is reproducible, and performs the automated install on the test cluster. The same process applies to promoting a build from test to production; it's just a single click on the build server.

When using OSGi you don't actually need an application server. The OSGi framework is part of the application, and you can start the application as an executable JAR file. A web server, for example, is just another bundle in the application (we use Jetty). On the cluster nodes (in any environment) we don't deploy the application directly either. A clean node starts a very bare-bones OSGi application containing a management agent. The agent connects to a provisioning server, Apache ACE. The build server deploys bundles to the provisioning server, and when a new cluster node connects, it receives the latest version of the application's bundles. This makes installations lightweight, completely automated and reproducible. The important thing is that there are no servers to maintain. When a cluster node becomes unhealthy, a new one is started automatically (by AWS AutoScaling) and connects to the provisioning server to install the software. The videos below show more about deployments and Apache ACE.

Deploying with Apache ACE from Luminis Technologies on Vimeo.

Creating runnable JARs from Luminis Technologies on Vimeo.


Servers That Take More Than 30 Seconds to Start Suck

We don't even HAVE an application server to start up, so there's no waiting either... Starting an OSGi framework is extremely lightweight (remember it was once designed for embedded environments). The only startup cost is the actual application code. Even for very large applications (more than 500 bundles) this takes a few seconds at most, while most applications start pretty much instantly.
Besides that, you don't actually restart your application often during development. Using Bndtools you get hot code deployment; each time you compile code, the bundle containing that code is rebuilt and updated in the running framework. This process is so fast that you won't even notice it; coding feels like working in a dynamic language like Groovy or Ruby.

Manually Managed Dependencies Suck

The very best way to manage dependencies is using bnd (which Bndtools is built on top of). Just like in Maven, you declare dependencies by name and version. Because bnd is built for OSGi it understands things like semantic version ranges. You don't need POM files; OSGi bundles already contain all the metadata a POM file normally contains. Conceptually, dependency management in bnd is not that different from Maven, but bnd is a lot easier because it's closer to OSGi. Similar to Maven there are online repositories, and you can host your own repositories (see the bnd.bnd sketch below).
There is one big difference: bnd doesn't do transitive dependencies. Transitive dependencies are dangerous and can cause a lot of trouble. Also, the resolver in the OSGi framework will help you make sure that all dependencies your application needs are installed.
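As an illustration (the bundle names and versions here are just examples, not a prescribed setup), declaring build-time dependencies in a bnd.bnd file looks like this:

-buildpath: \
	osgi.core;version=5.0,\
	org.apache.felix.dependencymanager;version=4.0,\
	org.amdatu.web.rest.jaxrs;version=1.0

The dependencies are resolved against the repositories configured in the workspace, so the workspace stays the single source of truth for versions.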

Bnd does integrate very well with Gradle, which we use for headless builds. But again, dependencies are managed by bnd, not by Gradle.

For UI development we often use Bower to manage dependencies. 

Unversioned & Unpublished Libraries Suck

Versioning is an important topic in OSGi. All the tools discussed understand semantic versioning, which helps a lot when checking compatibility with newer versions of (external) bundles. APIs can be baselined automatically, which will force correct semantic versioning of your packages.
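Enabling this in a Bndtools workspace is a one-line bnd instruction (a minimal sketch; it assumes a release repository to baseline against is already configured):

-baseline: *

With that in place, the build flags any package whose contents changed without a corresponding semantic version bump.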

baselining from Luminis Technologies on Vimeo.

Long Development / Validation Cycles Really Suck

Of all the things in OSGi, this is probably what I love most: an instant feedback cycle during development. Just check out this video.

Bndtools code reload from Luminis Technologies on Vimeo.

Monolithic Releases Suck

Completely automated deployments are required when releasing often, but as discussed above, all the tools are in place to do this. We deploy multiple times per day to development, often also to test, and multiple times per week to production. This definitely requires a different mindset in the team (and not just the development team), but once everyone is used to it, it's perfect. Fast feedback for the win!
James makes another excellent point about the need for monitoring. When deploying new code all the time, things will go wrong once in a while. Make sure to monitor for this! We implemented health checks on all our (OSGi) services. The load balancer checks these health checks every few minutes, and when something is wrong we receive notifications immediately.
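As an illustration, such a health check can be a tiny service interface that components implement and register; the names below are hypothetical, not the actual project code:

import org.apache.felix.dm.annotation.api.Component;

// Hypothetical health check contract: every service that wants to report its
// health registers an implementation of this interface as an OSGi service.
// (In a real code base the interface and the implementation live in separate
// source files and bundles.)
interface HealthCheck {
    String getName();
    boolean isHealthy();
}

// Example implementation: reports healthy as long as the last successful
// database round trip is recent enough.
@Component
public class DataStoreHealthCheck implements HealthCheck {

    private volatile long lastSuccessfulPing = System.currentTimeMillis();

    @Override
    public String getName() {
        return "datastore";
    }

    @Override
    public boolean isHealthy() {
        // Healthy if we saw a successful round trip in the last minute.
        return System.currentTimeMillis() - lastSuccessfulPing < 60_000;
    }

    // Called by the code that talks to the data store (not shown here).
    void pingSucceeded() {
        lastSuccessfulPing = System.currentTimeMillis();
    }
}

A small servlet or JAX-RS resource can then collect all registered HealthCheck services (whiteboard style) and expose a single endpoint for the load balancer to poll.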

Another really useful tool that we use is New Relic. Especially when performance issues occur, it gives a lot of helpful information.

Sticky Sessions and Server State Suck

All web user interfaces are based on AngularJS. Pretty much all session state is on the client side and this greatly reduces the need for server side sessions. The only state the server has is related to authentication on the RESTful resources, which is stored in Mongo. This gives horizontal scalability, and makes topics like failover and auto scaling a lot less complex.

Useless Blocking Sucks

Clients use a mix of RESTful web services and WebSockets, where WebSockets are used for asynchronous responses from the server. This makes asynchronous communication easy wherever it applies. We also use RabbitMQ and RX in several places. None of this has much to do with OSGi, but it all works together perfectly.

The Java Language Kinda Sucks

I agreed with this for a long time. We used a mix of Groovy and Java for a while, but this turned out to introduce quite a few problems. The Groovy tooling in Eclipse is horrible, which is the main reason I'm not so eager to use it in new projects any more. Java also got a LOT better with Java 8. Once you go stream() you never go back (or something like that ;-) ). I'm really not sure anymore if polyglot makes that much sense, even though I do enjoy playing with alternative languages. Groovy and Scala do work without any problem in OSGi; it's just another bundle...





Sunday, November 30, 2014

Making JavaFX better with OSGi

Why you should run JavaFX on OSGi

OSGi makes JavaFX better for three reasons:
  1. Hot code reload during development
  2. A services based architecture keeps UI features nicely isolated
  3. Provisioning to devices

Let's take a look at all of them in detail before we see how to make JavaFX run on OSGi.

Hot code reload

Waiting for a build and restarting your application to see the effect of code changes is annoying. Even more so with UI related work, because it's harder to work test-driven and you often need a bit of experimentation to get the UI to look the way you intended.

OSGi is designed to be dynamic; in a running framework bundles can be added, removed and updated. Combined with an IDE that understands this, we can make updates to a running application without ever waiting for builds or restarts. See the following video for an example.



Services based architecture

In OSGi it's trivial to create plugin systems based on the so-called "whiteboard pattern". Each part of the UI can be provided by a different bundle, which keeps the code loosely coupled. This makes it easier to add features or change existing features without impacting the rest of the code. The same mechanism can be implemented using other dependency injection frameworks, but it comes most naturally in OSGi.

Provisioning

JavaFX is gaining popularity in the IoT space; it's very well suited to run on all kinds of devices. If we have a lot of devices, the question becomes how we install and update our application on those devices. Manually copying files to a device is OK if we have one or two of them, but what if you roll out your software to many devices? Also, in the IoT space there might be limited bandwidth, so rolling out updates should preferably be efficient in this respect as well. Because an OSGi framework can be updated at runtime, it's possible to create provisioning systems: systems that take care of installing and updating the software running on a target (such as a device). Apache ACE is a great example of this. At JFokus, Sander Mak and I will present on exactly this topic in a lot more detail.

Running JavaFX on OSGi

Out of the box it seems a bit problematic to run JavaFX on OSGi. The reason for this is that starting a JavaFX application requires the use of a launcher, which wasn't designed to be used in a modular or dynamic environment. 

Typically the launch method is used to start a JavaFX application. Running this method from an OSGi bundle (e.g. from an Activator) produces two issues:
  1. The method fails with a ClassNotFoundException
  2. The method may only be called once (this is explicitly checked), so updating/restarting the bundle that invokes the method is not possible.

Importing JavaFX packages

Besides these two problems there is something else to straighten out. When using the javafx packages, most OSGi frameworks will give resolver errors such as: "Unable to resolve 11.0: missing requirement [11.0] osgi.wiring.package; (osgi.wiring.package=javafx.application)".
In OSGi the packages made available from the JRE are explicitly exported by the "system bundle". However, the list of packages needs to be configured, and most launchers available today don't export the javafx packages yet. This is no problem at all; we can do this ourselves with just a bit of configuration. When using Bndtools you can add the following to a .bndrun file: "-runsystempackages: javafx.application,javafx.scene". Just add other packages when you need them.

Fixing the ClassNotFoundException

Looking at the source code of the launch method, it uses the ContextClassLoader of the current thread to load the class that you are trying to launch. A common problem with an easy fix: set the ContextClassLoader to the bundle's classloader before invoking the method.

Thread.currentThread().setContextClassLoader(
this.getClass().getClassLoader());
launch();

Working around the single invoke launch limitation

When we set up the UI in the same bundle as where the launch method is invoked, we run into a practical problem. Each time we edit our code, the bundle is updated in the running app (like we want it to be). This fails because the second launch will throw an exception. Not good. We can easily work around this problem by moving the code that actually sets up the UI to a separate bundle (or bundles).

In the following example I register the Stage created by the launcher as a service. Another bundle can depend on this service to get the Stage and add UI elements to it. This way the launching code and the actual UI are separated, and we can happily restart and update the UI bundle without problems.

Code in the launcher bundle.

import java.util.concurrent.Executors;

import javafx.application.Application;
import javafx.application.Platform;
import javafx.stage.Stage;

import org.apache.felix.dm.DependencyManager;
import org.apache.felix.dm.annotation.api.Component;
import org.apache.felix.dm.annotation.api.Start;
import org.apache.felix.dm.annotation.api.Stop;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;

import example.javafx.launcher.StageService;

@Component
public class App extends Application {

    @Start
    public void startBundle() {
        // launch() blocks until the JavaFX application exits, so start it on a separate thread.
        Executors.defaultThreadFactory().newThread(() -> {
            // Make sure the JavaFX launcher can load this class through the bundle's class loader.
            Thread.currentThread().setContextClassLoader(
                this.getClass().getClassLoader());
            launch();
        }).start();
    }

    @Override
    public void start(Stage primaryStage) throws Exception {
        // Register the primary Stage as an OSGi service so other bundles can contribute UI to it.
        BundleContext bc = FrameworkUtil.getBundle(this.getClass()).getBundleContext();
        DependencyManager dm = new DependencyManager(bc);

        dm.add(dm.createComponent()
            .setInterface(StageService.class.getName(), null)
            .setImplementation(new StageServiceImpl(primaryStage)));
    }

    @Stop
    public void stopBundle() {
        // Shut down the JavaFX platform when the bundle stops.
        Platform.exit();
    }
}

Code in the UI bundle.

import javafx.application.Platform;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

import org.apache.felix.dm.annotation.api.Component;
import org.apache.felix.dm.annotation.api.ServiceDependency;
import org.apache.felix.dm.annotation.api.Start;

import example.javafx.launcher.StageService;

@Component
public class UI {

    @ServiceDependency
    private volatile StageService m_stageService;

    @Start
    public void start() {
        // UI work must happen on the JavaFX application thread.
        Platform.runLater(() -> {
            Stage primaryStage = m_stageService.getStage();
            primaryStage.setTitle("Hello World!");

            Button btn = new Button();
            btn.setText("Say 'Hello'");
            btn.setOnAction(new EventHandler<ActionEvent>() {

                @Override
                public void handle(ActionEvent event) {
                    System.out.println("Hello World!");
                }
            });

            StackPane root = new StackPane();
            root.getChildren().add(btn);
            primaryStage.setScene(new Scene(root, 300, 250));
            primaryStage.show();
        });
    }
}

That's it! JavaFX now runs successfully in an OSGi container.

A pluggable architecture

Now that we are running on OSGi, we can make use of services to make the whole architecture more modular. As an example, take a UI that has multiple screens, divided over tabs. Let's create a mechanism that lets different bundles provide new tabs, so that new features can be added to the UI without changing anything in the main UI code.

The main UI bundle just sets up the TabPane. It then listens for any AppScreen services that are registered (this is an example of the whiteboard pattern). Each AppScreen represents a tab, and has a title and a Node with the content of that tab. When a new AppScreen is found, it is added to the TabPane, and when an AppScreen is unregistered, its tab is removed again. Now we can add new parts to the UI simply by installing new bundles.
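The AppScreen interface itself is not listed in this post; a minimal version, consistent with how it is used in the code below, could look like this:

package example.javafx.ui;

import javafx.scene.Node;

// Whiteboard contract for UI contributions: every registered AppScreen
// service becomes a tab in the main TabPane.
public interface AppScreen {

    // Title of the tab.
    String getName();

    // Root node with the content of the tab.
    Node getContent();

    // Position at which the tab should be inserted.
    int getPosition();
}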



import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

import javafx.application.Platform;
import javafx.scene.Scene;
import javafx.scene.control.Tab;
import javafx.scene.control.TabPane;
import javafx.stage.Stage;

import org.apache.felix.dm.annotation.api.Component;
import org.apache.felix.dm.annotation.api.ServiceDependency;
import org.apache.felix.dm.annotation.api.Start;
import org.osgi.framework.ServiceReference;

import example.javafx.launcher.StageService;
import example.javafx.ui.AppScreen;

@Component
public class UI {

 @ServiceDependency
 private volatile StageService m_stageService;
 private volatile TabPane tabPane;

 private final Map<ServiceReference, AppScreen> screens = new ConcurrentHashMap<>();

 @Start
 public void start() {
  Platform.runLater(() -> {

   Stage primaryStage = m_stageService.getStage();
   primaryStage.setTitle("Tabs example!");
   tabPane = new TabPane();

   screens.values().forEach(this::createTab);

   primaryStage.setScene(new Scene(tabPane, 300, 250));
   primaryStage.show();

  });
 }

 private void createTab(AppScreen s) {
  Tab tab = new Tab(s.getName());
  tab.setContent(s.getContent());
  tabPane.getTabs().add(s.getPosition(), tab);
  tabPane.getSelectionModel().select(tabPane.getTabs().size()-1);
 }

 @ServiceDependency(removed = "removeScreen")
 public void addScreen(ServiceReference sr, AppScreen screen) {
  if (tabPane != null) {
   Platform.runLater(() -> {
    createTab(screen);
   });
  }

  screens.put(sr, screen);

 }

 public void removeScreen(ServiceReference sr) {
  Platform.runLater(() -> {
   AppScreen remove = screens.remove(sr);
   Optional<Tab> findAny = tabPane.getTabs().stream()
     .filter(t -> t.getText().equals(remove.getName()))
     .findAny();
   if (findAny.isPresent()) {
    tabPane.getTabs().remove(findAny.get());
   }
  });
 }
}

A bundle that adds a new tab could contain the following code:


import javafx.scene.Node;
import javafx.scene.control.Button;
import javafx.scene.layout.VBox;

import org.apache.felix.dm.annotation.api.Component;

import example.javafx.ui.AppScreen;

@Component
public class OtherScreen implements AppScreen {

 @Override
 public String getName() {
  return "Other screen";
 }

 @Override
 public Node getContent() {
  VBox vbox = new VBox();
  Button button = new Button("Other screen");
  vbox.getChildren().add(button);
  
  return vbox;
 }

 @Override
 public int getPosition() {
  return 1;
 }

}

Tooling

To get the most out of OSGi we need an IDE that "understands" updating bundles in a running framework. This makes build tools like Maven and Gradle a less than optimal choice, although they can definitely be used for building bundles. A much better choice is Bndtools, an Eclipse plugin that makes OSGi development easy. There is also an open issue for support in IntelliJ, please vote! For JavaFX support in Eclipse I used e(fx)clipse.

Deploying

Just like any OSGi application built with Bndtools, we can export the application as an executable JAR, as also shown in this video. Simply click the export button in the .bndrun configuration screen, or run the Gradle export task that is available out of the box.

For a more advanced IoT setup we would use Apache ACE for provisioning. Come see us at JFokus for more about this :-)

Thursday, November 6, 2014

JMaghreb slides and resources

The JMaghreb conference in Casablanca was great! Lots of interesting people to talk to and a lot of good content. I gave three talks, and many people asked for the slides, code and more resources, which are listed in this post.

Modularity Patterns with OSGi




The source code can be found here.

Lessons learned from a large scale OSGi web app



Tutorial: Introduction to OSGi

In this tutorial I showed how to build a complete chat application backend using OSGi and Amdatu. The code was already available here.




More resources:

  • Amdatu.org for introductions and the Amdatu components
  • My book: Modular cloud apps with OSGi


First beta of Amdatu Bootstrap

For the past few months we have been working on a development tool: Amdatu Bootstrap. Amdatu Bootstrap makes OSGi development faster and easier by providing an interactive tool that automates common tasks, such as configuring a build path or run configuration, and it integrates with many libraries. Amdatu Bootstrap is built on top of Bnd and is typically used together with Bndtools.

The video below gives an impression of how you can use Amdatu Bootstrap.


Amdatu Bootstrap comes with a web based UI and an OSGi based backend. The reason for using web technology in the frontend is that it makes it easy to develop a user friendly application, and it gives us the possibility to integrate with different IDEs.

So why not just extend Bndtools with the functionality of Amdatu Bootstrap? First of all, because Bndtools is based on Eclipse, it is not very easy to extend. A lot of knowledge about Eclipse RCP is required, even for relatively simple tasks. Also, we want a tool with the potential to be used with other IDEs in the future. Amdatu Bootstrap is designed to be extensible: it is completely based on OSGi services, and adding a plugin is as easy as implementing an interface.

Amdatu Bootstrap went through several iterations of APIs and ideas, while being used by a diverse group of early users. We are now really happy with the API and the way the tool works, and are announcing the first beta release. Please provide feedback! You can send feedback on the mailinglist or create issues and feature requests on JIRA. If you want to help out even more you can take a look at the plugin development guide and work on some awesome new plugins.

Links:





Wednesday, November 5, 2014

Introducing Amdatu JPA

JPA is a popular way of working with relational databases in Java. It was designed for use in Java EE, however, and it was problematic to use in OSGi. With Amdatu JPA we fixed that problem!

Amdatu JPA makes JPA usable in OSGi using either Hibernate, EclipseLink or OpenJPA. It takes care of data source registration and declarative transaction management, and makes EntityManagers available as OSGi services to integrate tightly with the programming model we are familiar with in OSGi. Amdatu JPA was first released a few months ago, and after initial feedback and experience in some large projects it is now time to start using it!

The following video shows how to use Amdatu JPA. The full documentation can be found on the Amdatu website.


Thursday, August 28, 2014

Join me at JDD Krakow

On October 13th and 14th I will be speaking at JDD in Krakow. I spoke at JDD last year and I'm very happy to be back! The conference is not too large, which gives you a great opportunity to actually meet and talk to people. There are some excellent speakers on the schedule already, and I'm expecting many more. The call for papers is still open, so you can be one of them: http://14.jdd.org.pl/cfp/cfp/

Krakow is also a great place to be, and seems to be a hotspot for software engineering.

My talk is a two hour introduction to OSGi:
Modularity is becoming more relevant each day. It is the key to maintainable code and the ultimate agile tool. OSGi is the only mature modularity solution available today. In this talk you will see OSGi development in action.
OSGi has a reputation for being hard to use and complex. With today's tools and frameworks this is far from true! In this presentation you will see an OSGi application being built from scratch and learn about package imports and exports, dynamic services, dependency injection and integration with JAX-RS and MongoDB. This talk is both for developers new to OSGi who want to learn the basics, and for developers with some OSGi experience looking to optimize their workflow.


If you have the chance, make sure to be there!


Monday, June 2, 2014

Deploying OSGi applications

There are many different options for deploying OSGi applications, and with that, many opinions about the "best" way to run OSGi apps in production. In this post I will explain the different options, and why we deploy applications the way we do at Luminis Technologies. Let's first see the options we have.

The era of application servers

For many years the deployment of Java backend applications involved application servers, and I have used them myself for many years. For this discussion a Servlet container like Tomcat is the same thing as an application server; although it's smaller, it's built on the same concepts.
Let's look at what an application server actually offers:
  • Running multiple applications on shared resources
  • Container level configuration of data sources and other resources
  • Framework components such as a Servlet container
  • Some basic monitoring and log viewing facilities

These seem to be quite useful features, and they definitely have been in the past. The basic idea is that we have multiple applications that should be deployed on the same machine, using the same resources. In a time where compute resources can easily be virtualised in small sizes (using cloud services like EC2, but even more so using technology like Docker), you may wonder if this is still relevant. Why not give each application a separate process, or even separate (virtual) machines? The overhead of creating multiple Java VMs is hardly relevant any more, so why create a maintenance dependency between two unrelated applications? Framework components such as a Servlet container are also no longer heavyweight components, and can easily be embedded within an application.

Looking at the Micro Services ideas and the popularity of tools like Spring Boot it’s clear that the
idea of large application servers is gone. Of course there are new problems to deal with in a
setup like this; instead of managing one large container, we need to manage many small isolated
machines and processes. This is not necessarily difficult, but definitely different in the area of
deployment and management.

Accepting the fact that there might be alternative ways to deploy Java applications, let's take a look at the options for deploying OSGi applications.

Self contained executable JARs

Based on a Bnd run configuration we can generate an executable JAR file that contains an OSGi framework and all bundles that should be installed. The whole application can be started by simply executing the JAR file:

java -jar myapp.jar

The obvious benefit is that it's extremely simple to run, and that it runs everywhere Java is installed. There is no need to install or manage any kind of container. When the application needs to be upgraded, we can simply replace the JAR file and restart the process. This update process is a bit harsh; we don't use the facilities provided by OSGi to update parts of the application by installing only the changed bundles. If we have a large number of machines or devices it also requires some manual work or scripting, because we have to update all of them.
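As a rough sketch, the Bnd run configuration that such an executable JAR is exported from is just a .bndrun file listing the framework and the bundles to install (the bundle names below are illustrative):

-runfw: org.apache.felix.framework
-runee: JavaSE-1.8
-runbundles: \
	org.apache.felix.dependencymanager,\
	org.apache.felix.http.jetty,\
	com.example.myapp.api,\
	com.example.myapp.impl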


Let’s explore options to make this process a bit more flexible.

Provisioning using Apache ACE

This is the deployment mechanism we use ourselves at Luminis Technologies. In this approach
we don’t manually install our application on a machine or container. Instead we use a
provisioning server as a broker. When a new release of our software is ready, the bundles are
uploaded to Apache ACE. The bundles can be grouped into features and distributions. This is a
powerful way to create variations of distributions based on a common set of bundles.

As long as we don’t register any targets, our bundles just sit on Apache ACE, and our application
is not running yet. To actually run the app, we need to start and register a target. A target is an
OSGi framework with the Apache ACE management agent bundle installed. Based on
configuration we pass to the agent, it registers to the Apache ACE server. Apache ACE will than
send the distribution prepared for this target to the target. The target receives the deployment
package, starts the bundles and will be up and running. The target itself can be started again as
a self contained executable JAR file, and run everywhere where Java is installed (including
embedded devices etc.).


Why add this extra complexity to deployments!? There are a number of benefits compared to simply running self-contained JARs:
  • Incremental updates
  • Distribution management
  • Automatic updates of sets of targets

When new bundles are uploaded to Apache ACE, the targets that use these bundles can automatically be updated. The deployment package sent to the targets only contains the updated bundles, and the updates are installed while the target is running; the target never has to be restarted. This also makes it easy to update large numbers of targets that all run the same software. We use this to update large numbers of cluster nodes running on Amazon EC2, but the same mechanism works great for the embedded/IoT world where a large number of devices requires an update. This is even more useful when there are variations of distributions used by the targets. Instead of rebuilding each distribution manually, updates are automatically deployed by Apache ACE to the relevant distributions.

You could create the same mechanism with some scripting. You might make distributions available in some central location and use scripts to push those distributions to targets. Although this is not rocket science, it's still quite some work to actually get this working, especially when incremental updates are required (for example in low bandwidth situations).

From the perspective of targets both solutions are pretty much equal; applications are started as
a process and should be managed and monitored as such.


Configuration and Monitoring without a container

When deploying applications as processes we need some way to configure and monitor the application. For configuration we have all the required tools built into OSGi already: Configuration Admin. Using Config Admin we can easily load configuration from any place: property files, a REST interface, a database… This opens up endless possibilities to keep configuration data separate from the software release itself. In the PulseOn project we deploy application nodes to Amazon EC2. On EC2 there is the concept of user data: basically arbitrary configuration data we can specify when configuring machines. The user data is made available on a REST interface only accessible by the machine itself. This data is loaded and pushed to Config Admin, which configures our components.
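A minimal sketch of that idea, assuming the user data contains simple key=value pairs; the PID and the exact loading strategy are illustrative, not the actual PulseOn code:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Hashtable;
import java.util.Properties;

import org.apache.felix.dm.annotation.api.Component;
import org.apache.felix.dm.annotation.api.ServiceDependency;
import org.apache.felix.dm.annotation.api.Start;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

// Loads EC2 user data at startup and pushes it into Configuration Admin,
// which in turn configures the components bound to the (example) PID.
@Component
public class UserDataConfigLoader {

    @ServiceDependency
    private volatile ConfigurationAdmin m_configAdmin;

    @Start
    public void start() {
        Properties props = new Properties();
        try (InputStream in = new URL("http://169.254.169.254/latest/user-data").openStream()) {
            props.load(in);
        } catch (IOException e) {
            return; // no user data available; components keep their defaults
        }

        Hashtable<String, Object> dict = new Hashtable<>();
        props.forEach((key, value) -> dict.put((String) key, value));

        try {
            Configuration config = m_configAdmin.getConfiguration("example.app", null);
            config.update(dict);
        } catch (IOException e) {
            // ignored in this sketch
        }
    }
}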

What about monitoring? An application server often has functionality to view log files, sometimes combined for a cluster of nodes. By itself this is not very useful at all. Does it make sense to just look through log files? What we need are mechanisms to actively report problems, actively check the health of nodes, and analyse log files in smart ways. We don't really care whether the application process is running or not; it only matters whether the application serves client requests correctly. There are plenty of great tools available to centralise log analysis, and active monitoring and reporting should be part of our application services.

A nice example from the PulseOn project again is our health check mechanism. Each OSGi service in our application can implement a health check interface. The service itself reports whether it's healthy. Our load balancers query these health checks and decide whether a node is healthy based on them. When a node is unhealthy, the cluster replaces that node.

OSGi app servers

I hope I have made my point by now that an application server or container deployment model is really not necessary any more today. Still, there are lots of users deploying OSGi bundles to containers, so let's discuss this further. One popular container for OSGi is Apache Karaf. Apache Karaf is basically an application server focused on OSGi. Using Karaf it's easy to deploy multiple applications in the same container. It also comes with a bunch of pre-installed features to more easily work with technology that is not primarily designed to be used in a modular environment. While this is convenient when you depend on these technologies, you should probably ask yourself if it's such a good idea to use non-modular frameworks in a modular architecture in the first place… Frameworks and components designed to be used with OSGi, such as the components from the Amdatu project, don't require any tricks to use. In the long term this will keep your architecture a lot cleaner.

Other users deploy OSGi applications to Java EE app servers like WebSphere or WildFly/EAP. The main benefit is integration with Java EE technology, bridging the dynamic OSGi world with the static, but familiar, Java EE world. This is a recipe for disaster. Although you can easily use things like JPA and EJB, it breaks all the concepts of service dynamics. More importantly, you really don't need to do this. Tools for dependency injection, creating RESTful web services and working with data stores are available in a much more OSGi-natural way, so why keep one leg in the non-modular world and lose a lot of OSGi's benefits?

Sunday, April 27, 2014

Ten reasons to use OSGi

In this post I will discuss ten reasons to use OSGi, partly because there are many misconceptions about it. At Luminis Technologies we use OSGi for all our development, and are investing in OSGi related open source projects. We do so because we think it's the best available development stack, and here are some reasons why.

#1 Developer productivity

One of OSGi's core features is that it can update bundles in a running framework without restarting the whole framework. Combined with tooling like Bndtools this brings an extremely fast development cycle, similar to scripting languages like JavaScript and Ruby. When a file is saved in Bndtools, the incremental compiler of Eclipse builds the affected classes. After compilation Bndtools automatically rebuilds the affected bundles and re-installs them in the running framework. It's not only fast, but also reliable; this mechanism is native to OSGi, and no tricks are required.

Compare this to doing Maven builds and WAR deployments in an app server... This is the development speed of scripting languages combined with the type safety and runtime performance of Java. It's hard to beat that combination.


#2 Never a ClassNotFoundException

Each bundle in OSGi has its own class loader. This class loader can only load classes from the bundle itself, and classes explicitly imported by the bundle using the Import-Package manifest header. When an imported package is not available (exported by another bundle) in the framework, the bundle will not resolve, and the framework will tell you so when the bundle is started. This fail-fast mechanism is much better than runtime ClassNotFoundExceptions, because the framework makes you aware of deployment issues right away, instead of when a user hits a certain code path at runtime.

Creating Import-Package headers is easy and automatic. Bnd (either in Bndtools or Maven) generates the correct headers at build time by inspecting the byte code of the bundle. All used classes that are not part of the bundle must be imported. By letting the tools do the heavy lifting, there's not really any way to get this wrong. The exception is dynamic class loading in the code (using Class.forName); luckily this is hardly ever necessary outside of JDBC drivers.
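As an illustration, the generated manifest might end up with an Import-Package header like the one below (package names and version ranges are just examples); the semantic version ranges accept newer minor versions of a package, but not a new major version:

Import-Package: javax.servlet;version="[3.0,4)",
 org.osgi.service.log;version="[1.3,2)",
 com.example.orders.api;version="[1.2,2)"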

The Import-Package mechanism does introduce a common problem when using libraries. The transitive dependency madness in Maven has made some developers unaware of the fact that some libraries pull in many, many other dependencies. In OSGi this means those transitive dependencies must also be installed in the framework, and the resolver makes you immediately aware of that. While this makes it harder to use some libraries, you can argue it is actually a good thing. From an architectural perspective, do you really want to pull in 30 dependencies just because you want to use some library or framework? This might work well for a few libraries, but breaks sooner or later when there are version conflicts between dependencies. Automatically pulling in transitive dependencies is easy for developers, but dangerous in practice.

#3 All the tools for modern (web) backends

Even more important than the language or core platform is the availability of mature components to develop actual applications. In the case of Luminis Technologies that's often everything related to creating a backend for modern web applications. There is a wealth of open source OSGi components available to help with this. The Amdatu project is a great place to look, as well as Apache Felix. Amdatu is a collection of OSGi components focused on web/cloud applications. Examples are MongoDB integration, RESTful web services with JAX-RS, and scheduling.

It is strongly advisable to stay close to the OSGi ecosystem when selecting frameworks. Not all frameworks are designed with modularity in mind, and trying to use such frameworks in a modular environment is painful. This is an actual downside of OSGi; your choice of Java frameworks is somewhat limited by their compatibility with OSGi. This might require you to leave behind some of the framework knowledge that you already have, and learn something new. Besides the investment of learning something new, nothing is lost. There are so many framework alternatives; do you really need that specific framework, even though it's not fit for modular development?

In practice we most commonly hear questions about using OSGi in combination with either Java EE or Spring. As a heavy user of both in the past, I'm pretty confident in saying that you don't need either of them. Dependency injection is available with Apache Felix Dependency Manager, Declarative Services and others, and I already mentioned Amdatu as a place to look for components to build applications.

#4 It's fast

OSGi has close to zero runtime overhead. Invocations of OSGi services are direct method calls; no proxy magic is required. Remember that OSGi was originally designed to run embedded on small devices; it's extremely lightweight by design. From a deployment perspective it's fast as well. Although there are app servers with OSGi support, we prefer to deploy our apps as bare-bones Apache Felix instances. This way nothing is included that we don't need, which drastically improves the startup speed of applications. Thought a few seconds of startup time for an app server was impressive? That's what an OSGi framework does on a Raspberry Pi ;-)

#5 Long term maintainability

This should probably be the key reason to use OSGi; modularity as an architectural principle. Modularity is key to maintainable code; by splitting up a code base in small modules it's much easier to reason about changes to code. This is about the basic principles of separation of concerns and low coupling/high cohesion. These principles can be applied without a modular runtime as well, but it's much easier to make mistakes because the runtime doesn't enforce module boundaries. A modular code base without a modular runtime is much more prone to "code rot", small design flaws that break modularity. Ultimately this leads to unmaintainable code.

Of course OSGi is no silver bullet either. It's very well possible to create a completely unmaintainable code base with OSGi as well. However, when we adhere to basic OSGi design principles, it's much easier to do the right thing.

Another really nice property of a modular code base is that it's easy to throw code away. Given new insights and experience it's sometimes best to just throw away some code and re-implement it from scratch. When this is isolated to a module it's extremely easy to do; just throw away the old bundle and add a new one. Again, this can be done without a modular runtime as well, but OSGi makes it a lot more realistic in practice.

#6 Re-usability of software components

A side effect of a modular architecture is that it becomes easier to re-use components in a different context. The most important reason for this is that a modular architecture forces you to isolate code into small modules. A module should only have a single responsibility, and it becomes easy to spot when a module does too much. When a module is small, it's inherently easy to re-use. 

Many of the Amdatu components are developed exactly that way. In our projects we create modules to solve technical problems. When we have other projects requiring a similar component, we share these implementations cross-project. If the components prove to be usable and flexible enough, we open source them into Amdatu. In most cases this requires very limited extra work.

This has benefits within a single project context as well. When the code base is separated into many small modules, it becomes easier to make drastic changes to the architecture, while still re-using most of the existing code. This makes the architecture more flexible as well, which is a very powerful tool.

#7 Flexible deployments

OSGi can run everywhere, from large server clusters to small embedded devices. Depending on the exact needs there are many deployment options to choose from. Using Bndtools or Gradle it's easy to export a complete OSGi application to a single JAR that can be started by simply running "java -jar myapp.jar". In deployments with many servers (as is the case in many of our own deployments) we can use Apache ACE as a provisioning server. Instead of managing servers manually, software updates are distributed to servers automatically from the central provisioning server. The same mechanism works when we're not working with server clusters but with many small devices, for example.

The flexibility of deployments also implies that OSGi can be used for any type of application. We can use the same concepts when working on large scale web applications, embedded devices or desktop applications.

OSGi can even be embedded into other deployment types easily. There are many products that use OSGi to create plugin systems, while the application is deployed in a standard Servlet container. Although I wouldn't advise this for normal OSGi development, it does show how flexible OSGi is for deployments.

Also check out this video to learn more about Apache ACE deployments.


#8 It's dynamic

Code in OSGi is implemented using services. OSGi services are dynamic, meaning that they can come and go at runtime. This allows a running framework to adapt to new configuration, new or updated bundles and hot deployments. Basically, we never need to restart an application. I recently blogged about this in more detail in the post "Why OSGi Service dynamics are useful".

#9 Standardized configuration

One of the OSGi specifications that I find most useful is Configuration Admin. This specification defines a Java API to configure OSGi services. On top of this API there are many components that load configuration from various places, such as property files, XML files, a database, configuration provisioned from Apache ACE or loaded from AWS user data. The great thing is that your code doesn't care where configuration comes from; it just needs to implement a single method to receive it. Although it's hard to understand that Java itself still doesn't have a proper configuration mechanism, Configuration Admin is extremely useful because almost every application needs configuration.
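As a sketch of what that single method looks like, here is a component that receives its configuration through the standard ManagedService interface from the Configuration Admin specification (the PID and property name are made up for the example):

import java.util.Dictionary;

import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

// Receives configuration for the (example) PID "example.mailer" from
// Configuration Admin, regardless of where that configuration was loaded from.
public class MailerConfig implements ManagedService {

    private volatile String smtpHost = "localhost";

    @Override
    public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
        if (properties == null) {
            return; // configuration was deleted; keep the defaults
        }

        Object host = properties.get("smtpHost");
        if (host == null) {
            throw new ConfigurationException("smtpHost", "missing required property");
        }
        smtpHost = host.toString();
    }
}

Registering this class as a ManagedService with a service.pid property of "example.mailer" is all that's needed; Configuration Admin calls updated whenever the configuration for that PID changes.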

#10 It's easy

This might be the most controversial point in this post. Unfortunately OSGi isn't immediately associated with "easy" by most developers. This is mostly caused by developers trying to use OSGi in existing applications, where modularity is an afterthought. Making something non-modular into something modular is challenging, and OSGi doesn't magically do this either. However, when modularity is a core design principle and OSGi is combined with the right tooling, there's nothing difficult about it. 

There are plenty of resources to learn OSGi as well. Of course there is the book written by Bert Ertman and me, and there are a lot of video tutorials available recorded at various conferences where we speak.

Finally, when trying out OSGi, try it with a full OSGi stack, for example as described in our book or on the Amdatu website. Don't try to fit your existing stack into OSGi as a first step (which is actually advice that applies to learning almost any new technology).

Video sources:
Book:



Tuesday, April 22, 2014

Micro Services vs OSGi services

Recently the topic of Micro Services has been getting a lot of attention. The OSGi world has been talking about micro services for a long time already. Micro services in OSGi are often written as µServices, which I will use in the remainder of this post to separate the two concepts. Although there is a lot of similarity between µServices in OSGi and Micro Services as they have recently become popular, they are not the same. Let's first explore what OSGi µServices are.

OSGi µServices

OSGi services are the core concept that you use to create modular code bases. At the lowest layer OSGi is about class loading; each module (bundle) has its own class loader. A bundle declares external dependencies using the Import-Package header. Only packages which are explicitly exported can be used by other bundles. This layer of modularity makes sure that only API classes are shared between bundles, and implementation classes are strictly hidden.
This also introduces a problem, however. Let's say we have an interface "GreeterService" and an implementation "GreeterServiceImpl". Both the API and the implementation are part of bundle "greeter", which exports the API but hides the implementation. Now we take a second bundle that wants to use the GreeterService interface. Because this bundle can't see GreeterServiceImpl, it would be impossible to write the following code:

GreeterService greeter = new GreeterServiceImpl();

This is obviously a good thing, because this code would couple our "conversation" bundle directly to an implementation class of "greeter", which is exactly what we're trying to avoid in a modular system. OSGi offers a solution for this problem with the service layer, which we will look at next. As a side note, this also means that when someone claims to have a modular code base, but doesn't use services, there is a pretty good chance that the code is not so modular after all....

In OSGi this problem is solved by the Service Registry, which is part of the OSGi framework. A bundle can register a service in the registry. This registers an instance of an implementation class in the registry under its interface. Other bundles can then consume the service by looking it up by that interface. The consuming bundle uses the service through its interface, but doesn't have to know which implementation is used, or who provided it. In its essence the services model is not very different from dependency injection with frameworks such as CDI, Spring or Guice, with the difference that the services model builds on top of the module layer to guarantee module boundaries.
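A minimal sketch of this using the raw OSGi API directly (assuming GreeterService has a simple greet(String) method; in practice a dependency injection framework hides this boilerplate):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// In the "greeter" bundle: register the implementation under its interface.
public class GreeterActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        context.registerService(GreeterService.class, new GreeterServiceImpl(), null);
    }

    @Override
    public void stop(BundleContext context) {
        // services registered by this bundle are unregistered automatically
    }
}

// In the "conversation" bundle (its own source file): look the service up by
// its interface only, without ever seeing the implementation class.
public class ConversationActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        ServiceReference<GreeterService> ref = context.getServiceReference(GreeterService.class);
        if (ref != null) {
            GreeterService greeter = context.getService(ref);
            System.out.println(greeter.greet("OSGi"));
            context.ungetService(ref);
        }
    }

    @Override
    public void stop(BundleContext context) {
    }
}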



OSGi services are often called micro services, or µServices. This makes sense because they are "lightweight" services. Although there is the clear model of service providers and service consumers, the whole process works within a single JVM with close to zero overhead. There is no proxying required, so in the end a service call is just a direct method call. As a best practice a service does only a single thing. This way services are easy to replace and easy to reuse. These are also the immediate benefits of a services model; they promote separation of concerns, which in the end is key to maintainable code.

Comparing with Micro Services

So how does this relate to the Micro Services model that recently got a lot of attention? The obvious difference is that OSGi services live in a single JVM, and the Micro Services model is about completely separate deployments, possibly using many different technologies. The main advantages of such a model go back to the general advantages of a modular system:


  1. Easier to maintain: Unrelated code is strictly isolated from each other. This makes it easier to understand and maintain code, because you don't have to worry too much about code outside of the service.
  2. Easier to replace: Because services are small, it's also easy to simply throw a service away and re-implement it if/when requirements change. All you care about is the service interface, the implementation is replaceable. This is an incredibly powerful tool, and will prevent "duct taping" of code in the longer term.
  3. Re-usability: Services do only a single thing, and can be easily used in new scenarios because of that. This goes both for re-usability in different projects/systems when it's about technical components, or re-usability of functional components within a system.


Do these benefits look familiar when thinking about SOA? In recent years not much good is said about SOA, because we generally associate it with bloated tools and WSDLs forged in the deepest pits of hell. I loathe these tools as well, but we should remember that this is just (a very bad) implementation of SOA. SOA itself is about architecture, and describes basically a modular system. So Micro Services is SOA, just without the crap vendors have been trying to sell us.

Micro Services follow the same concept, but on a different scale. µServices are in-VM, Micro Services are not. So let's compare some benefits and downsides of both approaches.

Advantages of Services within a JVM

One advantage of in-VM services is that there is no runtime overhead; service calls are direct method calls. Compared to the overhead of network calls, this is a huge difference. Another advantage is that the programming model is considerably simpler. Orchestrating communication between many remote services often requires an asynchronous programming model and message passing. No rocket science at all, but more complicated than simple method calls.
The last and possibly most important advantage is ease of deployment. An OSGi application containing many services can be deployed as a single unit, either in a load-balanced cluster or on a single machine. Deploying a system based on Micro Services requires significant work on the DevOps side of things. This doesn't just include automation of deployments (which is relatively easy), but also making sure that all required services are available in the right version to make the whole system work.


Advantages of Micro Services

The added complexity in deployments also offers more flexibility. I believe the most important point about Micro Services is that services have their own life-cycle. Different teams can independently work on different services. They can deploy new versions independently of other teams (yes this requires communication...), and services can be implemented with the tools and technology that is optimal for that specific service. Also, it is easier to load balance Micro Services, because we can potentially horizontally scale a single service instead of the whole system.

This brings the question back to the scale of the system and the team. When only a single team (say, at most 10 developers) works on a system, the advantages of Micro Services compared to µServices don't seem to outweigh the costs. When there are multiple teams working on the same system, this might be a different story. In that case it could also be an option to mix and match both approaches. Instead of going fully Micro Service, we could break up an already modular system into different deployments and have the benefits of both. Of course, this adds new challenges and requirements; for starters, we need a remoting/messaging layer on top of services, and we might need to modify the granularity of services.

This article was mostly written as a clarification of the differences between µServices and Micro Services. I'm a strong believer in the power of separated services. From my experience building large scale OSGi applications, I also know that many of the benefits of modularity can be achieved without the added complexity of a full Micro Service approach. Ultimately I think a mixed approach would work best on a larger scale, but that's just my personal view on the current state of technology.

Sunday, March 23, 2014

Upcoming conferences

This year is already proving to be an interesting year for conferences. This week I will start with a talk at JavaLand, a brand new conference in Germany. Together with Sander Mak I will talk about Modular JavaScript. We will show options to modularize a JavaScript code base. We will discuss module systems, see a lot of RequireJS, talk about dependency injection and services, and show real-world best practices.

Next will be DevNation, another conference that I'm really looking forward to. DevNation is also a new conference, with an amazing speaker line-up. I will be speaking about OSGi with a practical introduction to modular development. There will be a lot of live coding in this talk, so you get a good impression of OSGi development in practice. Along the way you will learn about bundles, imports/exports, OSGi services and their dynamics, and see practical topics such as integration testing and creating modern web applications.



Last but not least there will be GeeCon at the beginning of May, where Sander and I will be speaking about Modular JavaScript again. It's exciting to see how JavaScript is becoming an increasingly important part of the Java developer's tool stack. GeeCon was great last year, so my expectations are high for this year as well.

Saturday, January 4, 2014

Why OSGi service dynamics are useful

In the past two years I have given numerous conference talks about developing with OSGi. One of my favorite talks is one where I show OSGi development by building a complete application from scratch during the talk. People are mostly impressed by the fact that OSGi is a lot easier to use than they expect. Besides being easy, it also has some great benefits, such as:

  • True modularity, which is key to maintainable code
  • An extremely fast code-save-test cycle in the IDE
  • Incremental deployments (using Apache ACE)
  • Service dynamics
That last point might look a bit strange, but it's there for a reason. During each and every talk someone will ask something like: 

"Those dynamic services look powerful, but it also adds complexity. Why do we need services to be dynamic?"

This is a valid question. The fact that services can come and go at any given time adds complexity, because you have to deal with the possibility that dependencies are unavailable. The answer is a bit involved, because there are several reasons why dynamics are well worth the extra complexity. But before we get into that, let's take a look at how difficult (or easy) it really is to use them.

Working with service dynamics

An OSGi service can be registered and deregistered at any given time. Practically this means that you never know if another service that you depend on will still be there the next moment. Depending on the dependency injection model that you are using (e.g. Apache Felix Dependency Manager, Declarative Services, iPOJO, etc.) there are a number of ways to deal with this fact. As an example I will focus on the two most often used mechanisms in Apache Felix Dependency Manager.

Required dependencies
The easiest way to deal with dynamics is to declare required dependencies. By using a required dependency the component doesn't have to deal with the situation where dependencies are unavailable. Instead, the component itself is deregistered until all its required dependencies are available again. The downside of this approach is that deregistering a component can have a ripple effect through the system, where potentially all services become unavailable. But when no useful work can be done without the dependency, it's the right thing to do.


The following video shows the difference between required and optional dependencies.


Required services don't require any extra code; there is no code to handle the case when a dependency is not available. This makes using services as easy as using a static dependency injection model such as CDI or Spring.


Optional dependencies
When declaring an optional dependency you accept the fact that it might not always be there. This is useful when a component can still do useful work when a dependency is unavailable. An often used example is a dependency on LogService. When LogService is available you want to use it, but without logging a component can still work.

By default, Apache Felix Dependency Manager injects a null object when a dependency becomes unavailable. A null object returns null on every method invocation. Void methods become basically no-ops. This also allows us to write code to handle the fact that a dependency is unavailable. For example, we could fall back on other code paths, or show users specific error messages.
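To make this concrete, here is a sketch of an Apache Felix Dependency Manager activator declaring one required and one optional dependency; OrderServiceImpl, OrderService and DataStore are illustrative names:

import org.apache.felix.dm.DependencyActivatorBase;
import org.apache.felix.dm.DependencyManager;
import org.osgi.framework.BundleContext;
import org.osgi.service.log.LogService;

public class Activator extends DependencyActivatorBase {

    @Override
    public void init(BundleContext context, DependencyManager manager) throws Exception {
        manager.add(createComponent()
            .setInterface(OrderService.class.getName(), null)
            .setImplementation(OrderServiceImpl.class)
            // Required: the component is unregistered while no DataStore is available.
            .add(createServiceDependency()
                .setService(DataStore.class)
                .setRequired(true))
            // Optional: a null object is injected when no LogService is available.
            .add(createServiceDependency()
                .setService(LogService.class)
                .setRequired(false)));
    }

    @Override
    public void destroy(BundleContext context, DependencyManager manager) throws Exception {
    }
}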

Choosing between required vs optional
So when to use required and when to use optional dependencies? My own rule of thumb is to use required dependencies by default, unless there is really something useful to do when a dependency is unavailable.

Reasons for dynamic services

Working with dynamic services is hardly any more work than using static dependency injection. But the question remains, why do you need it? Let's look at a few benefits. 

Hot code deployment during development
Have you ever envied dynamic language users for their fast code-save-test development cycle? In most Java environments the experience is a lot slower: compile the code, create an archive and re-install the complete application in an application server. In modern application servers this might cost only a few seconds, but it's still very disruptive to the development experience. The problem is that the application server needs to re-install the full application; it's not possible to re-install just the pieces that you actually changed.

OSGi is designed to deal with updates to bundles in a running framework. Bndtools uses this to re-install updated bundles as soon as you save your code. This process is so fast that you don't actually notice any delay between saving a class and seeing its changes in the running application. The video below demonstrates this. This feature alone would justify using OSGi and Bndtools :-)


You might wonder how this is related to dynamic services. Reloading part of an application at runtime is only possible if its internals can deal with parts of the application (temporarily) not being available, and services offer exactly that.

Hot code deployment in production
The same dynamic update mechanism can be used for production updates. A production server doesn't have to be stopped completely to apply a hotfix. When using a provisioning server such as Apache ACE we can even push updates to large numbers of servers or devices.

Note that there is actually some downtime during the update. The updated bundle must be re-installed, and although this takes less than a second, its services are unavailable during the update. Depending on your situation this might or might not be acceptable. Of course you can use a cluster or load balancer to deal with failover during updates as well. Even then, it's great if deploying a hotfix only takes a second.

Creating services dynamically
So far we have been talking about services that become temporarily unavailable because of bundle updates. Services can be created from code during runtime as well. An often used scenario is creating new services when the environment changes. For example, new screens might be added to a user interface when new configuration data is found in the database. Of course you could create some custom dynamic registration mechanism for this, but it's a lot easier when you can just build on basic building blocks of the runtime instead of re-inventing the wheel.

Configuration updates
Ever tried updating a configuration file in a running Java EE application? This hardly ever works, because configuration is often used to bootstrap static components. Once these components are bootstrapped, they will not re-initialize until the application is restarted. With OSGi services this is trivial to do. A component can be configured using Configuration Admin, and whenever the configuration changes, the component is updated.

Service dynamics do add some complexity, but using a dependency injection framework this is almost entirely taken care of by the framework. It does give some really nice benefits both during development and production in return. Once you have gotten used to the fast code-save-test cycle in Bndtools it's hard to imagine going back to slow redeployments.