Sunday, April 27, 2014

Ten reasons to use OSGi

In this post I will discuss ten reasons to use OSGi. I'm writing it because there are many misconceptions about OSGi. At Luminis Technologies we use OSGi for all our development and invest in OSGi-related open source projects. We do so because we think it's the best available development stack, and here are some reasons why.

#1 Developer productivity

One of OSGi's core features is that it can update bundles in a running framework without restarting the whole framework. Combined with tooling like Bndtools this brings an extremely fast development cycle, similar to scripting languages like JavaScript and Ruby. When a file is saved in Bndtools, the incremental compiler of Eclipse builds the affected classes. After compilation, Bndtools automatically rebuilds the affected bundles and re-installs them in the running framework. It's not only fast, but also reliable; this mechanism is native to OSGi, and no tricks are required.

Compare this to doing Maven builds and WAR deployments in an app server... This is the development speed of scripting languages combined with the type safety and runtime performance of Java. It's hard to beat that combination.


#2 Never a ClassNotFoundException

Each bundle in OSGi has its own class loader. This class loader can only load classes from the bundle itself and classes explicitly imported by the bundle using the Import-Package manifest header. When an imported package is not available in the framework (i.e., not exported by any installed bundle), the bundle will not resolve, and the framework tells you so when the bundle is started. This fail-fast mechanism is much better than runtime ClassNotFoundExceptions, because the framework makes you aware of deployment issues right away, instead of when a user hits a certain code path at runtime.

Creating Import-Package headers is easy and automatic. Bnd (either in Bndtools or Maven) generates the correct headers at build time by inspecting the byte code of the bundle. All classes used that are not part of the bundle itself must be imported. By letting the tools do the heavy lifting, there's not really any way to get this wrong. The exception is dynamic class loading in the code (using Class.forName), but luckily that is hardly ever necessary apart from JDBC drivers.
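As an illustration, a Bnd-generated bundle manifest might contain headers like the following (the bundle and package names here are made up for the example):

```
Bundle-SymbolicName: com.example.shop
Bundle-Version: 1.0.0
Export-Package: com.example.shop.api;version="1.0.0"
Import-Package: com.example.payment.api;version="[1.2,2)",
 org.osgi.framework;version="[1.7,2)"
```

Only the api package is exported; implementation packages are simply left out of Export-Package and stay invisible to other bundles. The version ranges on Import-Package let the resolver check compatibility at install time.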

The Import-Package mechanism does introduce a common problem when using libraries. The transitive dependency madness of Maven has made some developers unaware that some libraries pull in many, many other dependencies. In OSGi those transitive dependencies must also be installed in the framework, and the resolver makes you immediately aware of that. While this makes it harder to use some libraries, you can argue it's actually a good thing. From an architectural perspective, do you really want to pull in 30 dependencies just because you want to use some library or framework? This might work well for a few libraries, but breaks sooner or later when there are version conflicts between dependencies. Automatically pulling in transitive dependencies is easy for developers, but dangerous in practice.

#3 All the tools for modern (web) backends

Even more important than the language or core platform is the availability of mature components to develop actual applications. In the case of Luminis Technologies that's often everything related to creating a backend for modern web applications. There is a wealth of open source OSGi components available to help with this. The Amdatu project is a great place to look, as well as Apache Felix. Amdatu is a collection of OSGi components focused on web/cloud applications. Examples are MongoDB integration, RESTful web services with JAX-RS, and scheduling.

It is strongly advisable to stay close to the OSGi ecosystem when selecting frameworks. Not all frameworks are designed with modularity in mind, and trying to use such frameworks in a modular environment is painful. This is an actual downside of OSGi: your choice of Java frameworks is somewhat limited by their compatibility with OSGi. This might require you to leave behind some of the framework knowledge you already have and learn something new. But besides the investment of learning something new, nothing is lost. There are so many framework alternatives; do you really need that specific framework, even though it's not fit for modular development?

In practice, we most commonly hear questions about using OSGi in combination with either Java EE or Spring. As a heavy user of both in the past, I'm pretty confident in saying that you don't need either of them. Dependency injection is available through Apache Felix Dependency Manager, Declarative Services and others, and I already mentioned Amdatu as a place to look for components to build applications.
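To make the dependency injection point concrete, here is a minimal plain-Java sketch of what a component looks like in this style. All names are made up for illustration; in a real application, a framework like Felix Dependency Manager or Declarative Services would locate a matching service and inject it, while the component itself only ever references the interface.

```java
// Illustrative sketch: the component declares a dependency on an interface;
// the DI framework supplies a service instance at runtime.
interface MailService {
    boolean send(String to, String message);
}

class SignupComponent {
    private final MailService mail; // injected by the framework at runtime

    SignupComponent(MailService mail) {
        this.mail = mail;
    }

    boolean registerUser(String email) {
        // The component never references a concrete MailService implementation,
        // so implementations can be swapped without touching this code.
        return mail.send(email, "Welcome!");
    }
}
```

Because the component only depends on the interface, testing it is trivial as well: pass in a stub implementation, no container required.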

#4 It's fast

OSGi has close to zero runtime overhead. Invocations of OSGi services are direct method calls; no proxy magic is required. Remember that OSGi was originally designed to run embedded on small devices; it's extremely lightweight by design. From a deployment perspective it's fast as well. Although there are app servers with OSGi support, we prefer to deploy our apps as bare-bones Apache Felix instances. This way nothing is included that we don't need, which drastically improves application startup speed. Thought that a few seconds of startup time for an app server was impressive? That's what an OSGi framework does on a Raspberry Pi ;-)

#5 Long term maintainability

This should probably be the key reason to use OSGi: modularity as an architectural principle. Modularity is key to maintainable code; by splitting up a code base into small modules, it's much easier to reason about changes. This comes down to the basic principles of separation of concerns and low coupling/high cohesion. These principles can be applied without a modular runtime as well, but then it's much easier to make mistakes, because the runtime doesn't enforce module boundaries. A modular code base without a modular runtime is much more prone to "code rot": small design flaws that break modularity. Ultimately this leads to unmaintainable code.

Of course OSGi is no silver bullet either. It's very well possible to create a completely unmaintainable code base with OSGi too. However, when we adhere to basic OSGi design principles, it's much easier to do the right thing.

Another really nice feature of a modular code base is that it's easy to throw code away. Given new insights and experience, it's sometimes best to just throw some code away and re-implement it from scratch. When the code is isolated to a module, this is extremely easy: just throw away the old bundle and add a new one. Again, this can be done without a modular runtime as well, but OSGi makes it a lot more realistic in practice.

#6 Re-usability of software components

A side effect of a modular architecture is that it becomes easier to re-use components in a different context. The most important reason for this is that a modular architecture forces you to isolate code into small modules. A module should only have a single responsibility, and it becomes easy to spot when a module does too much. When a module is small, it's inherently easy to re-use. 

Many of the Amdatu components are developed exactly that way. In our projects we create modules to solve technical problems. When we have other projects requiring a similar component, we share these implementations cross-project. If the components prove to be usable and flexible enough, we open source them into Amdatu. In most cases this requires very limited extra work.

This has benefits within a single project context as well. When the code base is separated into many small modules, it becomes easier to make drastic changes to the architecture, while still re-using most of the existing code. This makes the architecture more flexible as well, which is a very powerful tool.

#7 Flexible deployments

OSGi can run everywhere, from large server clusters to small embedded devices. Depending on the exact needs there are many deployment options to choose from. Using Bndtools or Gradle it's easy to export a complete OSGi application to a single JAR that can be started by simply running "java -jar myapp.jar". In deployments with many servers (as is the case in many of our own deployments) we can use Apache ACE as a provisioning server. Instead of managing servers manually, software updates are distributed to the servers automatically from the central provisioning server. The same mechanism works when we're not working with server clusters, but with many small devices, for example.

The flexibility of deployments also implies that OSGi can be used for any type of application. We can use the same concepts when working on large scale web applications, embedded devices or desktop applications.

OSGi can even be embedded into other deployment types easily. There are many products that use OSGi to create plugin systems, while the application is deployed in a standard Servlet container. Although I wouldn't advise this for normal OSGi development, it does show how flexible OSGi is when it comes to deployments.

Also check out this video to learn more about Apache ACE deployments.


#8 It's dynamic

Code in OSGi is implemented using services. OSGi services are dynamic, meaning they can come and go at runtime. This allows a running framework to adapt to new configuration, new or updated bundles, and hot deployments. Basically, we never need to restart an application. I recently blogged about this in more detail in the post "Why OSGi Service dynamics are useful".
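As a rough sketch of what "services come and go" means for a consumer, here is a toy model loosely inspired by OSGi's ServiceTracker. The class and method names are made up; the real API lives in org.osgi.util.tracker. The point is that consumers receive callbacks when a service appears or disappears, so the application adapts at runtime instead of restarting.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy model of service dynamics: consumers register callbacks and are
// notified when the service is registered (e.g. after a hot deploy) or
// unregistered (e.g. when the providing bundle is stopped or updated).
class DynamicService<T> {
    private final List<Consumer<T>> addedCallbacks = new ArrayList<>();
    private final List<Consumer<T>> removedCallbacks = new ArrayList<>();
    private T current;

    void onAdded(Consumer<T> callback) { addedCallbacks.add(callback); }
    void onRemoved(Consumer<T> callback) { removedCallbacks.add(callback); }

    void serviceRegistered(T service) {
        current = service;
        addedCallbacks.forEach(cb -> cb.accept(service));
    }

    void serviceUnregistered() {
        T old = current;
        current = null;
        removedCallbacks.forEach(cb -> cb.accept(old));
    }
}
```

In real OSGi the framework fires these events itself; the consumer only supplies the callbacks and reacts to whatever happens in the running system.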

#9 Standardized configuration

One of the OSGi specifications I find most useful is Configuration Admin. This specification defines a Java API to configure OSGi services. On top of this API there are many components that load configuration from various places, such as property files, XML files, a database, provisioning from Apache ACE, or AWS user data. The great thing is that your code doesn't care where configuration comes from; it just needs to implement a single method to receive it. Although it's hard to understand why Java itself still doesn't have a proper configuration mechanism, Configuration Admin is extremely useful, because almost every application needs configuration.
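The "single method" shape can be sketched like this. The component and property names below are made up; the interface mirrors the callback style of Configuration Admin's ManagedService (the real interface lives in org.osgi.service.cm), without depending on the OSGi API.

```java
import java.util.Dictionary;

// Sketch of the Configuration Admin style: a component implements one
// callback that receives its configuration, no matter where the
// configuration was loaded from (file, database, provisioning server, ...).
interface Configurable {
    void updated(Dictionary<String, ?> properties);
}

class HttpServerComponent implements Configurable {
    private volatile int port = 8080; // default until configuration arrives

    @Override
    public void updated(Dictionary<String, ?> properties) {
        // A null dictionary means "configuration deleted"; here we simply
        // keep the current value in that case.
        if (properties != null && properties.get("port") != null) {
            port = Integer.parseInt(properties.get("port").toString());
        }
    }

    int getPort() {
        return port;
    }
}
```

Because configuration arrives through this one method, the same component works unchanged whether the properties come from a local file during development or from a provisioning server in production.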

#10 It's easy

This might be the most controversial point in this post. Unfortunately OSGi isn't immediately associated with "easy" by most developers. This is mostly caused by developers trying to use OSGi in existing applications, where modularity is an afterthought. Making something non-modular into something modular is challenging, and OSGi doesn't magically do this either. However, when modularity is a core design principle and OSGi is combined with the right tooling, there's nothing difficult about it. 

There are plenty of resources to learn OSGi as well. Of course there is the book written by Bert Ertman and me, and there are many video tutorials available, recorded at various conferences where we speak.

Finally, when trying out OSGi, try it with a full OSGi stack, for example as described in our book or on the Amdatu website. Don't try to fit your existing stack into OSGi as a first step (advice that actually applies to learning almost any new technology).




Tuesday, April 22, 2014

Micro Services vs OSGi services

Recently the topic of Micro Services has been getting a lot of attention. The OSGi world has been talking about micro services for a long time already. Micro services in OSGi are often written as µServices, which I will use in the remainder of this post to separate the two concepts. Although there is a lot of similarity between µServices in OSGi and the Micro Services that recently became popular, they are not the same. Let's first explore what OSGi µServices are.

OSGi µServices

OSGi services are the core concept you use to create modular code bases. At the lowest layer, OSGi is about class loading: each module (bundle) has its own class loader. A bundle declares its external dependencies using the Import-Package manifest header, and only packages that are explicitly exported can be used by other bundles. This layer of modularity makes sure that only API classes are shared between bundles, while implementation classes are strictly hidden.
This also poses a problem, however. Let's say we have an interface "GreeterService" and an implementation "GreeterServiceImpl". Both the API and the implementation are part of bundle "greeter", which exports the API but hides the implementation. Now take a second bundle that wants to use the GreeterService interface. Because this bundle can't see GreeterServiceImpl, it would be impossible to write the following code:

GreeterService greeter = new GreeterServiceImpl();

This is obviously a good thing, because this code would couple our "conversation" bundle directly to an implementation class of "greeter", which is exactly what we're trying to avoid in a modular system. OSGi offers a solution for this problem with the service layer, which we will look at next. As a side note, this also means that when someone claims to have a modular code base but doesn't use services, there is a pretty good chance that the code is not so modular after all.

In OSGi this problem is solved by the Service Registry, which is part of the OSGi framework. A bundle can register a service in the registry; this registers an instance of an implementation class under its interface. Other bundles can then consume the service by looking it up by its interface. The consuming bundle uses the service through its interface, but doesn't have to know which implementation is used, or who provided it. In its essence the services model is not very different from dependency injection with frameworks such as CDI, Spring or Guice, with the difference that the services model builds on top of the module layer to guarantee module boundaries.
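To show the register-and-lookup flow without pulling in the framework, here is a deliberately simplified, in-VM model of the service registry. The ToyServiceRegistry class is invented for this sketch; in real OSGi you would use BundleContext's registerService and getServiceReference (plus getService) instead, but the shape of the interaction is the same.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the OSGi Service Registry: implementations are
// registered under their interface and looked up by interface only.
class ToyServiceRegistry {
    private final Map<Class<?>, Object> services = new HashMap<>();

    // The providing bundle registers an implementation under its interface.
    <T> void register(Class<T> iface, T implementation) {
        services.put(iface, implementation);
    }

    // A consuming bundle looks the service up by interface; it never sees
    // the implementation class, which can stay hidden inside its bundle.
    <T> T lookup(Class<T> iface) {
        return iface.cast(services.get(iface));
    }
}

interface GreeterService {
    String greet(String name);
}

class GreeterServiceImpl implements GreeterService {
    @Override
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```

The consumer only ever holds a GreeterService reference, so the "greeter" bundle can replace GreeterServiceImpl with a different implementation without any change on the consuming side.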



OSGi services are often called micro services, or µServices. This makes sense, because they are "lightweight" services. Although there is a clear model of service providers and service consumers, the whole process works within a single JVM with close to zero overhead. No proxying is required, so in the end a service call is just a direct method call. As a best practice, a service does only a single thing. This way services are easy to replace and easy to reuse. These are also the immediate benefits of a services model: it promotes separation of concerns, which in the end is key to maintainable code.

Comparing with Micro Services

So how does this relate to the Micro Services model that recently got a lot of attention? The obvious difference is that OSGi services live in a single JVM, while the Micro Services model is about completely separate deployments, possibly using many different technologies. The main advantages of such a model go back to the general advantages of a modular system:


  1. Easier to maintain: Unrelated pieces of code are strictly isolated from each other. This makes code easier to understand and maintain, because you don't have to worry too much about code outside of the service.
  2. Easier to replace: Because services are small, it's also easy to simply throw a service away and re-implement it if/when requirements change. All you care about is the service interface, the implementation is replaceable. This is an incredibly powerful tool, and will prevent "duct taping" of code in the longer term.
  3. Re-usability: Services do only a single thing, and can easily be used in new scenarios because of that. This goes both for re-usability of technical components across different projects/systems, and for re-usability of functional components within a system.


Do these benefits look familiar when thinking about SOA? In recent years not much good has been said about SOA, because we generally associate it with bloated tools and WSDLs forged in the deepest pits of hell. I loathe those tools as well, but we should remember that this is just a (very bad) implementation of SOA. SOA itself is about architecture, and basically describes a modular system. So Micro Services is SOA, just without the crap vendors have been trying to sell us.

Micro Services follow the same concept, but on a different scale. µServices are in-VM, Micro Services are not. So let's compare some benefits and downsides of both approaches.

Advantages of Services within a JVM

One advantage of in-VM services is that there is no runtime overhead; service calls are direct method calls. Compared to the overhead of network calls, this is a huge difference. Another advantage is that the programming model is considerably simpler. Orchestrating communication between many remote services often requires an asynchronous programming model and message passing. No rocket science at all, but more complicated than simple method calls.
The last, and possibly most important, advantage is ease of deployment. An OSGi application containing many services can be deployed as a single deployment, either in a load-balanced cluster or on a single machine. Deploying a system based on Micro Services requires significant work on the DevOps side of things. This doesn't just include automation of deployments (which is relatively easy), but also making sure that all required services are available in the right versions to make the whole system work.


Advantages of Micro Services

The added complexity in deployments also offers more flexibility. I believe the most important point about Micro Services is that services have their own life cycle. Different teams can work independently on different services. They can deploy new versions independently of other teams (yes, this requires communication...), and services can be implemented with the tools and technology optimal for that specific service. It is also easier to load balance Micro Services, because we can horizontally scale a single service instead of the whole system.

This brings the question back to the scale of the system and the team. When only a single team (say, a maximum of 10 developers) works on a system, the advantages of Micro Services compared to µServices don't seem to weigh up. When there are multiple teams working on the same system, it might be a different story. In that case it could also be an option to mix and match both approaches: instead of going fully Micro Services, we could break up an already modular system into different deployments and get the benefits of both. Of course, this adds new challenges and requirements; for starters, we need a remoting/messaging layer on top of services, and we might need to change the granularity of services.

This article was mostly written as a clarification of the differences between µServices and Micro Services. I'm a strong believer in the power of separated services. From my experience building large-scale OSGi applications, I also know that many of the benefits of modularity can be achieved without the added complexity of a full Micro Services approach. Ultimately I think a mixed approach would work best at larger scale, but that's just my personal view on the current state of technology.