
Adopting OSGi requires patience and money, but development flexibility results

One expert says to use OSGi in your project as soon as you are ready, but being ready means being willing and able to adopt the additional structure OSGi requires in order to obtain its benefits, and to invest the time and money required to really adopt modularity. Once OSGi is in place, improved flexibility will eventually save costs.

The most important thing about OSGi is its support for modularity. But because most applications and systems were not designed for modularity, or were designed and built with a home-grown modular design, adopting OSGi typically involves some level of difficulty.

Sometimes I think about this in the general context of industry attempts to improve the structure of applications and the application development process. About 20 years ago I was working on Digital's structured three-tier TP monitor product (ACMS). We viewed our main competition to be IBM's CICS, a much older technology. Customers typically created "spaghetti code" applications using CICS - there was no structure, and programs typically intermixed end user and database operations. The ACMS design required developers to separate out user code and database code, and put them in different programs, with distinct interface types and a requirement to deploy them into different address spaces. We also required developers to create programs for the middle tier using a new block structured language that I later helped standardize at X/Open.

When we first started doing this, it sometimes seemed a lot to ask a developer to trust that we'd gotten it right. In those days the imposition of such a structure, and the requirement to learn a new language, also often seemed too much of a burden, especially for those who had become accustomed to the greater flexibility of the old unstructured approach, and who needed to get their projects done as quickly as possible.

This dynamic seems to repeat itself in the software industry, as when we see a new approach, such as OSGi, that advances the art of programming a step or two, but at the cost of adopting additional structure and learning a new language (in the case of OSGi, the metadata language that sits between the modules). Today few would argue against using a structured three-tier or n-tier architecture for a TP application project, and up through EJB2 the three-tier model was actually baked into Java EE. The benefits of separating end user and database code, and of including one or more middle tiers for caching, replication, failover, and various scalability techniques, are now widely understood. But when we first tried to impose this additional structure on developers used to more flexibility and freedom, we encountered significant resistance.
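To make that "metadata language" concrete: an OSGi bundle declares its module boundaries in its JAR manifest. The headers below are standard OSGi manifest headers; the bundle and package names are invented for illustration. One bundle exports a package at a stated version, and another imports it within a version range, so the framework can wire them together without the developers coordinating directly.

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.orders
Bundle-Version: 1.0.0
Export-Package: com.example.orders.api;version="1.0.0"
Import-Package: com.example.billing.api;version="[1.0,2.0)"
```

The framework resolves these declarations at deployment time and refuses to activate a bundle whose imports cannot be satisfied - which is precisely the kind of structure that the unstructured approach leaves to runtime classpath luck.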

This is a rather long way of saying that my answer is to use OSGi in your project as soon as you are ready. Being ready means being willing and able to adopt the additional structure OSGi requires in order to obtain its benefits, and to invest the time and money required to really adopt modularity.

The benefits of modular programming have been well understood for about 40 years, but before OSGi, developers had to invent their own modular designs and systems. The particular benefits of OSGi are fairly well documented on the Web now, starting with the OSGi Alliance's website and in several related blogs and articles, and in some recently published books, such as Modular Java by Craig Walls and OSGi in Practice by Neil Bartlett.

In short, large Java projects benefit from adopting OSGi because the project can be more easily divided among multiple developers, each of whom creates separate modules that can later be assembled on an OSGi framework such as Eclipse Equinox or Apache Felix. The big advantage is that once all developers adopt the OSGi metadata, they do not have to be aware of what the others are doing - at least not down to the detailed level of how to assemble the code for deployment. There is of course a lot more to this than I've covered in this brief summary, but the approach has been proven to work and to be industrial strength. All major Java enterprise vendors have adopted, or are adopting, this approach for their internal large Java projects.
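The decoupling described above can be sketched in plain Java. The toy registry below is not OSGi itself - it omits versioning, lifecycle, and classloader isolation, and all names are invented for illustration - but it shows the core idea of OSGi's service layer: a provider module and a consumer module share only an interface, and are wired together through a registry rather than through compile-time references to each other.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch (not OSGi): modules publish implementations under an interface
// and consumers look them up, with no compile-time coupling between them.
public class ToyRegistry {
    private final Map<Class<?>, Object> services = new HashMap<>();

    // A "provider" module registers its implementation under an interface type.
    public <T> void register(Class<T> type, T impl) {
        services.put(type, impl);
    }

    // A "consumer" module retrieves the service knowing only the interface.
    public <T> T lookup(Class<T> type) {
        return type.cast(services.get(type));
    }

    // The shared contract: the only thing both modules must agree on.
    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        ToyRegistry registry = new ToyRegistry();
        // Provider side: publish an implementation.
        registry.register(Greeter.class, name -> "Hello, " + name);
        // Consumer side: look up by interface, unaware of the implementation.
        Greeter greeter = registry.lookup(Greeter.class);
        System.out.println(greeter.greet("OSGi")); // prints "Hello, OSGi"
    }
}
```

In real OSGi the equivalent wiring happens through `BundleContext.registerService` and the framework's service registry, with the manifest metadata controlling which packages each bundle can even see.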

But there may be good reasons why a particular company or project should not adopt OSGi. Among them could be the requirement for the greater flexibility and control allowed in a less structured environment, the time it takes to learn something new, or the challenges of using OSGi in a typical enterprise development lifecycle. This last point is something that OSGi tool vendors and the OSGi Alliance are working hard to improve. Because OSGi was designed originally for use in embedded applications - where it remains a popular platform - the enterprise lifecycle tools are just starting to emerge. Early adopters report that they find it challenging to handle this more or less on their own, but that will change during the next few years.

I should add that when talking about OSGi, people sometimes conflate OSGi-enabled middleware (in which a product uses OSGi internally but does not expose the OSGi programming model to the developers) with the "native" use of the OSGi framework. Using the OSGi programming model directly for enterprise applications is where the real benefits will be achieved, but this is also where some challenges still remain. Along with the additional structure of the OSGi programming model, however, come the additional benefits of improved flexibility and decreased cost. But there is really no single answer that applies to every project.
