Hopefully you enjoyed the OSGi journey in its first installment.
Though simple and easy to understand, the first example does nothing out of the ordinary. It is far more interesting to start exploiting some of the basic features OSGi gives us “for free”.
For instance, we could begin by moving our first basic implementation into a separate “model” bundle and enhancing the interface so it can throw exceptions, for example when no implementations are available or when they cannot be contacted or operated.
Read the rest of the post for the implementation details…
We start by picking up the project as we left it and first export the controller service. Open the MANIFEST.MF and, in the ‘Runtime’ section, add the ‘com.calidos.dani.osgi.cache.controller’ package. This will enable any client bundles to use the cache.
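Behind the scenes the ‘Runtime’ tab simply adds an Export-Package header to the manifest, so the resulting line should look roughly like this (any version attribute depends on how the bundle was created in the first installment):

Export-Package: com.calidos.dani.osgi.cache.controller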
Secondly, we create and expose the backend interface we want the model bundles to implement. This is the ‘service’ those bundles can offer, the one our controller is interested in and can use as an engine to store the cache data. The interface obviously needs to mimic most if not all of the cache’s public service, but we also add some lower-level functionality. A trivial way to do that is to have the declared backend interface inherit from the public service one.
public interface CacheProvider extends Cache {

    CacheProviderStatus getStatus();

}
For the moment we define a single method to get the status of the backend implementation. We return an object as opposed to a simple type so we can enrich it in the future. If we place this interface in its own package we need to export it in the plug-in manifest as well.
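The status type itself can start out as little more than a placeholder to be enriched later; a minimal sketch (the package name and the single field are assumptions for illustration, not the actual code) could be:

package com.calidos.dani.osgi.cache.provider;

/** Status information reported by a cache backend; more fields (sizes, hit ratios, etc.)
 *  can be added later without touching the CacheProvider interface. */
public class CacheProviderStatus {

    private final boolean operational;    // hypothetical field: is the backend usable right now?

    public CacheProviderStatus(boolean operational) {
        this.operational = operational;
    }

    public boolean isOperational() {
        return operational;
    }
}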
We then enhance the original interface with exceptions, which give information about the cause of a problem without breaking encapsulation too much.
public interface Cache {

    [...]

    void init() throws CacheProviderException;

    […]
It is interesting to note that, depending on how we leverage the component nature of OSGi, some of these errors can actually be recoverable, which is where exceptions shine in Java.
After adding the exceptions none of our tests compile anymore, so it’s time to fix them, for example by adding the throws declaration to the test signatures: at the moment there is squat we can do about exceptions being thrown, and they should count as test failures anyway.
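For instance, a test that exercises init() now just declares the exception (JUnit 4 assumed here, and the test body is only illustrative):

@Test
public void testInit() throws CacheProviderException {
    // if init() actually throws, JUnit reports it as an error, which is what we want
    cache.init();
}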
Cool, so next we create a new project to hold the in-memory implementation of the cache on its own. We create a new plug-in project and name it accordingly:
Good, next we move our code from the controller bundle onto the new one. First of all we need to modify the new plug-in’s MANIFEST.MF to reflect that we depend on the CacheProvider and Cache interfaces:
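In manifest terms that boils down to an Import-Package (or Require-Bundle) header. Assuming the Cache interface lives in the exported com.calidos.dani.osgi.cache.controller package and CacheProvider in a provider package of its own (the exact names depend on how you laid them out), it would look something like:

Import-Package: com.calidos.dani.osgi.cache.controller,
 com.calidos.dani.osgi.cache.provider,
 org.osgi.framework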
This means we can move the code from the controller onto this new provider bundle. The code compiles but our original tests don’t, as the actual code has moved.
We first need to change the BasicCacheController class so it implements the lower-level provider interface, which means some refactoring (we also rename the implementation class to something more indicative of its function).
The tests need to pass, and we can actually move the old test code onto the new component as it effectively tests the in-memory implementation perfectly well. The controller component will need more sophisticated code, which we will write once we finish the cleanup. Once the test is moved and the refactoring is complete, the tests on the new plug-in pass.
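In skeleton form the renamed class ends up along these lines (the backing map and the status value are placeholders; the real method bodies are the ones moved over from the first installment):

package com.calidos.dani.osgi.cache.provider.memory;

import java.util.HashMap;
import java.util.Map;

public class MemoryCache implements CacheProvider {

    private Map<Object, Object> store = new HashMap<Object, Object>();    // assumed backing structure

    public void init() throws CacheProviderException {
        // prepare the in-memory structures
    }

    public CacheProviderStatus getStatus() {
        return new CacheProviderStatus(true);    // placeholder: report real backend state here
    }

    // the remaining Cache methods (get, set, clear, …) simply operate on ‘store’
}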
Summarizing:
- Move BasicCache class to com.calidos.dani.osgi.cache.provider.memory bundle
- Rename BasicCache to MemoryCache
- Move test from old bundle onto com.calidos.dani.osgi.cache.provider.memory
- Fix compilation errors in the test and rename it to MemoryCacheTest
- Make MemoryCache implement the CacheProvider interface
- Modify the MemoryCacheTest class to test the extended CacheProvider functionality
That’s cool, so now it is time to work in the OSGi world and wire the components together. To do that we edit the memory cache provider’s Activator class to register the service once the bundle is activated, like this:
public class Activator implements BundleActivator {

    private MemoryCache cacheService;
    private ServiceRegistration registration;

    /*
     * (non-Javadoc)
     * @see org.osgi.framework.BundleActivator#start(org.osgi.framework.BundleContext)
     */
    public void start(BundleContext context) throws Exception {
        cacheService = new MemoryCache();
        cacheService.init();
        registration = context.registerService(CacheProvider.class.getName(), cacheService, null);
    }

    /*
     * (non-Javadoc)
     * @see org.osgi.framework.BundleActivator#stop(org.osgi.framework.BundleContext)
     */
    public void stop(BundleContext context) throws Exception {
        registration.unregister();
        cacheService.clear();
    }
}
In the start() method we create a memory cache object and then register it under the CacheProvider interface class name. This means that any other bundle in the OSGi environment that is interested in cache services can use the service provided by this bundle.
This highlights an interesting question: is it better to be proactive like this code and init the service before registering it, or should we wait for the consumer to do it and follow a lazier design? I have decided to create self-contained services that are fully operative once published, as I think that leverages OSGi better, but YMMV.
Next, we need the main controller to wait for cache provider services. To do that we can create a ServiceTracker subclass in the controller component. In that subclass we implement two methods that are called whenever a cache provider is registered in the context and when it is unregistered. We also supply a constructor that stores the controller Activator so we can pass the services back.
public class CacheProviderTracker extends ServiceTracker {

    protected Activator activator;
    protected static Logger log = Logger.getLogger(CacheProviderTracker.class);

    public CacheProviderTracker(BundleContext context, String clazz, ServiceTrackerCustomizer customizer, Activator activator) {
        super(context, clazz, customizer);
        log.debug("CacheProviderTracker built");
        this.activator = activator;
    }

    @Override
    public Object addingService(ServiceReference reference) {
        CacheProvider cacheProvider = (CacheProvider) context.getService(reference);
        log.debug("Obtained a new CacheProvider service");
        return cacheProvider;
    }

    @Override
    public void removedService(ServiceReference reference, Object service) {
        log.debug("A CacheProvider service has been removed from the context");
        context.ungetService(reference);
    }
}
We then modify the callbacks so the services are actually passed on to the Activator for its use. We need to create the appropriate methods in the Activator; we will fill them in next.
public void addCacheProvider(CacheProvider cacheProvider) {
}

public void removeService(CacheProvider service) {
}
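With those stubs in place, the tracker callbacks shown earlier can simply delegate to them; something along these lines:

@Override
public Object addingService(ServiceReference reference) {
    CacheProvider cacheProvider = (CacheProvider) context.getService(reference);
    log.debug("Obtained a new CacheProvider service");
    activator.addCacheProvider(cacheProvider);    // hand the new provider over to the Activator
    return cacheProvider;
}

@Override
public void removedService(ServiceReference reference, Object service) {
    log.debug("A CacheProvider service has been removed from the context");
    activator.removeService((CacheProvider) service);    // let the Activator forget about it
    context.ungetService(reference);
}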
Now we need to glue the backend provider with the controller so any Cache interface requests are passed onto CacheProvider services. To do that we create a new Cache interface implementation class in the controller bundle (appropriately named CacheControllerCore). The signature is something like:
public class CacheControllerCore implements Cache {
This means we can offer this class as a Cache service once it’s ready. So when should we create a CacheControllerCore instance and offer it as a service? Well, whenever we have backend providers ready, not before. Therefore we now complete the addCacheProvider method in the Activator to do so:
if (controller == null) {
    controller = new CacheControllerCore();
    controller.addCacheProvider(cacheProvider);
} else {
    controller.addCacheProvider(cacheProvider);
}
This creates the instance only the first time a provider is registered; after that it simply tells the controller class that an extra one is available. An added bonus of doing this in a controller class instead of directly in the Activator is that we don’t overload the Activator with lots of logic. We also complete the removeService method. If in the future we deal with more services we will need a refactor, as we would end up with a lot of handover methods from the activator class onto the core controller.
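The removal side is symmetrical; a minimal version could look like this (removeCacheProvider is a hypothetical counterpart on the core controller, not shown in the snippets above):

public void removeService(CacheProvider service) {
    if (controller != null) {
        controller.removeCacheProvider(service);    // hypothetical counterpart to addCacheProvider
    }
}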
Okay, so what is the main goal of CacheControllerCore then? To maintain a list of providers and hand requests over to them. As a convention, and for greater flexibility, it will send all messages to all providers except for get, which basically hands back the first object it finds.
DESIGN WARNING: while implementing the core controller we realise there are potential thread-safety problems in accessing the provider list. This is a cache and not an ACID system: we are looking for speed plus 99.99% efficacy while providers are changing and 100% efficacy when providers are stable. Given that the overwhelming majority of the time we are not changing providers mid-flight, we don’t want to compromise the overall efficiency of the architecture for the odd chance that two or three requests out of a million get a miss from the cache where they should have had a hit, or a cache set operation needs to be retried. Therefore, we implement a few basic checks on the core to minimize problems and provide graceful degradation of service in edge cases, as opposed to meaningless null pointer exceptions.
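As a rough illustration of that convention and of the cautious approach (a fragment only; the get/set signatures are assumptions, since the full Cache interface isn’t reproduced here), a CopyOnWriteArrayList from java.util.concurrent lets us iterate the provider list safely while registrations come and go, without locking the hot path:

private final CopyOnWriteArrayList<CacheProvider> providers = new CopyOnWriteArrayList<CacheProvider>();

public void addCacheProvider(CacheProvider provider) {
    providers.addIfAbsent(provider);
}

public Object get(Object key) {
    for (CacheProvider provider : providers) {    // snapshot iteration: safe even if providers change mid-loop
        Object value = provider.get(key);
        if (value != null) {
            return value;    // hand back the first hit we find
        }
    }
    return null;    // graceful degradation: worst case a miss, never a meaningless NullPointerException
}

public void set(Object key, Object value) throws CacheProviderException {
    for (CacheProvider provider : providers) {
        provider.set(key, value);    // everything except get fans out to every provider
    }
}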
Once this cautious but hopefully practical implementation is complete we can take a step back and review what we have done so far.
- We have defined a cache backend provider interface
- We have created a simple provider implementation that stores data in memory
- We have glued the backend example to the core and refactored things a bit
- The core now loops through available backends and is reasonably resilient
Click here to download the code as it stands. More to come!