Welcome to the third instalment of our OSGi ABC tutorial. Please make sure you check out the first and second instalments before continuing.
In this post, we will add another cache provider implementation to the mix as well as provide an HTTP front-end so the whole application can be tested.
First of all, let’s present a conceptual diagram of all the bundles and fragments involved so far.
Read on for more…
We start by providing the extra implementation. We will be making a memcached cache provider using the excellent spymemcached Java library (MIT licensed).
As usual, we create a new plug-in project and name it accordingly (com.calidos.dani.osgi.cache.provider.memcached). First, we create two source folders: ‘src’ for the code and ‘test’ for the unit tests. Then we download the spymemcached library from here.
We create a ‘lib’ folder in the project to hold the library and modify the classpath of the plug-in so its own code can see the memcached client classes (without exposing them to clients of the bundle!). In the runtime section of the MANIFEST.MF editor we select the library like this:
We also add the controller as a dependency so we can properly implement the CacheProvider service interface.
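In text form, the relevant MANIFEST.MF entries would look roughly like this (the controller bundle’s symbolic name and the exact jar file name are illustrative guesses, not taken from the actual project):

Bundle-SymbolicName: com.calidos.dani.osgi.cache.provider.memcached
Bundle-ClassPath: ., lib/spymemcached.jar
Require-Bundle: com.calidos.dani.osgi.cache.controller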
Next, we copy the unit testing code from the memory cache bundle, as the tests will be quite similar (both bundles basically fulfil the same contract).
When doing all this work, we discover that the memcached system doesn’t offer some functionality other cache systems might have. For instance, the number of elements in the cache can’t be counted (at least not easily or cheaply). Additionally, the cache can’t really be cleared without restarting all the memcached instances.
So it makes sense to add UnsupportedOperationException to the interface, acknowledging that some cache implementations might not provide all the facilities of the service. Other strategies would be subclassing, specialising the providers, etc. In this case, we opt for adding the exception only where it makes sense. That excludes the get and set operations: we can’t allow a service that doesn’t let you manage its elements. That’s not our definition of “cache”.
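As a rough sketch (the actual method names and signatures come from the earlier instalments, so take these as assumptions), the adjusted interface could look something like this:

import java.io.Serializable;

// Sketch only: method names and signatures are assumptions, not the actual tutorial interface
public interface CacheProvider {

	// get and set must always be supported: a cache that can't manage its elements isn't a cache
	void set(String key, Serializable value);
	Object get(String key);

	// some backends (e.g. memcached) can't count their entries cheaply
	int size() throws UnsupportedOperationException;

	// some backends can't be cleared without restarting the underlying instances
	void clear() throws UnsupportedOperationException;
}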
We adjust the code and tests in the memory cache bundle where necessary to adapt to the new interface, and implement the memcached client (the unit tests will need some modification).
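A minimal sketch of that implementation, assuming the interface above and a single memcached instance (class name and constructor parameters are illustrative):

import java.io.IOException;
import java.io.Serializable;
import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

// Sketch of the memcached-backed provider; the CacheProvider methods are the assumed ones above
public class MemcachedCacheProvider implements CacheProvider {

	private final MemcachedClient client;

	public MemcachedCacheProvider(String host, int port) throws IOException {
		client = new MemcachedClient(new InetSocketAddress(host, port));
	}

	public void set(String key, Serializable value) {
		client.set(key, 0, value);	// 0 = no expiry
	}

	public Object get(String key) {
		return client.get(key);
	}

	public int size() {
		// memcached offers no cheap way to count its entries
		throw new UnsupportedOperationException("size() is not supported by the memcached provider");
	}

	public void clear() {
		// clearing would mean flushing or restarting every memcached instance
		throw new UnsupportedOperationException("clear() is not supported by the memcached provider");
	}
}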
The memcached bundle ends up looking like this in the project view:
We make sure the tests pass on this new bundle; to do that, we need to tweak the test code a little (to capture the new exceptions being thrown).
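For example (test and field names are illustrative), a test that exercises the unsupported operations can simply tolerate the exception:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Illustrative tweak: the count-related test accepts backends that can't count their entries
public class MemcachedCacheProviderTest {

	private CacheProvider provider;	// set up elsewhere in the test fixture

	@Test
	public void testSize() {
		try {
			assertTrue(provider.size() >= 0);
		} catch (UnsupportedOperationException e) {
			// acceptable: this backend doesn't support counting its elements
		}
	}
}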
Next, we register the Cache service proper from the core controller tracker like this:
controller = new CacheControllerCore();
controller.addCacheProvider(cacheProvider);	// we have a provider, we can start publishing our service
registration = context.registerService(Cache.class.getName(), controller, null);
Note that we register the Cache service only once we have added the first provider backend. This ensures that at least one backend is available as soon as the main service is registered.
At this point, the backend implementation is looking good, so we can move on to providing a nice HTTP front-end so the cache can actually be used.
Therefore, we create a new bundle and call it something appropriate, such as ‘com.calidos.dani.osgi.cache.frontend.http’. The main function of this bundle is to glue an HTTP Service made available in the context to the business-logic Cache service (provided by the core controller bundle). In this manner, the controller bundle does not need to know anything about HTTP or Servlets. Moreover, the front-end bundle only uses the Cache service and knows nothing about providers or implementation details.
We need to import a few packages and require the Equinox HTTP bundle to be present:
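Roughly speaking (the package list and the exact Equinox bundle name are assumptions), the MANIFEST.MF could include something like:

Import-Package: javax.servlet,
 javax.servlet.http,
 org.osgi.framework,
 org.osgi.service.http,
 org.osgi.util.tracker
Require-Bundle: org.eclipse.equinox.http.jetty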
As the HTTP Service implementation we can use the Jetty-based one that is bundled with Equinox.
Following the same model as in the controller bundle, we register two trackers: one for the HTTP service and another for the Cache service. Once both are available we can create the servlet and glue them together.
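A sketch of that glue code (class, field and package names here are illustrative, not the actual tutorial code):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;
import org.osgi.util.tracker.ServiceTracker;

import com.calidos.dani.osgi.cache.Cache;	// assumed package for the Cache service interface

// Illustrative activator: waits until both HttpService and Cache are available, then glues them
public class HttpFrontendActivator implements BundleActivator {

	private ServiceTracker httpTracker;
	private ServiceTracker cacheTracker;

	public void start(BundleContext context) {
		httpTracker = new ServiceTracker(context, HttpService.class.getName(), null) {
			public Object addingService(ServiceReference reference) {
				HttpService httpService = (HttpService) super.addingService(reference);
				Cache cacheService = (Cache) cacheTracker.getService();
				if (cacheService != null) {
					glue(httpService, cacheService);
				}
				return httpService;
			}
		};
		cacheTracker = new ServiceTracker(context, Cache.class.getName(), null) {
			public Object addingService(ServiceReference reference) {
				Cache cacheService = (Cache) super.addingService(reference);
				HttpService httpService = (HttpService) httpTracker.getService();
				if (httpService != null) {
					glue(httpService, cacheService);
				}
				return cacheService;
			}
		};
		httpTracker.open();
		cacheTracker.open();
	}

	private void glue(HttpService httpService, Cache cacheService) {
		// create the servlet, wire in the cache and register it (see the snippet below)
	}

	public void stop(BundleContext context) {
		cacheTracker.close();
		httpTracker.close();
	}
}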
The HTTP service provides a method to register the servlet under a particular URL alias:
cacheHttpFrontend.setCacheService(cacheService);
httpService.registerServlet("/cache", cacheHttpFrontend, null, null);
From that moment onwards, any request to a ‘/cache’ URL is sent to the servlet instance, which uses ‘/cache’ as its base path.
The servlet can implement the usual methods ‘doPost’ and ‘doPut’ to add data to the cache, and ‘doGet’ to retrieve it. These method calls translate into the appropriate Cache calls. Any exceptions or null values can be translated into equivalent HTTP errors or responses. For example, we can return a 500 if an unrecoverable exception is thrown from the cache, or a 400 if the path is simply ‘/cache’ (which means no key is provided).
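For illustration (the Cache method names and the exact error mapping are assumptions), doGet could be along these lines:

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.calidos.dani.osgi.cache.Cache;	// assumed package for the Cache service interface

// Sketch of the front-end servlet; only the GET path is shown
public class CacheHttpFrontend extends HttpServlet {

	private Cache cache;

	public void setCacheService(Cache cache) {
		this.cache = cache;
	}

	protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
		String key = request.getPathInfo();	// null when the request is just '/cache'
		if (key == null || key.length() <= 1) {
			response.sendError(HttpServletResponse.SC_BAD_REQUEST, "no cache key provided");	// 400
			return;
		}
		try {
			Object cached = cache.get(key);
			if (cached == null) {
				response.sendError(HttpServletResponse.SC_NOT_FOUND);	// nothing stored under that key
			} else {
				response.getWriter().print(cached);	// serving logic is refined below once we add metadata
			}
		} catch (Exception e) {
			response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());	// 500
		}
	}
}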
As a result of thinking in HTTP terms, we come to realise we need some metadata about the contents of the cache. At the very least, we need to store the MIME type of the data so we can properly serve multimedia and any content other than text. Moreover, we could store further metadata to tune the cache, do automatic refreshing of the content, etc.
One possible strategy is to ask the Cache to store a POJO instance that holds the metadata as well as the cached content itself. This seems the simplest approach at the moment, so this is what we do. The Cache is still a generic service, and as long as we prefix the HTTP cache keys accordingly at the application level, other clients can keep using it (and they might adopt MIME types as their way to tag content as well).
We could create a custom POJO class, but that would mean any backend that does POJO serialisation needs access to this class so it can create instances of it. To avoid that class dependency and encapsulation break, we just use a plain vanilla serialisable class (a Map). In any case, we make a note of this design problem so it can be addressed in the future.
NOTE: it could be addressed in a simple enough manner. The cache entry object interface could be formalised and exported by the controller. In this manner, the front-end and the providers could make use of it and serialise it properly.
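For example (names are hypothetical), such a formalised entry could be as simple as:

import java.io.Serializable;

// Hypothetical formalised cache entry, to be exported by the controller bundle
public interface CacheEntry extends Serializable {
	byte[] getBody();
	String getMimeType();
}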
We create the cache entry Map instance in a simple enough manner:
HashMap<String, Serializable> httpCacheEntry = new HashMap<String, Serializable>(2);
httpCacheEntry.put(ENTRY_BODY_KEY, body);
httpCacheEntry.put(ENTRY_MIME_KEY, contentType);
cache.set(key, httpCacheEntry);
In this manner, we can recover the appropriate metadata (in this case the MIME type) from the cache entry whenever we do a get() operation. This doesn’t mean we expect all requested entries to have that structure (principle of robustness); it just means we will use it if it’s there, setting the HTTP response MIME type accordingly.
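Concretely, the serving part of the doGet sketched earlier could be pulled into a small helper along these lines (this would live inside the servlet above, with java.util.Map added to its imports; ENTRY_BODY_KEY and ENTRY_MIME_KEY are the same constants used when storing):

// Refined serving logic: use the metadata if the entry has the Map structure, serve it as-is otherwise
private void serve(Object cached, HttpServletResponse response) throws IOException {
	if (cached instanceof Map) {
		Map<?, ?> entry = (Map<?, ?>) cached;
		String mime = (String) entry.get(ENTRY_MIME_KEY);
		if (mime != null) {
			response.setContentType(mime);	// honour the stored MIME type
		}
		cached = entry.get(ENTRY_BODY_KEY);
	}
	response.getWriter().print(cached);	// principle of robustness: serve whatever is there
}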
With these enhancements and a few more small code refactors we can stabilise our code and call it a day. To summarise what we have done in this post:
- Created a new CacheProvider implementation using memcached
- Added a bundle that provides a frontend of the system using HTTP and a Servlet
- Devised a way to store some metadata on the cache
- Done some code cleanup and refactoring.
As usual, you can download the code here. Thanks!