Visual drag and drop A/B web testing

In this post we will introduce a technique and set of open source tools to visually perform A/B testing experiments on Web applications using drag and drop on a test application written in vanilla React and without introducing any code dependencies.

Drag and drop visual A/B testing on React applications, using pure framework-independent JSX code

A/B testing for web applications

A/B testing is a modern product development technique that is widely used today, according to Wikipedia:

A/B testing (also known as bucket testing or split-run testing) is a user experience research methodology. A/B tests consist of a randomized experiment with two variants, A and B. It includes application of statistical hypothesis testing or “two-sample hypothesis testing” as used in the field of statistics. A/B testing is a way to compare two versions of a single variable, typically by testing a subject’s response to variant A against variant B, and determining which of the two variants is more effective.

Wikipedia on A/B testing
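
The statistics engine is outside the scope of the tooling presented here, but as a rough illustration of the "two-sample hypothesis testing" mentioned above, a two-proportion z-test can decide whether variant B really outperforms variant A (a minimal sketch, not production statistics code):

```javascript
// Two-proportion z-test sketch: compares conversion rates of variants A and B.
// |z| > 1.96 roughly corresponds to 95% confidence (two-tailed).
function zTest(convA, totalA, convB, totalB) {
  const pA = convA / totalA;                    // conversion rate of variant A
  const pB = convB / totalB;                    // conversion rate of variant B
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}

// 12% vs 15% conversion over 1000 users each is right at the significance edge
console.log(zTest(120, 1000, 150, 1000).toFixed(2)); // "1.96"
```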

While no silver bullet, it is an essential tool for iterating on web products. A number of mature solutions exist on the market, covering a wide range of use-cases, from pure frontend A/B testing to backend, hybrid frontend and backend, etc., and a number of major companies have rolled out their own in-house systems, like Spotify, Booking and of course Google, among many others.

Web development can be fast-paced as its model allows for rapid iteration (unlike mobile development, with its mandatory, slower store approval rollouts) and is therefore very well suited for leveraging A/B testing to the max. Web applications can be deployed several times a day and there are no limits on how many releases can be done or how frequently.

Many solutions come with caveats, as we are after all modifying the behaviour of our web app: they either require specific frameworks to be used, need the deployment of specific microservices in the critical path, inject bulky Javascript into the frontend code, introduce risks of client-side issues, and a long etcetera. A common issue is that they require product people to wait for developers to implement the needed changes and roll them out, even if the change is very small. For instance, changing the size of a logo or button for an experiment is a small change that should not require much development effort, if any. There are workarounds that allow product profiles to make those changes, but they often need hacks like the blanket injection of third-party Javascript in our web app, which can cause UI flicker, increased latency, etc.

In other words:

Wouldn’t it be great if we could have a visual way to create A/B tests without developer intervention?

An imaginary web product owner

But we need to keep our frontend developers in mind:

Wouldn’t it be great if such an A/B system did not interfere with my code and required absolutely no dependencies?

An imaginary sensible frontend developer

All this reminded me of my days working in the fast-paced online news industry, the problems we faced there and how they were solved, so I decided to give this a try and created a system to attack this problem directly.

Desirable properties of our visual A/B testing system

Let’s try to come up with a checklist of desirable properties we would like our web A/B testing system to have:

  • Independent of the A/B testing engine – complete independence from the engine used, namely the system used to split users between variants, the types of tests done (A/A, A/B, multiple variants, etc.), and so on.
  • Server-side, client-side and hybrid client-server A/B testing – we should be able to do all three types of A/B tests using the same approach.
  • Compatible with modern engineering practices – allow automated testing, version control, CI/CD integration, gitops and any other sensible software development, testing, deployment and operation methodologies. Allow for seamless devops operation in times of duress and out-of-hours incidents.
  • No client or server code injection of any kind – no injection of extraneous Javascript in the frontend, or extra microservices in the backend (besides any code mandated by the A/B engine itself to do the split), no added performance issues, UI flicker, library includes, extra latency or other side-effects. No code added in the critical path means no extra bugs are introduced.
  • Code in any way you want – impose no artificial restrictions in the frontend code, use any architecture or frameworks you like. Use React, Vue.js, Angular, etc. JSX? TSX? Redux? It should not be a problem.
  • Allow visual operation by non-engineers – allow non-developer product and business users to visually create, manage and delete A/B tests in production, having little or no knowledge on the technical internals.

This is an extensive list of sometimes conflicting requirements but let’s see how well we can strike a balance.

A/B for React applications in JSX

Enter Morfeu, an OSS web application to visually manage complex APIs. Morfeu is a generic system designed to manage APIs in YAML, JSON and XML that can be extended to handle other formats. Snow Package is a microservice that adds JSX support so we can use it in applications that handle the frontend with JSX, for instance web apps written in React.

JSX is a Javascript syntax extension that is often used to describe the interface of React applications, and as a structured language, it can be parsed and generated to help with specialised use-cases like A/B testing.

Let’s see a short demo of it in action:

How it works

The following diagram illustrates the process:

Morfeu and Snow Package working together to present a page structure to the user

Morfeu needs an Abstract Syntax Tree to represent the different elements in the interface. The Snow Package application uses the Babel parser to read the original JSX and turn the original code into the appropriate AST. A schema of the possible structure of pages is also needed, written in a subset of XML Schema with a bit of extra metadata thrown in. We call this schema the model.

This whole package is read by Morfeu and presented to users in a ‘simplified visual interface’, which is a logical representation of the underlying site. This interface hides a lot of the complexity, helping non-technical users focus on what is really important. Morfeu users can then freely modify that site structure within the constraints of the defined model (it is not a free-for-all, as we will see below) and see in realtime what is going on, with a direct feedback loop wherever needed.

Typically, the page in question is edited to add content, modify its configuration, delete unwanted stuff and so on, but in this post we are particularly interested in adding A/B tests in the page. A/B experimentation is also a particularly good example of meaningful page manipulation so it is perfect to illustrate the overall concept.

This is the original site we are editing (apologies for the barebones CSS ^^):

Original site we are editing, a very vanilla (and somewhat ugly) commerce site

Which is presented in simplified form in the Morfeu UI like this:

Morfeu interface presenting the simplified version of the page being edited, with the different elements like the title, search bar, images, footer, etc. This hides a lot of unwanted complexity.

The central content part is generated from the page’s vanilla JSX code, robustly parsed using a Babel traverse function (docs). The JSX source in question:

const _root = <>
  <Menu callCopy="Please enjoy this demo site!" logoCopy="Welcome to snow package!" logoSize="XL"/>
  <Search searchButtonCopy="Search!" searchExamplesCopy="Cars, Android, phones..." startCategory="Phones"/>
  <Title>Need some inspiration for today?</Title>
  <Col size="8">
    <ImgText imgURL="/img/photos/houses.png" text="Nice apartments" textColor="light" textSize="M"/>
    <ImgText imgURL="/img/photos/clothes.png" text="Clothes" textColor="white" textSize="XL"/>
  </Col>
  <Col size="4">
    <ImgText imgURL="/img/photos/misc.png" text="Bargains found!" textColor="primary" textSize="XL"/>
    <ImgText imgURL="/img/photos/cars.png" text="Cars!" textColor="dark" textSize="XL"/>
    <ImgText imgURL="/img/photos/phones.png" text="Handsets here!" textColor="primary" textSize="XL"/>
  </Col>
  <Copyright legalCopy="(c) 2020 Snow package test site">
    <ExtraLink link="" text="Github"/>
    <ExtraLink link="" text=""/>
  </Copyright>
</>;

This diagram shows the different elements of the interface:

Morfeu UI explained

As shown in the UI screenshot and described in the diagram, on the right-hand side we see the model generated from the XML Schema and extra metadata, which tells us we can add experiments to the images in the middle section (called IMGTEXT in our model, and nested inside a ROW / COL structure):

The model is telling us that we can add A/B tests inside the row and col structure

We want to create an A/B test where we either show the car photo (variant A) or the phone handset photo (variant B). We drag the A/B experiment element to the page content and move the two photos inside as variant A and B:

A/B experiment holding a different image for each variant

We hit ‘save’ and that’s it. Morfeu sends the new AST structure that now includes the experiment; Snow Package converts it back to JSX, which is finally serialised to disk. This is the new column structure in the code as a result of making this change (the rest of the file is unmodified):

<Col size="4">
  <ImgText imgURL="/img/photos/misc.png" text="Bargains found!" textColor="primary" textSize="XL"/>
  <Experiment experimentID={1234}>
    <ImgText imgURL="/img/photos/cars.png" text="Cars!" textColor="dark" textSize="XL"/>
    <ImgText imgURL="/img/photos/phones.png" text="Handsets here!" textColor="primary" textSize="XL"/>
  </Experiment>
</Col>

The Experiment element with ID 1234 has been added, and it includes two ImgText React Component instances underneath it with their specific attributes. As expected, the test site has been updated, is executing the Experiment React Component and presents either the car or the handset image variant:

In this case we are showing variant B (handsets) in our test React website

That is all that is needed to create and deploy an A/B testing experiment with the proposed setup. The experiment now needs to run its course, have its results analysed, and so forth. Once the experiment is concluded, the same user can drag and drop the selected variant out and remove the rest, propose a new test, etc.

Let us look at the code of the different elements, starting with the Experiment React Component (source):

export function Experiment(props) {

  const children = props.children ? props.children : [];
  const variant = Math.random() < 0.5;

  // if we only have one child we do an A|A test
  const A = children.length > 0 ? children[0] : '';
  const B = children.length > 1 ? children[1] : children[0];
  if (variant) {
    return B;
  }

  return A;
}


This is a trivial, non-persistent implementation that randomly selects the variant to show to the user, and is therefore not that useful as a real A/B testing engine, but we could easily drop in the react-ab-test React component, our own in-house implementation or a commercial component. As long as we define the right schema model (for instance, in the case of react-ab-test the experiment ID is stored in the attribute name instead of experimentID) we are good to go; the Morfeu setup does not care about that.
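
For instance, a persistent engine usually buckets users deterministically instead of on every render; a hypothetical sticky-split helper could hash a stable user id with the experiment id (an illustrative sketch, not react-ab-test’s actual logic):

```javascript
// Deterministic variant assignment sketch: the same user/experiment pair
// always maps to the same variant, unlike the Math.random() version above
function pickVariant(userId, experimentID, nVariants = 2) {
  const key = userId + ':' + experimentID;
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) | 0; // simple 32-bit rolling hash
  }
  return Math.abs(hash) % nVariants; // 0 -> variant A, 1 -> variant B
}

console.log(pickVariant('user-42', 1234) === pickVariant('user-42', 1234)); // true
```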

What about the rest of the React components? They are just as vanilla, take ImgText for example:

export class ImgText extends React.Component {

  constructor(props) {

    super(props);

    this.text = props.text;
    const textClass = 'card-title text-' + (props.textColor ? props.textColor : 'dark');
    switch (props.textSize) {
      case 'S': this.finalText = <h5 className={textClass}>{this.text}</h5>; break;
      case 'L': this.finalText = <h4 className={textClass}>{this.text}</h4>; break;
      case 'XL': this.finalText = <h2 className={textClass}>{this.text}</h2>; break;
      default: this.finalText = <h3 className={textClass}>{this.text}</h3>;
    }
    this.imgURL = props.imgURL;

  }

  render() {
    return <div className="card text-{this.textColor}">
             <img className="card-img img-white" src={this.imgURL}
                  style={{filter: 'blur(1px)'}}/>
             <div className="card-img-overlay">{this.finalText}</div>
           </div>;
  }

}

Please do not mind any bad style or other snafus, as this is my first React application ever ^^, but it should work to showcase the concept: it is a plain vanilla React Component that takes the props defined in the model and renders them as you would expect.

What about the model schema that needs to be defined for all this to work? It basically defines the possible structure of the page in question and the attributes we want to expose to our Morfeu users (complete source). This is the model for the ImgText React Component:

<xs:complexType name="imgText">
  <mf:desc>Static image with a title</mf:desc>
  <mf:cell-presentation type="IFRAME">http://localhost:3010/#/preview/ImgText?$_ATTRIBUTES</mf:cell-presentation>
  <mf:category categ="Content" />
  <mf:category attr="@text" categ="Content" />
  <mf:category attr="@textSize" categ="Content" />
  <mf:category attr="@textColor" categ="Content" />
  <mf:category attr="@imgURL" categ="Content" />
  <mf:default-value name="@textSize">M</mf:default-value>
  <mf:default-value name="@imgURL">/img/IMAGE GOES HERE.png</mf:default-value>
  <mf:default-value name="@textColor">black</mf:default-value>
  <xs:attribute name="text" type="textField" use="required"/>
  <xs:attribute name="textSize" type="sizeList" />
  <xs:attribute name="textColor" type="colorsList" />
  <xs:attribute name="imgURL" type="imgURLTextField" use="required"/>
</xs:complexType>

When editing ImgText elements in the UI, this is what is presented (not pretty but functional):

The different attributes are presented to the user, along with possible values and so forth. In this case we are editing an ImgText inside an A/B test and the UI tells us we have specified in the model to only have max two images in this context (with [0..2])

The model schema for this component is certainly far less complex than the actual component implementation. It is important to note that only the parts of the application we want to handle visually need to be modelled; the rest of the app, like nested components, app logic, message passing, state management, the JS code wrapping the JSX structure, other pages, etc., can be safely ignored.

It is also really important to note that by defining our model we are adding a lot of useful semantics: for instance, we define that the prop textSize of the ImgText component can have the enumerated values S, M, L, XL (in this particular model; another use-case could have totally different values). Which CSS class or style properties correspond to each value is up to the React Component implementation. There are a number of good reasons why we keep it this way:

  • Letting non-specialists manipulate low-level details like CSS classes or div structures without control or assistance is dangerous and can interfere with the webapp behaviour, frontend logic, etc., so in Morfeu those details are kept in the React Component code and not exposed in the UI. This approach avoids a common source of bugs and frustration.
  • It also lets frontend developers evolve the implementation, style, look and feel, etc. without having to modify the top-level JSX structure code or the model schema itself.
  • It establishes a clear, formally-defined contract between the frontend developer and the Morfeu users: this is what you can modify, set A/B experiments on, etc., and that is the high-level semantic level you should be operating on. Morfeu presents this interface and no implementation details or other aspects of the app, keeping a sane separation of concerns.

Today’s applications are complex; to cater to that complexity, the model schema can specify the following aspects of the page:

  • Which components go where, in what order and how many of them (min, max, open-ended)
  • Which components can be nested inside which other components and which cannot, and which child component elements are mandatory and which optional
  • Re-use of component definitions in different contexts (e.g. we can have at most two ImgText inside an Experiment component but an unlimited number inside a Row / Col structure)
  • Make certain components readonly, so they cannot be modified (e.g. all pages should have a header, footer, and so on, and those cannot be changed)
  • Specify allowed component props, and which are mandatory or optional
  • Specify basic types of the props (number, string, boolean, enumeration) and check valid values with a regexp (Morfeu will present those as sensible UX elements and interactively check the regexp upon editing); it should be easy to add things like color pickers, calendars or more advanced UX editors to the current UI
  • Default values of props, practical for mandatory properties
  • Specify prop logical categories which in Morfeu are presented in different property tabs for more readability (for instance, we can use an Advanced tab for technical props we do not want non-advanced users to modify).
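
As an illustration of such a type, the sizeList enumeration used by textSize could be defined with a plain XSD simpleType (a sketch; the exact definition lives in the model source):

```xml
<xs:simpleType name="sizeList">
  <xs:restriction base="xs:string">
    <xs:enumeration value="S"/>
    <xs:enumeration value="M"/>
    <xs:enumeration value="L"/>
    <xs:enumeration value="XL"/>
  </xs:restriction>
</xs:simpleType>
```

Morfeu can then present the enumeration as a dropdown instead of a free-text field, keeping invalid values out by construction.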

An extra important feature to consider when letting users manipulate complex components or properties: for simple components, a placeholder like a title tile or a simple PNG image with no detail can be used to signal that we have a Title component in place, but we also have the option of a realtime feedback loop for more complex situations, where we have more options or prop values can interact in non-trivial ways. A component like ImgText has quite a few options that may not be easy to work with, so Morfeu lets you present the outcome of a specific configuration in an iframe, showing users how the component will be rendered (either realistically or with a custom-made informative representation). The code to add to the React application to do this realtime preview presentation is trivial:

export function Preview(props) {

  const { component } = useParams();
  const query = useQuery();
  let params = {};
  query.forEach((v, k) => params[k] = v);
  params._preview = true;

  let preview;
  switch (component) {
    case 'ImgText':
      preview = new ImgText(params).render();
      break;
  }

  return preview;

}

function useQuery() {
  return new URLSearchParams(useLocation().search);
}

The above implementation just reuses the very ImgText React Component we defined in the first place and renders it as is; we are not adding any extra logic, and with that we can show the Morfeu user in realtime how the component will look and behave once applied to the page. This creates an instant feedback loop, so we need to be careful with the performance of ImgText, as we will get a request every time the Morfeu user makes a change to that component or one of its props. We can always revert to using static files if the component is too slow for a realtime loop, or provide a simplified view. As another option, we could inherit from any given component and decorate it with extra information to assist Morfeu users in specially complex configurations, provide helpful tips, etc.

All the data flows are summarised in the following diagram:

Most important data flows of using Morfeu to manage JSX applications, including the realtime feedback for complex components

Last but not least, it is quite common to find complex components that act in concert, must appear together or require specific combinations of prop configurations. For that very case Morfeu has the feature of snippets, which are pre-created sets of components with all values and children pre-configured; those can be dragged wherever relevant and then modified. This is much more convenient and practical than recreating everything from scratch every time. The snippets belong to a catalogue and are listed in a plain JSON file, and the snippets themselves are just fragments of JSX code, as you would expect.
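
A hypothetical catalogue entry could look like the following (field names are illustrative, not the actual Morfeu snippet format):

```json
{
  "name": "A/B image experiment",
  "desc": "An Experiment with two ImgText variants, ready to drag into a page",
  "uri": "snippets/ab-imgtext.jsx"
}
```

Here the uri would point at a plain JSX fragment file holding the pre-configured components.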

Examples of A/A and A/B testing pre-created experiments ready to be dragged into a page, drawn from plain snippets of JSX code

As a final note, when implementing approaches like this, we can get carried away and think we can do effective development without writing code, and there are solutions out there that attack that particular problem. This is not what Morfeu aims to solve; it focuses instead on product iteration using pre-built features that both product and development have agreed on and developed together.

So how did we fare? Let’s review our original checklist of objectives:

  • Independent of the A/B testing engine ✅ – As we are editing raw JSX code, we are not depending on any specific A/B testing engine or particular experimentation library.
  • Server-side, client-side and hybrid client-server A/B testing ✅ – the resulting React code can run as bundled Javascript completely in the client or be rendered on the server, etc., according to your preferences and configuration.
  • Compatible with modern engineering practices ✅ – Morfeu treats the original JSX code as source of truth, therefore any kind of gitops, CI/CD process can be done, from code reviews to continuous deployment, etc. Morfeu is completely independent of that. An example approach would be to have a staging environment where changes are pushed directly by changes in the JSX files (with its own branch) and when product is happy with the result a full CI/CD automated testing and continuous deployment cycle is done, with changes tested and pushed to production. You name it.
  • No client or server code injection of any kind ✅ – We are modifying JSX code that has no dependencies introduced by Morfeu, there are no libraries or extra microservices to run in the application critical path that could introduce bugs or latency. Morfeu and Snow Package are completely stateless services that can be run in containers and shut down or restarted completely independently of our React applications. The static model files and preview logic are not needed to run any code and can be stripped out for production, staging, etc. We could use a system like Central Dogma to help automate all aspects of such a pipeline.
  • Code in any way you want ✅ – The described approach still allows engineers to develop and code the actual application in any way or structure they want, as well as make changes to the pages managed by Morfeu using their editor and environment of choice. As Morfeu and Snow Package treat the JSX as the source of truth, any and all manual changes introduced by developers will be parsed and presented to Morfeu users, and vice-versa. It is also important to note that we only model the schema of the pages of the application we want product or business to modify using the UI; the rest of the application and React components do not need a model definition. TSX support should be trivial thanks to the use of Babel. For Vue.js it should be even simpler, given Vue uses an HTML-based template syntax that is easy to parse and generate. Angular’s templating language is also easy to parse, and it helps that templates can be stored separately from application logic. As we are not imposing any restrictions on the code itself, the choice of other development libraries like Redux, Jasmine, etc. is also completely up to the frontend developer team.
  • Allow visual operation by non-engineers ✅ – As shown, people without deep knowledge of the technical details can visually operate the system, adding and managing A/B experiments without the direct intervention of a developer. The way Morfeu is designed, only the previously-agreed properties exposed by the page component developers are there; technical details, subcomponents, technical props and things like specific CSS classes and all sorts of properties can be hidden away in any way we want, like being specified in configuration files or hardcoded in the code, and in general kept away from unwanted Morfeu user operation.

Try it yourself

A Docker Compose setup is available to make it really easy to start up:

export DOCKERIP=<your docker ip here>

# clone the repos, all from the same folder, checking out the same version
git clone
cd morfeu && git fetch && git -c advice.detachedHead=false checkout v0.8.5 && cd ..
git clone
cd snow-package && git fetch && git -c advice.detachedHead=false checkout 0.8.1 && cd ..

# clone the React demo site
git clone
cd snowpackage-site && git fetch && git -c advice.detachedHead=false checkout 0.8.1

# start the build and the services (this will take a while), remember DOCKERIP needs to have a value 
docker-compose build --build-arg HOSTNAME=$DOCKERIP && docker-compose up

# on another window, jump into the demo site to make live changes as a developer 
# or see how Morfeu changes the JSX files
docker exec -it snowpackage-site /bin/bash

# morfeu should be at http://DOCKERIP:8980/?config=%2Fproxy%2Fsite%2Fsnowpackage%2Fconfig.json
# demo site should be at http://DOCKERIP:3010

# React demo site code is mounted in a volume that will persist between restarts, 
# you need to manually delete it to start from scratch
docker volume ls | grep site

This is what this snippet does step by step:

  • We first specify the IP where our Docker host exposes services (if using Docker Machine, the command docker-machine ip <name> will help), if running on a Linux machine with Docker installed, localhost should be OK
  • We next clone the repos and checkout a stable version of each one
  • Perform the Docker Compose build and start the different containers (this will take a while), specifying the hostname or IP where services will be exposed as a docker compose build argument, it will also create a persistent volume with the React JSX code in it
  • Launch a bash shell process to poke around the React application
  • We can see Morfeu in action in http://DOCKERIP:8980/?config=%2Fproxy%2Fsite%2Fsnowpackage%2Fconfig.json and the React site in http://DOCKERIP:3010
  • The config parameter lets the frontend know where to pick up the configuration for this specific scenario
  • A Docker volume called ‘site’ is created that contains the React JSX files, so they will persist across container restarts, showcasing the stateless nature of Morfeu and Snow Package; the docker volume command will help manage this
  • The different Dockerfiles are in each repo and can be used to start the microservices in a different configuration.

Next steps

The basic concept demonstrated by the Morfeu and Snow Package combo is complete: we can edit JSX-based React applications and manage experiments on them without a developer and without interfering with the code, adding extra latency or any extraneous dependencies. Complex web applications in React or other frameworks can be edited visually to handle configuration, iterate on them, make small changes or perform all sorts of A/B test experiments. The described setup is quite flexible, so extra features like support for TSX could be added, or even new types of UX elements for props like date or color pickers, etc. Morfeu and Snow Package are available under the Apache 2 OSS license.

Thanks for reading up to this ^^, issues and PRs welcome.

Easier web app testing by mapping entities to UI objects

Automated, browser-based testing is a key element of web application development, benefiting both simple and complex applications. Writing effective tests for browser-based apps can be a complex, tedious and often repetitive task. In this post, I will be discussing a general approach to write meaningful, loosely-coupled UI tests for web applications by going beyond the Page Object Design Pattern into a more fine-grained approach I call ‘Logical entities to UI object mapping’. I will show code written in Java 8 leveraging the Selenium and Selenide frameworks to give examples of the method described.

Layers of web app testing responsibility

Continue reading “Easier web app testing by mapping entities to UI objects”

provashell – testing shell scripts the easy way

In this post I will describe the provashell project, an Apache 2.0 licensed bash and shell Unit Testing library using annotations. The library is a self-contained script file with no dependencies that can be embedded or used easily in your project. Extra care has been taken to use POSIX shell features and the code has been tested in the popular shells bash, dash and zsh (latest versions at the time of writing this article). I will add some detail on what drove me to do it, what it does, how it works and some examples. Continue reading “provashell – testing shell scripts the easy way”

Easy deployment of Zookeeper and Storm in RPM packages

In this post we will package Storm and its dependencies to achieve seamless deployment of a realtime big data processing system. Following up on the first Meteorit project article, we will be adding the minimal supervisor system mon, Zookeeper, zeromq and finally Storm itself. Packaging will enable fast deployment of the whole processing system using RPM packages.
Continue reading “Easy deployment of Zookeeper and Storm in RPM packages”

Crayon: decent syntax highlighting for source code in WordPress

I had been looking for a good solution to render source code halfway decently for a while. Until now I had Syntax Highlighter Evolved, based on Alex Gorbatchev’s Syntax Highlighter, but I was never quite convinced by it.
Browsing other blogs I saw the results of the Crayon plugin for WordPress, developed by Aram Kocharyan. The source code can be found on its GitHub page, ready for contributions, under the GPLv2 license.
Continue reading “Crayon: decent syntax highlighting for source code in WordPress”

Handling real-time events with Nginx, Lua and Redis

In this post, we will explore options to handle lots of HTTP events asynchronously using Nginx and Lua for the frontend and Redis as the backend. Although there are plenty of options out there to deal with this, we will check these technologies out in a bit more detail, leaving lots of options open for customisation and tuning while using tried and true building blocks.
Continue reading “Handling real-time events with Nginx, Lua and Redis”

Building an FTP server Maven plugin from scratch

In this post we design and code a new Maven plugin that fires up an FTP server to aid in integration testing of FTP-bound processes, thus demonstrating the flexibility and power of Maven plugins.
Continue reading “Building an FTP server Maven plugin from scratch”

Nokia, Google and Microsoft, without any options in the mobile arena

In this post I argue that Google, Microsoft and Nokia have something in common: they had little choice in some of their latest strategic moves in the mobile arena, maybe their biggest moves in a long, long time. Unusual for such giants, eh?

For the sake of drama and structure I will go from the most obvious to the slightly less evident and finish with the most subtle of the three.

Microsoft, Google, Nokia

Firstly, there is Microsoft.

It is common knowledge that the old boys from Redmond are in deep trouble in the mobile arena. After being a major player in the smartphone market for quite a long time, Microsoft has gradually become a so-so player, met with some skepticism by users and critics alike.

Even though Windows Phone 7 received decent reviews it feels like too little and definitely too late.

The Windows Phone 7 rollout still feels too corporate to me and still part of a lukewarm strategy.

Yeah, but Microsoft has conquered, or at least made a good dent in, other markets before, even if just by brute force, yes? (read: Xbox).

So why would a juggernaut like Microsoft be constrained in its long-term strategy? Shouldn’t be.

Well, it’s the systems, stupid.
If anything, Apple has taught the industry one thing with the iPod, the iPhone and lately the iPad: consumers love SYSTEMS.

Consumers don’t like loosely coupled devices. Consumers don’t like replacing or upgrading components. Consumers don’t like lengthy boot sequences. Consumers don’t like invasive firmware installations. Consumers don’t like the software provider plus OEM combination. Maybe they did in the early ’90s. We ain’t in the ’90s anymore. XBOX, though successful, has been targeted at hardcore gamers rather than casual ones up until the Kinect.

On top of that, Microsoft has learned something on its own: it doesn’t know how to build consumer SYSTEMS at all.

Therefore, to get itself out of the hole, Microsoft considered its options.

Acquisition of a successful company just to go on and destroy it in the process was ruled out, lest it turn into another disaster.

Lesson learned and any acquisitions ruled out, Microsoft then took a look at the OEM market… Err, in the case of MS, that meant HTC, builder of 80% of the cell phones bearing Microsoft’s mobile OS (at least prior to the 7th version). Out of 50 partners!

Well, a simple Google search for Android and HTC is quite revealing. HTC didn’t jump ship, not exactly. But it started to be really busy building handsets with that OS.

Well, considering Android had been in the market for a while, was a mature product, consumers liked it, it had a plethora of apps and -most importantly- was FREE, what do you expect? Considering that MS has been known to thoroughly screw its partners from time to time, who can blame HTC?

Surely, HTC has hedged its bets and is only too happy to license from MS and adopt a wait-and-see attitude to see if it sticks, exploiting slow corporate IT policies that still mandate Windows-Everywhere(™). The good old leverage from the fading age of Windows was gone though, and Microsoft would feel like second best here.

Choice taken away: traditional OEMs were out.

As icing on the cake, it would seem that Microsoft had little choice but to license the ARM architecture, surely watched closely by its mobile hardware OEMs, who have perceived it as yet another sword hanging over their little necks. A sword that would materialise as Microsoft mobile hardware. Uh-oh. Again, Microsoft has had to take some eggs from the software-only basket and put them in the systems one, though only as a last resort (see the Kin fiasco).

No pure-OEM model then. No acquisitions. Definitely not Android. SONY? You gotta be kidding. No Microsoft hardware (yet). Samsung maybe? Nay.


It had to be Nokia then. With Nokia’s smartphone market share in decline, Microsoft couldn’t go anywhere else.

Which leads nicely to the next big boy, Nokia itself.

Nokia is a giant. Massive with customers, it has created remarkable, legendary mobile designs. However, its fortunes, or at least its market share lead, reversed and showed a worrying trend.

The new CEO put it very well himself, the company is in dire straits indeed, at least in the long term.

Nokia also considered its options.

Nokia understands systems and consumers very well, probably much better than Microsoft does or ever has. With that in mind, Elop and senior managers understood that Symbian was a thing of the past. MeeGo wasn’t ready and perhaps would never be good enough.

But, wait? A company the size of Nokia has lots of engineering talent! Yes, but also lots of middle managers and lacklustre leadership, if one is to believe Nokia’s top brass.

Therefore, culling the company of useless meddlers and taking the time to isolate a dedicated, motivated and driven team to bring MeeGo up to scratch (much like what Palm managed to do with WebOS) would take too long. If I remember correctly, it took Palm the better part of two years or more to get WebOS near 1.0 status, and that was a heroic feat. Time, though, is not something Nokia had available. Option out.

Acquisition? Nokia doesn’t strike me as an acquisition-crazy company.

Firstly, a nimble startup would understandably have a hard time integrating into a company of 12,000+ employees.

Secondly, while the Trolltech acquisition had some technical merit, it was completed in 2008 and Nokia hasn’t really turned Qt into the big promised paradise it was intended to be. And they have had plenty of time to do it.

And finally, who to buy? RIM? Are you kidding? Making the company bigger and creating a huge corporate culture clash to boot?!? And getting RIM’s loony CEO(s) onboard?

Nay, purchasing its way out of trouble, choice taken away.

Which led Nokia to consider Android.

One can imagine Nokia seeing Android as the forbidden fruit of sin. Juicy, tempting, free for the taking… and ultimately damning.

Oh, and there is the legal swamp. Before taking a bite out of the fruit, Nokia must have had its legions of lawyers take a good look at it.

If that wasn’t enough to scare the Nokia strategists, what would be?

Well, I would argue that the real reason is freedom. Freedom to differentiate. As the Symbian guys put it “[…] surrender too much of the value and differentiation ability, most obviously in services and advertising, to Google.” Exactly. Nokia would play on the same field as all the other OEMs. Bummer: from leading systems developer down to mere squabbling OEM.

Surely Nokia had been following what happened to Motorola and the Skyhook guys.

Apparently, both Motorola and ‘Company X’ (believed to be Samsung) have been prevented by Google from shipping handsets bearing Skyhook’s location technology. Remember Navteq? Ever heard of Nokia Maps?

Jump onto the Android bandwagon and kiss goodbye to all that.

So, with Android out, what was left as an option? A company in desperate need?


Which leads to the idea of Google having courted Nokia precisely for many of the reasons Microsoft ‘bought’ them in the first place.

Finally, Google is the third big one to have run out of room on some of its most strategic moves.

Consider the official gospel on why Google created Android and released it to the wild. Well, most of it, anyway.

According to Vic Gundotra from Google:

“If Google didn’t act, it faced a draconian future where one man, one phone, one carrier were our choice. That’s a future we don’t want.”

Sounds chivalrous and following the ‘Don’t be evil’ mantra. Google released Android to save us all from that bleak future.

Well, it’s not true.

Consider Google’s business model. It revolves around advertising and to do well in advertising you need eyeballs, lots of them. Furthermore, to do very well you need targeted eyeballs, that is, to show the right adverts to the right people, to maximise chances of these people hitting the ads.

Google’s business model elevator pitch: person would like to get a service or product on the Web. Cool, she fires up Google on her desktop or laptop and does a search for the terms she thinks will help her find that product or service. Maybe clicks on the ads, maybe finds what she is looking for, maybe not. Bonus points: does more searches through Google, refining the terms (translation: getting more targeted ads), finally finds what she is looking for. This happens millions of times. Google makes tons of money.

Whereas if that same person would like to get a service or product on her iPhone, the story is completely different. Cool, she fires up the App Store app, does a search or browses the categories. Looks at the top charts, reads the reviews and how many stars the candidate solutions have, ponders whether to pay or get a free app, etc. She chooses an app and installs it. Google gets zero money. Zero, zip, nada, nothing.

There are a few things that need to be considered:

  • Not all services are available through the App Store and probably never will be 100%, but there are zillions of apps (read: services) waiting to be downloaded.
  • People trust Google to deliver good search results, but comparing one result with another can basically only be done by the rank itself or by clicking on the link and spending some time on the site. The App Store offers a simplified interface, allegedly easier for the average consumer (ranks, reviews, stars, etc.).
  • People trust the App Store as well; it has never allowed malware on people’s devices.
  • Apple is very careful and vocal about not having any influence on the ranking, like, erm, Google.
  • To get iPhone apps, geeks will do research on Google and the Web, looking for extended reviews, comparisons and specialised sites. Consumers just use the App Store.

Google knew that smartphones, and later tablets, would be the devices the mass-market consumer would use to access the Internet. iOS was looking good and demolishing the competition at the time. It seems to be doing the same now.

The App Store is a ‘Google search’ replacement.

I would say that Apple wasn’t expecting it to become so successful. I would add that Apple didn’t set out to build a Google search replacement when the App Store was built. Apple engineers and product managers only wanted to create a better experience for consumers than googling vague terms, shuffling through specialised review sites, getting malware off dubious websites, etc. Google is collateral damage.

Google knew it had to act, very fast and with very decisive steps. Apple’s competition was in shambles and couldn’t get their stuff together for the life of them. Microsoft Windows Mobile 6.x was crap, with any real replacement years into the future; RIM was -is- a shadow of its former self and Palm was on its last legs. Google couldn’t wait on them.

Android was born and released for free. It had to be. Google didn’t have any other choice. What they really imagined was a mobile future sans Android, where Apple would rule. Imagine, had Android not existed, what Apple’s smartphone market share would be today, with Google unable to keep making money on it. That is the future Google didn’t want.

A corporate strategy is about having a vision and making it happen, and the three giants have lost the initiative.

Templating the OSGi way with Freemarker

After some well-deserved rest the OSGi components on the server series is back with a vengeance and a somewhat independent post. For some background please check the other posts in the series.

A common feature of Web applications is the need to output structured content, be it XML, XHTML markup, JSON or many others. A number of technologies are used to do that, but few seem to be that dynamic, usually only reloading templates when they change or loading them from URLs. Surely we can leverage OSGi to make things more interesting…

Therefore we should be following the OSGi dynamic philosophy as much as possible, exploiting the features made available by the framework such as dynamic discovery, services, bundle headers and the lot.

In the case of our cache app we are violating quite a few basic design principles by having the output format embedded in the Java code. So we should use a template separate from the code and, if possible, reuse some existing well-known templating engine.

Given these basic design principles let’s get started.

Firstly, we need a robust and flexible templating engine. We select the mature Freemarker engine which is reasonably speedy and has the power and flexibility we want. Make sure you check out the license, of course.

We could stop at packaging the library JAR in a bundle so it can be used by any other bundle, and that is what we do to be able to use Freemarker in OSGi at all. That however doesn’t exploit many of the nicer OSGi capabilities, so we will create another bundle called ‘com.calidos.dani.osgi.freemarker-loader’.

What we want is to allow bundles to carry templates inside them and have our freemarker-loader templating bundle discover them automagically. This is the same technique that the Spring dynamic modules use and you can find more info here. The mechanism is described in this diagram:

OSGi freemarker templating diagram

That is easy enough with a BundleTracker and a BundleTrackerCustomizer implementation. The BundleTracker class tracks bundles being added to the environment like this:

tracker = new BundleTracker(context, Bundle.RESOLVED, templateTracker);

With this snippet the tracker instance will look for bundles in the RESOLVED state (which lets us track fragments). The ‘templateTracker’ object is an instance of BundleTrackerCustomizer and will receive callbacks whenever bundles are added to the environment.

For instance, when a bundle is added we check for a special header in the bundle which tells us what is the relative path of available templates in the bundle being resolved:

public Object addingBundle(Bundle bundle, BundleEvent event) {
	// we look for the header and act accordingly
	String templatesLocation = (String) bundle.getHeaders().get(TEMPLATE_HEADER);
	if (templatesLocation != null) {
		Enumeration<URL> bundleTemplates = bundle.findEntries(templatesLocation, "*.ftl", true);
		HashSet<URL> templatesFromAddedBundle = new HashSet<URL>();
		while (bundleTemplates != null && bundleTemplates.hasMoreElements()) {
			URL templateURL = bundleTemplates.nextElement();
			addTemplate(bundle, templateURL, templatesLocation);
			templatesFromAddedBundle.add(templateURL);
		}
		templatesOfEachBundle.put(bundle.getBundleId(), templatesFromAddedBundle);
	}
	return null;
}	// addingBundle

An interesting method being used here is ‘findEntries’ which loads all the entries in the provided templates folder and lets us add them to our holding structure. We also take care to implement the methods to remove the templates and update them accordingly whenever bundles are updated or unloaded from the environment.

Having TEMPLATE_HEADER with a value of ‘Freemarker-Templates’ means that bundles with a header such as Freemarker-Templates: /templates will have any templates within that folder picked up automatically (please note that the ‘/templates’ bit is not added to the template URLs!).
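To make the mapping concrete, here is a minimal, self-contained sketch of that key derivation; the method name and the exact stripping logic are assumptions for illustration, not the bundle’s actual code:

```java
// Sketch: deriving the name a template is registered under from a bundle
// entry path, assuming the configured folder prefix is simply stripped.
public class TemplateKeyDemo {

	static String templateKey(String templatesLocation, String entryPath) {
		// entryPath as returned by Bundle.findEntries, e.g. "/templates/cache/status.ftl"
		String key = entryPath.startsWith(templatesLocation)
				? entryPath.substring(templatesLocation.length())
				: entryPath;
		// drop the leading slash so templates are addressed as "cache/status.ftl"
		return key.startsWith("/") ? key.substring(1) : key;
	}

	public static void main(String[] args) {
		System.out.println(templateKey("/templates", "/templates/cache/status.ftl"));
	}
}
```

So a bundle declaring Freemarker-Templates: /templates with an entry /templates/cache/status.ftl would expose the template simply as cache/status.ftl.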

The next thing we need to do is make the loaded templates available to the environment. To do that we expose a Freemarker Configuration object as an OSGi service. That Configuration instance is the main object Freemarker uses to load and manage templates, and it has an interesting hook to override its template loading mechanism, which we use to serve the templates found in our OSGi environment.

freemarkerConfig.setTemplateLoader(new URLTemplateLoader() {
	protected URL getURL(String url) {
		Stack<TemplateEntry> templateStack = templates.get(url);
		if (templateStack != null) {
			TemplateEntry templateStackTop = templateStack.peek();
			if (templateStackTop != null) {
				return templateStackTop.getTemplateURL();
			}
		}
		return null;
	}
});


The service Configuration object is set with a template loader inner class that uses our template holding objects to retrieve templates stored in our OSGi context. Cool.

This also allows us to effectively disable the template refreshing cycles that Freemarker does by default (allegedly making it slightly more efficient). Now we only need to refresh a bundle containing the templates to get the new version. This can be modified by using the appropriate methods on the Configuration service of course. (There is another method explained later).

An interesting feature we can add to exploit the dynamic nature of OSGi is to make templates available in a stack. This means different bundles can dynamically overwrite templates by the same name. Moreover, once a template is removed the previous version becomes available. This can be used to make temporary changes to templates to add or remove diagnostics information, test new templates temporarily, etc.

We do that using a Stack of TemplateEntry objects, TemplateEntry being a helper class to store template entries.
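The stacking behaviour can be sketched outside OSGi with plain collections; this is an illustrative model only, using a Deque of URL strings instead of the bundle’s actual TemplateEntry class:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the template stack: the last bundle to register a
// template under a given name wins, and removing it restores the previous one.
public class TemplateStackDemo {

	static final Map<String, Deque<String>> templates = new HashMap<>();

	static void addTemplate(String name, String url) {
		templates.computeIfAbsent(name, k -> new ArrayDeque<>()).push(url);
	}

	static void removeTemplate(String name, String url) {
		Deque<String> stack = templates.get(name);
		if (stack != null) {
			stack.remove(url);
		}
	}

	static String resolve(String name) {
		Deque<String> stack = templates.get(name);
		return (stack == null || stack.isEmpty()) ? null : stack.peek();
	}

	public static void main(String[] args) {
		addTemplate("cache/status.ftl", "bundle://base/templates/cache/status.ftl");
		addTemplate("cache/status.ftl", "bundle://debug/templates/cache/status.ftl");
		System.out.println(resolve("cache/status.ftl"));	// the debug override wins
		removeTemplate("cache/status.ftl", "bundle://debug/templates/cache/status.ftl");
		System.out.println(resolve("cache/status.ftl"));	// the base version resurfaces
	}
}
```

The same push/peek/remove shape is what lets a temporary diagnostics template shadow the original and then disappear cleanly when its bundle is unloaded.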

This is all very nice, but we have a problem when multiple versions of the same bundle hold copies of the same template: they will all stack under the same name and we have no way to access a particular version of a template. To solve this problem we store each template in three different stacks, under three different URLs:

  • ‘path/file.ftl’
  • ‘bundle://bundlename/path/file.ftl’
  • ‘bundle://bundlename:version/path/file.ftl’

In this manner we can use the more generic URL in most cases but still can access specific versions when needed. It is important to think about the dynamic nature of OSGi as well as the possibility of several versions of the same bundle coexisting peacefully in the same environment.
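As a sketch of that registration scheme (the helper name, bundle name and version are made up for illustration), each discovered template would be pushed onto three stacks keyed as follows:

```java
// Sketch: the three keys a single template is registered under, from most
// generic to most specific, mirroring the URL list above (hypothetical helper).
public class TemplateKeysDemo {

	static String[] keysFor(String bundleName, String version, String path) {
		return new String[] {
			path,
			"bundle://" + bundleName + "/" + path,
			"bundle://" + bundleName + ":" + version + "/" + path,
		};
	}

	public static void main(String[] args) {
		for (String key : keysFor("com.acme.templates", "1.2.0", "cache/status.ftl")) {
			System.out.println(key);
		}
	}
}
```

Callers that don’t care about versions just use the plain path; callers that must pin a coexisting version use the most specific key.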

From the perspective of any bundle using the service, in the simplest case it only needs to look up a service named ‘freemarker.template.Configuration’. For convenience, the property ‘dynamicConfiguration’ is set to ‘true’ to coexist peacefully with other Configuration services (maybe coming from an official Freemarker bundle). For instance, if we know for sure our dynamic Configuration service is the only one present we can do:

context.getServiceReference(Configuration.class.getName());
That will give us the highest ranking Configuration service. If there are several such services we can use a call like this to get the one that has the dynamic OSGi loader:

context.getServiceReferences(Configuration.class.getName(), "(dynamicConfiguration=true)");

There is one last feature which lets bundle users feed an already configured template configuration object to the bundle by publishing a Configuration service with property ‘preparedConfiguration’ set to ‘true’. This will get picked up by the bundle and its template loader put in sequence with the dynamic OSGi template loader. This means that any original Configuration settings are maintained (For further information on service filtering based on properties please check the BundleContext javadoc.).

Best thing to do is to go and try it by downloading the bundles here. Source is also available.

Components on the server (6): adding Integration Testing

In this installment of the server-side OSGi series, we add integration testing capabilities to our project. Integration testing goes beyond plain unit testing and checks the interactions between real components. This is in contrast with unit testing, which generally uses mockups to represent components outside the one being tested. Please take a look at previous installments, as usual.

Integration testing is mainly used in a pre-production environment, with a valid build that has passed all unit tests. It can even be used in production just after a deployment is made, taking care not to include destructive checks or massive load tests in the integration test code. YMMV.

To achieve integration testing we need to check the various OSGi components deployed interact in the way that is expected of them. Therefore we need to test the components in a group and not in isolation. To do that in the OSGi world means we need to have access to the OSGi context from within the tests to access services, call them and check their responses, etc.

To allow for this kind of integration testing within the OSGi environment, we make a slight modification to the excellent test.extender we have already patched in the previous installment.

Basically, test.extender seeks out any JUnit test classes within a fragment bundle, creates an instance using the empty constructor and then fires up the tests. This is triggered either by default when the fragment is loaded or by using the ‘test’ command in the console. For further information please see the previous post about this subject.
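That discovery-and-run cycle can be approximated outside OSGi with plain reflection; this is a simplified model only (the real extender uses the bundle’s classloader and JUnit proper, and the class names here are made up):

```java
import java.lang.reflect.Method;

// Simplified model of what the extender does with each discovered test class:
// instantiate it and invoke every public no-argument method named test*.
public class MiniTestRunner {

	public static class SampleTest {
		public void testAddition() {
			if (1 + 1 != 2) {
				throw new AssertionError("math is broken");
			}
		}
	}

	static int runTests(Object testInstance) throws Exception {
		int passed = 0;
		for (Method m : testInstance.getClass().getMethods()) {
			if (m.getName().startsWith("test") && m.getParameterCount() == 0) {
				m.invoke(testInstance);	// throws if the test fails
				System.out.println("Method : [ " + m.getName() + " ] PASS");
				passed++;
			}
		}
		return passed;
	}

	public static void main(String[] args) throws Exception {
		runTests(new SampleTest());
	}
}
```

The integration variant below follows exactly this shape, except the test class is built with a constructor that receives the OSGi context.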

For our integration testing, we add an extra command to test.extender:

public Object _integrationTest(CommandInterpreter intp) {
	String nextArgument = intp.nextArgument();	// bundle id of the fragment to test
	integrationTest(nextArgument);
	return null;
}
And we refactor the TestExtender to add the integrationTest method which reuses some of the code to instantiate test cases using a constructor that accepts the OSGi context as a parameter.

Constructor[] constructors = clazz.getConstructors();
boolean foundConstructor = false;
for (int i = 0; i < constructors.length && !foundConstructor; i++) {
	Constructor constructor = constructors[i];
	Class[] types = constructor.getParameterTypes();
	if (types.length == 1 && types[0].isInstance(context)) {
		foundConstructor = true;
		EClassUtils.testClass(inspectClass, constructor.newInstance(context));
	}
}	// for

The OSGi context is passed onto the constructor and then the test class is run. It is obviously up to the test class to use the context appropriately for its integration testing.

In our cache project setup, we can do some useful integration testing on the cache.controller component, basically checking if the interaction with the provider components is behaving as we expect it. The integration testing is also added to a fragment that can be deployed optionally, of course.

We start by creating the fragment and adding a testing class like this:

Adding test class

Next, we add the constructor that accepts an OSGi context, which is very simple:

public CacheIntegrationTest(BundleContext ctx) {
	this.context = ctx;
}

In the setup and teardown methods we get and unget the cache service to perform the testing:

public void setUp() throws Exception {
	serviceReference = context.getServiceReference(Cache.class.getName());
	controller = (CacheControllerCore) context.getService(serviceReference);
}

public void tearDown() throws Exception {
	context.ungetService(serviceReference);
	controller = null;
}
In this case we get the controller cache service and store it in an instance used to perform the tests. This is quite simple and fulfills our intended purpose but we still have the flexibility to make more complex integration testing if needed.

Next we create as many test cases as needed:

public void testGet() {
	try {
		double v = Math.random();
		String k = "/k" + v;
		controller.set(k, v);
		assertEquals(v, controller.get(k));
	} catch (CacheProviderException e) {
		fail(e.getMessage());
	}
}

It should be noted that while the code looks like regular testing code, it is actually using real services from the OSGi environment as opposed to mockups. This means we are testing the real integration between components as well as the individual controller component code. The disadvantage is that if there is an error in the controller we might mistake the problem for an issue with the services used. In conclusion, having integration code doesn’t negate the need for unit tests.

Once we load the fragment onto the environment, we first need to obtain the bundle id of the integration fragment and then launch the integration testing in this manner:

osgi> integrate 125
Bundle : [125] : com.calidos.dani.osgi.cache.controller.integration
CLASS : [com.calidos.dani.osgi.cache.controller.CacheIntegrationTest]
Method : [ testInit ] PASS
Method : [ testInitInt ] PASS
Method : [ testSize ] PASS
14:21:43,077 WARN CacheControllerCore Couldn't clear some of the provider caches as operation is unsupported
14:21:43,077 WARN CacheControllerCore Couldn't clear some of the provider caches as operation is unsupported
Method : [ testClear ] PASS
Method : [ testSet ] PASS
Method : [ testGet ] PASS
Method : [ testGetStatus ] PASS

The results tell us that all operations are OK but we need to bear in mind that the clear operation is not supported in some backend caches. If this is what is expected by the operator then all is fine.

We take advantage of the new integration testing functionality to make some extensive changes to logging and exception handling of the controller code. By running the integration tests we make sure all seems to work fine (even though we still need some proper unit testing of the controller). Modifications are made quite quickly thanks to the integration tests.

To recap, we’ve added integration testing support to the existing ‘test.extender’ bundle and created integration testing code for the cache controller component. This has allowed us to make code changes quickly with less risk of mistakes.

Here you can find a patch for the test extender project as well as the patched testing bundle already compiled. Enjoy!