The conflict between social networks and the media

In this article I will talk about the current silent confrontation between the media and social networks. The media are not disappearing, nor blowing up, but suffering a terrible erosion in a conflict that is changing the communication industry little by little, tweet by tweet, post by post.

Traditional media dragged into the battle with the big social networks
Continue reading “The conflict between social networks and the media”

Building an FTP server Maven plugin from scratch

In this post we design and code a new Maven plugin that fires up an FTP server to aid in integration testing of FTP-bound processes, thus demonstrating the flexibility and power of Maven plugins.
Continue reading “Building an FTP server Maven plugin from scratch”
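
As a taste of what the post builds, here is a minimal sketch of such a Mojo. It assumes Apache MINA FtpServer as the embedded server; the goal, parameter and class names are illustrative and the plugin developed in the post may well differ:

import org.apache.ftpserver.FtpServer;
import org.apache.ftpserver.FtpServerFactory;
import org.apache.ftpserver.listener.ListenerFactory;
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * Starts an embedded FTP server for integration tests (illustrative sketch).
 *
 * @goal start
 * @phase pre-integration-test
 */
public class StartFtpServerMojo extends AbstractMojo {

	/** Port the embedded server listens on. @parameter default-value="2121" */
	private int port;

	public void execute() throws MojoExecutionException {
		try {
			// assemble a bare-bones server with a single listener on the given port
			FtpServerFactory serverFactory = new FtpServerFactory();
			ListenerFactory listenerFactory = new ListenerFactory();
			listenerFactory.setPort(port);
			serverFactory.addListener("default", listenerFactory.createListener());
			FtpServer server = serverFactory.createServer();
			server.start();	// runs until a matching 'stop' goal shuts it down
			getLog().info("Embedded FTP server listening on port " + port);
		} catch (Exception e) {
			throw new MojoExecutionException("Could not start FTP server", e);
		}
	}
}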

100 push-ups 2.0

Fitness programs are often complicated. They are made up of a multitude of fairly difficult exercises, with a risk of injury if they are not performed correctly, and on top of that spread over several days, which makes them harder to follow and execute. They definitely present an intimidating barrier to entry for adventurers in pursuit of good physical shape.

100 push-ups cover

That’s why I love Steve Speirs’ 100 push-ups in 7 weeks program.
The basic idea of the program is to reach a hundred military-style push-ups if you follow it to the letter. At first it seems an ‘impossible’ challenge for common mortals (yours truly included).

The basic premise is that a push-up is an exercise along the lines of a ‘full body movement‘, where the whole body moves and the maximum number of muscle groups and joints are involved. Exercises of this kind are believed to be very beneficial and to deliver better results than the typical weight machines or exercises ‘localised’ to a single body part. Running is one of the most classic examples.

A push-up involves the pectorals, the triceps (the great forgotten ones), deltoids, serratus anterior (back muscles located under the upper part of the arms), the abdominal muscles, glutes and biceps. The latter are not worked much by traditional push-ups and some variant is needed if you want to strengthen them. The difference in how much the biceps and triceps are worked matters because the triceps make up 60% of the muscle mass of the upper arm. Bear in mind that biceps work tends to be overdone for aesthetic reasons, which can cause all kinds of imbalances.

To follow the program, you first do a test of how many push-ups you can do in a row, without stopping. Depending on the final count -I did 20-, you start with one specific program or another. In my case I get the ‘Intermediate 2’ program. Even though one more push-up would have put me in ‘Advanced 1’, better not to rush! There is even a program for those who cannot do a single push-up, or only four or five.

Once the test is done and the program selected, you just follow the steps each day. The main structure revolves around the week, or at least around incremental, iterative seven-day cycles: four rest days alternated with three exercise days.

Each session consists of a warm-up, a few sets of push-ups with a one-minute rest between each, and final stretches. It can easily be completed in fifteen or twenty minutes, warm-up and stretches included.

Today is a rest day and tomorrow it’s ‘warm up, 11, 14, 11, 11, 16+, stretch (Intermediate 2, week 3)’, which adds up to at least 63 push-ups. Not bad at all.

Any technique can be used to keep track: a piece of paper, Steve Speirs’ book, GTD, a diary… I use Remember The Milk, which is a great GTD service.

During the final week of each program you can find yourself doing impressive series of more than 22 or 24 push-ups, and in the case of Intermediate 2 there is a grand finale of more than 60!

Once the program is completed, the idea is to rest for two or three days and then try to do 100 push-ups in a row. Sounds impossible, doesn’t it?

For more details about the program there are several options. First there is the One Hundred Push ups site, where you can find the program, various resources, alternative programs, merchandise and plenty of testimonials, including one from the odd Linux geek, hehe.

There is also the book (the exact program I follow), written by the program’s creator. The more modern among you can try the iPhone app.

Let’s see if you give it a go!

Nokia, Google and Microsoft, without any options in the mobile arena

In this post I argue that Google, Microsoft and Nokia have something in common: they had little choice in some of their latest strategic moves in the mobile arena, maybe their biggest moves in a long, long time. Unusual for such giants, eh?

For the sake of drama and structure I will go from the most obvious to the slightly less evident and finish with the most subtle of the three.

Microsoft, Google, Nokia

Firstly, there is Microsoft.

It is common knowledge that the old boys from Redmond have been in deep trouble in the mobile arena for a while (and still are). After being a major player in the smartphone space for quite a long time, Microsoft has gradually become a so-so player, met with skepticism by users and critics alike.

Even though Windows Phone 7 received decent reviews it feels like too little and definitely too late.

The Windows Phone 7 rollout still feels too corporate to me and still part of a lukewarm strategy.

Yeah, but Microsoft has conquered, or at least made a good dent in, other markets before, even if just by brute force, right? (read: XBOX).

So why would a juggernaut like Microsoft be constrained in its long-term strategy? It shouldn’t be.

Well, it’s the systems, stupid.
If anything, Apple has taught the industry one thing with the iPod, the iPhone and lately the iPad: consumers love SYSTEMS.

Consumers don’t like loosely coupled devices. Consumers don’t like replacing or upgrading components. Consumers don’t like lengthy boot sequences. Consumers don’t like invasive firmware installations. Consumers don’t like the software-provider-plus-OEM combination. Maybe they did in the early 90’s. We ain’t in the 90’s anymore. The XBOX, though successful, was targeted at hardcore gamers rather than casual ones up until the Kinect.

On top of that, Microsoft has learned something on its own: it doesn’t know how to build consumer SYSTEMS at all.

Therefore, to get itself out of the hole, Microsoft considered its options.

Acquisition of a successful company just to go on and destroy it in the process was ruled out, lest it turn into another disaster.

With that lesson learned and acquisitions ruled out, Microsoft then took a look at the OEM market… Err, in the case of MS, that meant HTC, builder of 80% of the cell phones bearing Microsoft’s mobile OS (at least prior to the 7th version). Out of 50 partners!

Well, a simple Google search for Android and HTC is quite revealing. HTC didn’t jump ship, not exactly. But it got really busy building handsets with that OS.

Well, considering Android had been in the market for a while -a mature product that consumers liked, with a plethora of apps and, most importantly, FREE- what do you expect? And considering that MS has been known to thoroughly screw its partners from time to time, who can blame HTC?

Surely, HTC has hedged its bets and is only too happy to license from MS, adopt a wait-and-see attitude in case Windows Phone sticks, and exploit slow corporate IT policies that still mandate Windows-Everywhere(™). The good old leverage from the fading age of Windows was gone, though, and Microsoft would feel like second best here.

Choice taken away: traditional OEMs were out.

As icing on the cake, it would seem that Microsoft had little choice but to license the ARM architecture, surely watched closely by its mobile hardware OEMs, who have perceived it as yet another sword hanging over their little necks. A sword that would materialise as Microsoft mobile hardware. Uh-oh. Again, Microsoft has had to take some eggs from the software-only basket and put them in the systems one, though only as a last resort (see the Kin fiasco).

No pure-OEM model then. No acquisitions. Definitely not Android. SONY? You gotta be kidding. No Microsoft hardware (yet). Samsung maybe? Nay.

Um…

It had to be Nokia then. Its smartphone market share in decline, Nokia was the one place Microsoft could still go.

Which leads nicely to the next big boy, Nokia itself.

Nokia is a giant. It has a massive customer base and has created legendary mobile designs. However, its fortunes -or at least its market share lead- were reversed and showing a worrying trend.

The new CEO put it very well himself: the company is in dire straits indeed, at least in the long term.

Nokia also considered its options.

Nokia understands systems and consumers very well, probably much better than Microsoft does or ever has. With that in mind, Elop and the senior managers understood that Symbian is a thing of the past. Meego wasn’t ready and perhaps would never be good enough.

But, wait: a company the size of Nokia has lots of engineering talent! Yes, but also lots of middle managers and lacklustre leadership, if one is to believe Nokia’s top brass.

Therefore, culling the company of useless meddlers and taking the time to isolate a dedicated, motivated and driven team to bring Meego up to scratch (much like what Palm managed to do with its WebOS) would take too long. If I remember correctly, it took Palm the better part of two years or more to get WebOS near 1.0 status, and that was a heroic feat. Time, though, is not something Nokia had available. Option out.

Acquisition? Nokia doesn’t strike me as an acquisition-crazy company.

Firstly, a nimble startup would understandably have a hard time integrating into a company of 12,000+ employees.

Secondly, while the Trolltech acquisition had some technical merit, it was completed in 2008 and Nokia hasn’t really turned Qt into the big promised paradise it was intended to be. And they have had plenty of time to do it.

And finally, who to buy? RIM? Are you kidding? Making the company bigger and having a huge corporate culture clash to boot?!? And getting RIM’s loony CEO(s) onboard?

Nay. Purchasing its way out of trouble: choice taken away.

Which led Nokia to consider Android.

One can imagine Nokia seeing Android as the forbidden fruit of sin. Juicy, tempting, free for the taking… and ultimately damning.

Oh, and there is the legal swamp. Before taking a bite out of the fruit, Nokia must have had its legions of lawyers take a good look at it.

If that wasn’t enough to scare the Nokia strategists, what would be?

Well, I would argue that the real reason is freedom. Freedom to differentiate. As the Symbian guys put it: “[…] surrender too much of the value and differentiation ability, most obviously in services and advertising, to Google.” Exactly. Nokia would play on the same field as all the other OEMs. Bummer: from leading systems developer down to mere squabbling OEM.

Surely Nokia had been following what happened to Motorola and the Skyhook guys.

Apparently, both Motorola and ‘Company X’ (believed to be Samsung) have been prevented by Google from shipping handsets bearing Skyhook’s location technology. Remember Navteq? Ever heard of Nokia Maps?

Jump onto the Android bandwagon and kiss goodbye to all that.

So, with Android out, what was left as an option? A company in desperate need?

Microsoft.

Which leads to the idea of Google having courted Nokia precisely for many of the reasons Microsoft ‘bought’ them in the first place.

Finally, Google is the third big one to have run out of options on some of its most strategic moves.

Consider the official gospel on why Google created Android and released it to the wild. Well, most of it, anyway.

According to Vic Gundotra from Google:

“If Google didn’t act, it faced a draconian future where one man, one phone, one carrier were our choice. That’s a future we don’t want.”

It sounds chivalrous and in line with the ‘Don’t be evil’ mantra: Google released Android to save us all from that bleak future.

Well, it’s not true.

Consider Google’s business model. It revolves around advertising, and to do well in advertising you need eyeballs, lots of them. Furthermore, to do very well you need targeted eyeballs; that is, you need to show the right adverts to the right people to maximise the chances of them hitting the ads.

Google’s business-model elevator pitch: a person would like to get a service or product on the Web. Cool, she fires up Google on her desktop or laptop and searches for the terms she thinks will help her find that product or service. Maybe she clicks on the ads; maybe she finds what she is looking for, maybe not. Bonus points: she does more searches through Google, refining the terms (translation: getting more targeted ads), and finally finds what she is looking for. This happens millions of times. Google makes tons of money.

Whereas if that same person would like to get a service or product on her iPhone, the story is completely different. She fires up the App Store app and does a search or browses the categories. She looks at the top charts, reads the reviews and how many stars the candidate solutions have, ponders whether to pay or get a free app, etc. She chooses an app and installs it. Google gets zero money. Zero, zip, nada, nothing.

There are a few things that need to be considered:

  • Not all services are available through the App Store and probably never will be 100% of them, but there are zillions of apps (read: services) waiting to be downloaded.
  • People trust Google to deliver good search results, but comparing one result with another can basically only be done through the ranking itself or by clicking on a link and spending some time on the site. The App Store offers a simplified interface that is allegedly easier for the average consumer (ranks, reviews, stars, etc.).
  • People trust the App Store as well; it has never allowed malware onto people’s devices.
  • Apple is very careful and vocal about not having any influence on the ranking, like, erm, Google.
  • To get iPhone apps, geeks will do research on Google and the Web. They will look for extended reviews, comparisons and specialised sites. Consumers just use the App Store.

Google knew that smartphones, and later tablets, would be the devices the mass-market consumer would use to access the Internet. iOS was looking good and demolishing the competition at the time. It seems to be doing the same now.

The App Store is a ‘Google search’ replacement.

I would say that Apple wasn’t expecting it to become so successful. I would add that Apple didn’t set out to build a Google-search replacement when the App Store was built. Apple engineers and product managers only wanted to create a better experience for consumers than googling vague terms, shuffling through specialised review sites, getting malware off dubious websites, etc. Google is collateral damage.

Google knew it had to act, very fast and with very decisive steps. Apple’s competition was in shambles and couldn’t get their act together for the life of them. Microsoft Windows Mobile 6.x was crap, with any real successor years into the future; RIM was -is- a shadow of its former self; and Palm was on its last legs. Google couldn’t wait for them.

Android was born and released for free. It had to be. Google didn’t have any other choice. What they really feared was a mobile future sans Android, where Apple would rule. Imagine what Apple’s smartphone market share would be today if Android didn’t exist, with Google unable to keep making money on it. That is the future Google didn’t want.

A corporate strategy is about having a vision and making it happen, and the three giants have lost the initiative.

St Wolfgang, Austria

A special mention from this holiday goes to the Austrian corner of St. Wolfgang im Salzkammergut, near St. Gilgen.

A great location, totally recommended beyond the more typical visits such as Vienna, the Danube area, Salzburg, etc.

Cool St Wolfgang photo

The easiest way to get there is via St. Gilgen, and it is worth taking the ferry that carries you across the Wolfgangsee, stopping at the various little villages. While you sip a beer on board you listen to curiosities about the area, first in German and then in very correct English. Well, at least until they get tired of it and switch to German only… that way you can imagine what they are saying and the experience becomes even more stimulating. The ferry is very comfortable and has tables where you can recover your strength and enjoy the trip.

In fact, this is quite common in the less central -though no less touristy- Austrian locations, where you come across plenty of menus exclusively in German and nothing at all in English. A bit of “adventure” never hurts.

Apparently, St. Wolfgang is famous for the ‘White Horse’ hotel, run by the same family since 1712 and the setting of the operetta ‘White Horse Inn’ by Ralph Benatzky. They say it is a “world-famous” piece… we’ll see when it arrives at the Liceu.

The village itself is quite small, although it has a few hidden gems. For example, there is a rather large doll collection put together by quite a geeky gentleman, where the immense quantity and variety of Barbies stands out. The Islamic, Oriental and Indian Barbies are particularly noteworthy.

Don’t miss the typical village pub with the typical disco next door either. At the pub you can drink some very good cocktails (the “Swimming Pool” is recommended) and watch the locals consume far-from-negligible quantities of alcohol.

As for accommodation, the small hotel ‘Hupfmuehle‘ is a must, next to a stream complete with a waterfall that flows down into the lake. There is a significant climb, but when you get to the top you will probably find the patriarch of the family that runs it waiting for you… That kind of place. Many of the rooms look directly onto the waterfall, and the sound is so relaxing that you fall asleep in no time. While you wait for dinner time you can sit on the terrace having a vermouth.

The little Hupfmuehle hotel

The menu is quite curious in that it is very short: soup or salad to start, and for the main course there is only fish, river trout or a local variety. Once you’ve chosen, they fish it and prepare it on the spot. As you would expect, it is delicious.

The next day you need to recover from the hangover generated at the village pub and drop by the Joseph’s restaurant. It is not exactly cheap, but neither is it disproportionate for the service and the quality. The menu is exclusively in German and written on a mobile blackboard that the waiter brings over, which adds a lot of charm. There is a tasting menu, or you can pick the dishes you want. The veal slices with vegetables are among the best things I have ever tasted. The selection of wines, champagne, cavas and liqueurs is very extensive, and the place also works as a wine shop, so if you liked a wine you can buy a bottle, perhaps to finish it at the hotel in more intimacy?

To sum up: flee the usual places and explore these wonderful little corners that Europe offers us… We won’t regret it!

Templating the OSGi way with Freemarker

After some well-deserved rest the OSGi components on the server series is back with a vengeance and a somewhat independent post. For some background please check the other posts in the series.

A common feature of Web applications is the need to output structured content, be it XML, XHTML markup, JSON or many others. A number of technologies are used to do that, but few seem to be all that dynamic, usually only reloading templates when they change or loading them from URLs. Surely we can leverage OSGi to make things more interesting…

Therefore we should be following the OSGi dynamic philosophy as much as possible, exploiting the features made available by the framework such as dynamic discovery, services, bundle headers and the lot.

In the case of our cache app we are violating quite a few basic design principles by having the output format embedded in the Java code. So we should use a template kept separate from the code and, if possible, reuse some existing well-known templating engine.

Given these basic design principles let’s get started.

Firstly, we need a robust and flexible templating engine. We select the mature Freemarker engine which is reasonably speedy and has the power and flexibility we want. Make sure you check out the license, of course.

We could stop at putting the library JAR in a bundle and packaging it so it can be used by any other bundle, and that is what we do to be able to use Freemarker in OSGi. That alone, however, doesn’t exploit many of the nicer OSGi capabilities, so we will create another bundle called ‘com.calidos.dani.osgi.freemarker-loader’.

What we want is to allow bundles to carry templates inside them and have our freemarker-loader templating bundle discover them automagically. This is the same technique that the Spring dynamic modules use, and you can find more info here. The mechanism is described in this diagram:

OSGi freemarker templating diagram

That is easy enough with a BundleTracker and a BundleTrackerCustomizer implementation. The BundleTracker class tracks bundles being added to the environment like this:

// track bundles in (at least) the RESOLVED state, delegating callbacks to templateTracker
tracker = new BundleTracker(context, Bundle.RESOLVED, templateTracker);
tracker.open();

With this snippet the tracker instance will look for bundles in the RESOLVED state (which lets us track fragments). The ‘templateTracker’ object is an instance of BundleTrackerCustomizer and will receive callbacks whenever bundles are added to the environment.

For instance, when a bundle is added we check for a special header, which tells us the relative path of the templates available in the bundle being resolved:

public Object addingBundle(Bundle bundle, BundleEvent event) {

	// we look for the template header and act accordingly
	String templatesLocation = (String) bundle.getHeaders().get(TEMPLATE_HEADER);
	if (templatesLocation != null) {

		// recursively gather all '*.ftl' entries under the configured folder
		Enumeration<URL> bundleTemplates = bundle.findEntries(templatesLocation, "*.ftl", true);
		Set<URL> templatesFromAddedBundle = new HashSet<URL>();
		while (bundleTemplates != null && bundleTemplates.hasMoreElements()) {

			URL templateURL = bundleTemplates.nextElement();
			addTemplate(bundle, templateURL, templatesLocation);
			templatesFromAddedBundle.add(templateURL);

		}

		templatesOfEachBundle.put(bundle.getBundleId(), templatesFromAddedBundle);

		// returning a non-null object tells the tracker to keep tracking this
		// bundle so we get the modified/removed callbacks later on
		return bundle;

	}
	return null;	// no templates, nothing to track

}	// addingBundle

An interesting method being used here is ‘findEntries’, which loads all the entries in the provided templates folder and lets us add them to our holding structure. We also take care to implement the callbacks that remove or update templates whenever bundles are updated or unloaded from the environment.

Given that TEMPLATE_HEADER has the value ‘Freemarker-Templates’, bundles with a header such as ‘Freemarker-Templates: /templates’ will have any templates within that folder picked up automatically (please note that the ‘/templates’ bit is not added to the template URLs!).
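
As an illustration, the manifest of a hypothetical template-carrying bundle would look something like this (the symbolic name is made up):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.templates
Bundle-Version: 1.0.0
Freemarker-Templates: /templates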

The next thing we need to do is make the loaded templates available to the environment. To do that we expose a Freemarker Configuration object as an OSGi service. That Configuration instance is the main object Freemarker uses to load and process templates, and it has an interesting mechanism to override its template loading which we use to serve the templates gathered from our OSGi environment.

freemarkerConfig.setTemplateLoader(new URLTemplateLoader() {

	@Override
	protected URL getURL(String url) {
		// resolve the template name against our stacks of OSGi-provided
		// templates; the topmost entry (the latest one added) wins
		Stack<TemplateEntry> templateStack = templates.get(url);
		if (templateStack != null && !templateStack.isEmpty()) {
			return templateStack.peek().getTemplateURL();
		}
		return null;	// unknown template, let Freemarker report it
	}

});

The service Configuration object is set up with a template-loader inner class that uses our template-holding structures to retrieve templates stored in our OSGi context. Cool.

This also allows us to effectively disable the template-refreshing cycles that Freemarker performs by default (allegedly making it slightly more efficient). Now we only need to refresh a bundle containing the templates to get the new version. This behaviour can be changed using the appropriate methods on the Configuration service, of course. (There is another method, explained later.)

An interesting feature we can add to exploit the dynamic nature of OSGi is to make templates available in a stack. This means different bundles can dynamically overwrite templates of the same name. Moreover, once a template is removed, the previous version becomes available again. This can be used to make temporary changes to templates to add or remove diagnostics information, to test new templates temporarily, etc.

We do that using a Stack of TemplateEntry objects, TemplateEntry being a helper class to store template entries.

This is all very nice, but we have a problem when multiple versions of the same bundle hold copies of the same template: they will all stack and we have no way to access one particular version of a template. To solve this problem we store each template in three different stacks, under three different URLs:

  • ‘path/file.ftl’
  • ‘bundle://bundlename/path/file.ftl’
  • ‘bundle://bundlename:version/path/file.ftl’

In this manner we can use the more generic URL in most cases but still can access specific versions when needed. It is important to think about the dynamic nature of OSGi as well as the possibility of several versions of the same bundle coexisting peacefully in the same environment.
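
To make the stacking idea concrete, here is a sketch of what the ‘addTemplate’ helper seen earlier might do; the actual implementation ships with the downloadable source, so the field and method names below (including the TemplateEntry constructor) are illustrative:

private void addTemplate(Bundle bundle, URL templateURL, String templatesLocation) {
	// strip the configured folder prefix so '/templates/path/file.ftl' is
	// registered under the generic name 'path/file.ftl'
	String path = templateURL.getPath().substring(templatesLocation.length());
	if (path.startsWith("/")) {
		path = path.substring(1);
	}
	TemplateEntry entry = new TemplateEntry(bundle, templateURL);
	String name = bundle.getSymbolicName();
	String version = (String) bundle.getHeaders().get("Bundle-Version");

	// push the same entry onto the three stacks it can be reached by
	pushTemplate(path, entry);
	pushTemplate("bundle://" + name + "/" + path, entry);
	pushTemplate("bundle://" + name + ":" + version + "/" + path, entry);
}

private void pushTemplate(String key, TemplateEntry entry) {
	// create the stack on first use, then stack the new entry on top
	Stack<TemplateEntry> stack = templates.get(key);
	if (stack == null) {
		stack = new Stack<TemplateEntry>();
		templates.put(key, stack);
	}
	stack.push(entry);
}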

From the perspective of any bundle using the service, in the simplest case it only needs to look for a service named ‘freemarker.template.Configuration’. For convenience, the property ‘dynamicConfiguration’ is set to ‘true’ so the service can coexist peacefully with other Configuration services (maybe coming from an official Freemarker bundle). For instance, if we know for sure our dynamic Configuration service is the only one present we can do:

context.getServiceReference(Configuration.class.getName());

That will give us the highest-ranking Configuration service. If there are several such services, we can use a call like this to get the one that has the dynamic OSGi loader:

context.getServiceReferences(Configuration.class.getName(), "(dynamicConfiguration=true)");

There is one last feature, which lets bundle users feed an already configured template Configuration object to the bundle by publishing a Configuration service with the property ‘preparedConfiguration’ set to ‘true’. This will get picked up by the bundle and its template loader put in sequence with the dynamic OSGi template loader, meaning any original Configuration settings are maintained. (For further information on service filtering based on properties, please check the BundleContext javadoc.)
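
Putting it all together, a minimal consumer sketch could look like this (error handling elided; the model contents are just an example):

// fetch the dynamic Configuration service using the property filter
ServiceReference[] refs = context.getServiceReferences(
		Configuration.class.getName(), "(dynamicConfiguration=true)");
Configuration config = (Configuration) context.getService(refs[0]);

// resolve a template by its generic URL and render it with a data model
Template template = config.getTemplate("path/file.ftl");
Map<String, Object> model = new HashMap<String, Object>();
model.put("title", "Hello from OSGi");
Writer out = new StringWriter();
template.process(model, out);
System.out.println(out.toString());

// release the service when we are done with it
context.ungetService(refs[0]);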

Best thing to do is to go and try it by downloading the bundles here. Source is also available.

Components on the server (6): adding Integration Testing

In this installment of the server-side OSGi series, we add integration testing capabilities to our project. Integration testing goes beyond plain unit testing and checks the interactions between real components. This is in contrast with unit testing, which generally uses mockups to represent components outside the one being tested. Please take a look at previous installments, as usual.

Integration testing is mainly used in a pre-production environment, with a valid build that has passed all unit tests. It can even be used in production just after a deployment is made, taking care not to have destructive checks or massive load tests in the integration test code. YMMV.

To achieve integration testing we need to check that the various deployed OSGi components interact in the way that is expected of them. Therefore we need to test the components as a group and not in isolation. In the OSGi world, that means we need access to the OSGi context from within the tests in order to access services, call them, check their responses, etc.

To allow for this kind of integration testing within the OSGi environment, we make a slight modification to the excellent test.extender we have already patched in the previous installment.

Basically, the stock test.extender seeks out any JUnit test classes within the fragment bundle, creates an instance using the empty constructor and then fires up the tests. This is triggered either by default when the fragment is loaded or by issuing the ‘test’ command in the console. For further information please see the previous post about this subject.

For our integration testing, we add an extra command to test.extender:

public Object _integrationTest(CommandInterpreter intp) {
	// read the bundle id given to the console command and run the
	// integration tests contained in that bundle
	String nextArgument = intp.nextArgument();
	testExtender.integrationTest(Long.parseLong(nextArgument));
	return null;
}

And we refactor the TestExtender to add an integrationTest method, which reuses some of the code to instantiate test cases through a constructor that accepts the OSGi context as a parameter.

// look for a public constructor taking the OSGi context as its single parameter
Constructor<?>[] constructors = clazz.getConstructors();
boolean foundConstructor = false;
for (int i = 0; i < constructors.length && !foundConstructor; i++) {
	Constructor<?> constructor = constructors[i];
	Class<?>[] types = constructor.getParameterTypes();
	if (types.length == 1 && types[0].isInstance(context)) {
		foundConstructor = true;
		EClassUtils.testClass(inspectClass, constructor.newInstance(context));
	}
} // for

The OSGi context is passed to the constructor and then the test class is run. It is obviously up to the test class to use the context appropriately for its integration testing.

In our cache project setup we can do some useful integration testing on the cache.controller component, basically checking whether its interaction with the provider components behaves as we expect. The integration testing is also added to a fragment that can be deployed optionally, of course.

We start by creating the fragment and adding a testing class like this:

Adding test class
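
The fragment attaches itself to the controller bundle through its manifest. Here is a sketch of the relevant MANIFEST.MF entries, where the Fragment-Host value is an assumption based on the bundle names used in this series:

Bundle-SymbolicName: com.calidos.dani.osgi.cache.controller.integration
Fragment-Host: com.calidos.dani.osgi.cache.controller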

Next, we add the constructor that accepts an OSGi context, which is very simple:

public CacheIntegrationTest(BundleContext ctx) {
	super();
	// keep the context around so the tests can look up real services
	this.context = ctx;
}

In the setUp and tearDown methods we get and unget the cache service used to perform the testing:


public void setUp() throws Exception {
	// fetch the real cache service from the OSGi environment
	serviceReference = context.getServiceReference(Cache.class.getName());
	controller = (CacheControllerCore) context.getService(serviceReference);
}

public void tearDown() throws Exception {
	// release the service reference once the test is done
	context.ungetService(serviceReference);
	controller = null;
}

In this case we get the controller cache service and store it in an instance field used to perform the tests. This is quite simple and fulfils our intended purpose, but we still have the flexibility to write more complex integration tests if needed.

Next we create as many test cases as needed:

public void testGet() {
	try {
		controller.init();
		// store a random value under a fresh key and expect to read it back
		double v = Math.random();
		String k = "/k" + v;
		controller.set(k, v);
		assertEquals(v, controller.get(k));
	} catch (CacheProviderException e) {
		e.printStackTrace();
		fail(e.getMessage());
	}
}

It should be noted that while this looks like regular testing code, it is actually using real services from the OSGi environment as opposed to mockups. This means we are testing the real integration between components as well as the individual controller component code. The disadvantage is that if there is an error in the controller we might mistake it for an issue with the services being used. In conclusion, having integration code doesn’t negate the need for unit tests.

Once we load the fragment onto the environment, we first need to obtain the bundle id of the integration fragment and then launch the integration testing in this manner:


osgi> integrate 125
Bundle : [125] : com.calidos.dani.osgi.cache.controller.integration
_
CLASS : [com.calidos.dani.osgi.cache.controller.CacheIntegrationTest]
___________________________________________________________________________
Method : [ testInit ] PASS
Method : [ testInitInt ] PASS
Method : [ testSize ] PASS
14:21:43,077 WARN CacheControllerCore Couldn't clear some of the provider caches as operation is unsupported
14:21:43,077 WARN CacheControllerCore Couldn't clear some of the provider caches as operation is unsupported
Method : [ testClear ] PASS
Method : [ testSet ] PASS
Method : [ testGet ] PASS
Method : [ testGetStatus ] PASS
___________________________________________________________________________

The results tell us that all operations are OK, but we need to bear in mind that the clear operation is not supported by some of the backend caches. If this is what the operator expects, then all is fine.

We take advantage of the new integration-testing functionality to make some extensive changes to the logging and exception handling of the controller code. By running the integration tests we make sure everything still seems to work fine (even though we still need some proper unit testing of the controller). The modifications are made quite quickly thanks to the integration tests.

To recap, we’ve added integration testing support to the existing ‘test.extender’ bundle and created integration testing code for the cache controller component. This has allowed us to make code changes quickly with less risk of mistakes.

Here you can find a patch for the test extender project as well as the patched testing bundle already compiled. Enjoy!

The manifesto – Unlink your feeds

Thanks to Roger for linking to the ‘Unlink your feeds‘ manifesto; I subscribe totally to the initiative.

The idea is to be very careful when linking microblogging posts and updates across different services; it is not always the optimal thing to do. As Roger himself puts it:

A cry to heaven for people to stop reusing the same message across all their social networks. It creates unnecessary noise, often loses context (hashtags on Facebook, etc…) and is annoying. It’s fine to tell me “I had a great lunch today”, but I don’t need to get it in triplicate.

Flash on iPhone OS: two extra reality checks

Unless you live in a remote and faraway place you will have heard of the media impact of Apple’s decision to ban Adobe’s Flash-to-iPhone solution and the various discussions that have ensued. I won’t bother with providing more links. I love John Gruber’s take, and Louis Gerbarg’s as well.

Both J.G. and L.G. have brought up many valid points: Adobe dragged their heels on Flash for mobile for a looooong time. There have been countless debates and blog posts.

I’ll bring two further arguments, one logical and one historical, the latter being one that I believe hasn’t been brought up yet.

Logical argument: the fallacy of Flash being cross-platform

Why should we choose to lock ourselves into Adobe’s proprietary API instead of Apple’s proprietary API? Why are Adobe’s API, specs and runtime superior to any others? Many would say the main reason (the only one?) is that it’s cross-platform.

Wikipedia says that cross-platform means:

“an attribute conferred to computer software or computing methods and concepts that are implemented and inter-operate on multiple computer platforms”.

Oh, okay, so multiple means Windows, Mac and Linux desktops. Hello?!? Anyone there?!? This is 2010, and the 90’s called because they want their meaning of “platform” back. Nowadays, multiple means desktop and mobile. Therefore:

  • Flash Lite is a joke. Where is the multiplicity of mobile devices supporting proper Flash 10.1 today?
  • Where is the pervasive Android support today? HTC Hero buyers are screwed: a phone bought less than a year ago won’t be supported.
  • Where is the support for Windows Mobile devices? Not there until WinMo7. You mean it will be supported on an OS that isn’t even out yet?

Well, you could say that the Open Screen Project and the releasing of the FLV, SWF and related specs are true multiplatformness… So far so good, but where is the real market traction? Where are the tried-and-true implementations? Adobe has released this technology, but until it gains traction it’s no more than a glorified press release. Apple can play this game too, with tech such as WebKit, which is used in a zillion places including Adobe’s own AIR platform. Press releases and freed technologies are irrelevant until adopted. A good initiative which I applaud, but still not widely used.

No, Flash is not cross-platform anymore. What is the market share of the mobile devices capable of running Flash 10.1? Nowhere near big enough for Adobe to be flexing its muscles and demanding anything. It seems Adobe is using Flash devs and aficionados as cannon fodder or, as Gerbarg more aptly puts it, “Adobe used its userbase and their livelihood as a bargaining chip”. Adobe dragged its feet for years in the mobile arena and is now paying for its mistakes.

Historical argument: Macromedia (now Adobe) has screwed its own developers like this before

In the nineties Macromedia already had a great product. It was cross-platform (as per the 90’s definition), had powerful scripting capabilities, a powerful extension architecture, a great browser plug-in runtime that was also cross-platform, video capabilities, awesome rapid development tools, stellar graphics integration, a built-in debugger, great performance, etc. It was called Director and it was really cool.

Then in 1996 Macromedia bought FutureSplash, which would later be renamed Flash. It had far fewer capabilities than Director at the time (no real scripting at the beginning, etc.) and would remain technically inferior for some time, with no debugging and many other features missing for a long while. However, it had three distinct advantages. Strike one: having fewer features and a newer codebase meant it could have a more lightweight runtime than Director’s. Strike two: it had support for vector graphics, which desktop CPUs at the time were just becoming capable of displaying and animating adequately. Director came from a less CPU-intensive bitmap background, faster but consuming more space than vector definitions and looking less sleek in many cases.

One “advantage” remained. So what did Macromedia do? Take advantage of the FutureSplash technology acquisition and the established Director developer base? It could have easily added the vector drawing technology to Director (it supported many types of media already). Refactoring the code and runtime would have been no trivial feat, but doable. Any Director developer would have been OK with a new restriction allowing only vector-graphics resources in a new Director web runtime, and would have welcomed and embraced such a change.

Strike three: Macromedia realised that with Director’s maturity and feature-completeness no long-winded upgrade path was in sight. Just adding vector graphics and a streamlined runtime would do for one or two Director upgrades, no more. So they thoroughly screwed the existing loyal developer base and released a sleek new 1.0 Flash product. Then they spent years releasing upgrades that added features that had already been present in Director for some time. They created another cash cow, a cow that would ride the wave of the Web explosion of the .com era. Director out. And no, I’m not buying any of the “official” reasons for its languishing and Flash’s emergence. It was all about money.

I am not crying over Director’s demise -or rather, its being put into life-support mode. It was a good platform and many (myself included) made good money using it. That doesn’t change the fact that Adobe screwed many of its own developers and went for the next cash cow.

I am not buying Flash developers and supporters taking offense and claiming the moral high ground; the company (now Adobe) you support has done a much worse thing to its own developers. Why should Apple even care about developers of other platforms? Just because you learned a framework and are scared of it fading away doesn’t mean anything… Some of us have been through this many times before and moved on…

Final word: put your code where your mouth is

Okay, time to recap, folks. If Flash is such a great platform -it’s actually not bad, but not that good either- go ahead and develop those killer apps on Android or WinMo7 whenever it comes out. With killer apps available on competing platforms and making a huge difference, Apple will simply change the clause and let you in. They’re not stupid.

Now stop crying and start coding. Myself, I will try to put this out of my mind.