No scripts in this post, just a quick review of what was topical at iqnite 2012, the software quality and testing conference.
Conference attendance was dominated by suits – mostly test managers rather than testers, and mostly from big corporates and government bodies.
As with any conference, there were a number of topics covered.
The burning new issue that kept coming up was: how to test for multiple hardware platforms, specifically how to test for mobile platforms.
The issue was raised by, amongst others, Jeff Findlay (Borland), Michael Palotas (eBay), Chris Dart-Kelly (BankWest) and Kelvin Ross (K Ross).
The problem is that there are a lot of different devices: iPhones, iPads, ‘droids and other smart phones. And they are all different. Even on a single platform there are significant differences between versions, most obviously different screen resolutions and capacities, but also other differences under the hood, so to speak.
In short, there are too many devices and too many operating systems and versions and too many combinations of the above to test all of them.
Of course, supporting too many platforms isn’t a new issue, but for most of us, it’s an issue that largely went away (at the user end) when IBM standardised the PC and (at the back end) when all but a few Unix vendors vanished.
Those with long memories will be able to anticipate some of the ways people are tackling the problem:
1. restrict the platforms supported:
- Most people are just targeting Apple and Android.
- Android is more difficult because it is less standardised, but it is too common for most people to ignore.
2. restrict the platforms you test:
- risk based testing – focus testing on the most commonly used platforms/versions
- corner test – focus testing on the extremes – eg the oldest/newest supported operating system, and the largest/smallest screens
- automated testing (and there are many new tools and emulators to help)
- don’t test – according to Jeff Findlay, 61% of mobile apps are not formally tested
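The scale of the problem, and why corner testing helps, can be sketched in a few lines of Python. The platform, version and resolution lists below are invented purely for illustration – the point is only how quickly the combinations multiply, and how much a simple "test the extremes" rule prunes them:

```python
from itertools import product

# Hypothetical device matrix -- illustrative values, not from the talks.
platforms = ["iPhone", "iPad", "Android phone", "Android tablet"]
os_versions = ["2.3", "3.0", "4.0"]                  # oldest ... newest supported
resolutions = [(320, 480), (768, 1024), (1280, 800)]  # smallest ... largest

# Exhaustive testing: every platform/version/resolution combination.
all_configs = list(product(platforms, os_versions, resolutions))
print(len(all_configs))  # 4 * 3 * 3 = 36 configurations

# "Corner testing": keep only the extremes -- oldest/newest OS,
# smallest/largest screen -- on every platform.
corner_configs = [
    (p, v, r)
    for p, v, r in all_configs
    if v in (os_versions[0], os_versions[-1])
    and r in (resolutions[0], resolutions[-1])
]
print(len(corner_configs))  # 4 * 2 * 2 = 16 configurations
```

Even with this toy matrix the exhaustive set is more than double the corner set, and real matrices have far more than three OS versions and three resolutions.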
There is also a new solution: web-based testing. Instead of having your own physical devices, or emulators, or even testers, there are plenty of websites prepared to rent any or all of these things to you – ‘just’ upload your application and test it, and potentially on a much wider range of real hardware than would ever be practical for most of us.
I say ‘just’ of course, because life is never that simple. And this was the other theme that kept recurring; it’s all very well creating a large number of automated tests, but someone needs to maintain these tests, and debug them when they fail. And done properly, it’s expensive, and, politically, no-one wants it to be counted against their own budget.
So some problems still aren’t solved. As delegate after delegate complained, manual tests take too long to execute; automated tests cost too much to maintain. Although Kelvin Ross reported that waterfall projects fail more often than Agile projects, it was still common to hear that “Agile is frAgile”, and this is a position I have some sympathy for. But that’s a topic for another post.