Dec 4, 2011

Convergence of applications - New challenges


When Apple released its iPhone 4S, one of the key additions to its feature list was 'Siri', a virtual personal assistant. Siri is an application that responds to voice commands and performs a variety of actions based on those instructions. For example, if you ask Siri to suggest good restaurants, it uses your current location, searches the internet to find restaurants, and can even go to the restaurant's website and book a seat.

In this simple example, we have an application that uses voice recognition, collects GPS data from the phone, frames search queries based on that data and the instruction, performs an internet search, and completes a small transaction of booking a seat. More complex transactions will involve far more complex interactions and multiple applications, with the data from one being used to perform a transaction on another.

It is this bringing together of multiple applications to perform a function, rather than building all of the capabilities into one application, that I see as a new area for testers to explore. Convergence of applications and devices is not a new concept; we have been discussing and debating it in various forms. But with the possible commercial success of Siri (which we will come to know soon), there will surely be a boom of applications that use one form or another of convergence technology, and they are sure to be a challenge for testers.

It will not only make tests more complex by extending the boundaries of the system under test to the applications it converges with, but also make scenarios far more specific to the context in which the test needs to be developed. It will also make the required 'domain knowledge' far more extensive and make the tests more intuitive than ticking off a set of test conditions.
I test. Therefore I am.

Nov 30, 2011

Tablets and new challenges to testing

Probably the dot-net boom was one of the biggest triggers for expanding the focus of testing. Functional testing, which had been a less glamorous job, underwent a complete overhaul as a result of the explosion in the online marketplace. Functionality, performance, security and compatibility all became specialised areas of testing and quality assurance.

The next big wave, in my opinion, is the boom in tablet devices that the Apple iPad has triggered. With sales of around 8 million units a quarter, the iPad user base alone is growing at a very fast pace. Combined with Android and Windows based tablet users, the audience is too large for anyone to ignore, and providers cannot avoid offering customised versions of their content for it.

A large portion of such content is delivered as 'apps' so that it leverages the capabilities of the tablet devices and provides a better user experience. This is leading to a new set of requirements for testing: verifying the functionality of the application on the tablet devices, and verifying conformance to the user interface standards of the tablet operating systems. For example, Apple has its own user interface standards for all apps that are shared through its App Store. These devices also provide APIs for leveraging certain device capabilities, or provide platforms for developing content.

With the wide variation in hardware configurations, operating systems, user interface standards and native device capabilities, testers have a whole new set of challenges to face. To add to the complexity, many phones require a different app than their tablet counterparts, even though they run the same operating system and come from the same manufacturer (not true for Apple). Yet to be seen is the effect of the new iPhone-based payment method that Apple is introducing.

With many apps trying to do similar things and to use the same device capabilities, I suspect that interoperability will also become a prime concern in the coming days.

I test. Therefore I am.

Part-3 Browser Compatibility: What is it?

This is the last of a three-part discussion on Browser Compatibility Testing.

Any strategy for testing the compatibility of an application across browser-platform combinations has to be built with the understanding that the test is for checking compatibility, not for establishing that the application and its behaviour are exactly the same in all browsers and on all platforms (hardware, software and their combinations).

The first consideration should be the way the application is designed to support the different browsers and platforms. Applications are designed with a target user base in mind, using information on user profiles, including operating systems and browser versions, and on industry trends that will affect browser and platform usage. Based on this information, applications are designed to fully support a set of 'most preferred' browsers and to offer degraded support for another set of browsers. There are numerous browsers in the market, but only the browser-platform combinations that are either fully supported or 'supported at a downgraded level' are tested as part of the compatibility tests.

So, the objective is to test and ensure that the application behaves correctly and consistently in the set of preferred browsers and platforms, and that any degradation matches the way it is designed to degrade. As an example, if the application is designed to use a different style sheet on a lower version of IE, the test will verify that the user interface conforms to the styles specified for that version of IE.
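As a rough illustration of that kind of check, here is a minimal sketch using the Selenium Python bindings. The application URL, the stylesheet file names and the browser-to-stylesheet mapping are all hypothetical placeholders; the real values would come from the application's design specification.

```python
from selenium.webdriver.common.by import By

# Hypothetical design rule: which stylesheet each browser is supposed to receive.
EXPECTED_STYLESHEET = {
    "ie8": "legacy.css",      # degraded support: simplified styles
    "ie9": "modern.css",
    "firefox": "modern.css",
}

def loaded_stylesheets(driver):
    """Collect the href of every stylesheet <link> the page actually loaded."""
    links = driver.find_elements(By.CSS_SELECTOR, "link[rel='stylesheet']")
    return [link.get_attribute("href") or "" for link in links]

def check_stylesheet(driver, browser_key):
    """Assert that the browser received the stylesheet the design prescribes."""
    driver.get("http://example.com/app")   # placeholder application URL
    expected = EXPECTED_STYLESHEET[browser_key]
    assert any(href.endswith(expected) for href in loaded_stylesheets(driver)), \
        "%s did not load %s" % (browser_key, expected)
```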

The next step is to identify the combinations of platforms and browsers that need to be tested. Depending on the technology and components used in the application, we can establish an 'equivalence' among browsers in the way they handle those components and thereby reduce the test combinations. Once the test combinations are reviewed and agreed with the design/development team, tests that cover the breadth of the application and its UI components are designed. Generally, these scenarios are a subset of the functional scenarios, but are focused more on UI characteristics than on functionality.
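To make the idea of 'equivalence' concrete, here is a small sketch that groups browsers by their layout engine and keeps only one representative per engine on each platform. The engine mapping and the browser and platform lists are illustrative assumptions, not a recommended matrix.

```python
from itertools import product

# Assumed equivalence classes: browsers grouped by layout engine.
ENGINE = {
    "IE 8": "Trident", "IE 9": "Trident",
    "Firefox 3.6": "Gecko", "Firefox 8": "Gecko",
    "Chrome 15": "WebKit", "Safari 5": "WebKit",
}
PLATFORMS = ["Windows XP", "Windows 7"]

def reduced_matrix(browsers, platforms):
    """Keep one browser per (engine, platform) pair instead of every combination."""
    seen, picks = set(), []
    for platform, browser in product(platforms, browsers):
        key = (ENGINE[browser], platform)
        if key not in seen:
            seen.add(key)
            picks.append((browser, platform))
    return picks

# 6 browsers x 2 platforms = 12 combinations; the reduced matrix has only 6.
print(reduced_matrix(list(ENGINE), PLATFORMS))
```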

On many occasions, people rush to automate these tests because the same test has to be run on many browsers and operating systems. But remember that the expected results will differ between the preferred browser(s) and the less supported ones, so the 're-usability' that is counted on to justify the automation investment may not hold. Still, there are several cases where automating such tests makes complete sense.
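The point about differing expected results can be shown with a small parametrised test. The banner behaviour and the per-browser expectations below are invented for illustration, and the stub stands in for real browser automation so the example runs on its own.

```python
import pytest

# Per-browser oracles: the same page, different expected behaviour by support tier.
EXPECTED_BANNER = {
    "firefox": "animated",      # preferred browser: full experience
    "chrome": "animated",
    "ie7": "static-image",      # degraded support: fallback banner
}

def banner_mode_served_to(browser):
    """Stand-in for real browser automation; mimics the application's fallback rule."""
    return "static-image" if browser.startswith("ie") else "animated"

@pytest.mark.parametrize("browser,expected", sorted(EXPECTED_BANNER.items()))
def test_banner_matches_design(browser, expected):
    # One script, but the oracle is browser-specific rather than a single baseline.
    assert banner_mode_served_to(browser) == expected
```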

There are several tools that support varying levels of browser compatibility testing. Even functional testing tools like HP QTP and TOSCA (from Tricentis) can do a limited amount of browser compatibility testing. Both these tools claim they will add support for newer versions of Firefox (currently only v3.6 is supported) and for Google Chrome some time in December. With that, they should all be able to do a certain level of compatibility testing, at least with the 'most preferred browsers'. Open-source tools like Selenium can also support browser compatibility tests.
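For instance, with the Selenium Python bindings, the same smoke check can be driven through more than one browser along the lines of the sketch below. It assumes Firefox and Chrome (and their drivers) are installed locally; the URL and the title check are placeholders.

```python
from selenium import webdriver

def smoke_check(driver):
    """A deliberately tiny cross-browser check: load the page, look at the title."""
    driver.get("http://example.com/")     # placeholder application URL
    return "Example" in driver.title

for make_driver in (webdriver.Firefox, webdriver.Chrome):
    driver = make_driver()
    try:
        result = "passed" if smoke_check(driver) else "FAILED"
        print(make_driver.__name__, result)
    finally:
        driver.quit()
```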

The best tool I have come across is Eggplant (http://www.testplant.com/), an image-based automation tool that uses proprietary technology to recognise images. It is also independent of the operating system, as it uses a VNC connection to the machines that host the various browser versions on which the tests need to run.

Automation or no automation, the key to success is identifying the expected behaviour in the target and supported browsers and designing an optimal number of tests that cover the GUI objects and validate that behaviour.

I test. Therefore I am.

Nov 26, 2011

Part-2 Browser Compatibility: What is it?


In Part 1, we discussed why different browsers may treat the same content differently. Now we will briefly discuss a few other factors that can cause pages to be displayed differently.

We know that the majority of internet users use Microsoft Windows as their operating system, but they do not all use the same version of Windows. There is another 10-15% on Apple operating systems and a significant percentage of Linux users. Operating system capabilities also cause differences in how a browser displays content. Most of these operating systems have special capabilities as well as limitations, and any page that leverages those capabilities may not be displayed accurately by the same browser on another operating system.

For example, a page that uses a font supported exclusively on the Mac will not be displayed correctly on other operating systems. Also, some fonts look different on different operating systems and can distort the displayed content.

Now, with the fast-growing popularity of tablet PCs, the combinations expand to mobile operating systems, including the various iOS versions.

Screen resolution is another factor that adds to the complexity. With PCs and mobile devices of different resolutions all acting as clients to the same web content, managing the display consistently becomes harder, and at the same time providing a consistent user experience across devices and configurations becomes more relevant.

Before we discuss how to test all this, let us briefly consider why developers use tags, plug-ins and technologies that are not consistently rendered across platforms. If everyone used only the tags that are understood by all browsers, used only plug-ins that are supported by all browsers, and avoided attributes that behave inconsistently across browsers, this whole issue of inconsistency could have been avoided.

So we know there will be inconsistency, we know there are ways to prevent it, and yet we still use newer and emerging (and hence less supported) ways of rendering content. Is that the issue? I do not think so. That is the way technology has always emerged. Inventions and discoveries have changed the way we work and the way we use things, but those were all gradual changes, with varying rates of change and adoption.

I have read an interesting comparison of browser compatibility with television technology. From the days of black-and-white CRT televisions, the technology has progressed to colour, high definition and 3D. But not all televisions currently in use support colour, high definition or 3D. Television broadcasting keeps enhancing, and each TV handles the transmission to the extent its capabilities allow.

So it is with web technologies as well. The technologies emerge, the standards improve, and the browsers play catch-up. So we will develop a strategy accepting the fact that inconsistency in rendering content is a reality. We will discuss that in the next post.

I test. Therefore I am.

Nov 25, 2011

Part-1 Browser Compatibility: What is it?

As we were progressing with functional test automation for one of the major upcoming platforms for one of our clients, a request for automating cross-browser compatibility tests came up as a hot topic. Many a time, the source of inspiration and the sparking of an idea are beyond anyone's control. And many a time, people do not wait for the Newtonian apple to fall on their heads to get ideas! And in the services industry, we all know that the time from when an idea strikes, to shouting "Eureka!" and publishing it to the world, to finding someone to 'implement' the idea is generally measured in nanoseconds. So, sooner than I realised, I was also part of this team solving the issue of browser compatibility (or incompatibility).

Everyone was looking for something that would automatically test the application across a whole bunch of browsers and their different versions. Something that learns what the user will do on the application and then, one by one, opens different browsers, repeats the same actions and checks whether everything works fine. It does sound simple. But the more I thought about it, the less excited I became. Is it really so simple? It was the unconvincing simplicity of the solution that made me think about the problem more. I searched the web for tools that could do the job. I found a dozen-odd tools, and each one claimed to be capable of doing things the others could not. The problem started looking more complex than I originally thought. I decided to dig much deeper. The result was really interesting, and I decided to post it on my blog - rather, as a series of posts.

There were several questions that I needed to answer to validate the solution that I already had. Why do we need to test the same code on different browsers? Is testing on different browsers enough? These were the two basic questions I decided to start with.

Browsers are essentially intermediaries that translate HTML code into what we see on a web page. Internally, browsers use layout engines (or rendering engines) to generate the content from HTML pages. One of the oldest issues is browser companies building custom tags which require specific browsers to interpret them, but this is becoming a non-issue as most browsers now follow the W3C standards.

As we all know, HTML standards are developed and maintained by the World Wide Web Consortium (www.w3.org) and are dynamic - meaning they are continuously evolving and updating. For example, the HTML5 standard has been published but is still evolving. If the standards are universal, why do browsers process them differently?

That the standards are dynamic and evolving also means that, in general, all browsers have to play catch-up with the standards as they are developed and published. This means that different browsers will comply with the standards to varying degrees. It also means that the unsupported part of the code is left for the browser to handle in its own way!

Also, the HTML standards are not always unambiguously defined. While the tags and their syntax are well defined, many tag attributes are optional and have no 'default' value specified, and in most cases it is not mandatory to assign values to these attributes. This means that browsers are free to define their own default values, which is one cause of differences in display between browsers.

Content that requires specific plug-ins to display is another cause of differences in page rendering across browsers. Each browser requires plug-ins specific to that browser, and not all browsers have supported plug-ins for all content. In such cases, differences in the plug-ins, and also in the way a browser handles unsupported content, will cause differences in display.

Browsers are also evolving as the standards and technologies advance, and there is always a drive to push newer and better versions to the market before the competition. This puts pressure on the browser developers, and the crunched timelines cause bugs to leak out to the market. Especially with browsers that have 'automatic updates', these bugs are pushed to end users without their knowledge.

We also know about the defects that the application's own developers introduce, which have their own effects in the browsers (that should be where this whole problem started!). I have read an interesting statistic on the website of one of the HTML syntax testing tools: 85% of the few million pages they have tested had at least one HTML tag related defect!

In the next post, we will look at a few more considerations of browser compatibility before we define the objective of the test and discuss a strategy for the test.

I test. Therefore I am.

Mar 28, 2011

Appreciative Inquiry

The drive to improve the maturity of IT processes has been around for a few decades. There have been some significant developments and evolutions in the maturity models that govern software processes. Software testing has also seen some very significant developments, including enhancements to existing models and the introduction of new ones. While I appreciate the methodologies used by each of these models, I see a larger commonality among them all: they focus on what is not standard, what is not repeatable, what is not scalable and whatever else is not right. Once we know what is not right, we try to prescribe solutions to fix the problems. So organisations also look at these “inspectors” or assessors as people who probe for issues and deficiencies, and rate them at a certain level of maturity. Every assessment includes some information about the good practices, but the focus is generally on the “opportunities for improvement” through fixing the deficiencies. Little consideration is given to what has worked well in groups or teams within the organisation, and to how that can be replicated across the organisation.

It is in this context that I consider Appreciative Inquiry as a means for organisational transformation. Shifting from what is not working to what works well, or what has worked well, is of course a major shift in approach, and hence may require a lot of preparation and facilitation so that the efforts drive the organisation towards its ‘desired’ end state. Appreciative Inquiry is not a new concept. In fact, it has been around since the 1980s and has been used by many practitioners facilitating change management. What attracted me is the focus on driving change by identifying the positive aspects (appreciation) of something successful and using that to design what will help bring about the desired outcomes. It is probably a very powerful and positive way of bringing in all those changes that we have been trying to trigger by pointing out gaps and prescribing fixes. I think I need to take a deeper look at this.

I test. Therefore I am.