Nov 30, 2011

Tablets and new challenges to testing

The dot-com boom was probably one of the biggest triggers for expanding the focus of testing. Functional testing, which was a less glamorous job, underwent a complete overhaul as a result of the explosion in the online marketplace. Functionality, performance, security, and compatibility all became specialised areas for testing and quality assurance.

The next big wave, in my opinion, is the boom in tablet devices that the Apple iPad has triggered. With sales of around 8 million units a quarter, the iPad user base alone is growing at a very fast pace. Combined with Android- and Windows-based tablet users, the user base is too large for anyone not to provide customised versions of content for it.

A large portion of such content is delivered as 'Apps' so that it leverages the capabilities of the tablet devices and provides a better user experience. This is leading to new requirements for testing: verifying the functionality of the application on tablet devices, and verifying conformance to the user interface standards of the tablet operating systems. For example, Apple has its own user interface standards for all Apps that are shared through its App Store. These devices also provide APIs for leveraging certain device capabilities, or provide development platforms for developing content.

With the wide variation in hardware configurations, operating systems, user interface standards, and native device capabilities, testers have a whole new set of challenges to face. To add to the complexity, many phones that run the same operating system require a different App than their tablet counterparts, even when they are from the same manufacturer (not true for Apple). Yet to be seen is the effect of the new iPhone-based payment method that Apple is introducing.

With many Apps trying to do similar things and use the same device capabilities, I suspect that interoperability will also become a prime concern in the coming days.

I test. Therefore I am.

Part-3 Browser Compatibility: What is it?

This is the last of the three-part discussion on Browser Compatibility Testing.

Any strategy to test the compatibility of an application across browser-platform combinations has to start from the understanding that the test is for checking compatibility, not for establishing that the application and its behaviour are exactly the same in all browsers and on all platforms (hardware, software, and their combinations).

The first consideration should be the way the application is designed to support the different browsers and platforms. Applications are designed with a target user base in mind, along with information on user profiles (operating systems, browser versions) and the industry trends that will affect browser and platform usage. Based on this information, applications are designed for full support on a set of 'most preferred' browsers and degraded support on another set. There are numerous browsers in the market, but only the browser-platform combinations that are either fully supported or supported on a degraded level are tested as part of the compatibility tests.

So, the objective is to test and ensure that the application behaves correctly and consistently on the set of preferred browsers and platforms, and that the degradation matches the way it was designed. As an example, if the application needs to use a different style sheet on a lower version of IE (served, say, through IE's conditional comments), the test will verify that the user interface conforms to the styles specified for that version of IE.

The next step is to identify the combinations of platforms and browsers that need to be tested. Depending on the technology and components used in an application, we can establish an 'equivalence' among browsers in the way they handle components, and so reduce the test combinations, as in the sketch below. Once the test combinations are reviewed and agreed upon with the design/development team, tests that cover the breadth of the application and its UI components are designed. Generally, the scenarios are a subset of the functional scenarios, but are focused more on UI characteristics than functionality.
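As a minimal sketch of the equivalence idea (the browser names, versions, and groupings below are illustrative assumptions, not a recommendation), browsers can be grouped by rendering engine and one representative tested per group:

from __future__ import print_function

# Illustrative only: group browsers by rendering engine and test one
# representative per engine family to prune the combination matrix.
ENGINE_FAMILIES = {
    "Trident": ["IE 7", "IE 8", "IE 9"],
    "Gecko": ["Firefox 3.6", "Firefox 8"],
    "WebKit": ["Safari 5", "Chrome 15"],
}

# Pick one representative per family (here: simply the last listed).
representatives = [versions[-1] for versions in ENGINE_FAMILIES.values()]
print(representatives)  # e.g. ['IE 9', 'Firefox 8', 'Chrome 15']

The remaining browsers in each family would then be covered by lighter-weight spot checks, if at all.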

On many occasions, people rush to automate the tests because the same test has to run on many browsers and operating systems. But remember that the expected results will differ between the preferred browser(s) and the less supported ones, so the 'reusability' that is counted on to justify the automation investment may not hold. Still, there are several cases where automating such tests makes complete sense.
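A hedged sketch of what that means in practice: even a 'reusable' automated check may need per-browser expected values. The property and font names below are assumptions for illustration only:

# Assumed example: the heading font differs by design between the
# preferred browsers and the degraded-support tier, so the expected
# value must be parameterised per browser.
EXPECTED_HEADING_FONT = {
    "firefox": "Helvetica",   # preferred tier
    "chrome": "Helvetica",    # preferred tier
    "ie6": "Arial",           # degraded stylesheet (assumed)
}

def check_heading_font(browser_name, actual_font):
    expected = EXPECTED_HEADING_FONT[browser_name]
    assert actual_font == expected, (
        "%s: expected %s, got %s" % (browser_name, expected, actual_font))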

There are several tools that support varying levels of browser compatibility testing. Even functional testing tools like HP QTP and TOSCA (from Tricentis) can do a limited amount of browser compatibility testing. Both tools claim that support for newer versions of Firefox (currently only v3.6 is supported) and for Google Chrome will arrive some time in December. With that, they should all be able to do a certain level of compatibility testing, at least with the 'most preferred' browsers. Open-source tools like Selenium can also support browser compatibility tests.
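As a minimal sketch of the Selenium approach (the URL and the title check are placeholders, and it assumes the browsers and their Selenium drivers are installed locally):

from selenium import webdriver

def smoke_test(driver):
    # Placeholder check: a real compatibility test would verify UI
    # elements against per-browser expected behaviour.
    driver.get("http://www.example.com/")
    assert "Example" in driver.title

# Run the same test, unchanged, on each installed browser.
for make_driver in (webdriver.Firefox, webdriver.Chrome, webdriver.Ie):
    driver = make_driver()
    try:
        smoke_test(driver)
    finally:
        driver.quit()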

The best tool I have come across is Eggplant (http://www.testplant.com/), an image-based automation tool that uses proprietary technology to recognise images. It is also independent of the operating system, as it uses a VNC connection to the machines that host the various versions of browsers on which the tests need to run.

Automation or no automation, the key to success is identifying the expected behaviour in the target and supported browsers and designing an optimum number of tests that cover the GUI objects to validate the expected behaviour.

I test. Therefore I am.

Nov 26, 2011

Part-2 Browser Compatibility: What is it?


In Part 1, we discussed why different browsers may treat the same content differently. Now we will briefly discuss a few other factors that can cause pages to be displayed differently.

We know that the majority of internet users use Microsoft Windows as their operating system, but they do not all use the same version of Windows. There are another 10-15% who are Apple users, and a significant percentage of Linux users. The operating system also causes differences in how a browser displays content. Most of these operating systems have special capabilities as well as limitations, and a page that leverages those capabilities may not be displayed accurately by the same browser on another operating system.

For example, a page that uses a font supported exclusively on the Mac will not be displayed correctly on other operating systems. Also, some fonts look different on different operating systems and can distort the displayed content.

Now, with the fast-growing popularity of tablet PCs, the combination expands to mobile operating systems, including the iOS versions.

Screen resolution is another factor that adds to the complexity. With PCs and mobile devices of different screen resolutions all acting as clients to the same web content, managing the display consistently becomes harder, and at the same time more important for providing a consistent user experience across devices and configurations.

Before we discuss how to test these, let us briefly discuss why developers use tags, plug-ins, and technologies that are not consistently rendered across platforms. If everyone used only the tags that are understood by all browsers, used only plug-ins that are supported by all browsers, and avoided attributes that behave inconsistently across browsers, this whole issue of inconsistency could have been avoided.

So we know that there will be inconsistency; we know there are ways to prevent it, and still we will use newer and emerging (and hence less supported) ways of rendering content. Is that the issue? I do not think so. That is the way technology has always emerged. Inventions and discoveries have changed the way we work and the way we use things, but they were all gradual changes with varying rates of adoption.

I have read an interesting comparison of browser compatibility with television technology. From the days of black-and-white CRT televisions, the technology has progressed to colour, high definition, and 3D. But not all televisions currently in use support colour, high definition, or 3D. Television broadcasting keeps enhancing nonetheless, and each TV handles the transmission according to its capability.

So it is with web technologies as well. The technologies emerge, the standards improve, and the browsers do a catch-up. So we will develop a strategy while accepting the fact that inconsistency in rendering content is a reality. We will discuss more about that in the next post.

I test. Therefore I am.

Nov 25, 2011

Part-1 Browser Compatibility: What is it?

As we were progressing with functional test automation on a major upcoming platform for one of our clients, a request for automating cross-browser compatibility tests came up as a hot topic. Many a time, the source of inspiration and the sparking of an idea are beyond anyone's control. And many a time, people do not wait for the Newtonian apple to fall on their heads to get ideas! In the services industry, we all know that the time from when an idea strikes, to shouting "Eureka!" to the world, to finding someone to 'implement' the idea is generally measured in nanoseconds. So, sooner than I realised, I was also part of the team solving the issue of browser compatibility (or incompatibility).

Everyone was looking for something that would automatically test the application across a whole bunch of browsers and their different versions. Something that learns what the user will do on the application and then, one by one, opens different browsers, repeats the same steps, and checks if everything works fine. It does sound simple. But the more I thought about it, the less excited I became. Is it really so simple? It was the unconvincing simplicity of the solution that made me think about the problem more. I searched the web for tools that could do the job. I found a dozen-odd tools, each claiming to be capable of things the others can't do. The problem started looking more complex than I had originally thought. I decided to dig much deeper. The result was really interesting, and I decided to post it on my blog, as a series of posts.

There were several questions that I needed to answer to validate the solution I already had. Why do we need to test the same code on different browsers? Is testing on different browsers enough? These were the two basic questions I decided to start with.

Browsers are essentially intermediaries that translate HTML code into what we see on a web page. Internally, browsers use layout engines, or rendering engines, to generate the content from HTML pages. One of the oldest issues is browser vendors building custom tags that require specific browsers to interpret them, but this is becoming a non-issue as most browsers now follow the W3C standards.

As we all know, HTML standards are developed and maintained by the World Wide Web Consortium (www.w3.org) and are dynamic, meaning they are continuously evolving and updating. For example, the HTML5 standard has been published but is still evolving. If the standards are universal, why do browsers process them differently?

That the standards are dynamic and evolving also means that all browsers generally have to play catch-up with the standards as they are developed and published. So different browsers will comply with the standards to varying degrees. It also means that the unsupported part of the code is left for the browser to handle in its own way!

Also, the HTML standards are not always unambiguously defined. While the tags and their syntax are well defined, there are numerous attributes of the tags that are not mandatory but have no 'default' value either. In most cases, it is not mandatory to assign values to these attributes. This means that browsers are free to define their own default values (for example, the default margin around the page body can differ between browsers when none is specified). This is one cause of differences in display between browsers.
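To make that concrete, here is an illustrative sketch of how a compatibility test might record the per-engine defaults it expects when a value is left unspecified. The pixel values below are assumptions for illustration, not measured defaults:

# Assumed values for illustration: each engine picks its own default
# when the page does not specify a body margin.
ASSUMED_DEFAULT_BODY_MARGIN_PX = {
    "Trident (IE)": 15,
    "Gecko (Firefox)": 8,
    "WebKit (Safari/Chrome)": 8,
}

def expected_margin(engine):
    # A compatibility check compares the rendered margin against the
    # engine-specific default rather than a single universal value.
    return ASSUMED_DEFAULT_BODY_MARGIN_PX[engine]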

Content that requires specific plug-ins to display is another cause of differences in page rendering across browsers. Each browser requires plug-ins specific to that browser, and not all browsers have supported plug-ins for all content. In such cases, differences in the plug-ins, and in the way a browser handles unsupported content, will cause differences in display.

Browsers also evolve as the standards and technologies advance, and there is always a drive to push newer and better versions to the market before the competition. This puts pressure on the browser developers, and the crunched timelines cause bugs to leak out to the market. Especially with browsers that have 'automatic updates', these bugs are pushed to end users without their knowledge.

We also know about the defects that the developers of the application introduce, which have their own effects in the browsers (that should be the reason this whole problem started!). I read an interesting statistic on the website of one of the HTML syntax testing tools: 85% of the few million pages they have tested have at least one HTML-tag-related defect!

In the next post, we will look at a few more considerations of browser compatibility before we define the objective of the test and discuss a strategy for the test.

I test. Therefore I am.