Testing Thoughts
A place to talk mostly about everything about testing, and sometimes, technology in general
Aug 31, 2018
It has been a while since my last post, and the testing world has changed so much in between. I will now restart the discussion on how testing is transforming.
I test. Therefore I am.
Dec 11, 2012
Selecting target devices - learnings from Kin
Arriving at a decision on the target devices and OS versions to be included in testing is a difficult task. Many teams resort to industry data on phone sales as a reference for selecting the devices they need to test. While that is a good source of information, one cannot rely entirely on it, nor can one simply pick the latest devices or devices from the largest players alone.
I do not know how many of us know what a Kin is – it ‘was’ a smartphone that Microsoft built. Reportedly, it was in the market for only a little over a month. As one reporter observed, many fruit juices have a longer shelf life than that. When Microsoft builds a phone, will anyone ignore it? Surely not. Had it lived a little longer, it would have inflated testing budgets by adding one more device, and the resources to test it, before dying its natural death.
Deciding on the device combination is one of the key success factors for a good mobile testing strategy. With around 75% market share, Android has a place in every test combination, followed by iOS with a 15% share. However, a deep dive into the target market and its behaviour is required before the selections can be made.
The best way to shortlist the devices is to segment the target audience and then, depending on the risk appetite, eliminate devices from the segments to arrive at a combination of Must, Good and Nice to have. This combination will include devices, operating systems and their pairings (there can be devices that need to be tested on more than one operating system).
Another challenge with the test combination is that it is dynamic. Every release, the test team will have to revisit the combinations and revise the matrix to reflect the latest changes.
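As a minimal sketch of the idea (the segments, devices, shares and tier cut-offs below are hypothetical examples, not recommendations), a tiered device matrix could be derived from audience segments like this:
```python
# A minimal sketch of deriving a tiered device matrix from audience
# segments. All segment data and thresholds are hypothetical examples.

# Hypothetical usage share of each device/OS pairing per audience segment.
segments = {
    "urban_consumers":  {("Galaxy S3", "Android 4.1"): 0.30,
                         ("iPhone 4S", "iOS 6"): 0.40},
    "enterprise_users": {("iPhone 4S", "iOS 6"): 0.50,
                         ("Lumia 800", "Windows Phone 7.5"): 0.20},
    "budget_buyers":    {("Galaxy Y", "Android 2.3"): 0.60},
}

def tier(share):
    """Map an aggregate share to a priority tier; cut-offs reflect risk appetite."""
    if share >= 0.40:
        return "Must"
    if share >= 0.20:
        return "Good"
    return "Nice to have"

# Aggregate shares across segments and assign each combination a tier.
matrix = {}
for devices in segments.values():
    for combo, share in devices.items():
        matrix[combo] = matrix.get(combo, 0) + share

for (device, os), share in sorted(matrix.items(), key=lambda x: -x[1]):
    print(f"{tier(share):<13} {device} / {os} (aggregate share {share:.2f})")
```
Re-running such a script with fresh market data each release is one way to keep the matrix current.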
I test. Therefore I am.
Nov 28, 2012
Testing and Mobility
Last year, about the same time, I wrote about the probable impact of tablet devices on testing. Looking back, the predictions were true, but reality has far exceeded them. Tablets have become an integral part of the application space. It is reported that tablet sales have exceeded PC sales, and that by 2015 tablets will exceed the PC population.
It is not very unusual for my blog to hibernate for long periods. But after reading several reports and statistics on the growth of mobile devices and the predictions for the next few years, I thought it was time to update the old posts on testing tablet devices.
The northbound trend in the population of mobile devices, including smartphones and tablets, is a very significant development that will have a far-reaching impact on application development and testing. Not only will more and more applications become mobile-enabled, they will become an integral component in completing transactions. This will mean much more than testing how web pages behave in different mobile browsers; it will mean testing complex hybrid applications.
Testing native or hybrid applications needs a different strategy and approach than testing browser-based applications on mobile devices. Everything from the code base to the capabilities of the native hardware will influence the testing outcome. Testers need to understand this and develop suitable strategies that focus on functionality as well as the device- and OS-dependent features that need to be tested on each target device.
While the number of devices is showing a uniform growth trend, the diversity of the target devices that need to be included in testing is also increasing. With several device manufacturers designing devices with all possible variations in form factor, processing power, interface standards and network capabilities, the hardware choices are numerous. So are the operating system choices. The many different Android versions, and the device manufacturers and telecom carriers modifying or adapting the operating systems to optimise them for their networks, increase the test combinations many fold. Android currently has around 75% of the market, followed by Apple with a 15% share, together covering 90% of the market. However, the market is very dynamic and the user base is not uniform across geographies.
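A minimal sketch of how quickly these combinations multiply (the device models, OS versions and carrier builds are hypothetical examples, not a real inventory):
```python
# A minimal sketch of the combinatorial explosion in mobile test matrices;
# all names are hypothetical examples.
from itertools import product

devices = ["Galaxy S3", "Galaxy Note", "Nexus 4", "Xperia S"]
os_versions = ["Android 2.3", "Android 4.0", "Android 4.1"]
carrier_builds = ["stock", "carrier A", "carrier B"]

combinations = list(product(devices, os_versions, carrier_builds))
print(f"{len(combinations)} combinations from just "
      f"{len(devices)} devices x {len(os_versions)} OS versions x "
      f"{len(carrier_builds)} carrier builds")
# 36 combinations already - before adding iOS, tablets or screen sizes.
```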
In theory, we have three broad categories of software for mobile: native apps, mobile web and hybrid. Native apps reside on the handheld device, whereas mobile web means accessing web content using a mobile browser. Hybrid applications, on the other hand, utilise a combination of both native and web features. While there are several things common to them, there are many unique factors as well.
When testing mobile web, the focus is primarily on the mobile browser's capability to interpret the code and deliver the content. Different browsers use different rendering engines internally (e.g. Chrome uses WebKit while Internet Explorer uses Trident), so the way they render content is not universal, especially when handling evolving standards like HTML5. Native apps, on the other hand, reside on the device and interact more closely with the host operating system and the hardware and software resources of the device. This also means that devices running different operating systems will use different code bases, so testing on each operating system becomes extremely critical. Hybrid apps extend the challenge by combining both native and web characteristics into one.
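As a minimal sketch of probing how different browsers handle an evolving standard, one could query HTML5 feature support directly from each browser under test (the feature checks and browser list are illustrative assumptions, and Selenium with the relevant drivers is assumed to be installed):
```python
# A minimal sketch: ask each browser which HTML5 features it supports.
# Assumes Selenium and the browser drivers are installed; the feature
# checks, browser list and URL are illustrative examples.
from selenium import webdriver

html5_checks = {
    "localStorage": "return !!window.localStorage",
    "canvas":       "return !!document.createElement('canvas').getContext",
    "geolocation":  "return !!navigator.geolocation",
}

browsers = {"firefox": webdriver.Firefox, "chrome": webdriver.Chrome}

for name, make_driver in browsers.items():
    driver = make_driver()
    try:
        driver.get("https://example.com")
        supported = {feature: driver.execute_script(script)
                     for feature, script in html5_checks.items()}
        print(name, supported)
    finally:
        driver.quit()
```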
These are in addition to other critical characteristics: device capabilities, carrier-dependent behaviour, usability, performance, security and interoperability. In the next few posts we will discuss a few of them in detail and also discuss tools that help accelerate or automate tests on mobile.
I test. Therefore I am.
Feb 11, 2012
New era software tool licensing
There is not a single day that we do not hear something new about the 'cloud'. HP has now announced that its testing software, especially its load and performance testing software, will be available on the cloud.
Traditionally, the cost of licences and the ability of projects to invest in licences were the two biggest deterrents to introducing new testing tools in an organisation. The new model will drastically change the game and will push service providers and IT departments to work out better models of using tools and delivering tool-based services. I hope to see some significant changes this new year.
I test. Therefore I am.
Dec 4, 2011
Convergence of applications - New challenges
When Apple released its iPhone 4S, one of the key additions to its feature list was 'Siri', a virtual personal assistant. Siri is an application that responds to voice commands and performs a variety of actions based on the instructions. For example, if you ask Siri to suggest good restaurants, it uses your current location and searches the internet to find restaurants, and it can further go to a restaurant's website and book a table as well.
In the above simple example, we have seen an application that uses voice recognition, collects GPS data from the mobile, frames search queries based on data and instruction, conducts an internet search and performs a small transaction of booking a table. More complex transactions will involve far more complex interactions and multiple applications, with the data from one being used to perform a transaction on another.
It is this bringing together of multiple applications to perform a function, rather than building all of the capabilities into one application, that I see as a new area for testers to explore. Convergence of applications and devices is not a new concept; we have been discussing and debating it in various forms. But with the possible commercial success of Siri (which we will come to know soon), there will surely be a boom of applications that use one form or another of convergence technologies, and these are sure to be a challenge for testers.
It will not only make tests more complex by extending the boundaries of the system under test to the converged applications, but also make scenarios far more specific to the context in which the test needs to be developed. It will also make the required 'domain knowledge' far more extensive and make the tests more intuitive than ticking off a set of test conditions.
I test. Therefore I am.
Nov 30, 2011
Tablets and new challenges to testing
Probably the dot-com boom was one of the biggest triggers for expanding the focus of testing. Functional testing, which was a less glamorous job, underwent a complete overhaul as a result of the explosion in the online marketplace. Functionality, performance, security and compatibility all became specialised areas for testing and quality assurance.
The next big wave, in my opinion, is the boom in tablet devices that the Apple iPad has triggered. With sales of around 8 million units a quarter, the tablet user base of the iPad alone is growing at a very fast pace. Combined with Android and Windows based tablet users, the user base is too large for anyone not to provide customised versions of content for it.
A large portion of such content is delivered as 'apps' so that it leverages the capabilities of the tablet devices and provides a better user experience. This is leading to new requirements for testing: verifying the functionality of the application on the tablet devices and verifying conformance to the user interface standards of the tablet operating systems. For example, Apple has its own user interface standards for all apps that are shared through its App Store. Also, these devices provide APIs for leveraging certain capabilities of the devices, or provide development platforms for developing content.
With the wide variation in hardware configurations, operating systems, user interface standards and native device capabilities, testers have a whole new set of challenges to face. To add to the complexity, many phones that run the same operating system require a different app than their tablet counterparts, even though they are from the same manufacturer (not true for Apple). Yet to be seen is the effect of the new iPhone-based payment method that Apple is introducing.
With many apps trying to do similar things and trying to use the same device capabilities, I suspect that interoperability will also become a prime concern in the coming days.
I test. Therefore I am.
Part-3 Browser Compatibility: What is it?
This is the last of the three-part discussion on Browser Compatibility Testing.
Any strategy to test the compatibility of an application across browser-platform combinations has to be built on the understanding that the test is for checking compatibility, not for establishing that the application and its behaviour are exactly the same in all browsers and on all platforms (hardware, software and their combinations).
The first consideration should be the way the application is designed to support the different browsers and platforms. Applications are designed with a target user base in mind, using information on user profiles, including operating systems and browser versions, and on industry trends that will affect browser and platform usage. Based on this information, applications are designed for a set of 'most preferred' browsers, with degraded support for another set of browsers. There are numerous browsers in the market, but only the browser-platform combinations that are either fully supported or 'supported on a downgraded level' are tested as part of the compatibility tests.
So, the objective is to test and ensure that the application behaves correctly and consistently in the set of preferred browsers and platforms, and that the degradation matches the way it is designed to degrade. As an example, if the application needs to use a different style sheet on a lower version of IE, the test will verify that the user interface conforms to the styles specified for that version of IE.
The next step is to identify the combination of platforms and browsers that needs to be tested. Depending on the technology and components used in an application, we can establish an 'equivalence' among browsers in the way they handle components and so reduce the test combinations. Once the test combination has been reviewed and agreed with the design/development team, tests that cover the breadth of the application and its UI components are designed. Generally, the scenarios are a subset of the functional scenarios, but are focused more on UI characteristics than functionality.
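A minimal sketch of the equivalence idea, grouping browsers by rendering engine and picking one representative per engine (the browser-to-engine mapping reflects the landscape around the time of writing; the selection rule is an illustrative assumption):
```python
# A minimal sketch: group browsers by rendering engine and test one
# representative per engine. The mapping is era-specific and the
# representative-selection rule is an illustrative assumption.
engines = {
    "Trident": ["IE7", "IE8", "IE9"],
    "Gecko":   ["Firefox 3.6", "Firefox 8"],
    "WebKit":  ["Chrome 15", "Safari 5"],
}

preferred = {"IE8", "Firefox 8", "Chrome 15"}  # fully supported set

def representatives(engines, preferred):
    """Pick a preferred browser per engine if available, else the first listed."""
    picks = []
    for engine, browsers in engines.items():
        match = next((b for b in browsers if b in preferred), browsers[0])
        picks.append((engine, match))
    return picks

for engine, browser in representatives(engines, preferred):
    print(f"Test {browser} as the representative for {engine}")
```
In practice, equivalence classes are rarely this clean; older IE versions, for instance, often differ enough to each warrant their own class.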
On many occasions, people rush to automate these tests because they have to run the same test on many browsers and operating systems. But remember that the expected results will differ between the preferred browser(s) and the less supported ones, so the 're-usability' counted on to justify the automation investment may not materialise. Still, there are several cases where automating such tests makes complete sense.
There are several tools that support varying levels of browser compatibility testing. Even functional testing tools like HP QTP and TOSCA (from Tricentis) can do a limited amount of browser compatibility testing. Both these tools claim they will add support for newer versions of Firefox (currently only v3.6 is supported) and also for Google Chrome some time in December. With that, they should all be able to do a certain level of compatibility testing, at least with the 'most preferred' browsers. Open source tools like Selenium can also support browser compatibility tests.
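As a minimal sketch of what a Selenium-based compatibility check might look like (the URL, the checked element and the per-browser expectations are hypothetical, and the Firefox and Chrome drivers are assumed to be installed):
```python
# A minimal sketch of a Selenium cross-browser check. The URL, element ID
# and per-browser expectations are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

# Expected results can legitimately differ between preferred and
# degraded-support browsers, so expectations are stored per browser.
expected_banner = {
    "firefox": "Welcome",          # preferred browser: full experience
    "chrome":  "Welcome (basic)",  # degraded support: simplified banner
}

browsers = {"firefox": webdriver.Firefox, "chrome": webdriver.Chrome}

for name, make_driver in browsers.items():
    driver = make_driver()
    try:
        driver.get("https://example.com")
        banner = driver.find_element(By.ID, "banner").text
        status = "OK" if banner == expected_banner[name] else "MISMATCH"
        print(f"{name}: {status} (got '{banner}')")
    finally:
        driver.quit()
```
Keeping the expectations per browser preserves the point above: automation can still pay off, as long as the scripts do not assume identical results everywhere.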
The best tool I have come across is Eggplant (http://www.testplant.com/), an image-based automation tool that uses a proprietary technology to recognise images. It is also independent of the operating system, as it uses a VNC connection to the various machines that host the versions of the browsers on which the tests need to run.
Automation or no automation, the key to success is identifying the expected behaviour in the target and supported browsers and designing an optimum number of tests that cover the GUI objects and validate that behaviour.
I test. Therefore I am.
Nov 26, 2011
Part-2 Browser Compatibility: What is it?
We know that the majority of internet users use Microsoft Windows as their operating system. But they do not all use the same version of Windows. There are another 10-15% of Apple users and a significant percentage of Linux users. The operating system's capabilities also cause differences in how the browser displays content. Most of these operating systems have special capabilities as well as limitations, and a page that leverages those capabilities may not be displayed accurately by the same browser on another operating system.
For example, a page that uses a font supported exclusively by Mac will not be displayed correctly on other operating systems. Also, some fonts look different on different operating systems and can distort the displayed content.
Now, with the fast-growing popularity of tablet PCs, the combination expands to mobile operating systems, including iOS versions.
Screen resolution is another factor that adds to the complexity. With PCs and mobile devices of different screen resolutions all acting as clients to the same web content, managing the display consistently becomes more complex, and at the same time more important for providing a consistent user experience across devices and configurations.
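A minimal sketch of exercising a page at several resolutions in one browser session (the resolutions and the checked element are hypothetical examples, and the Firefox driver is assumed to be installed):
```python
# A minimal sketch: load one page at several window sizes and check that
# a key element stays visible. The resolutions and element ID are
# hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

resolutions = [(1920, 1080), (1024, 768), (480, 800)]  # desktop to phone-ish

driver = webdriver.Firefox()
try:
    for width, height in resolutions:
        driver.set_window_size(width, height)
        driver.get("https://example.com")
        menu = driver.find_element(By.ID, "main-menu")
        print(f"{width}x{height}: main menu "
              f"{'visible' if menu.is_displayed() else 'hidden'}")
finally:
    driver.quit()
```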
Before we discuss how to test these, let us briefly discuss why developers use tags, plug-ins and technologies that are not consistently rendered across platforms. If everyone used only the tags that are understood by all browsers, used only plug-ins supported by all browsers, and avoided attributes that behave inconsistently across browsers, we could have avoided this whole issue of inconsistency.
So we know there will be inconsistency, we know there are ways to prevent it, and still we will use newer and emerging (and hence less supported) ways of rendering content. Is that the issue? I do not think so. That is the way technology has always emerged. Inventions and discoveries have changed the way we work and the way we use things, but they were all gradual changes with varying rates of adoption.
I have read an interesting comparison of browser compatibility with television technology. From the days of black and white CRT televisions, the technology has progressed to colour, high definition and 3D. But not all televisions currently in use support colour, high definition and 3D. Yet television broadcasting keeps enhancing, and each TV handles the transmissions to the best of its capability.
So it is with web technologies as well. The technologies emerge, the standards improve, and the browsers play catch-up. So we will develop a strategy accepting the fact that inconsistency in rendering content is a reality. We will discuss more about that in the next post.
I test. Therefore I am.