This article follows up on Stéphane's piece about the variety of methods available for testing mobile applications when you lack some of the physical hardware required: 'How to test on smartphones without smartphones?'
Be careful, nonetheless, about the relevance of those tests…
Given the high cost of the most representative smartphones, and despite refurbishing platforms such as the French one, "BackMarket", selling them at a much lower price, it is very tempting to rely far more often on free and easy-to-use options such as browser simulators than on the devices themselves.
Here is a first example: entering the site URL, or clicking your favourite link, in a desktop browser already configured for responsive display seems faster than manually typing the address in the mobile browser, or than reaching the bookmarks menu in a few taps… Between the two options, even with the test device ready at hand, you may gain about 15 seconds.
This time benefit can rise much higher: on the iOS platform and its Xcode IDE, launching the simulator with the latest build avoids the pain of the certificate issues involved in installing the build on real devices. Without a minimum understanding of what to do, in my experience those problems can take a long time to solve – I am talking about 30 minutes at least – and may require another team member, which adds dependencies and risk.
Those cost and pace benefits can create comfortable habits, which we know are variety's enemy, hence the tester's enemy. The right question is not "how much time and money can we save by using free and different tools?" It is rather: "Can we execute tests on different tools whose relevance is, if not equal, at least acceptable enough to trust our mobile application's behaviour?"
Having worked three years for the Net-A-Porter.com Yoox group (an online luxury fashion retailer), in the team responsible for their iOS and Android mobile applications, here is a summary of what I learnt about testing without the physical device. I will start in this first article with mobile websites; a second article on testing native mobile applications will follow shortly.
The User Agent

Picture the User Agent as a shortlist of characteristics sent by the web browser to the server to identify itself, similar to those on your ID: your date of birth, your gender… Available plugins make it easy to forge a specific client identity: the server can see you as an iPhone when you are in fact navigating through your desktop Chrome browser, and it serves you the content and formatting as if you were the former. A useful plugin is 'User Agent Switcher', available on the Chrome Web Store.
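To make the trick concrete, here is a minimal sketch in Python (standard library only) of the same identity forgery the plugin performs inside the browser; the iPhone User-Agent string below is an illustrative example, not an authoritative value:

```python
import urllib.request

# A User-Agent string imitating Mobile Safari on an iPhone (example value only).
IPHONE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) "
             "Version/16.0 Mobile/15E148 Safari/604.1")

def build_mobile_request(url: str) -> urllib.request.Request:
    """Build a request that pretends to come from an iPhone browser."""
    return urllib.request.Request(url, headers={"User-Agent": IPHONE_UA})

# Passing the result to urllib.request.urlopen() would then fetch the page
# exactly as the server serves it to a mobile client.
```

The server has no way to tell this request from one sent by a real handset, which is both the strength and the danger of the method.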
The benefit I have found with that type of tool is the ability to quickly test text labels and, for user stories requiring translations, to verify which translations were missing and where, or simply whether some text was displayed at all.
Nonetheless, it only 'mimics' the display for a specific platform and does not size the display correctly. That makes it harder to check, for example, that those long, long German translated labels do not deform the page or button layout…
Pros:
- Quickly test whether text labels have been implemented (translations, legal notices…)

Cons:
- Not relevant for any other tests, since it depends on a desktop platform uncorrelated with the reality of your physical mobile device targets.
Browser development tools
In his article, Stéphane shows Chrome's responsive sizing tool, which gives much more accuracy about the display of your site's elements on a specific mobile platform: it should solve the long German translations puzzle mentioned above. You cannot, however, trust that type of tool for the functional behaviour you observe, since it once again relies on a desktop platform that does not mirror your mobile device.
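Under the hood, Chrome's device toolbar drives the page through the DevTools Protocol. As a hedged sketch of what a test harness could send over the DevTools websocket, here is how the `Emulation.setDeviceMetricsOverride` message is built (the device dimensions below are illustrative):

```python
import json

def device_metrics_message(msg_id: int, width: int, height: int,
                           scale: float) -> str:
    """Build the Chrome DevTools Protocol message that switches a page
    to a mobile-sized viewport, as the device toolbar does."""
    return json.dumps({
        "id": msg_id,
        "method": "Emulation.setDeviceMetricsOverride",
        "params": {
            "width": width,              # viewport width in CSS pixels
            "height": height,            # viewport height in CSS pixels
            "deviceScaleFactor": scale,  # 2.0 or 3.0 on Retina-class screens
            "mobile": True,              # enables the mobile layout behaviour
        },
    })
```

Note that this only reshapes the viewport: the rendering engine, fonts and input events remain those of your desktop, which is exactly why the functional behaviour cannot be trusted.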
By combining those tools with the User Agent method, you can quickly gather background information that is invisible when testing the mobile website solely on the device: the 'Network' tab will highlight latencies, slowness or missing requests that explain an unexpected behaviour… You can also manage cookies for security testing purposes.
On the other hand, those same tools are a huge asset when the mobile device itself is plugged into your desktop station – allow up to an hour for setting it up: proxy tools let you intercept the website's requests for analysis, and the Safari web browser detects the device and offers its complete toolkit for testing it.
Pros:
- The responsive tools afford fast feedback on display rendering,
- Variety and efficiency of the available tools.

Cons:
- Development tools are not equal across browsers, which may push more of your testing onto a specific browser that is not representative enough of your application's targets,
- Functional testing cannot be trusted.
The simulator

Although it requires setting up a bigger toolset beforehand, the simulator fills a big gap compared to the previous tools regarding behaviour on the device. The display rendering is much more relevant than that of the browser tools, and it lets you tweak real settings of the mobile web browser and the mobile OS itself, although some are missing, such as the camera.
I often used the simulator to get quick functional testing feedback, and for some smoke tests when validating a new build. It increases test coverage by covering the mobile hardware configurations that you don't have in your team or company – especially on Android, where there are so many of them.
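For quick mobile-website checks on iOS, the simulator can even be driven from a script. Here is a sketch using Xcode's `xcrun simctl` command-line tool (macOS only; the device name is illustrative and must match one listed by `xcrun simctl list devices`):

```python
import subprocess

def simctl_commands(url: str, device: str = "iPhone 14"):
    """Return the two `xcrun simctl` invocations that boot a simulator
    and open a URL in its browser (device name is an example)."""
    return [
        ["xcrun", "simctl", "boot", device],
        ["xcrun", "simctl", "openurl", "booted", url],
    ]

def open_in_ios_simulator(url: str, device: str = "iPhone 14") -> None:
    """Run the commands above; requires Xcode's tools on macOS."""
    boot, openurl = simctl_commands(url, device)
    subprocess.run(boot, check=False)   # booting an already-booted device fails harmlessly
    subprocess.run(openurl, check=True)
```

Scripting the launch this way is handy for the smoke tests mentioned above, since the same URL can be opened across several simulated devices in a loop.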
Keep in mind, however, that you drive the simulator by 'clicking' with your mouse on your desktop rather than 'tapping' on a smaller mobile screen: you cannot infer anything about the user experience. Besides, it still uses your desktop computer's resources, which means you won't be able to detect latency, performance or 'tap responsiveness' issues.
I will add a further drawback I have observed: bugs that occurred in the mobile device's web browser but not in the simulator, especially ones involving URL redirections that broke access to pages such as terms & agreements.
Pros:
- Relevant for most functional testing (within the limits listed below), and usable for smoke tests,
- Increases test coverage by adding the hardware configurations missing in your company.

Cons:
- Often requires setting up a whole ecosystem, which can take a non-negligible amount of time: IDE, subscriptions and accounts for the iOS platform, one SDK version per OS version to be tested,
- Not relevant at all for user experience: tap responsiveness, 'fat finger' issues, consistency of element positions across the page in real use, etc.,
- Relies on your desktop computer's resources, which prevents using it for performance testing,
- Does not give access to all the OS or browser settings available on the device, such as the camera.
The emulator

Since I have personally never tested any mobile website or native application on an emulator, I cannot share any experience on that topic. I only know that emulators are supposed to run on a set of hardware configurations similar to the device's, addressing the resource issue of the simulator.
Cloud device services

Stéphane mentioned those services in his article, and I have never used them either. We would be glad to have your feedback on the matter.
The physical device
Let's not forget the 'Hercules' of mobile testing: the physical device itself!
What we haven't talked about yet is image quality: while the simulator relies on your desktop screen's characteristics and makes it hard to spot bad-quality images, blurriness jumps out at you straight away on the mobile screen. My experience on iOS showed that assets of different quality had to be defined for Retina screens (iPad 3 and later, for example) and non-Retina ones (iPad 2 and earlier) – an implicit functional requirement that could easily be forgotten in a user story if not written into the acceptance criteria. That criterion involved the mobile designers and a deadline for delivering the assets for build integration (at a time when we did not yet have any server for distributing content).
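As a sketch of that implicit requirement, iOS uses the `@2x`/`@3x` suffix convention to pick the asset matching a screen's scale factor, so the selection boils down to something like this (file names are illustrative):

```python
def asset_for_scale(base_name: str, scale: int) -> str:
    """Pick the image variant for a screen's scale factor, following the
    iOS naming convention: 'logo.png' for non-Retina (1x) screens,
    'logo@2x.png' for Retina, 'logo@3x.png' for the densest screens."""
    if scale <= 1:
        return f"{base_name}.png"
    return f"{base_name}@{scale}x.png"
```

If only the 1x variant is delivered, the OS upscales it on a Retina screen – which is precisely the blurriness that a physical device reveals and a desktop monitor hides.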
Pros:
- The best tool for tests relying on hardware and software resources,
- Allows validating image quality.

Cons:
- The cost of buying the devices, which you have to plan within your budget,
- Sharing a few devices among team members can be a hurdle and create bottlenecks in your project: plan for efficient device management across the team!
What should I remember from this article?…
…That the available tools for testing without a mobile device are only stopgap methods to accelerate part of your tests. Used wisely, those tools can increase test coverage and reduce risk, but they will never guarantee that your developments will behave as expected on the most important physical devices: the ones your customers use. You should always have a few handsets at hand for testing.
If your company does not want to invest in mobile devices, one reason may be that it trusts those tools too much without questioning their relevance. In that case, I hope this article has given you plenty of arguments to make a strong case for that investment…