With the myriad social apps that are flooding the market today, the consumer is spoiled for choice. The pressure is on to make slicker and smoother apps that can manage to retain user interest over time.
To test a mobile social app, a tester needs to be able to think like the users (well, the majority of them, anyway) and still be able to imagine the unimaginable and think the unthinkable. Identifying the edge cases, however fantastic they may seem, and ruling them in or out is vital to defining the scope of the project.
Let’s take a look at some of the test scenarios involved in testing a mobile social app.
Push Notifications
Simply put, push notifications are social media or text message alerts that are “pushed” to your device by the application. It might be your social app prompting you with something as innocuous as “You have new messages,” or it might be a retailer telling you about the latest deals available.
Push notifications are interesting in that they learn from your engagement history with the app, then send you prompts based on your interactions.
Each operating system (Android, iOS, Windows, and BlackBerry) has its own push notification service (PNS). An app publisher must register with the PNS of each operating system on which they intend to release the app. The PNS, in turn, provides an API to the app. The app publisher uploads the app to the app store and makes it available to users.
When the user downloads and installs the app, a unique ID for that user and that device is sent to the push notification service of the device’s operating system. These credentials are stored on the app server so that push notifications can be sent in the future after authentication. Because the credentials are specific to the device ID, the application must be deployed to an actual device to be tested.
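To make this registration-and-delivery flow concrete, here is a minimal sketch of the app-server side, assuming a SQLite token store and Android’s legacy FCM HTTP endpoint; the table layout, function names, and the FCM_SERVER_KEY placeholder are illustrative, not taken from any particular project.

```python
import sqlite3

import requests  # pip install requests

# Token store on the app server: one PNS token per user/device pair.
db = sqlite3.connect("push_tokens.db")
db.execute("""CREATE TABLE IF NOT EXISTS push_tokens (
    user_id   TEXT,
    device_id TEXT,
    pns_token TEXT,
    PRIMARY KEY (user_id, device_id))""")

def register_device(user_id: str, device_id: str, pns_token: str) -> None:
    """Called when the installed app reports the token it was issued
    by the platform's push notification service."""
    db.execute("INSERT OR REPLACE INTO push_tokens VALUES (?, ?, ?)",
               (user_id, device_id, pns_token))
    db.commit()

def send_push(user_id: str, device_id: str, message: str) -> int:
    """Deliver a notification via Android's legacy FCM HTTP endpoint.
    FCM_SERVER_KEY is a placeholder credential; an iOS device would
    go through APNs instead."""
    (token,) = db.execute(
        "SELECT pns_token FROM push_tokens WHERE user_id = ? AND device_id = ?",
        (user_id, device_id)).fetchone()
    resp = requests.post(
        "https://fcm.googleapis.com/fcm/send",
        headers={"Authorization": "key=FCM_SERVER_KEY"},
        json={"to": token, "notification": {"body": message}},
    )
    return resp.status_code
```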
Therefore, when we test push notifications, the test cases may be the same for different OSes, but the behavior may be different. For the sake of this discussion, let’s talk about Android and iOS devices.
We create separate test cases for when the app is running in the foreground and when it is running in the background. Let’s say user A is chatting with user B. For both iOS and Android devices, there will be no audiovisual alerts for messages received in the same chat thread. But say that during this time, user C sends a message to user A. On an Android device, user A will receive both an audio and a visual alert, whereas on iOS, they will receive only an audio alert.
When the app is running in the background on Android, alerts remain in the notification panel until the user accesses them via the app. On iOS, the alert is displayed in the notification panel for a few seconds, then it is reflected in the badge count for received messages on the app icon on the home screen.
Even the manner in which messages from multiple senders are displayed differs between Android and iOS. On Android, the messages are grouped together and displayed as, for example, “3 new messages from 2 conversations.” On iOS, messages from individual senders are displayed separately in the notification panel, with the latest message from each sender shown.
On the locked screen, if the user chooses to display push notifications, we should test whether these are displayed as designed and whether they are actionable: whether you can scroll through the messages and reply directly from the popup, without needing to unlock the device.
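One way to keep these platform differences testable is to encode the expected behavior as data. Below is a hedged sketch using pytest; the app_driver fixture is assumed to wrap a device-automation session (for example, Appium), and set_state(), receive_message(), and observe_alert() are hypothetical helpers, not part of any real API.

```python
import pytest

# Expected alert behavior per OS and app state, encoded from the
# scenarios described above.
CASES = [
    # (os_name, app_state, message_in_open_thread, expected_alert)
    ("android", "foreground", True,  "none"),
    ("ios",     "foreground", True,  "none"),
    ("android", "foreground", False, "audio+visual"),
    ("ios",     "foreground", False, "audio"),
    ("android", "background", False, "panel_until_read"),
    ("ios",     "background", False, "panel_then_badge"),
]

@pytest.mark.parametrize("os_name,app_state,same_thread,expected", CASES)
def test_incoming_message_alert(app_driver, os_name, app_state, same_thread, expected):
    # All three helpers below are assumed methods on the illustrative
    # app_driver fixture.
    app_driver.set_state(os_name=os_name, app_state=app_state)
    app_driver.receive_message(same_thread=same_thread)
    assert app_driver.observe_alert() == expected
```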
Push notifications may behave differently for the same app on different platforms, screen sizes, and resolutions.
Multiple Devices
It is not always possible to have all the device-OS configurations handy when you test. To work around this, my team took the usage reports for the target market. Based on these, we took the top four models of each of the mobile brands in use. The idea was to test for the latest version of each device and one version previous to it, thus assuring the client of 80 percent test coverage.
For the first version, we were aiming for 50 percent coverage. For every successive release, the plan was to increase the coverage by 5 percent to 10 percent. Post-release, with an eye on gauging the success of the product, we took a subset of users—those who made up the majority of our consumer base—and assessed their acceptance of the app. The percentage increase in test coverage every time would depend on the business intelligence reports.
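As a rough sketch of how such a device matrix could be derived from usage reports, here is a Python illustration; the report format, brand and model names, and the major-version arithmetic are all assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical rows from the market usage report:
# (brand, model, latest_major_os_version, share_of_users).
usage = [
    ("BrandA", "Model1", 13, 0.18),
    ("BrandA", "Model2", 12, 0.09),
    ("BrandB", "Model7", 16, 0.15),
    # ... remaining rows elided
]

def pick_device_matrix(rows, per_brand=4):
    """Select the top models per brand by user share, and test each
    on its latest OS version plus one version previous."""
    by_brand = defaultdict(list)
    for brand, model, os_ver, share in rows:
        by_brand[brand].append((share, model, os_ver))
    matrix = []
    for brand, models in by_brand.items():
        for share, model, os_ver in sorted(models, reverse=True)[:per_brand]:
            matrix.append((brand, model, os_ver))      # latest version
            matrix.append((brand, model, os_ver - 1))  # one version back
    return matrix

print(pick_device_matrix(usage))
```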
Media Transfer
Media transfer is a vital aspect of a social app. Transmitting media in its original, uncompressed form is cost-prohibitive: it consumes gigabytes of bandwidth and takes far longer to send.
For a social app, data quality is compromised to some extent because digital media content is compressed prior to transmission. There is partial data loss every time that digital content is shared further, but this is barely noticeable. Moreover, the ease of transmission, storage, and retrieval of this data more than makes up for the infinitesimal loss of data quality.
The role of QA here is to verify that the loss of image quality does not exceed the mandated limit. Beyond that check, the focus is not the quality of the media post-transmission but the speed of transmission, which is tested via load testing.
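As one illustration of how a mandated quality limit might be checked, the sketch below compares the sent and received images using peak signal-to-noise ratio (PSNR); the choice of metric, the file names, and the 30 dB threshold are assumptions for the example, not the project’s actual limit.

```python
import numpy as np     # pip install numpy
from PIL import Image  # pip install pillow

def psnr(original_path: str, received_path: str) -> float:
    """Peak signal-to-noise ratio between the image as sent and as
    received; higher means less quality loss. Assumes both images
    have the same dimensions."""
    a = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(received_path).convert("RGB"), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# "sent.jpg"/"received.jpg" are placeholders; 30 dB stands in for
# whatever limit the project mandates.
assert psnr("sent.jpg", "received.jpg") >= 30.0, "quality loss exceeds limit"
```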
We also have to test deleting media on the devices. When you send media via a social app and the recipients download it, local copies of it are created on their devices. As the sender, you have a copy of it on your device too, but you will also have a separate media resource for each recipient, because the app maintains a resource database on your device.
If you try to delete the media from one thread, it should not be deleted from the other threads; only the resource for that recipient should be deleted. Even if you delete all references to that media from all the recipient chat threads, the local copy on your device will still be retained. All references to that media are removed from the device only when you delete this local copy.
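This reference-counting behavior can be sketched as a small data structure; the class, method names, and paths here are illustrative, not the app’s actual schema.

```python
class MediaStore:
    """Sketch of the per-device resource database: one local copy of
    each media file, plus one reference per chat thread it appears in."""

    def __init__(self):
        self.local_copies = {}  # media_id -> local file path
        self.thread_refs = {}   # media_id -> set of thread ids

    def send(self, media_id, path, thread_id):
        self.local_copies[media_id] = path
        self.thread_refs.setdefault(media_id, set()).add(thread_id)

    def delete_from_thread(self, media_id, thread_id):
        # Removes only this thread's reference; other threads and
        # the local copy are untouched.
        self.thread_refs.get(media_id, set()).discard(thread_id)

    def delete_local_copy(self, media_id):
        # Only this call removes the media from the device entirely.
        self.local_copies.pop(media_id, None)
        self.thread_refs.pop(media_id, None)

store = MediaStore()
store.send("img42", "/sdcard/app/media/img42.jpg", "chat_with_B")
store.send("img42", "/sdcard/app/media/img42.jpg", "chat_with_C")
store.delete_from_thread("img42", "chat_with_B")
store.delete_from_thread("img42", "chat_with_C")
assert "img42" in store.local_copies       # references gone, copy retained
store.delete_local_copy("img42")
assert "img42" not in store.local_copies   # removed only now
```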
Load Testing
For load testing, two factors are vital from the QA perspective: speed and user load. To test these aspects, my team used API tools where they specified the load and the expected response time in the API calls, then verified the results from the responses.
Let’s say we have to test for media transfer between a thousand users in ten seconds. We will need to run a thousand API calls—this is where the API test tool helps. From the results of these API calls, we can work out the average response time, basing it also on the type of data being transmitted: text, image, or video.
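A minimal version of such a load script, assuming a plain HTTP API and using Python’s thread pool in place of a dedicated API test tool, might look like this; the endpoint URL and payload shape are placeholders.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # pip install requests

API_URL = "https://api.example.com/v1/messages"  # placeholder endpoint

def send_one(payload: dict) -> float:
    """Fire one API call and return its response time in seconds."""
    start = time.monotonic()
    resp = requests.post(API_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return time.monotonic() - start

# Simulate a thousand users each sending one text message.
payloads = [{"from": f"user{i}", "to": "user0", "text": "hi"} for i in range(1000)]
with ThreadPoolExecutor(max_workers=100) as pool:
    timings = list(pool.map(send_one, payloads))

print(f"avg: {statistics.mean(timings):.3f}s  "
      f"p95: {statistics.quantiles(timings, n=20)[-1]:.3f}s")
```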
Running the API calls locally would make the workflow dependent on the local internet speed, so the test tool server should ideally be located close to the API server. If the test team is geographically remote, the IP address of the test server can be whitelisted with the API server.
Test cases will vary with the type of media being shared. For a video, we should test that the play length of the video is retained after every successive transmission. Or say a hundred users are sending a video to one user: using our test tool, we run the script so that it behaves as if all users are sending the same video file, letting us assess the response time uniformly.
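For the play-length check, a sketch like the following could compare durations before and after a hop, assuming ffmpeg’s ffprobe is installed; the file names are placeholders.

```python
import subprocess

def video_duration(path: str) -> float:
    """Read a video's play length in seconds via ffprobe
    (ships with ffmpeg)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

# After each send/download hop, the play length should be unchanged
# (within a small tolerance).
assert abs(video_duration("original.mp4") - video_duration("received.mp4")) < 0.1
```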
Here’s another scenario: How many chat threads can a user launch while uploading videos simultaneously? They could be uploading the same video to multiple chat threads, or they could be trying to upload multiple videos. For this project, my team capped video uploads at a maximum of four simultaneous threads.
To find an optimum number, we kept increasing the number of users until we identified a deterioration in performance, then stopped there. We identified the time required for that specific action, then worked backward from the permissible lag in response time. Taking internet speed into consideration, a status change for the user should be visible within a specified time, say, one second. Considering that almost 90 percent of that time is consumed in transmission over the internet, we had just 10 percent of the time left: a tenth of a second for the server to process the message.
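The ramp-up procedure can be sketched as a simple loop; run_load is an assumed function that drives n concurrent API calls (as in the earlier load-script sketch) and returns the average response time, and the 0.1-second SLA echoes the tenth-of-a-second budget above.

```python
def find_max_concurrency(run_load, sla_seconds=0.1, start=50, step=50):
    """Ramp up the number of simulated users until the average
    response time breaks the SLA, then report the last level that
    passed. run_load(n) is assumed to return average response time
    in seconds for n concurrent calls."""
    users, last_good = start, 0
    while users <= 10_000:  # safety cap for the sketch
        if run_load(users) > sla_seconds:
            break
        last_good = users
        users += step
    return last_good
```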
Think about the Users
There are many popular social media apps out there, and they all function differently. But there are some core aspects they have in common, and users expect these aspects to work well for them regardless of their device or operating system, or whether they want to send four videos at once. It’s important to design test scenarios that take the primary social media app capabilities into account so that every user can have a good experience.