AI is the most common buzzword nowadays. And if you think AI is all about adding chatbots to your application, you are totally mistaken! The use cases for AI are numerous and the possibilities seem endless.
When we talk about AI and testing, there are two distinct phrases – ‘AI in Testing’ and ‘Testing AI’. The first is about applying AI to testing strategies and efforts; the other is about testing AI applications, algorithms and products. This article is about the former.
Can AI enhance Quality Assurance efforts?
The answer is ‘yes’. After all, testing involves human intelligence, and AI can be used to augment it wherever possible. With the IT industry looking to robotics, AI and machine learning to improve customer experience and speed time to market, it’s time to apply these tools and concepts to testing to accelerate the testing life cycle and deliver a high-quality product. We already live in a world where bots can test APIs and take over some manual testing work.
How AI can Help in Quality Assurance and Test Management
Most applications developed nowadays interact with other systems, mostly through APIs, and as system architectures evolve, the addition of legacy systems makes these interactions even more complex. According to the World Quality Report 2018-19, end-user satisfaction is at the top of testing-strategy goals, and the 2016-17 World Quality Report suggests that AI will help the testing community – “We believe that the most important solution to overcome increasing QA and Testing Challenges will be the emerging introduction of machine-based intelligence,” the report states.
Given that, Artificial Intelligence can improve the way quality testing is done. Some areas where AI and machine learning can be applied are:
- Wherever manual repetitive tasks are performed – by implementing bots to take over the manual efforts
- When there is a huge amount of data to analyze and test
- Understanding the trends in domain-specific bugs using data science tools and preventing them even before they are introduced into the application
- Analysis of data, such as a site’s traffic over a period, for planning your performance and load testing effectively
This is not an exhaustive list, just a few examples; the possibilities are open and abundant.
Let’s take a use case showing how AI can boost your automation tests through successive levels.
At the base level, you have an automation framework that runs automated tests again and again, but writing the test code itself can be repetitive. Say a small change comes in, like adding a new field to a form: you need to manually write code in the automation suite to test the new field. Likewise, adding a new form to a page requires writing tests for each of its fields and buttons, and adding a new page means testing all the components on that page. And as developers keep introducing changes to existing code, your automation tests may fail, and you need to re-verify whether each failure is a false negative or a real bug.
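The repetition described above can be sketched in a few lines. This is a toy illustration, not any real framework’s API: each new form field means another near-identical validator plus its own checks, which is exactly the manual effort AI tools aim to take over.

```python
# Hand-written checks for each field of a hypothetical signup form.
# Every new field means writing another near-identical block like these.

def validate_email(value: str) -> bool:
    # Minimal format check: text, then "@", then a dotted domain.
    name, sep, domain = value.partition("@")
    return bool(name) and sep == "@" and "." in domain

def validate_age(value: str) -> bool:
    # Age must be a whole number in a plausible range.
    return value.isdigit() and 13 <= int(value) <= 120

# Adding a new "phone" field would mean yet another validate_phone
# function plus its own set of hand-written test cases below.
assert validate_email("ada@example.com")
assert not validate_email("not-an-email")
assert validate_age("30")
assert not validate_age("7")
```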
At the first level of leveraging AI, you could use a tool that ‘looks’ at a page holistically, through the DOM or a screenshot, and helps write the tests you’d otherwise write manually. Instead of writing code for each component or functionality on a page, it could all be done in one swoop.
For this level of testing, the tools need AI algorithms to determine which changes are not really ‘changes’ and which are real. At this level you are still driving the tests, but AI can automatically run checks against a baseline and report failures, which still require manual intervention to recheck.
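A minimal sketch of such a baseline check, with everything invented for illustration: a page “snapshot” is a dict of element names to rendered values, and attributes an AI model has learned to treat as noise are ignored, so only meaningful differences get reported.

```python
# Toy baseline comparison. COSMETIC stands in for the attributes an
# AI model might learn to ignore; a real tool would infer these.
COSMETIC = {"timestamp", "session_id"}

def diff_against_baseline(baseline: dict, current: dict) -> list:
    """Return (key, old, new) for every non-cosmetic difference."""
    changes = []
    for key in baseline.keys() | current.keys():
        if baseline.get(key) != current.get(key) and key not in COSMETIC:
            changes.append((key, baseline.get(key), current.get(key)))
    return changes

baseline = {"title": "Checkout", "total": "$40", "timestamp": "10:01"}
current  = {"title": "Checkout", "total": "$42", "timestamp": "10:05"}
print(diff_against_baseline(baseline, current))  # only "total" is reported
```

The timestamp differs on every run, but only the changed total is flagged for review.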
At the second level, AI could group all the ‘changes’ found across many pages of your application by understanding the semantics of those changes. If a test is identified as failing due to the same change, all the tests involving that change could be grouped and dismissed together as a false negative. Here, the AI algorithm should be able to tell when changes are the same and ask you to confirm or reject them as a group, sparing you the tedious task of rechecking every failed case manually.
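The grouping idea can be sketched like this; the failure records and “change signature” scheme are invented for illustration, since a real tool would derive the signatures with a learned model rather than a literal string match.

```python
from collections import defaultdict

# Toy sketch: failed tests grouped by a shared "change signature", so
# one human decision (real bug vs. false negative) covers the group.
failures = [
    {"test": "test_login_button", "change": "button-color:blue->green"},
    {"test": "test_signup_button", "change": "button-color:blue->green"},
    {"test": "test_cart_total", "change": "price:$40->$42"},
]

groups = defaultdict(list)
for f in failures:
    groups[f["change"]].append(f["test"])

# Rejecting "button-color:blue->green" as a false negative clears two
# failed tests in one decision instead of two manual rechecks.
for change, tests in groups.items():
    print(change, "->", tests)
```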
At the third level, AI could automatically determine whether a failed test is a bug. Up to the second level, AI captures changes, analyzes them, groups them, and compares them against a baseline to identify candidate bugs. Moving a level up, you could have AI judge whether a page is correct by applying machine learning techniques to the pages themselves. That means checking visual aspects, like the use of white space, color and fonts, as well as data aspects, like form field types and allowed values. For example, at this level your automation tool could detect a change and also decide whether that change is expected or a bug by understanding the application’s design and data rules.
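To make “design and data rules” concrete, here is a deliberately simple rule-based sketch; a third-level tool would learn such rules from the application rather than have them hand-coded, and every name here is invented.

```python
# Toy data rules a third-level checker might apply to a page model.
RULES = {
    "quantity": lambda v: v.isdigit() and int(v) >= 1,  # whole number, at least 1
    "email": lambda v: "@" in v,                        # crude format check
}

def check_page(fields: dict) -> list:
    """Return the names of fields that violate their data rule."""
    return [name for name, rule in RULES.items()
            if name in fields and not rule(fields[name])]

page = {"quantity": "0", "email": "user@example.com"}
print(check_page(page))  # ['quantity'] - a zero quantity violates the rule
```

A change that passes every rule would be accepted as expected; a violation would be reported as a likely bug.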
AI could also look at hundreds of test results, see how things change over time, and surface anomalies for a human to review.
At an even higher level, AI could drive the tests automatically, rather than you manually creating and driving them. AI can differentiate between pages (for example, a cart page vs a profile page) and understand their semantics, enabling it to drive the tests as per the respective flow of interactions. Even for pages new to the AI system (for which there might be no historical data to feed the machine learning models), the system could understand them from user interactions and ‘learn’ the flow over time.
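Telling a cart page from a profile page can be sketched as matching DOM features against known page signatures; the element ids, page names and nearest-signature rule are all invented stand-ins for what a trained model would learn.

```python
# Toy page classifier: pick the page type whose known element ids
# overlap most with the ids actually found in the DOM.
PAGE_SIGNATURES = {
    "cart": {"checkout-button", "item-list", "total-price"},
    "profile": {"avatar", "edit-button", "order-history"},
}

def classify(dom_ids: set) -> str:
    return max(PAGE_SIGNATURES,
               key=lambda page: len(PAGE_SIGNATURES[page] & dom_ids))

page = {"item-list", "total-price", "promo-banner"}
print(classify(page))  # 'cart' - two of the three cart features matched
```

Once a page is classified, the driver would pick the matching interaction flow, say, add-to-cart and checkout steps for a cart page.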
AI could be employed to evolve more complex solutions, and the opportunities could be endless as technologies advance. In their current state, AI tools are mostly at the first level and moving towards the second; the higher levels still need work but are nonetheless on the AI-for-testing road map.
What does this mean for businesses?
The software development life cycle becomes more efficient, with shorter delivery time spans. Releases that used to happen once a month can be done weekly, as continuous automated testing reduces the time required to complete quality assurance. What’s more, by employing AI tools you could save hugely on the cost and manual effort of designing, developing, deploying and maintaining automation test suites.
Machines that mimic human behavior can take traditional testing models to a new high. Quality testing empowered with Artificial Intelligence can provide a continuous testing platform that identifies problems more efficiently, with tooling that is configured once and then runs as automation.
Impact on QA & testers
AI is not meant to “replace” testers, but to enhance the efficiency and throughput of testing efforts. While tools can be employed to automate the tiresome, time-consuming, monotonous manual tasks that require minimal intelligence, human intelligence can focus on business value and on how to improve the quality of the product. So rest assured: machines are not going to replace you!
Milman and Carmi of Applitools project AI as a testing assistant: “First, we’ll see a trend where humans will have less and less mechanical dirty work to do with implementing, executing, and analyzing test results, but they will be still integral and necessary part of the test process to approve and act on the findings.”
The other side of the coin
One of the biggest challenges would be: how would an AI that tests know that the system under test is correct? For a human, this knowledge comes from a source of truth such as a Product Owner, Business Analyst or other stakeholder. So what would be the source of truth for an AI-based tool? This opens the door to ‘testing the AI tools that test AI-based applications’, which in turn gives testers the opportunity to pick up a whole new set of skills to build and maintain AI-based test suites that test AI-based products. Test engineers would need data science skills and an understanding of deep learning principles.
So, what are you waiting for? There’s a whole plethora of opportunities out there for the testing and QA community to explore.