AI in Testing - The Imitation Game

Last Sunday I happened to come across the movie The Imitation Game. It is a true story about Alan Turing and how he helped the British decrypt messages produced by the German Enigma machine, eventually helping them win the war. It is a must-watch movie for all the AI enthusiasts out there who are in the field of testing. In this post I will try to explain why I felt so, and how the events since the start of this year motivated me to write this article.

The hot trend in testing this year is definitely AI. There are a few who understand its power, a few who are skeptical, a few like me who still don't know how and where it will be used, and a few who have no clue what it is. There is a great course on Udemy by Sujal Patel, which I took to understand AI in testing; it is available for free, bless the man. Here's the link - https://www.udemy.com/introduction-to-artificial-intelligence-in-software-testing/. I hope it helps some of you, like it helped me.

Yesterday, I went to a workshop conducted by Pradeep Soundrajan, in which he explained many areas of the testing cycle where automation can be applied effectively, beyond just test execution. He also stressed using analytics on the test data you generate to increase your credibility as a tester. As he was explaining the role of AI in testing, he stressed HI - human intervention. The moment he said so, my mind went back to the movie. Now I won't spoil the movie for you, but to finally break the code they needed human intervention, because the machine, with all its permutations and combinations, couldn't decrypt the Enigma code within twenty-four hours. And by the way, the machine Alan Turing built - as I hope you all know - formed the basis of what we now call computers. But that machine needed human intelligence to be designed in the first place, and then a human heuristic to actually crack the code.

Another thing stressed in his talk was that we like to see our automated tests passing, so testers who are now made to think like developers are scared to see failed automated tests. This reminded me of a talk I came across on LinkedIn by Rahul Verma; I was now able to understand its essence and appreciate it more. If you are interested, find it here - https://www.youtube.com/watch?v=l13C3NM-8HI. Is this true for your automation team? Do you analyse whether an automated test failed because the application changed? Was it an issue with the script, or was it a bug? Is the development team a stakeholder in automation? Can AI help us here?
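To make those triage questions concrete, here is a minimal sketch of what a first-pass failure analysis could look like. The categories and keyword rules are purely my own illustrative assumptions, not any real framework's API; the point is that a failed test is a question to be answered, not just a red mark.

```python
# Hypothetical sketch: triaging automated test failures before blaming the script.
# The failure buckets and keyword rules below are illustrative assumptions only.

def triage_failure(error_message: str) -> str:
    """Classify a failed test into a rough root-cause bucket."""
    msg = error_message.lower()
    if "no such element" in msg or "locator" in msg:
        # The app's UI likely changed; the script needs updating.
        return "script-issue: application changed"
    if "timeout" in msg or "connection" in msg:
        # Flaky environment rather than a product defect.
        return "environment-issue"
    if "assert" in msg or "expected" in msg:
        # The app really behaved differently - a candidate bug.
        return "possible-bug: raise with development team"
    return "unclassified: needs human analysis"

print(triage_failure("NoSuchElementException: locator '#login' not found"))
print(triage_failure("AssertionError: expected 5 items, got 4"))
```

Even a crude classifier like this makes the point: some failures mean "fix the script", some mean "fix the environment", and some mean "talk to the developers" - which is exactly where the human, and perhaps the AI, comes in.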

Recently the organization Test.ai, founded by Jason Arbon, released a plugin for Appium, still in alpha, which uses AI for object recognition. There is also a great initiative by Applitools called Test Automation University. Angie Jones is the director of this university, and some great courses are available there. One such course, which covers AI for object recognition and is now available with Appium (thanks to Jason), can be found here - https://testautomationu.applitools.com/ai-for-element-selection-erasing-the-pain-of-fragile-test-scripts/

There is another organization, Autonomiq, which is working on autonomous testing. On one of my projects I happened to see their awesome product and was fascinated by it. They are doing some really good work applying AI to the pain areas of test automation, so that the tester can just focus on the testing aspect of the software and leave the management of artefacts and reporting to the tool. Another organization in the market is Testim; you will find it mentioned in Sujal's Udemy course if you happen to go through it.

So what happens if Selenium itself comes with AI - if it becomes smart enough to find an object even when the description of the object changes? There is another must-watch talk by Jason on this - https://www.youtube.com/watch?v=3YLxZfZHnPg. I wonder whether the people behind Selenium are thinking about it. Or maybe the work has already started!
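The core idea behind such "self-healing" locators can be sketched very simply: if the primary attribute of an element changes, score the candidates on the page by how many of their remembered attributes still match. The element model and scoring rule below are my own simplifications, not Selenium's or Test.ai's actual API.

```python
# Hypothetical sketch of a self-healing locator: when the exact description
# no longer matches, fall back to the most similar candidate on the page.
# The dict-based element model and scoring are illustrative assumptions.

def find_element(candidates, remembered):
    """Return the candidate most similar to the remembered description, or None."""
    def score(el):
        # One point per remembered attribute the candidate still matches.
        return sum(1 for k, v in remembered.items() if el.get(k) == v)
    best = max(candidates, key=score)
    return best if score(best) > 0 else None

# The 'id' changed from 'btn-submit' to 'btn-send', but tag and text still match.
page = [
    {"tag": "input", "id": "search-box", "text": ""},
    {"tag": "button", "id": "btn-send", "text": "Submit"},
]
remembered = {"tag": "button", "id": "btn-submit", "text": "Submit"}
print(find_element(page, remembered))  # still finds the button
```

A real implementation would of course weigh attributes, use visual or ML-based matching, and flag the healed locator for review - but even this toy version shows why a changed `id` need not break the test.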

But then, as I write all this here, another thought comes to me from Pradeep's talk, where he spoke about a lot of force being downstream: there is a lot of pressure on testers to ensure quality, whereas quality is everyone's responsibility. If developers wrote their unit tests properly, if requirements were captured bug-free, if the world were ideal, then maybe during RCA meetings it wouldn't be the testers who got shouted at first. I wonder if BDD is the answer to these challenges the industry faces. I wonder if we as an industry have understood that BDD is not a way to write test cases but a way for an entire team to communicate with each other - and that no change in the technical layer should impact the feature file. I hope that before applying BDD it becomes mandatory for teams to go through the actual Cucumber School course, or to travel to wherever these folks are conducting their workshops.

As I summarize my thoughts here, I hope that the role and importance of the manual tester does not disappear into oblivion. As the testing community bravely faces this onslaught of tools, frameworks, technologies and processes, all in the name of quality, I hope they don't forget their own sanity. And may we all keep that human element in quality alive - because, well, watch the movie :)