
TestPlant: automation and the limits of artificial intelligence


TestPlant, the London-based testing software specialist backed by the Carlyle Group investment firm, was a first-mover in automation for mobile apps when it opened for business in 2008. Its eggPlant automation toolkit is now widely used for functional, load and performance testing across different platforms and different industries.

Among recent client wins, TestPlant has announced a partnership with Citibank to implement test automation, and it has also worked with Spain’s Banco Sabadell and the UK’s Nationwide Building Society.

QA Financial met up with Antony Edwards, TestPlant’s chief technology officer, to talk about how some banks are still getting test automation wrong and what the real benefit of artificial intelligence might be to testing.

Why are banks increasingly investing in test automation?

The adoption of test automation has been spurred by the move to agile. In a waterfall project with a one- or two-year timeline, accelerating testing is not critical. Even if testing takes a month, it is only a small fraction of total development time. It is when you are working with short release cycles, such as two weeks, that testing starts to become a bottleneck – and this is where automation pays off.

Regulation also has an impact. With the incoming European Union General Data Protection Regulation [EU rules that will unify data protection for individuals, which comes into force in 2018], banks will have to be very careful about who they share their data with. One consequence of this is that they are increasingly near-shoring their testing because of concerns around data privacy. Testing costs go up because salaries are not as competitive. If banks want to hold onto some of the savings they made from off-shoring, they need to implement automation and reduce the number of man-hours employed.

How advanced are banks and other financial firms in terms of test automation?

Test automation is still in its infancy. Most industries automate maybe just over 25% of all their testing. You would not know this from attending test conferences, where all kinds of futuristic ideas are circulated. The truth is that after 20 years of test automation, we are still not there in terms of adequate automated test coverage.

That is where TestPlant comes in. We position ourselves as providers of tools that have complete coverage. A lot of the open-source tools have blind spots; they might not work with the latest release of iOS, for example. We try to make test automation simple and straightforward, so you can get end-to-end coverage without needing incredible coding skills.

Where are banks going wrong in terms of test automation?

Some industries, and this includes finance, have a very technology-heavy approach to testing. They know how to automate, but they do not have a thorough knowledge of the products they are testing and end up not testing the most important things. Other industries – and by the way I think it is a better way of doing things – have more of a business-analyst approach to testing. Testers at Bloomberg, for example, know how a trader will use their product. They will look at the product from the point of view of the consumer.

Here’s one example of what can go wrong with an overly technical approach. I know one test consultant who went to a tier-one bank to help with their testing. He found that their quality assurance department had built 10,000 automated test cases for an app, which sounds impressive. The problem was that they had automated testing for the simplest features – 90% of the test coverage was for things like the home page and the settings. He went in and replaced them with just 80 test cases.

I think people are starting to realise that simply automating as many test cases as possible is not a smart approach. What matters more is how users will engage with the product. In any case, we have to make sure that our tools cater to both of those tester profiles.

Can artificial intelligence or machine learning help with testing?

There is a lot of talk about using artificial intelligence to test. It is an area I have looked into, but so far I do not see anyone doing it in a useful way. Where I do see potential is using AI for test management. Rather than involve AI in the actual test process, you use it to help plan testing beforehand.

A practical example of AI test management: imagine you are going to create an app that enables customers to apply for a mortgage. The AI would draw on your past projects and historical data, comparing your current project with similar ones in the past. Based on that data, it can tell you where to focus your testing efforts. It might predict that after testing for a certain amount of time you will have found five critical defects, so you can concentrate resources – such as code reviews – where they will be needed most. I know that TCS, Accenture and Wipro are all working on projects like this, and it is an area we are interested in.
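The idea of predicting defects from similar past projects can be sketched as a simple nearest-neighbour estimate. The feature set (code size, team size, test hours) and the historical figures below are invented for illustration; a real test-management system would use far richer project metadata and models than this.

```python
# Hypothetical sketch of "AI for test management": estimate how many critical
# defects a new project will surface, by averaging the defect counts of the
# most similar historical projects. All data here is invented.

import math

# Historical projects: (kloc, team_size, test_hours) -> critical defects found
history = [
    ((120, 8, 400), 5),
    ((30, 3, 120), 1),
    ((200, 15, 900), 11),
    ((90, 6, 300), 4),
]


def distance(a, b):
    """Euclidean distance between two project feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def predict_defects(project, k=2):
    """Average the defect counts of the k most similar past projects."""
    nearest = sorted(history, key=lambda item: distance(item[0], project))[:k]
    return sum(defects for _, defects in nearest) / k


# A new mortgage app roughly the size of the first and fourth projects above.
estimate = predict_defects((100, 7, 350))
print(f"expected critical defects: {estimate}")  # → 4.5
```

Even this toy version captures the planning value Edwards describes: the prediction arrives before any test is run, so it can steer where test effort and code reviews go.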

  • Antony Edwards will be chairing a panel session on mobile app testing at the QA Financial Forum London, which takes place at the Hilton, Canary Wharf, on April 5th 2017. See here for more details.