
Turning Quality Assurance into a Profit Centre with AI


Two years ago, Carlyle Group was looking for a new CEO for Eggplant, the UK-based test automation and AI specialist, which is majority-owned by Carlyle’s European technology fund. Carlyle chose Dr John Bates, a former technology academic who had previously co-founded Apama, the streaming analytics business, which was ultimately sold to Software AG, the German giant, in 2013.

After managing Apama for its new owner, Bates moved on to Silicon Valley, where he became CEO of PLAT.ONE, an SAP business offering an Internet-of-Things platform for manufacturing firms. He has written a book on the IoT – Thingalytics – and between 2010 and 2016 served on the technology advisory committee of the US Commodity Futures Trading Commission. Earlier this year, Eggplant announced it had secured NTT Docomo, Japan’s leading mobile provider, as a client. However, the single largest growth opportunity for Eggplant over the next few years – as with its closest competitors in test automation, Tricentis and SmartBear – is in the financial services vertical. QA Financial interviewed John Bates in his City of London office.

Q: Eggplant describes its product as “continuous intelligent AI-assisted test automation”. Can you explain that?

A: We have a fairly unique approach to the way we can automate testing. Anything that has an interface, we can automate. In the digital world, dominated by the cloud and mobile devices, testing is going to be increasingly done through the user interface and through APIs. It’s getting more complex to test, rather than easier, because the architectures are getting more complicated; there are more moving parts. Our system can ‘see’ the images on a screen and understand them, whether we’re testing for an iPad or a Windows machine. It can compare the images, not precisely but with an intelligent fuzziness, and understand them, reading the text to see if it is working. This enables the automation. We’ve been very lucky, for example, to have worked for NASA to verify the cockpit displays on the Orion space vehicle project.
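To picture the kind of “intelligent fuzziness” Bates describes, here is a minimal, hypothetical sketch of image-based UI checking in Python: compare a captured screen against a reference within a tolerance, then read the on-screen text. It illustrates the general idea only, not Eggplant’s implementation; it assumes the Pillow and pytesseract libraries, and the file names and expected text are placeholders.

```python
# Illustrative sketch only: not Eggplant's implementation.
# Assumes Pillow and pytesseract are installed; reference.png and
# capture.png are placeholder screenshots.
from PIL import Image, ImageChops, ImageStat
import pytesseract

def images_roughly_match(reference_path, capture_path, tolerance=12.0):
    """Compare two screenshots with a tolerance: small rendering
    differences pass, large divergences fail."""
    ref = Image.open(reference_path).convert("L")
    cap = Image.open(capture_path).convert("L").resize(ref.size)
    diff = ImageChops.difference(ref, cap)
    mean_diff = ImageStat.Stat(diff).mean[0]  # 0 = identical, 255 = inverted
    return mean_diff <= tolerance

def screen_text_contains(capture_path, expected):
    """OCR the captured screen and check the expected text is present."""
    text = pytesseract.image_to_string(Image.open(capture_path))
    return expected.lower() in text.lower()

if images_roughly_match("reference.png", "capture.png") and \
        screen_text_contains("capture.png", "Order confirmed"):
    print("UI check passed")
else:
    print("UI check failed")
```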

Q: Where do your products sit alongside open-source tools such as Selenium, and also the other firms that sell “off-the-shelf” test automation tools?

A: Well, we are not a competitor to Selenium. That might have been the perception when I joined Eggplant, but it’s since changed. We are something completely different and complementary. One of the things that we do in our digital automation intelligence suite is call out to other underlying frameworks like Selenium and plug our automation model into theirs. Imagine a model-driven approach to test automation, in which each step could be automated using different technologies.
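What “plugging our automation model into theirs” could look like in practice is sketched below: an abstract model of test steps in which each step delegates to an underlying framework, here Selenium for the web steps. The class and function names, URL and element IDs are hypothetical, chosen only for illustration; this is not Eggplant’s API.

```python
# Hypothetical sketch of a model-driven test whose steps delegate to
# Selenium. SeleniumStep and build_login_model are illustrative names,
# not Eggplant's API; the URL and element IDs are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

class SeleniumStep:
    """One step in the abstract model, executed through Selenium."""
    def __init__(self, description, action):
        self.description = description
        self.action = action  # callable taking a WebDriver

    def run(self, driver):
        print(f"step: {self.description}")
        self.action(driver)

def build_login_model():
    """An abstract user journey; each step could equally be bound to a
    different underlying technology (image-based, API-level, etc.)."""
    return [
        SeleniumStep("open login page",
                     lambda d: d.get("https://example.test/login")),
        SeleniumStep("enter username",
                     lambda d: d.find_element(By.ID, "username").send_keys("demo")),
        SeleniumStep("submit form",
                     lambda d: d.find_element(By.ID, "submit").click()),
    ]

if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        for step in build_login_model():
            step.run(driver)
    finally:
        driver.quit()
```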

And we’ve done a couple of things to support open-source communities, such as providing some advanced scripting functionality under a special licence for the JMeter community. We also released Eggplantium, which is a way for Selenium users to call out to mobile testing capabilities. We’re fine living alongside open source. As for Tricentis and SmartBear, we respect all other vendors and there’s a big opportunity for everyone.

Like us, they have strong private equity backers [Tricentis is majority-owned by Insight Venture Partners and SmartBear by Francisco Partners]. But there are differences. Tricentis has an old-style perpetual licence business model. We’re more about annual recurring revenue and an enterprise software model. SmartBear is high-velocity, low-touch, buy-it-online. So, three very different approaches. All leaders in their own way. Some people have characterised the competition between us as: “who is the anointed replacement for HPE/Micro Focus [test automation tools]?” But I see this as being more about how the work we do is fundamentally changing.

It’s becoming less about the execution of the automation and more about the harvesting, which is analysing the application. Has the app changed? If so, can we re-create the model of that test? It’s also increasingly about how we measure success. It used to be about how many tests are we running. Now it’s about whether we are meeting our business goals from this digital product.

In retail, this translates into questions such as: are we getting the revenue, the basket conversions and the renewals that we wanted? Can I measure when we are delighting our customers, and where can I improve? QA is becoming a critical business function, a profit centre.

Q: What are your customers in the financial vertical asking for, and how does AI help in testing?

A: There’s obviously a drive to do more with automation across capital markets, banking and insurance. In some key respects, digital banking is just like other areas of digital.

Your app is measured by stars in the app store and the digital experience is how you capture people. So being able to manage customer delight and customer problems is very important. We recently commissioned a survey from YouGov that covered hundreds of retail and financial firms. It showed that, when it comes to keeping customers, particularly millennials, a slow website is worse than one that goes down. So, customer experience is important. More generally, banking and insurance firms are like anyone else in the DevOps world – they have moved from a release every six months to a release every week, or daily. The velocity of testing needs to be increased.

We can automate more things and do more testing, and better testing, in a compressed time. With the AI component of what we do, it’s not just about running tests, but analysing the application and automatically re-creating tests, and recomputing customers’ journeys.

It’s AI as your wing-person for improving coverage. If you think about traditional regression testing, you are running the same tests every time. That’s fine if you’ve thought of the scenario that is actually going to arise. But what about some new code that has been written perhaps by a developer who is less reliable than you would like, or a new model that hasn’t been tested? This is where you need a new method: exploratory testing and then a model-driven approach that can be navigated by AI reasoning in areas where tests have not been written previously. We use an ensemble of algorithms to apply our layer of AI – a number of different techniques which maximise the coverage. An example is a combination of AI reasoning and Chaos Monkey, a piece of software that riffs around anomalies a human could generate. So, for example, logging in with no username and no password, or typing something in a strange way, or doing things in a strange order.
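As a rough illustration of the kind of anomalies described above, the hypothetical Python sketch below generates login attempts with empty or oddly typed credentials and steps in a strange order. The attempt_login() hook is a placeholder for whatever application is under test; none of this is Eggplant’s code.

```python
# Hypothetical sketch of chaos-style exploratory inputs for a login form:
# empty fields, oddly typed strings, steps in a strange order.
# attempt_login() would be a hook into the application under test.
import random
import string

def strange_string(max_len=24):
    """A randomly 'typed' value: odd characters, whitespace, varying length."""
    alphabet = string.ascii_letters + string.digits + "  !@#€ñ漢"
    return "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, max_len)))

def generate_login_attempts(n=20):
    """Riff around anomalies a human tester might not think to try."""
    attempts = []
    for _ in range(n):
        steps = ["enter_username", "enter_password", "click_login"]
        random.shuffle(steps)  # do things in a strange order
        attempts.append({
            "username": random.choice(["", " ", strange_string()]),
            "password": random.choice(["", strange_string()]),
            "order": steps,
        })
    return attempts

for attempt in generate_login_attempts(5):
    print(attempt)
    # attempt_login(**attempt)  # placeholder hook into the app under test
```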

Q: What about capital markets and investment banking? Increasingly, banks are looking to link their trading desks in real time to their risk engines, and Basel regulators are driving that with the Fundamental Review of the Trading Book. Is your kind of testing relevant to this?

A: Yes, and I worked in that world for many years. We went into an investment bank recently, which wanted to use Eggplant to test a real-time bond-pricing engine. The punchline was that the engine was built with Apama: a model taking in multiple data streams from electronic bond markets, to get an aggregated view of liquidity and then skew off that to make a price. So, complexities arise, first around ensuring that the calculations are being done correctly and, second, testing that they are being rendered correctly; that the information is coming out the right way. You need to take a fusion approach to that challenge because you want to look at underlying values and how things are being calculated, and then compare that with how things are being shown on the screen. And you have to do that in real time: perception is reality. Eggplant uses that fusion approach to automation, by which I mean that we can look at the underlying object within the same script or model that we use to look at what’s appearing on the interface.
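A minimal sketch of that fusion idea, purely for illustration: poll the value the pricing engine computes and the value rendered on the trader’s screen, and flag any divergence. get_engine_price() and read_price_from_screen() are hypothetical stand-ins for an engine API call and an image-based read of the interface; they are not Eggplant functions, and the bond identifier and prices are made up.

```python
# Hypothetical sketch of a 'fusion' check: compare the value the pricing
# engine computes with the value rendered on the trading screen.
# get_engine_price() and read_price_from_screen() are placeholders, not
# Eggplant calls; the bond identifier and prices are made up.
import time

def get_engine_price(bond_id):
    """Stand-in for querying the bond-pricing engine directly."""
    return 101.4375

def read_price_from_screen(bond_id):
    """Stand-in for an image-based read of the rendered price."""
    return 101.44

def fused_check(bond_id, tolerance=0.01, interval=1.0, checks=5):
    """Poll both sources and flag any divergence between what is
    calculated and what is shown: perception is reality."""
    for _ in range(checks):
        engine = get_engine_price(bond_id)
        screen = read_price_from_screen(bond_id)
        status = "MISMATCH" if abs(engine - screen) > tolerance else "ok"
        print(f"{status} {bond_id}: engine={engine} screen={screen}")
        time.sleep(interval)

fused_check("EXAMPLE-BOND-1")
```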

Think in terms of functional and performance testing, as well as usability testing, all in the same engine, which also has the ability to call out to API testing and other testing systems. This gives you the power to look at data coming in and how it is being rendered, without having to switch between apps. The testing is non-intrusive. Financial firms’ algorithms are top secret; they are their IP, so that non-intrusiveness is very important to them. So, yes, there are two key opportunities in finance for us: retail UX and capital markets, where the requirement is to fully integrate systems and test not just for functionality, but under many different loads. So, let’s simulate non-farm payrolls, or Black Friday, or even Brexit!
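The sketch below illustrates, in the simplest possible terms, what simulating an event-driven load spike might look like: baseline traffic, a surge when the announcement hits, then a cooldown. The endpoint and the spike profile are invented for illustration, and a real performance test would use a dedicated load tool rather than raw threads.

```python
# Hypothetical sketch of a load-spike profile (e.g. a non-farm payrolls
# release): baseline traffic, a surge, then a cooldown. The URL is a
# placeholder; real load testing would use a dedicated tool.
import concurrent.futures
import time
import urllib.request

TARGET = "https://example.test/prices"  # placeholder endpoint

def hit_endpoint(_):
    """Make one request and return its latency, or None on failure."""
    start = time.time()
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            resp.read()
        return time.time() - start
    except Exception:
        return None

def run_phase(name, concurrent_users, duration_s):
    """Fire requests at a given concurrency for a given duration."""
    deadline = time.time() + duration_s
    latencies = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        while time.time() < deadline:
            latencies += [r for r in pool.map(hit_endpoint, range(concurrent_users)) if r]
    avg = sum(latencies) / len(latencies) if latencies else float("nan")
    print(f"{name}: {len(latencies)} successful requests, avg latency {avg:.3f}s")

# Quiet market, then the announcement hits, then it tails off.
for phase, users, secs in [("baseline", 5, 10), ("spike", 50, 10), ("cooldown", 10, 10)]:
    run_phase(phase, users, secs)
```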

Q: You mention the secretiveness of firms’ algorithms. That’s been seen as a problem by some regulators. How do you see those regulators dealing with the challenge of supervising the way firms are automating and using AI?

A: It is difficult. I previously served on the CFTC’s technology advisory committee, which was set up by one of the CFTC’s commissioners, Scott O’Malia [now CEO at ISDA, the derivatives dealers’ trade association]. And when I was at Apama, our work for financial services firms was based on finding patterns in data which could be used for algorithmic trading, but which could also be used for market surveillance, which was why Scott was interested in getting me involved. The question was: could you analyse an algorithm to find out if it was going to be bad for markets? My book covers a number of examples of things that have gone wrong in markets, such as [NYSE market maker] Knight Capital, which lost $440m in 2012.

That could have been avoided because the cause was a bug. They had an active test market within their trading algorithm that they hadn’t switched off, and it started sending out prices that were way off the market. But while we can say these things are avoidable, the core issue has been that traders have been driving around in Ferraris and the regulators have been trying to keep up with them on a bicycle.

Q: What about the broader opportunity for Eggplant? What’s the revenue potential in your marketplace?

A: Most analysts put the annual revenue value of the test automation market at around $2 billion, and the market for testing services at $40 billion. This indicates that there are things that you can’t automate and people are important. But intelligent test automation will augment manual testing and fewer people will be required. Perhaps a manual tester can become an automation analyst. This is a new market opportunity, a subset of automation, which will supercharge humans to become much more productive.

The other opportunity is the CI/CD and DevOps marketplace, the opportunity to monitor and test apps once they are in production. How can we learn from the post-production world and what people are really doing with apps to identify business risks and derive a measurable RoI? That’s what we call customer experience optimisation and that’s where we think we have a unique approach. And it is true DevOps, because it goes across both Dev and Ops.