Test pricing: Automation brings new benchmarks

Shalini Chaudhari, Accenture

In the old days, when testing was something done at the end of software development, the metrics of pricing were simple. The testing services provider would charge a fixed price defined by the number of lines of code tested, or by the equally simple metric of getting the job done. The testing service providers with the largest number of testers in the most cost-effective offshore location had the edge.

It’s more complicated now. The rise of DevOps and Agile software development processes has inextricably joined quality assurance to testing. To build quality into a financial technology project, testing targets have to be agreed at the outset and embedded in the development process.

Meanwhile, the relentless pressure on margins at customer firms, particularly investment banks, coupled with a new focus on vendor risk management, has encouraged them to reduce the number of external suppliers they deal with.

The other major factor that customers have to juggle in their procurement decisions is the opportunity presented by innovations in IT and data management. A common objective of the major testing service providers is to deliver testing-as-a-service, hosted in the cloud, a development that will be driven by automated testing and the opportunities presented by machine learning. In theory, the same test results will be possible with less testing.

The industry is not there yet, though. “Right now, automated testing is largely limited to integration testing. What the industry is looking at is how to move automation to functional testing and into test planning,” says Rajneesh Malviya, global head of financial services testing at Infosys, the business technology service provider. “The objective is continuous, end-to-end, automated testing.”

Rajneesh Malviya, Infosys

That vision suggests not only that testing services will become more cost-efficient, but that service providers will be able to create simpler, commoditised pricing benchmarks based on defined outputs.

Yes and no. Convergence in pricing standards will happen naturally over time, believes Malviya, but the major obstacle to overcome is the incompatibility of different test environments for different apps. “A fundamental issue is defining what a test case will be in that automated environment,” explained Malviya. “Clients certainly want to see that we have skin in the game and, as a generalisation, that means some sort of output-based pricing model. But a compliance test, for example, might have 20 steps, while a test case for a trading app might have 100. So it’s not easy to come up with uniform measures of output.”

Infosys has developed a test unit model with the aim of simplifying definitions, said Malviya, and is working with customers to customise those definitions and create a variability index to improve the rate of adoption among clients.
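Infosys has not published the detail of those definitions, but the basic arithmetic of an output-based “test unit” model can be sketched. The example below is purely illustrative: the baseline step count, the complexity weights and the unit rate are invented assumptions for this article, not Infosys’s actual parameters.

```python
# Hypothetical illustration of an output-based "test unit" pricing model.
# All names, weights and rates below are invented for this example; they
# do not describe Infosys's actual definitions.

# Each test case is normalised by its step count and a complexity weight,
# so a 100-step trading test and a 20-step compliance test become comparable.
BASELINE_STEPS = 25          # steps assumed to make up one "standard" test unit
COMPLEXITY_WEIGHT = {        # domain-specific multipliers (illustrative)
    "compliance": 1.0,
    "retail_app": 1.2,
    "trading_app": 1.5,
}

def test_units(steps: int, domain: str) -> float:
    """Convert one test case into normalised test units."""
    return (steps / BASELINE_STEPS) * COMPLEXITY_WEIGHT[domain]

def price(test_cases: list[tuple[int, str]], rate_per_unit: float) -> float:
    """Output-based price: total normalised units times an agreed unit rate."""
    return sum(test_units(steps, domain) for steps, domain in test_cases) * rate_per_unit

# A 20-step compliance case and a 100-step trading case, priced at $40 per unit.
portfolio = [(20, "compliance"), (100, "trading_app")]
print(round(price(portfolio, rate_per_unit=40.0), 2))  # 0.8 + 6.0 units -> 272.0
```

In this framing, a variability index would simply be a client-specific adjustment to the weights, agreed up front rather than renegotiated test case by test case.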

Balancing rewards

Other major testing service vendors are devising sophisticated new pricing models. According to Shalini Chaudhari, the testing services lead for EALA at Accenture Technology, the professional services company, “gain-share mechanisms”, under which payments can be tied to a value defined, a value committed or a value realised in a project, are a good way to ensure that risk and reward are balanced for both the provider and the client.

“We do see demand for newer pricing models and we offer flexibility in the different models we work to, based on the client requirements,” said Chaudhari. “I think the main considerations when it comes to benchmarking have to be transparency in consumption and cost.”

“A critical success factor involves having a business leader, rather than procurement, involved on the client side, to construct a work plan that closes a value gap, with a clear understanding of effort required, benefits that constitute success and time required to measure results,” said Chaudhari. “This approach enables alignment on what the provider and the client will jointly deliver and how success is measured.”

“For pricing in particular, whether for traditional or cloud-based services, we work with our clients in both output- and outcome-based models, with productivity and cost savings at the core of the arrangements, as well as the possibility of gain-share mechanisms.”
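None of the vendors quoted here publish their gain-share formulas, but the mechanics Chaudhari describes, payments tied to a value committed versus a value realised, reduce to simple arithmetic. The sketch below is a hypothetical illustration: the base fee, the committed and realised values and the 50/50 split are all invented for the example and do not describe Accenture’s actual terms.

```python
# Hypothetical gain-share calculation (all figures and the 50/50 split are
# invented for illustration; they do not describe any vendor's actual terms).

def gain_share_fee(base_fee: float,
                   value_committed: float,
                   value_realised: float,
                   provider_share: float = 0.5) -> float:
    """Base fee plus the provider's agreed share of value delivered above
    the committed target; no bonus if the commitment is missed."""
    excess = max(0.0, value_realised - value_committed)
    return base_fee + provider_share * excess

# Provider commits to $2m of cost savings, realises $2.6m, and keeps 50% of the excess.
print(gain_share_fee(base_fee=500_000, value_committed=2_000_000, value_realised=2_600_000))
# -> 800000.0
```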

Sridhar Throvagunta, Wipro

A key issue in defining a new model for pricing is how to measure the performance of the end-product. “More and more of our customers are moving to innovative commercial models such as output, or outcome, based models for pricing,” said Sridhar Throvagunta, head of testing services marketing, alliances, analyst and advisor relations at Wipro, the IT consulting and outsourcing firm. So far, this has been especially true for apps designed for the retail financial marketplace, Throvagunta said, because product development there has increasingly focused on the experience of the end-customer.

When it comes to benchmarking and what that means in terms of the performance of an app, there are three key benchmarks to apply, Throvagunta said. These are the response time of the app, the quality of the app (in terms of design or user experience, for example) and, finally, the completeness and accuracy of the functionality of the app.
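Of those three, response time is the most straightforward to turn into an automated pass/fail benchmark. The snippet below is a minimal sketch of such a check, assuming a hypothetical quoting endpoint and an agreed 95th-percentile latency target; it is not Wipro’s framework.

```python
# Minimal sketch of a response-time benchmark check against an agreed threshold.
# The endpoint URL and the 800 ms target are assumptions for illustration only.
import statistics
import time
import urllib.request

TARGET_P95_MS = 800                          # agreed response-time benchmark (illustrative)
ENDPOINT = "https://example.com/api/quote"   # hypothetical quoting endpoint

def measure_latency_ms(url: str, samples: int = 20) -> list[float]:
    """Time repeated requests to the endpoint, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        timings.append((time.perf_counter() - start) * 1000)
    return timings

def meets_benchmark(timings: list[float]) -> bool:
    """Pass if the 95th-percentile latency is within the agreed target."""
    p95 = statistics.quantiles(timings, n=20)[18]   # approximate 95th percentile
    return p95 <= TARGET_P95_MS

if __name__ == "__main__":
    results = measure_latency_ms(ENDPOINT)
    print("p95 within target:", meets_benchmark(results))
```

Design quality and functional completeness are much harder to reduce to a single threshold, which is where the benchmarking question becomes more interesting.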

That last criterion, functionality, is often taken for granted and might seem hard to define. But it’s critical to benchmark that performance compared to the alternative apps a customer might use, said Throvagunta. He cites the example of an auto insurance company that wants customers to use a new app because that is the fastest and cheapest way for them to get the best quote. But is that what the app delivers, compared to, say, a conversation with a human agent? A human agent might just be able to get a better deal or discount for the customer, based on information the agent can glean in a conversation about the customer’s detailed risk profile. Can the app really connect the customer to a better deal, and how could that be tested? The complexities of testing in the real world mean a framework of “omni-channel assurance” is essential, Throvagunta said.

Risk and reward

Over recent years, testing companies have marketed a number of varieties of output-based pricing models, without establishing an industry standard. One variant, for example, is based on the number of faults found in the software, and the severity of those faults. But while a “severity one” fault – such as a fault that stops a customer using an app – might be easy enough to define, scales for lesser faults are harder to agree.
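In its simplest form such a defect-based model is just a weighted count, which is why the definition of the lower severities matters so much. The sketch below is a hypothetical illustration; the weights and the rate per point are invented and do not describe any vendor’s actual scale.

```python
# Hypothetical defect-severity pricing model (weights and rate are invented
# for illustration; no vendor's actual scale is described here).

SEVERITY_WEIGHT = {1: 10.0, 2: 4.0, 3: 1.5, 4: 0.5}   # severity 1 = app-stopping fault

def defect_based_fee(faults_by_severity: dict[int, int], rate_per_point: float) -> float:
    """Fee grows with the number and severity of faults found."""
    points = sum(SEVERITY_WEIGHT[sev] * count for sev, count in faults_by_severity.items())
    return points * rate_per_point

# 2 severity-1 faults, 5 severity-2, 30 severity-3, priced at $250 per weighted point.
print(defect_based_fee({1: 2, 2: 5, 3: 30}, rate_per_point=250.0))   # -> 21250.0
```

Reclassifying a handful of faults from severity two to severity three changes the bill directly, which is exactly the kind of dispute a fixed severity scale invites.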

“The model may be a good fit in certain contexts but is not easy to adapt for fast-moving, complex and agile environments,” said Jim Cowdery, head of business development at Testhouse, the London-based testing services firm. “It’s quite possible for customers to end up paying more than they might have done simply using a fixed price or time and materials model.” A “risk and reward” model can incorporate more sophisticated commercial incentives for improving quality, said Cowdery.

“The truth is that there is no standard currency for testing software. However, working closely with a testing partner to properly understand your quality requirements and combining this with value-add commercial models, such as a ‘risk and reward’ model that incorporates more sophisticated commercial incentives for improving quality, will help achieve the best of both worlds – increased quality at a fair price.”

While all testing firms can agree on that need to work hand-in-hand with the customer, there are other players at the table. Pressure for new pricing benchmarks is coming as much from major trading and risk management software vendors as it is from the end customers. They too see automation as the fundamental game-changer in pricing dynamics.

“There needs to be a common framework for exchanging information about development and testing, and that framework is the automation process itself,” said the head of quality assurance at one leading trading and risk systems vendor. The key challenge facing his firm’s bank customers, this vendor added, is integrating new systems with what is typically a patchwork of legacy systems, designed by multiple software providers according to different testing standards. That is one reason why it makes sense for software firms, testing firms and their customers to agree more common standards.

“Automation offers the possibility of integrated test labs, where maybe 80% of product testing could be done based on standard industry templates; templates provided via APIs that would mean banks would not have to share their source code for apps with testers,” he said. “That would leave the other 20% to be done on a bespoke basis. Our bank customers are saying they are ready to share their test cases if that means faster implementation times and cost savings.”

Agreeing test standards can be considered a simple task at one level: the standards only have to reflect the workflow of the app, adjusted for the specific differences in the way different customers use it, such as different compliance requirements or different ways of calculating interest.

That kind of thinking suggests that everyone around the table has an interest in sharing more information, and ultimately sharing standards. But the real world, especially one in which financial markets customers are still reducing the number of external vendors they are dealing with, doesn’t always work like that.