
The QA expert interview: Dale Bulmer, Markit


Data is the fuel for financial markets. Since its launch in 2003, Markit has grown to challenge the leading suppliers, Bloomberg and Thomson Reuters, in its core markets, such as credit benchmarks. We met up with Dale Bulmer to discuss the key trends in mobile, security and vendor risk management, and how those trends translate into the customer RFPs delivered to Markit. More agreement on testing standards would be helpful, he said.


 
Q: Are mobile apps a major area of development for Markit? It seems that most investment banks and asset managers – the core of your client base – are not interested in moving their own trading onto mobile. But what about data services?

Many of our clients are institutional clients and the products they want are often not those that lend themselves well to a mobile interface. If you are consuming large volumes of data, usually that is going to be integrated into a downstream system rather than something you could do on a mobile phone or an iPad.

However, we do have products for retail investors and that is the side where we’re going to see a greater adoption of mobile technologies. If you wanted to look at research, for example, that’s something I might want to do while I’m on the bus on the way to work.

Our Markit on Demand products provide white-labelled websites and apps for clients; these are often retail or research-type portals. If you’re an end-user going through your broker, or looking to do research, that portal may be branded by our client, but the underlying system would be hosted and built by Markit. That brings us into the mobile space because this is very much a managed service and there are high expectations around functionality. A client will expect that the system delivered to them is tested, resilient and that it works on a range of mobile devices. Whilst there will always be a level of user acceptance testing on the client side, the majority of the ownership and accountability for testing on those mobile devices falls on Markit.

 


Dale Bulmer, Markit

 

Everyone knows the challenges of this have come with the increasing range of mobile platforms. If you build something for Apple, that’s in a completely different coding language and you have to go and rebuild it to get it to work on Android. Then within the Android environment there are so many manufacturers, different versions and screen sizes. It isn’t possible to just test a function once as you would with an installed app that runs on Windows in, say, four different versions. In the mobile ecosystem compatibility and usability are far harder to achieve. Fundamentally people want something that is very easy to use and has a great user experience. Validating that on the testing side is very, very challenging.
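
To make that fragmentation concrete, here is a minimal sketch in Python with pytest of how one test can be parametrised across a device matrix instead of being rewritten per platform. The device profiles and the check_layout_fits helper are hypothetical, purely illustrative:

```python
# Minimal sketch: one test definition, run once per device profile.
# DEVICE_PROFILES and check_layout_fits are invented for illustration.
import pytest

DEVICE_PROFILES = [
    ("android", "Samsung Galaxy S5", "5.0", (1080, 1920)),
    ("android", "Nexus 7", "4.4", (1200, 1920)),
    ("ios", "iPhone 6", "8.1", (750, 1334)),
    ("ios", "iPad Air", "8.1", (1536, 2048)),
]

def check_layout_fits(platform, model, os_version, resolution):
    """Placeholder for a real check that a research portal renders
    without truncation at the given screen resolution."""
    width, height = resolution
    return width >= 720 and height >= 1280  # illustrative threshold only

@pytest.mark.parametrize("platform,model,os_version,resolution", DEVICE_PROFILES)
def test_portal_renders(platform, model, os_version, resolution):
    # The matrix grows with every new manufacturer, OS version and screen size.
    assert check_layout_fits(platform, model, os_version, resolution)
```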

Q: Given the range of different devices, how do you make sure there is a consistent user experience?

You have a choice. At one end of the spectrum you can test on a physical device, where you actually feel the user experience: how you swipe on the interface, for example.

At the other end of the spectrum you can use an emulator where you don’t use the physical device at all. You can create a virtual environment of the operating system and in that instance you risk losing the touch and sensitivity you get from swiping the screen. But you are able to test and potentially automate a vast number of different environments using that approach.

Generally people are looking at some kind of hybrid: some real-device testing so you can feel the user experience, combined with a cloud-based offering. There is now another way to test whereby you put a physical device in a cradle in the cloud: a mobile device connected by video for testing. Or you can go down the route of a full emulator, where there’s no physical device in play at all but you are able to see whether the code works on a particular operating system.
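
As an illustration of that hybrid approach, the sketch below assumes the classic Appium Python client (pip install Appium-Python-Client), where pointing the same test at a local emulator or at a hosted real device is largely a matter of changing the server URL and capabilities. The URLs and app identifiers are placeholders, not real systems:

```python
# Sketch: same test, two targets. Swap EMULATOR_SERVER for CLOUD_SERVER to
# run against a physical device in a cloud "cradle". All names are placeholders.
from appium import webdriver

EMULATOR_SERVER = "http://localhost:4723/wd/hub"        # local emulator
CLOUD_SERVER = "https://devicefarm.example.com/wd/hub"  # hosted real device

caps = {
    "platformName": "Android",
    "deviceName": "Android Emulator",
    "appPackage": "com.example.researchportal",  # hypothetical app
    "appActivity": ".MainActivity",
}

driver = webdriver.Remote(EMULATOR_SERVER, caps)
try:
    # A swipe gesture: exactly the kind of touch interaction an emulator
    # only approximates but a real device verifies properly.
    driver.swipe(start_x=500, start_y=1500, end_x=500, end_y=500, duration=400)
finally:
    driver.quit()
```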

Another issue is performance in terms of speed. You’ve got far more control of performance when you’re sitting on a high-speed network accessed via PCs, because latency is rarely going to be an issue there. If you are watching a trading system with real-time or near-real-time data and the screen is ticking red, then green, you do not want to miss it.

The moment you move into the mobile ecosystem that functionality is going to depend quite literally on where you are standing at the time. For instance, you could walk into a station and have just one bar of reception. What does that mean for the user experience? You’ve got to understand what would happen in a high-latency environment. If you are talking about a trade that requires confirmation, what happens when reception drops out for five seconds? How does the system respond to that? When you’re planning mobile testing there are many factors outside your control that you have to deal with.
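
One hedged way to picture that kind of test: simulate a connection that is dead for a few seconds and assert that a trade confirmation is retried rather than silently lost. Everything here (FlakyTransport, confirm_trade) is a hypothetical stand-in for a real network layer:

```python
# Sketch: assert that a confirmation survives a temporary reception drop.
import time

class FlakyTransport:
    """Simulates a connection that is down for the first few attempts."""
    def __init__(self, failures_before_success):
        self.remaining_failures = failures_before_success

    def send(self, message):
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            raise ConnectionError("no reception")
        return "CONFIRMED"

def confirm_trade(transport, trade_id, retries=5, backoff_seconds=1.0):
    for attempt in range(retries):
        try:
            return transport.send(f"confirm {trade_id}")
        except ConnectionError:
            time.sleep(backoff_seconds)  # wait out the dead zone, then retry
    raise TimeoutError(f"trade {trade_id} unconfirmed after {retries} attempts")

def test_confirmation_survives_dropout():
    transport = FlakyTransport(failures_before_success=3)
    assert confirm_trade(transport, "T-123", backoff_seconds=0.01) == "CONFIRMED"
```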

Q: Are we going to see a huge increase in mobile use? The major investment banks tend to say that their own trading will remain on their desks.

Beyond widespread adoption of mobile in retail banking, I do think broker research reports, for example, lend themselves to mobile use. But many of our data products are not, at this stage, products you expect to see on a small screen. Customers are used to seeing them across three or four screens, so how do you take that and repackage it for a tablet? A lot of those use cases aren’t at a stage where they can be launched.

Q: Markit has to plug its products into banks’ existing systems, and that often means a patchwork of legacy systems. Does that complicated legacy hold banks back? Are they behind the development curve because of that?

Yes. Many banks are facing the challenge of legacy systems coded in a language that almost no-one knows, which makes maintenance difficult. There are COBOL mainframes that don’t lend themselves well to integration with new things coming in. And you’ve got the rate of regulatory change that requires you to solve new problems.

You could end up with an ecosystem made up of 40 legacy systems. Often we are working with a client on fixing one piece of the puzzle, while the wider problem remains to be solved. Ultimately you want to solve the end-to-end client problem, but that can’t be done in one go.

Q: When it comes to testing integration projects for your clients, how much of that do you do in-house at Markit?

It depends on the client. If you look at our portfolio of products across our divisions, these are generally hosted solutions. Markit will provide the reference data – that is, the price evaluation that will either be used directly or integrated within a downstream system – and checking that the outgoing data is correct quite rightly falls on Markit. It is 100% our responsibility to validate the data points and ensure that the delivery mechanism works as intended.

Where a client’s testing resources come in is with the integration within their own system. Often clients make use of third parties to help them with that integration. There are a number of consulting firms that specialise in implementing particular software, and you will find they’re very effective as they have worked on that software multiple times.

Q: Regulators have increasingly been talking about “vendor risk management” and “IT risk management”. The simple message is that financial firms should minimise the number of external vendors. How is that going to impact those relationships of advice and consulting?

You will always need to have the right person for the right job. The premise of vendor risk management makes a lot of sense: reduce the number of weak links in the chain. But the nature of financial systems is that you need a depth of knowledge in what can be a niche domain, as well as an understanding of the complexities of the data and technology. There really can’t be a one-size-fits-all solution.

You might believe this focus on risk is a great opportunity for some of the bigger vendors to increase their market share, but there are still so many things in the ecosystem that require specialisation. Otherwise you have to accept the trade-off of less effective integration in order to work with a smaller number of suppliers, and that would be counterproductive.

The due diligence needed for IT risk management is definitely a growing burden, and I think we will see a trend towards standardising some of that due diligence. Rather than every bank asking every supplier the same questions in a different bespoke format – “When did you last review your business continuity plan? How resilient are your systems?” – you try to standardise the structure of those questions so that they can be asked and answered once. The push to reduce costs across the industry will lead to more convergence.
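
A rough sketch of the idea, assuming the questionnaire is simply structured data: a supplier answers one standard question set once, and every client consumes the same structured response. The field names are illustrative, not an industry schema:

```python
# Sketch: a standard due-diligence question set, answered once and reused.
# Keys and example answers are invented for illustration.
STANDARD_QUESTIONNAIRE = {
    "bcp_last_review": "When did you last review your business continuity plan?",
    "system_resilience": "How resilient are your systems?",
    "dr_test_frequency": "How often do you test disaster recovery?",
}

supplier_answers = {
    "bcp_last_review": "2015-03-31",
    "system_resilience": "Active-active across two data centres",
    "dr_test_frequency": "Twice yearly",
}

def respond(questionnaire, answers):
    """Answer the standard set once; every client receives the same structure
    instead of issuing its own bespoke format."""
    return {key: {"question": question, "answer": answers.get(key, "N/A")}
            for key, question in questionnaire.items()}
```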

I believe among the community of IT risk managers there is a growing awareness that getting the response back in ten different ways is not cost-effective for the client, nor for the suppliers. It’s not just about cost and build time, but perhaps more about the non-functional requirements: cyber security, disaster recovery and resiliency.

Q: What is Markit’s approach to outsourcing when it comes to testing its own products in development?

Generally speaking, outsourcing testing would not be effective for us. The premise of outsourcing is that you can get someone to do something cheaper, quicker and better than you could do it yourself, and that relies on someone who can understand the system. Finance is a broad domain, but the environment Markit is in is specialised. Just because someone has finance experience does not necessarily mean they can come in and understand the algorithm.

It might be three to six months before someone who joins the organisation as a third party provider begins to offer solid value. Generally it will take two to three years before they really are at that optimal level of being a subject matter expert. When you’re evaluating a consulting proposal, you are really evaluating the ROI on that. Where is the ROI going to come from if it takes someone three months before they’re really productive?

Where we would potentially look to use external resources would be to augment the experience of the team here with additional short-term resources. That way you can share the knowledge and still get some of the benefits.

Q: Is automation of software development now the only way to go?

It is. The finance ecosystem revolves around the quality of data. The delivery mechanisms for that data, and how you consume it, are all testable elements. But how do you make sure the data itself is correct? If you’ve got millions, or hundreds of millions, of records generated by a single batch process, how do you make sure that a single line of code hasn’t changed everything? You can’t do that manually.
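
A minimal sketch of what such an automated check might look like, assuming the batch output lands in delimited files: compare the record count and a content checksum of a new run against the last known-good run. The paths and file layout are hypothetical:

```python
# Sketch: catch a one-line code change that perturbs millions of records,
# without manual inspection. File paths are placeholders.
import hashlib

def batch_fingerprint(path):
    """Return (record_count, sha256_of_contents) for a batch output file."""
    count = 0
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for line in f:
            count += 1
            digest.update(line)
    return count, digest.hexdigest()

def test_batch_matches_baseline():
    baseline_count, baseline_hash = batch_fingerprint("baseline/prices.csv")
    new_count, new_hash = batch_fingerprint("output/prices.csv")
    assert new_count == baseline_count, "record count drifted"
    assert new_hash == baseline_hash, "content changed; diff the files"
```

In practice an exact checksum is usually too strict for evaluated prices, so a real check would compare fields with tolerances; the point is simply that the comparison is mechanical rather than manual.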

But the challenge with the term automation is that people assume it means some large investment in an end-to-end process. Usually you automate the front end of a system and it takes six to eight months; by the time the automation is in place, the underlying system has changed.

In the early stages of maturity in an organisation, the development team will test its own code; the next step is to have business users doing user acceptance testing. After that you get software testers carrying out manual testing, and then, when you are comfortable that the quality is right, testing is done independently.

At that point, the challenge is: how do you do more at a quicker pace? Traditionally you would use front-end automation, so something like Rational Functional Tester [a test automation tool developed by IBM] or Unified Functional Testing [a similar tool developed by HP] from the major vendors. However, this is a time-consuming approach to automation, as even when an issue is identified, if it’s in a complex integrated environment, it’s hard to understand the root cause of that problem.

So what we’re seeing now is a convergence, or even a re-convergence, of the development and the testing teams. For us, it’s not so much about building the team around automation tool specialists but much more about asking our software engineers to write tests in the same code that the developer uses, albeit with a testing hat on. So you start to test at the lowest possible level that you can, which means your development team starts with the internal application programming interfaces [the protocols and tools used to develop software and applications] and you begin by automating one component. Using this process, you can do that far more cheaply than you can by automating an end-to-end system. This is a trend we have seen in the last year.
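
To illustrate testing at that lowest level, here is a sketch in Python: rather than driving the user interface, the test exercises a single internal component directly, in the same language the developers use. Bond and evaluate_price are invented for illustration, standing in for a real evaluation engine:

```python
# Sketch: a component-level test of a hypothetical internal pricing API,
# written by an engineer with a "testing hat on" rather than a UI tool.
from dataclasses import dataclass

@dataclass
class Bond:
    face_value: float
    coupon_rate: float  # annual, e.g. 0.05 for 5%
    years: int

def evaluate_price(bond: Bond, discount_rate: float) -> float:
    """Toy present-value pricing: discounted coupons plus discounted principal."""
    coupon = bond.face_value * bond.coupon_rate
    pv = sum(coupon / (1 + discount_rate) ** t for t in range(1, bond.years + 1))
    return pv + bond.face_value / (1 + discount_rate) ** bond.years

def test_par_bond_prices_at_face_value():
    # A bond whose coupon rate equals the discount rate should price at par:
    # a property-style check on one component, no end-to-end system required.
    bond = Bond(face_value=100.0, coupon_rate=0.05, years=10)
    assert abs(evaluate_price(bond, 0.05) - 100.0) < 1e-9
```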

Additional Information: For our video interview with Dale Bulmer, click here.