What’s the Key to a Successful Automation Programme?
Last week, visual UI testing specialist Applitools announced its new knowledge-sharing initiative, Test Automation University. The project, a collection of free online courses taught by industry experts, is spearheaded by Applitools’ latest hire, Angie Jones, who joins the Tel Aviv- and California-based start-up from a previous position at Twitter. Passionate about test automation, Jones is a frequent speaker at conferences and a contributor to industry discussions. She promotes a highly prioritised, data-driven approach to test automation, which she explained for QA Financial. In the following interview, the automation advocate also shares her views on the keys to a successful automation programme, the state of enterprise test automation and the future of Test Automation University.
Q: What do you think are the key drivers behind the move towards more automation?
A: Over the past few years, software delivery processes have transformed in an effort to speed up the release cycles of new software features.
Testing has shifted left, and regression testing is now often automated to allow for continuous execution. This helps meet the demand for releasing features more often.
Q: What technologies are companies using to improve the reliability of their QA process?
A: Test automation used to be done in a silo, with automation engineers developing scripts that ran only on their local machines while the rest of the development team remained unaware of what the automation covered.
In the age of continuous integration and continuous deployment (CI/CD), this will no longer work. The execution of the automation now has to be automated as well. The tests have a much more critical role and are essentially gating the deployment of new features.
For this reason, the automation scripts must be highly reliable, and accomplishing this requires team collaboration. Teams are no longer automating low-value tests but are now much more strategic in selecting what gets automated; developers are providing earlier test coverage via more extensive unit testing and are also building testability into the application; and automation engineers are now engaged throughout the process, as opposed to working in a silo.
This also frees up testers to focus on the newer features that do not yet have extensive automation coverage.
Q: In order to achieve effective, repeatable test automation and increase coverage, many teams are now assembling toolsets for testing, automation and monitoring. Have you observed any trends in terms of popular tooling at the moment?
A: When building an automation framework, there is a plethora of tools to choose from, and your automation framework can be as basic or advanced as needed.
A verification tool is an essential part of any automation framework. Navigation tools, such as Selenium WebDriver, will drive your web actions, but they are unable to determine whether your test passes or fails. So, a verification tool such as JUnit, TestNG, or Chai is required for assertions.
Navigation and verification tools are the bare minimum requirements for a UI automation framework. However, as I said, there are lots of other tools available to take your test automation to the next level. There are tools like Extent that provide enriched reporting, which comes in very handy for the quick debugging needed in continuous testing. There are also visual testing tools, such as Applitools, that enable you to go beyond just functional testing and catch the visual regressions that are missed by basic test automation. And there are collaborative tools, such as Cucumber, which turn artifacts from design meetings into executable test scenarios.
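The split between navigation and verification that Jones describes can be sketched briefly. The example below uses Python’s standard-library unittest as the verification layer in place of JUnit or TestNG, and a hypothetical fake page object standing in for a real navigation tool such as Selenium WebDriver; all class and method names are illustrative, not from any real framework.

```python
import unittest


class FakeLoginPage:
    """Hypothetical stand-in for the navigation layer. A real
    navigation tool (e.g. Selenium WebDriver) would drive the
    browser here; it performs actions but renders no verdict."""

    def __init__(self):
        self.title = "Home"

    def log_in(self, user, password):
        # Navigation: carry out the steps, make no pass/fail judgement.
        if user == "demo" and password == "secret":
            self.title = "Dashboard"
        return self


class LoginTest(unittest.TestCase):
    def test_login_lands_on_dashboard(self):
        page = FakeLoginPage().log_in("demo", "secret")
        # Verification: the assertion library decides pass or fail.
        self.assertEqual(page.title, "Dashboard")
```

Run with `python -m unittest` as usual; the point is that without the final assertion, the script would navigate happily and report nothing either way.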
Q: You’re a big proponent of automating where it makes sense. Is there any part of testing that cannot or does not need to be automated?
A: This will vary by business, application, and even by feature. A lot of times you’ll hear people like me say “be sure to automate strategically; don’t automate everything - only automate the things that you care about or that provide high value.” And that’s easy to say, but it’s not really actionable advice.
I have a talk called “Which Tests Should We Automate?” where I actually take the audience through a list of various test scenarios and I provide a formula that they can apply to these scenarios to determine if the test is worth automating. The formula contains key metrics such as risk, value, cost efficiency, and the historical data around that feature. Using this, they can come up with a score for each scenario, which can then be used to rank automation priorities. I advise teams to focus on the top 25% of these scenarios, as those are the most important to automate.
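The scoring approach Jones outlines can be sketched in a few lines. The factor names below follow the interview (risk, value, cost efficiency, history), but the 1-to-5 scale, the equal weighting, and the example scenarios are assumptions for illustration only, not the exact formula from her talk.

```python
def automation_score(risk, value, cost_efficiency, history):
    """Score a test scenario for automation priority.

    Each factor is rated 1 (low) to 5 (high); a higher total means a
    stronger candidate for automation. Equal weighting is an
    assumption here - real teams may weight factors differently.
    """
    return risk + value + cost_efficiency + history


# Hypothetical scenarios with illustrative ratings.
scenarios = {
    "login": automation_score(risk=5, value=5, cost_efficiency=5, history=3),
    "funds transfer": automation_score(risk=5, value=5, cost_efficiency=3, history=4),
    "password reset": automation_score(risk=3, value=2, cost_efficiency=4, history=2),
    "profile photo upload": automation_score(risk=1, value=2, cost_efficiency=2, history=1),
}

# Rank by score and keep the top 25%, as suggested in the interview.
ranked = sorted(scenarios, key=scenarios.get, reverse=True)
top_quarter = ranked[: max(1, len(ranked) // 4)]
print(top_quarter)
```

With these made-up ratings, only the highest-scoring scenario survives the 25% cut, which is the point: the formula forces an explicit ranking rather than an attempt to automate everything.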
Q: In an ideal world, who should be involved in setting these priorities?
A: Realistically, this should be done by the automation engineer, but of course, it works better with more collaboration. Getting the business owner or even the testers involved in the discussion would ensure that the automation framework addresses business needs.
Q: How do you view the current state of test automation across industry sectors?
A: From a technology standpoint, it’s pretty much the same. I think the variance between sectors comes in when looking at things like what to automate and how much you should do. Some sectors, like finance, require much more thorough testing and a lot of things are higher-risk. That might mean you have to automate more, meaning either you have to hire more people to focus specifically on automation or developers have to be a lot more involved in it.
Q: Are these the kind of topics covered in Test Automation University? What are your plans for the project?
A: I’m so excited about Test Automation U! Before I moved to California two years ago, in addition to being an automation engineer, I also worked as an adjunct college professor - teaching computer science and programming courses. I love teaching and helping people to upskill. Even now, I teach workshops all over the world for automation engineers. My role as an automation advocate enables me to help people get automation right. However, I get more questions via email than I can realistically answer, so I know Test Automation U is needed. We currently have nine other instructors on board, all industry experts, and we’ll continue adding more.
While there will be tons of technical courses, our curriculum won’t be limited to just this. There’s a knowledge gap in the industry regarding the fundamentals. Things like why should you automate? What do you automate? How do you make sure you’re getting a return on your investment? How do you come up with a strategy? Often, we see teams jumping straight into the tools, coding and proofs-of-concept. But even if they get all of the technical concepts right, the entire initiative still fails because they don’t understand the foundational aspects.
These are the types of topics we’re looking to address.