A fast-to-production approach to quality means you need feedback on your products and services as fast as possible. To meet that challenge, our testing process grew from manual check-ups alone into in-depth testing, with products and services covered by health-check monitoring and layers of automated tests.
When we talk about fast execution of automated tests, we usually mean unit or integration tests. Tests that simulate real user interaction through the UI, covering a complete user flow, are usually the slowest and the most prone to failure. The feedback they provide is also rarely presented in a human-friendly form, and as products grow and services multiply, keeping track of their changes and current state becomes more and more difficult.
What were the obstacles we encountered and how did we solve them?
We created a powerful test suite powered by Selenium and TestNG and decided to run it across multiple environments on our own grid of VMs. To speed up execution, tests were set to run in parallel across multiple browsers. At that point we had a fully functional test suite, but one we could run only during a predefined, scheduled timeframe. As we wanted to run tests whenever we pleased – on demand, triggered (for instance, by a successful deploy) or scheduled – we introduced Jenkins CI to our testing process.
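In TestNG, parallel execution of this kind is driven by the suite file. A minimal sketch of such a configuration – the class and parameter names here are hypothetical, not our actual suite – could look like this:

```xml
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="regression" parallel="tests" thread-count="3">
  <!-- parallel="tests" runs each <test> block in its own thread;
       the browser parameter is read by the test setup code to create
       the matching Selenium WebDriver instance. -->
  <test name="chrome-run">
    <parameter name="browser" value="chrome"/>
    <classes><class name="com.example.tests.LoginFlowTest"/></classes>
  </test>
  <test name="firefox-run">
    <parameter name="browser" value="firefox"/>
    <classes><class name="com.example.tests.LoginFlowTest"/></classes>
  </test>
</suite>
```

Pointing each browser's driver at a different node of the VM grid is then a matter of the test setup reading the `browser` parameter.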
As we operate in an environment where a deploy is performed every 6 minutes, running the complete functional test suite after each deployment wasn't an option – the results would not provide feedback in the desired time. We also wanted a targeted test suite aimed at the product or service being deployed, instead of testing everything. We needed to group our tests by functionality and prioritize them, so we introduced TestNG's test groups feature, which gave us a smaller yet targeted set of tests to run.
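Grouping in TestNG works by annotating test methods (for example `@Test(groups = {"sms"})`) and then including only the relevant groups in the suite file. A sketch with hypothetical group and class names:

```xml
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="targeted-run">
  <test name="sms-only">
    <groups>
      <run>
        <!-- Only tests annotated with these groups are executed -->
        <include name="sms"/>
        <include name="sms-billing"/>
      </run>
    </groups>
    <classes><class name="com.example.tests.SmsFlowTest"/></classes>
  </test>
</suite>
```

The same annotated suite can thus be sliced differently per run, without maintaining separate test classes per product.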
At this point, we were able to run a segmented test suite on different environments any time we wanted – we could see that a deployment had started and, once it finished successfully, run our tests to check whether the main functionalities of the deployed product or service were intact. But that required manual work: checking deployment statuses on both environments and manually selecting the groups of tests to execute. So, we decided to automate that process as well – we introduced the Jenkins Parameterized Trigger plugin, set our test-running Jenkins job as a downstream job, and wrote a small Groovy script to map the products and services being deployed to our groups of tests. Now, whenever a deployment to a desired environment succeeds, our Jenkins job triggers a targeted test suite based on the product or service that was deployed.
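Our actual script is a small piece of Groovy inside the Jenkins job; the core idea can be sketched in plain Java (the service and group names below are hypothetical, as is the fallback behaviour):

```java
import java.util.List;
import java.util.Map;

// Maps the name of a deployed service (passed downstream by the
// Parameterized Trigger plugin as a build parameter) to the TestNG
// groups that should run against it.
public class DeploymentMapper {

    private static final Map<String, List<String>> GROUPS_BY_SERVICE = Map.of(
            "sms-api",   List.of("sms", "sms-billing"),
            "voice-api", List.of("voice"),
            "portal",    List.of("login", "reports"));

    public static List<String> groupsFor(String deployedService) {
        // Fall back to a small smoke group for services not mapped yet
        return GROUPS_BY_SERVICE.getOrDefault(deployedService, List.of("smoke"));
    }

    public static void main(String[] args) {
        String service = args.length > 0 ? args[0] : "sms-api";
        // The joined list becomes the group selection of the TestNG run
        System.out.println(String.join(",", groupsFor(service)));
    }
}
```

The downstream job then feeds the resulting group list into the TestNG invocation, so the deployment parameter alone decides what gets tested.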
What about human-friendly feedback? We used all the data TestNG was already providing, structured it by product under test, and created an Apache Velocity template to build a new HTML test reporter. Our current test report provides summary data for the run – the total number of tests executed, passed and failed – as well as detailed data: the names of all failed tests, with links to our test management system where documentation for each failed test case can be found, together with detailed execution logs.
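With Velocity, the reporter itself stays simple: the TestNG result data is placed into a context and rendered through a template. A trimmed sketch of such a template – variable names like `$summary` and `$failedTests` are hypothetical, set by the reporter code:

```velocity
## report.vm – rendered once per test run
<h1>Test run for $product</h1>
<p>Total: $summary.total | Passed: $summary.passed | Failed: $summary.failed</p>
#if($failedTests.size() > 0)
<ul>
  #foreach($test in $failedTests)
  <li>
    <a href="$test.testCaseUrl">$test.name</a>
    <pre>$test.executionLog</pre>
  </li>
  #end
</ul>
#end
```

Changing the look of the report then means editing the template, not the reporter code.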
To wrap it all up
We strongly believe that implementing continuous testing in our process raised the quality of our products and services. Functional UI tests carry their own set of challenges, but by including them we were able to catch problems early and prevent issues in our products and services.
By Leda Link, Software Quality Assurance Analyst / Team Leader at Infobip