- December 6, 2017
- Posted by: Abhay Das
- Category: Quality Engineering
Can speed and quality go together?
The DevOps approach strives to achieve both by running software development and operations readiness in parallel.
Keeping these two competing processes in lock-step is achieved by enforcing Quality Gates at the end of every step of the SDLC, each with clearly defined, measurable entry and exit criteria and clear responsibility for who is handing over and who is taking over the application.
These Quality Gates help to ensure that defects are logged and fixed within each step, and not allowed to flow further downstream.
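For illustration, a quality gate can be expressed as a small, automatable check run at the end of each step. The sketch below is a minimal example; the criteria, thresholds, and names are hypothetical rather than a prescribed implementation.

```python
# Minimal sketch of a quality gate check run at the end of an SDLC step.
# Stage names, thresholds and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class ExitCriteria:
    max_open_critical: int      # critical defects allowed to remain open
    min_test_pass_rate: float   # fraction of tests that must pass

def gate_passes(open_critical: int, tests_passed: int, tests_run: int,
                criteria: ExitCriteria) -> bool:
    """Return True only if the step meets its exit criteria."""
    pass_rate = tests_passed / tests_run if tests_run else 0.0
    return (open_critical <= criteria.max_open_critical
            and pass_rate >= criteria.min_test_pass_rate)

# Example: the build step may not hand over with open critical defects
# and needs at least a 95% pass rate before ownership transfers.
build_gate = ExitCriteria(max_open_critical=0, min_test_pass_rate=0.95)
print(gate_passes(open_critical=0, tests_passed=194, tests_run=200,
                  criteria=build_gate))  # True -> hand over to the next step
```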
To enforce these quality gates rigorously and comprehensively, software testing also starts early. Testing therefore shifts left, entering the picture right at the beginning of the SDLC rather than towards the end, as is the wont in the traditional linear (aka “Waterfall”) method of development.
This also speeds up the entire development and implementation process.
Test Driven Development, Continuous Integration, and Continuous Deployment all help achieve this speed, as well as assure quality.
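To make the first of these concrete: in Test Driven Development, tests are written before the code, fail initially, and only then is just enough code written to make them pass. A minimal, hypothetical pytest-style sketch (the discount function and its rules are illustrative only):

```python
# Sketch of the TDD cycle: the tests below are written first and fail,
# then discount() is implemented until they pass. Names are illustrative.
def discount(price: float, percent: float) -> float:
    """Apply a percentage discount, never returning a negative price."""
    return max(price - price * percent / 100.0, 0.0)

def test_discount_applies_percentage():
    assert discount(100.0, 10) == 90.0

def test_discount_never_goes_negative():
    assert discount(10.0, 200) == 0.0
```

In a CI pipeline, such tests run automatically on every commit, and their pass/fail results feed the quality gates and the dashboard described below.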
Monitoring these processes to ensure speed and quality is the DevOps dashboard, which tracks the entire process, spots bugs at an early stage, and helps experienced users spot trends early.
Therefore, one of the critical steps in integrating testing with development and operations is building metrics on the DevOps dashboard based on the project objectives.
The metrics should aim to assess the following:
- Deployment success rate – the customer-facing quality metric
- App error rates for each Phase
- Incident severity surfaced at each Quality Gate
- Outstanding bugs (aka Technical Debt)
These metrics improve the understanding of the impact of a specific release, as well as how quickly the different teams respond to fix its bugs and handle them better in the next release.
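As an illustration, the four metrics above can be computed from simple per-release records; the field names and figures below are assumptions made for the sketch, not output from any particular dashboard.

```python
# Illustrative calculation of the dashboard metrics listed above,
# using hypothetical per-release records.
releases = [
    {"name": "R1", "deployed": True,  "errors": 12, "requests": 4000,
     "gate_incidents": {"P1": 0, "P2": 1, "P3": 4}, "open_bugs": 18},
    {"name": "R2", "deployed": False, "errors": 35, "requests": 3000,
     "gate_incidents": {"P1": 1, "P2": 2, "P3": 6}, "open_bugs": 27},
]

deployment_success_rate = (
    sum(r["deployed"] for r in releases) / len(releases) * 100
)
error_rates = {r["name"]: r["errors"] / r["requests"] for r in releases}
p1_incidents = sum(r["gate_incidents"]["P1"] for r in releases)
technical_debt = sum(r["open_bugs"] for r in releases)

print(f"Deployment success rate: {deployment_success_rate:.0f}%")   # 50%
print(f"Error rate per release: {error_rates}")
print(f"P1 incidents surfaced at quality gates: {p1_incidents}")    # 1
print(f"Outstanding bugs (technical debt): {technical_debt}")       # 45
```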
At Indium Software, iSAFE, the automated testing framework for the DevOps environment, speeds up the testing process; six metrics that help make the process more efficient and effective include the following (a short calculation sketch for a few of them follows the list):
- Lead time for the start of test automation – how quickly test automation can start is an area for constant improvement. In a typical engagement, the functional team writes the manual test cases and hands them over to the automation team. With requirements shifting by roughly 2% for each calendar month of the project life, any delay creates a misalignment between the application and the automated test suite.
- FTR (First Time Right) percentage and reverse flow metrics – how often we get it right the first time is another measure, calculated as the percentage of the overall automation effort in which FTR was achieved. The reverse is also tracked: how many times we had to go back to the Dev or Product Management team for clarifications, or even to a previous stage in the development lifecycle. This can, of course, be improved by better involvement at design time.
- Automation Code Coverage – how much of the application code was covered by the automation in a continuous delivery situation is another measure that helps evaluate the efficiency of the test automation effort.
- Percentage of in-sprint automation – how much of the automation was completed within the sprint, and how much was picked up later by the automation team. Improving this metric forces quality across a number of underlying factors:
- Quality of backlog grooming and stability of use cases within the sprint
- Level of detail of the user stories, which allows the automation team to start automation in parallel to the coding
- Level of involvement of the automation team in the sprint meetings
- Quality of the automation framework design and modularity allowing speed of automation
- Amount of API testing (faster, more robust) vs. UI-based testing (susceptible to changes in the UI and P4 defects). As a best practice, the Indium functional tester details the manual test cases so that they can be automated directly into iSAFE.
- Maintainability and responsiveness – measuring how modular and reusable the automation is, and how many times it needed to be rewritten, can help improve test case scripting to cover more variations.
- Use of VMs/containers for Testing In Production (TIPping) – DevOps requires the testing team to exploit the integration between Development and Operations, to the extent of being able to spin up a machine with restricted access for testing the pre-production code, and then either switch it to full production mode or quickly roll back to the old status. Measuring the efficiency with which this is achieved is also an important metric for DevOps test automation.
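As a rough illustration of the arithmetic behind a few of these metrics (lead time to start automation, FTR percentage, and in-sprint automation percentage), here is a minimal sketch; the story records, fields, and dates are hypothetical.

```python
# Hypothetical sprint records used to illustrate three of the metrics above:
# lead time to start automation, FTR percentage, and in-sprint automation.
from datetime import date

stories = [
    {"id": "US-101", "manual_cases_ready": date(2017, 11, 6),
     "automation_started": date(2017, 11, 8),
     "first_time_right": True,  "automated_in_sprint": True},
    {"id": "US-102", "manual_cases_ready": date(2017, 11, 7),
     "automation_started": date(2017, 11, 13),
     "first_time_right": False, "automated_in_sprint": False},
    {"id": "US-103", "manual_cases_ready": date(2017, 11, 9),
     "automation_started": date(2017, 11, 10),
     "first_time_right": True,  "automated_in_sprint": True},
]

lead_times = [(s["automation_started"] - s["manual_cases_ready"]).days
              for s in stories]
avg_lead_time = sum(lead_times) / len(lead_times)

ftr_pct = sum(s["first_time_right"] for s in stories) / len(stories) * 100
in_sprint_pct = (sum(s["automated_in_sprint"] for s in stories)
                 / len(stories) * 100)

print(f"Average lead time to start automation: {avg_lead_time:.1f} days")  # 3.0
print(f"First Time Right percentage: {ftr_pct:.0f}%")                      # 67%
print(f"In-sprint automation percentage: {in_sprint_pct:.0f}%")            # 67%
```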
These metrics help keep the process efficient and achieve DevOps goals. They help reduce the time taken for QA test automation services, and also identify new areas that can be automated.
The ROI of automation comes from the ability to execute the automation suite early, fast, and frequently, so that issues can be fixed before the end customer finds them. The metrics above help you find the bottlenecks that stand in the way of that goal.