Tips for Creating a Successful QA Strategy

Built In Chicago connected with leaders from four QA teams to learn how they ensure success before, during and after development.

Written by Kelly O'Halloran
Published on Nov. 12, 2020

Before any code is written at Club Automation, a conversation between QA testers and developers takes place.

“This allows the testers to ask lots of questions and clarify items in user story refinement meetings,” Butch Mayhew, the director of software test at the health and fitness club management software provider, said. 

The goal? To stop defects before they occur. 

It’s one of the QA practices that Mayhew and his team swear by, among others, like embedding members of their test team within software delivery teams and holding retrospectives.

Built In Chicago connected with Mayhew and leaders from three other QA teams to learn how else they ensure success before, during and after development. 

 

Anneliese Hernandez
QA Engineer • dscout

 

Prioritize automation

Within the last year, QA Engineer Anneliese Hernandez said dscout’s QA team has implemented new automation tests that have caught critical bugs and helped the team ship features with more confidence. Hernandez offered a few other practices that she credits to her team’s success at the customer insights platform.

 

What’s the most important best practice your QA team follows, and why?

Prioritizing automation has been an important QA practice at dscout, and we now have automated end-to-end tests in place for high-traffic areas of the platform and features that have previously experienced critical bugs. Within the last year and a half, our automated test suite has grown considerably and has helped to catch bugs with existing functionality before we even manually tested the code. We’ve also made a push to include automated tests for new features either before they’re released or shortly after. This practice has enabled us to ship features with more confidence and has cut down on the time it takes to run platform-wide regression tests, giving us more time for exploratory testing.
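An automated end-to-end suite like the one Hernandez describes typically exercises a high-traffic user path and guards paths where critical bugs have previously escaped. A minimal sketch in Python, assuming a stubbed in-memory client (the `StubSurveyClient` names below are hypothetical, not dscout's actual API — in practice a browser or HTTP driver such as Playwright would sit in its place):

```python
# Sketch of end-to-end-style regression checks. The in-memory client
# stands in for a real browser/API driver; all names are illustrative.

class StubSurveyClient:
    """In-memory stand-in for a platform client used in e2e tests."""
    def __init__(self):
        self._responses = []

    def submit_response(self, participant, answer):
        if not answer:
            raise ValueError("empty answer rejected")
        self._responses.append({"participant": participant, "answer": answer})
        return len(self._responses)  # response id

    def fetch_responses(self):
        return list(self._responses)


def test_submit_and_fetch_roundtrip():
    # High-traffic path: submit a response, then read it back.
    client = StubSurveyClient()
    rid = client.submit_response("p1", "works for me")
    assert rid == 1
    assert client.fetch_responses()[0]["answer"] == "works for me"


def test_empty_answer_is_rejected():
    # Guards a previously reported critical bug (hypothetical example).
    client = StubSurveyClient()
    try:
        client.submit_response("p1", "")
        raise AssertionError("expected rejection")
    except ValueError:
        pass
```

Tests like these run on every build, so regressions in core functionality surface before any manual pass begins.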

 

How do you determine your release criteria? 

Our release criteria at dscout are determined by our release plan for the feature. At a minimum for a beta release, the criteria include the feature meeting the scoped requirements and passing a smoke test of critical functionality, with non-critical bugs identified and documented before the beta goes live. For the final release, our criteria focus on minimizing escaped defects and include getting fixes in for any new and previously identified issues from the beta.

Additionally, we run automated end-to-end tests that cover core functionality before we go live. Once all development work is complete, we run a regression test of the feature and identify any remaining high priority bugs that need to be fixed immediately. Lower-priority, edge-case bugs are discussed with the team to see what can and should be fixed in the days or weeks post-release.  
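The beta-versus-final split Hernandez describes can be thought of as a severity gate: a beta can ship when every critical check passes, while the final release also requires all checks green and every beta-era bug resolved. A hedged sketch of that gating logic (the data shapes are invented for illustration, not a real tracker schema):

```python
# Sketch of a release gate separating beta criteria (critical smoke
# checks pass, non-critical failures documented) from final criteria
# (everything green, no open escaped defects). Field names are invented.

def beta_ready(check_results):
    """Beta ships when every check tagged 'critical' passed."""
    return all(r["passed"] for r in check_results if r["critical"])

def final_ready(check_results, open_bugs):
    """Final release additionally requires all checks green and every
    bug found during the beta to be resolved."""
    return all(r["passed"] for r in check_results) and not open_bugs

checks = [
    {"name": "login smoke", "critical": True, "passed": True},
    {"name": "export edge case", "critical": False, "passed": False},
]
print(beta_ready(checks))        # True: critical paths pass
print(final_ready(checks, []))   # False: a non-critical check still fails
```

The lower-priority failure above is exactly the kind of edge-case bug the team would document for a post-release fix rather than block the beta on.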

 

How do you ensure the QA team stays up to date on shifting requirements, and how do you plan ahead to ensure changes are handled smoothly?

QA should be involved as soon as possible in the design and development process. Initial decisions about a feature are made in early planning meetings, and getting the whole team on the same page early helps ensure nothing is missed. As requirements inevitably shift throughout a feature’s development, documenting those changes is crucial; it can be as simple as a comment on a Jira card or wherever work for the feature is tracked. If the QA team isn’t aware of requirement changes before we begin testing, confusion and wasted testing effort follow. At the very least, if QA is involved in the design and development process from the beginning, we can call out discrepancies with the originally scoped feature while testing and learn of requirements changes at that point, though ideally it should always happen earlier.

 

Jeff Rogers
Head of QA • Tempus AI

 

Assess each release against its own risks, not a common standard

Tempus’s QA team doesn’t have a standard quality bar for product and feature releases across the health tech company’s systems. Instead, Head of QA Jeff Rogers said that each QA group assesses changes against the potential risks for individual releases.

 

What’s the most important best practice your QA team follows, and why?

We pair continuous delivery with an extreme focus on our foundational principles, which include our patients, the security of their data and regulatory compliance. We do this by limiting the blast radius for changes in product design and optimizing our test strategies around the various failure modes. This allows us to push the envelope on speed to market and continuous improvement.

 

How do you determine your release criteria? 

Tempus products are built, managed, sold and supported via an equal partnership between our engineering, product management and operations teams. Each group brings its own perspective on quality, and the criteria for changes, releases and testing activities are aligned and negotiated as necessary. There is no standard quality bar for releases across all systems; each group assesses each change against the potential risks to our core principles.

 

How do you ensure the QA team stays up to date on shifting requirements, and how do you plan ahead to ensure changes are handled smoothly?

At Tempus, QA is embedded on product scrum teams, tasked with consulting and coaching on adequate success criteria for all requirements, with an eye toward risk, compliance and customer impact. When requirements change, the agreed-upon success criteria are re-evaluated by engineering, product and QA team members, and commitments are restated or reforecasted. If and when issues arise, QA is accountable for all root-cause findings and remediations.

 

Jake Lazarus
Lead QA Engineer • SpotOn

 

Focus on continuous improvement

While many QA teams focus on continuous development, the QA team at the SMB platform SpotOn prioritizes continuous improvement. Lead QA Engineer Jake Lazarus said this could show up through several practices, like expanding test coverage or learning new tools.  

 

What’s the most important best practice your QA team follows, and why?

Continuous improvement. We are always looking for ways to do our jobs better. This could mean defining or honing our process, expanding manual and automated test coverage or learning new technology to aid us in delivering the highest quality of products to our customers. We set goals as a department, as teams within the department and as individuals. Not only are we setting goals as a team, we are reaching them by relying on one another.

 

How do you determine your release criteria? 

Release criteria can vary from build to build and team to team. We have a lot of complex systems that all need to talk to each other and work in unison. We work closely with engineering and product teams on scope, dependencies and targeted release dates. Once the engineering team delivers a release candidate, we certify the build and begin drafting release notes to share with the product team. After a successful deployment to production, we commence post-deployment production validation. Once we have fully validated the deployment in production, we give the green light to send out release notes or announcements to the appropriate parties.

 

How do you ensure the QA team stays up to date on shifting requirements, and how do you plan ahead to ensure changes are handled smoothly?

We do this through communication and team cohesion. Everyone in the technology department is vested in the process. The QA team is ingrained with product, engineering and design teams, so a change in requirements is quickly escalated to the entire team, including QA. I have adopted a new motto since the pandemic hit: “Work as if you are sitting next to each other.” This means we are constantly engaged with each other. We collaborate on any changes in requirements or priorities as a team.

 

Butch Mayhew
Director of Software Test • Club Automation

 

Lean on exploratory testing

Director of Software Test Butch Mayhew said Club Automation’s QA team relies on exploratory testing as part of its two-week development cycles in lieu of scripted manual test cases. Mayhew explained the purpose behind this and how else it impacts the club management platform’s solutions.

 

What’s the most important best practice your QA team follows, and why?

Rather than relying on a mountain of scripted manual test cases, we default to exploratory testing. The goal of exploratory testing is to quickly learn about the software and how it’s changed with the latest code from the developers. This allows us to quickly identify, report and fix defects as a part of our two-week development cycles. We have also added automated checks for our critical path items, such as areas that used to be covered by scripted manual test cases. Our automated checks cover both the UI and API layers, with the development team adding unit-layer checks. With this approach, we are able to release new features with high confidence to our production environments and execute on our technology standard of excellence, which is to consistently deliver quality software.
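The layered checks Mayhew describes pair unit-level tests on the business rule with API-level tests on the contract around it. A minimal sketch, assuming an invented proration rule and a hypothetical signup endpoint (neither is Club Automation’s actual code):

```python
# Sketch of layered automated checks for one critical-path feature:
# a unit-level check on the business rule and an API-layer check on
# the contract wrapping it. All names here are illustrative.

def prorated_dues(monthly_fee, days_remaining, days_in_month=30):
    """Business rule under test: prorate a member's first month."""
    return round(monthly_fee * days_remaining / days_in_month, 2)

def handle_signup(request):
    """Tiny stand-in for an API endpoint wrapping the rule above."""
    fee = prorated_dues(request["fee"], request["days_remaining"])
    return {"status": 201, "prorated_fee": fee}

# Unit-layer check (the development team's responsibility in this model):
assert prorated_dues(60.0, 15) == 30.0

# API-layer check (the test team's automated critical-path coverage):
resp = handle_signup({"fee": 60.0, "days_remaining": 10})
assert resp["status"] == 201 and resp["prorated_fee"] == 20.0
```

Because these checks run continuously against feature branches, exploratory testing can stay focused on the new and risky behavior rather than re-verifying known paths.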

 

How do you determine your release criteria? 

All code that is in the development and test stage is completed during a two-week sprint cycle, and is merged into a pre-release branch and environment. Each ticket is merged as soon as testing passes, allowing other development teams to quickly pull other teams’ changes into their feature branches. This allows the testers to test against the latest validated code. 

At the end of our sprint, we have a code cutoff day or time when we merge our code from our pre-release branch to our release branch for regression testing. Because our automated checks have been running against the feature branches and our pre-release branch, it is unlikely that we find new issues in our release branch, where we execute targeted regression testing. This consists of the test team reviewing each item quickly and performing one more risk analysis to try to identify any risky areas that need to be tested before our release to production. This process is typically complete within two business days.

 

How do you ensure the QA team stays up to date on shifting requirements, and how do you plan ahead to ensure changes are handled smoothly?

Our test team is embedded within the software delivery teams at Club Automation. This allows the testers to ask lots of questions and clarify items in user story refinement meetings before any code is written by the developers. That is the goal: to prevent defects from being written! But we do work in an agile environment, where we may learn new information tomorrow, which may require changes. As those things change, our testers are in those conversations ensuring they have a clear understanding of what’s changing and why it is changing. By no means are our teams perfect but we take time at the end of every sprint to hold a team retrospective. Here, we celebrate wins, give out kudos and bring up topics on how the team can improve going forward. Through this blameless process, the testers, developers and product managers seek to improve week-over-week and build trust with one another.

 

Responses have been edited for clarity and length.
