How Guru’s Engineers Use Cypress for Better Burn Testing

Need a better way to handle burn testing? Two of our front-end engineers have found a way to use Cypress to improve how they handle QA.

We’re all about knowledge sharing at Guru, and when we discover something new and helpful, we want the world to know! Our engineering team uses Cypress in their testing and QA process. They’ve discovered a new way to help run better burn tests, and they want to share their new and improved process with other engineers and testers.

Over the past year, we’ve been focusing on increasing our test coverage — specifically, end-to-end testing that ensures different parts of our product flows function correctly as we make code updates. For this kind of testing, we leverage a tool called Cypress, which simulates user behavior in our web app and Google Chrome extension. This testing suite runs on every pull request to ensure new code isn’t breaking things in our UI. We also want to block releases from going out if our Cypress suite fails on our production branch.

[Image: Ryan and Jack talk about Cypress]

What we look for in traditional testing

One issue we’ve dealt with is seeing some of our tests fail, but not for reasons connected to the UI or code. So, how does Cypress help us in these cases? We have Cypress act as a user whose behavior we can define. There are several variables that could lead to Cypress telling us there’s a failure, but from a user’s perspective, nothing seems wrong.

For example, there might be a test that says, “Go to this part of the web app > Add a new Guru card > Expect to see a post-creation state”. If the loading state of creating a card hangs around for a millisecond too long and Cypress starts looking for the post-creation state before it’s there, this test could fail (most of the time, though, this test will pass). If it passes once on a pull request, we’re able to merge code that could sometimes flake. But we won’t know if it does; a passed test is no guarantee that the code is correct.
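To make that failure mode concrete, here’s a minimal sketch of what a spec like that could look like. The route, data-testid selectors, and card title are hypothetical stand-ins, not our actual test code.

```typescript
// Hypothetical Cypress spec for the scenario above; selectors and route are
// illustrative stand-ins, not Guru's real test code.
describe('card creation', () => {
  it('shows the post-creation state after adding a card', () => {
    cy.visit('/app/cards');
    cy.get('[data-testid="new-card-button"]').click();
    cy.get('[data-testid="card-title-input"]').type('My new card');
    cy.get('[data-testid="save-card-button"]').click();

    // Cypress retries this assertion until its command timeout (4s by default).
    // If the loading state happens to linger just past that window, the test
    // fails even though nothing is actually wrong with the UI.
    cy.get('[data-testid="post-creation-state"]').should('be.visible');
  });
});
```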

This kind of situation lowers the reliability of our test coverage. As a result, when a test failure pops up in our deployment pipeline, we need to verify whether it’s a problem with the UI — or with the test. To fix this, we started thinking about ways to detect whether a test might flake before it reaches our production code branch.

How burn testing helps


Enter burn testing. Burn testing is a process used to test something under more rigorous or extreme conditions. It's also sometimes called stress testing or load testing, depending on the method or specific area being tested.

At Guru, we use burn testing as a part of our Cypress suite for any new or modified tests that get added to our codebase. Before these tests can be merged in, we run them several times in a row and all must pass in order to move on to the next step in our CircleCI build pipeline.

This step happens immediately before we run the entire Cypress test suite. The advantage of this order is that if our burn testing step produces a failure, we can mark the entire check as a failure early. Skipping the rest of the suite allows us to save time and reduce the number of cycles it can take to ensure that newly introduced tests are as reliable and fault-tolerant as we need them to be.

To find the files we’re looking for, we use git diff on the current branch and pass the output to a tool called cypress-repeat, which lets us run those tests a specified number of times, effectively adding burn testing as a step in our end-to-end testing suite.
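As a rough sketch of how that step could be wired up (not our exact CI script), the idea is: diff against the main branch, keep only the Cypress specs that changed, and hand them to cypress-repeat. The branch name, spec paths, repeat count, and exact cypress-repeat flags below are illustrative assumptions.

```typescript
// burn-test.ts - illustrative sketch, not Guru's actual CI script.
// Assumes specs live under cypress/e2e/ and cypress-repeat is installed.
import { execSync } from 'node:child_process';

// Spec files that changed on this branch relative to main (branch name assumed).
const changedSpecs = execSync('git diff --name-only origin/main...HEAD', {
  encoding: 'utf8',
})
  .split('\n')
  .filter((file) => file.startsWith('cypress/e2e/') && file.endsWith('.cy.ts'));

if (changedSpecs.length === 0) {
  console.log('No new or modified specs - skipping burn test.');
  process.exit(0);
}

// Run only the changed specs several times in a row; any failure exits non-zero,
// which fails this CI step before the full Cypress suite ever starts.
execSync(`npx cypress-repeat run -n 5 --spec "${changedSpecs.join(',')}"`, {
  stdio: 'inherit',
});
```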

The outcome

Making this tweak to our testing process has had some solid positive results. For us, adding burn testing can reduce the time it takes to find unreliable tests by up to 30 minutes. It also improves turnaround time for adding new functionality: since the newly added tests run first, we can verify their stability before continuing on to the rest of the build process.

Overall, our continued focus on making our testing processes more efficient and robust increases the entire engineering team’s confidence in shipping new features quickly. We're fixing bugs faster and making it easier to work across all of our codebases. We hope that this post helps other Cypress users create better tests and entire teams build more efficiently.
