In this first post on how we test software at Quid, my aim is to tease out different layers of testing built into our development process, and to call out when, where and how tests should be added, and most importantly, who should come up with them.

The biggest problem with testing is not doing any. To quote the Google SRE (Site Reliability Engineering) team mantra, “Hope is not a strategy”. If you didn’t test your feature / service / API / function, I’ll bet you good money that it’s buggy in ways that matter. If you tested it once when you built it, but don’t have it covered by a regression suite executing in your CI (Continuous Integration) framework, I’ll bet that it’s broken by now. You could run the code in production anyway, and pay the price in field escalations that consume valuable engineering time, in support calls, and in users lost to dissatisfaction with quality.


Some level of bugs and production failures can be expected and even planned for in the form of an “error budget”, but to manage risk you gotta be testing. Let’s assume that you’re as much into testing your software as we are at Quid.

The challenge, then, is to make sure you’re testing the software as it should be, not as it happens to be. You could put a lot of effort into tests that are closely coupled with implementation, and end up spending substantial time regularly updating them, while not catching any bugs because the tests tautologically verify that the code works the way you wrote it instead of the way it was meant to work. To avoid this trap and the resulting disillusionment with the whole idea of testing, have your tests defined by Those Who Know Best ™.
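To make the trap concrete, here’s a minimal Python sketch (the `apply_discount` function and its tests are hypothetical, invented purely for illustration) contrasting a tautological test with one that states the intended behavior:

```python
def apply_discount(price, rate):
    # Hypothetical function under test.
    return price - price * rate

def test_discount_tautological():
    # Re-derives the expected value with the same formula as the code
    # under test, so it can only ever confirm the implementation.
    assert apply_discount(100, 0.2) == 100 - 100 * 0.2

def test_discount_behavioral():
    # States the intended behavior independently of the implementation:
    # a 20% discount on $100 should yield $80, and a 0% discount is a no-op.
    assert apply_discount(100, 0.2) == 80
    assert apply_discount(100, 0) == 100
```

The tautological test will keep passing (and keep needing updates) even if the formula itself was wrong all along; the behavioral test pins down what the code was meant to do.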

Those Who Know Best will be different people at different stages of the software development process. For a complex product like Quid, with its rich visualization UI on top of a natural language-based data processing platform, the quality engineering strategy includes 7 layers.

1. Define how the feature should work.

Those Who Know Best = Product Management (PM).


At Quid, PM does the heavy lifting of figuring out what will best meet the user’s needs, and is therefore best positioned to define, in a testable way, how the feature should work. The Agile way is to encode this knowledge in a User Story with a set of formal Acceptance Criteria (AC). At Quid, the AC are validated by PM upon feature completion, and we have an easy way to move from these AC to runnable tests via the RainforestQA framework; more on that later.

However, PM’s role focuses on the happy path: how the feature should work, not how it might break. Considering possible failure modes and corner cases takes a special skill that usually resides within a dedicated quality team, analogous to Google’s SDET (Software Developer in Test) role. At Quid, this is the Quality Engineering team.

2. Anticipate how the feature might break.

Those Who Know Best = Quality Engineering (QE).


As part of Quid’s planning process, QE helps the project team line up test cases and AC that cover failure scenarios, identify risk areas, plan for load testing, and advise on production monitoring of the new feature. This happens early in the process, before implementation, so that engineering work can be informed by quality considerations.

At this stage, it also becomes clear what tools will be needed for later execution of the tests so QE can go off and build those tools.
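To give “failure scenarios” some flavor, here’s a hypothetical Python sketch of the kind of edge-case tests QE lines up alongside the happy path (the `parse_query` function and its limits are invented for illustration, not taken from Quid’s codebase):

```python
def parse_query(q):
    # Hypothetical query parser: trims whitespace, rejects empty or
    # oversized input, and splits the query into terms.
    q = q.strip()
    if not q:
        raise ValueError("empty query")
    if len(q) > 1024:
        raise ValueError("query too long")
    return q.split()

def assert_raises(fn, *args):
    # Tiny helper so the failure-scenario checks read clearly.
    try:
        fn(*args)
    except ValueError:
        return
    raise AssertionError("expected ValueError")

# Happy path -- what PM's acceptance criteria typically cover:
assert parse_query("climate policy") == ["climate", "policy"]

# Failure scenarios -- what QE adds at planning time:
assert_raises(parse_query, "")            # empty input
assert_raises(parse_query, "   \t  ")     # whitespace-only input
assert_raises(parse_query, "x" * 2000)    # oversized input
```

The point is not these particular cases, but that they get written down before implementation, so the developer builds with them in mind.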


3. Design and implement the feature, testing as you go.

Those Who Know Best = Engineers doing the work, of course.


Armed with the acceptance criteria set earlier, developers write unit tests and functional tests as they implement the code. There’s a catch, though: whenever the person checking quality is the same person who did the work, optimistic bias creeps in, and problems get missed. That’s why writers have editors; the later code review step addresses part of the problem, and the practice of test-driven development (TDD, aka “write the tests before the code”) helps avoid this pitfall too. We encourage automated tests and mandate their inclusion in Pull Requests along with the feature code. Our unit and functional tests are written in the following frameworks: nose and unittest for Python, RSpec for Ruby, Mocha and Selenium for JavaScript.
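For a flavor of the unit tests that accompany feature code, here’s a minimal sketch in Python’s unittest (the `slugify` helper is hypothetical, chosen only to keep the example self-contained):

```python
import unittest

def slugify(title):
    # Hypothetical helper under test: lowercase, hyphen-separated URL slug.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Testing   at  Quid "), "testing-at-quid")
```

Saved as a `test_*.py` file, this runs under `python -m unittest` (and nose discovers it the same way), which is what makes it cheap to hook into CI.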

We have not, however, found that developers are the best people to put end-to-end tests in place when the requirements were defined by others (commonly by PM for user-facing product features) – read on to find out how we handle that layer of testing.

4. Review the code and tests.

Those Who Know Best = Engineering at large.


Review is essentially a crowdsourced step, aiming to bring in the wisdom of the local engineering community, and have an independent look at the produced code. At Quid, all code goes through peer review, and the designated reviewers must review tests as well as feature code.

5. Test for acceptance of the new feature.

Those Who Know Best = PM, the same people who defined how the feature should work.


This step is essentially a walk through the acceptance criteria set at the start of the project, performed by PM. This is what I like to call “artisanal testing” (you heard it here first!), since the term “manual testing” has gotten a bad rap in the industry over the years; but truly, there is no substitute for exploratory testing done by qualified, caring humans. At Quid, we go beyond single-pass acceptance testing by encoding the new feature tests in the RainforestQA framework, where they join the automated end-to-end regression suite.

6. Test for regressions.

Those Who Know Best = Automation?

That would be the proximate answer, because we take the CI system’s word for whether the build is green; it’s not feasible to test for regressions consistently in any other way. The real knowledge, of course, comes from the people who put the regression tests in place. For the unit and functional tests, that would be all engineers, collectively.


For end-to-end system tests, the ideal answer would be “the user”, but of course you don’t want to subject your users to bugs just because they are best positioned to hit them. At Quid, our proxy for the end user is the Client Services team, who are internal power users of the product, along with PM and, to some degree, QE. These three groups have contributed heavily to the development of our end-to-end regression suite, built in the RainforestQA framework.

7. Monitor in production.

Those Who Know Best = Engineering + Operations / DevOps.

Continuous monitoring of your production environment, while a crucial contributor to quality, is not what this post is about; I mention it here for completeness.

Now I will briefly explain the value of the RainforestQA approach to us at Quid, since I’ve referred to it three times above (a detailed blog post is coming soon).

RainforestQA tests require no translation layer between “Those Who Know Best” and the automated tests, unlike other testing frameworks, including BDD ones like Gherkin -> Python -> Selenium. We’ve found real benefit in removing the middlemen from testing.


The RainforestQA test case scenarios reflect user stories and product workflows. They are written by PM and Client Services in plain English, executed by crowdsourced testers on demand, and monitored by QE. This type of testing gives us the best of all worlds: automation of test cases with quick, parallelized execution; ease of test development by non-coders (Those Who Know Best); and human interaction with the product during testing. We run Rainforest tests upon deploy to our staging environment, as a gate for promoting the release to production. We also do nightly CI runs for cross-browser test coverage that double as load tests.


We are continually refining our approach, hence this post is not the gospel of quality engineering, but merely the best we’ve come up with so far at Quid. Stay tuned for more!

P.S. Thanks are due to IKEA for the assembly instructions for the ubiquitous Billy bookcase, the 2011 kitchen repricing ad campaign, and the Valentine’s Day tweets of 2014 that helped me debias the assembly man’s gender to some degree.

Interested in helping us solve awesome problems? If so, then head over to our careers page!