QA testing · Designers

QA testing with a small team best practices


May 14th, 2013

In a small team with only one or two engineers and designers, where product features ship 3 or 4 times a week, what are the best ways to do QA testing? As the designer, I'm currently doing most of the testing. Most of the flows are either complicated or have such tight turnaround times that it's hard to plan for outsourcing. Is it normal for the designer to do all the QA? Should engineers test their own features? What QA processes have you found most effective for letting the team keep iterating, shipping, and building new things efficiently?

Michael Flynn Co Founder at Bootstrap Heroes

May 14th, 2013

What framework are you using? And do you have any automated browser tests (e.g. Selenium)? Those would be a good way to prevent some major regressions. Pushing that many times a week requires a significant amount of development-operations overhead (or someone who is always on edge, worrying and watching) to make sure bugs aren't being pushed. Sounds stressful. I've been there.
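For context, an automated browser regression test of the kind Michael suggests has roughly this shape. This is a sketch, not a real Selenium script: `FakeBrowser` is a stand-in so the example is self-contained, and the `/signup` route and field names are hypothetical. With real Selenium you would drive a `webdriver` instance instead.

```python
class FakeBrowser:
    """Stand-in for a Selenium webdriver so this sketch runs anywhere.
    With real Selenium you would use webdriver.Chrome() instead."""

    def __init__(self):
        self.fields = {}
        self.page = "/"

    def get(self, url):
        self.page = url

    def fill(self, name, value):
        self.fields[name] = value

    def submit(self):
        # Pretend the signup form succeeds when both fields are filled in.
        if "email" in self.fields and "password" in self.fields:
            self.page = "/welcome"


def test_signup_happy_path():
    """Guards the signup flow so a bad push can't silently break it."""
    browser = FakeBrowser()
    browser.get("/signup")                     # hypothetical route
    browser.fill("email", "user@example.com")
    browser.fill("password", "s3cret")
    browser.submit()
    assert browser.page == "/welcome"


test_signup_happy_path()
```

A handful of checks like this over the critical flows, run before every push, is what catches the "major regression errors" mentioned above.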

Jean Barmash Engineering Program Manager at Tradeshift

May 14th, 2013

It is important to consider which phase you are in. It sounds like it's fairly early and a lot of things are changing frequently, so you want to optimize for agility over stability. While you should not be in the QA role long-term, if it allows the team to deliver faster, I'd suggest you continue doing it.

If things are changing frequently, then investing in automated testing is less valuable: when a flow changes next week, there is twice as much code to change (the product code and the test code).

Also, having a non-engineer do the testing brings a different perspective to using the product. If the engineer who built a feature misunderstood it in some way, they will carry that misunderstanding into their testing.

Another thought is to spread the QA work a bit, so you are not the only person doing it: allocate a few hours prior to each release for all of you to QA different flows.

Engineers are likely testing the features as they develop them, but it would slow them down a lot to do full regression testing often.  

When you are later on (esp. post-product-market fit), you will want to have a more sophisticated testing infrastructure, since the focus will be more on stability.  At that point it will make more sense to invest in automated testing.  

One thing you could try is to identify the flows that are no longer changing much and outsource at least those, to make sure you don't have regressions there.


May 14th, 2013

The engineers should, at the least, be writing unit tests. What kind of testing are you doing as the designer? Also, the type of QA infrastructure you need truly depends on the product being tested. Is this a web application, desktop application, mobile application (et cetera)?
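To make "engineers should be writing unit tests" concrete, here is a minimal sketch. The `apply_discount` function and its rules are hypothetical; the point is that each test pins down one piece of business logic so a later change can't silently break it.

```python
import unittest


def apply_discount(price_cents, percent):
    """Hypothetical business rule: percent must be between 0 and 100."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return price_cents - price_cents * percent // 100


class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(1000, 25), 750)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(999, 0), 999)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(1000, 150)
```

Run with `python -m unittest` (or whatever runner your framework provides); in a CI setup these execute on every push.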

Jimmy Jacobson Full Stack Developer and Cofounder at

May 14th, 2013

This all really depends on the team size.  

For a founding team plus first employees (3-5) everyone needs to take ownership and responsibility for new features and quality. Engineers should deliver polished work according to the spec/use cases that were decided on for the feature.  

It doesn't make any sense to me to write automated browser tests for features that might be changed, scuttled or iterated on in the early stage of a startup while you are looking for product/market fit.

As the engineering team grows and features become more stable it makes sense to hire out QA or write automated tests.

James Bond CTO at SupplyBetter

May 14th, 2013

Absolutely, engineers should test. But not primarily or exclusively by hand: they should be creating automated tests (preferably developing with a specification-driven methodology, i.e. TDD/BDD). Don't aim for 100% coverage, but expect solid coverage of the more critical code paths. (In a Rails (RoR) app, I like to focus testing on models, which is where most of the business logic lives, and on requests, to exercise things end-to-end, and mostly skip testing controllers and views.)

You can also consider testing tools that allow non-programmers to write executable test specs (e.g. Cucumber, FitNesse), but only if that's really going to happen, i.e., a product owner writing the actual tests. (I would recommend *against* programmers writing these; it just adds an extra translation step without providing any value.)
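As a sketch of what "executable specs" means: tools like Cucumber map plain-language Given/When/Then steps to step definitions in code. The toy interpreter below is not Cucumber, just an illustration of the idea; the step wording and the cart logic are hypothetical.

```python
# Toy illustration of executable specs. A real tool here would be
# Cucumber or FitNesse; this only shows the Given/When/Then mechanism.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


# Step definitions: plain-language step text -> code that drives the product.
STEPS = {}

def step(text):
    def register(fn):
        STEPS[text] = fn
        return fn
    return register


@step("Given an empty cart")
def given_empty_cart(ctx):
    ctx["cart"] = Cart()

@step("When the user adds a widget")
def when_add_widget(ctx):
    ctx["cart"].add("widget")

@step("Then the cart has 1 item")
def then_cart_has_one(ctx):
    assert len(ctx["cart"].items) == 1


def run_spec(lines):
    """Execute a spec, line by line, that a product owner could have written."""
    ctx = {}
    for line in lines:
        STEPS[line.strip()](ctx)


run_spec([
    "Given an empty cart",
    "When the user adds a widget",
    "Then the cart has 1 item",
])
```

The spec text is what the product owner maintains; engineers only maintain the step definitions behind it.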


May 14th, 2013

These are great responses. To follow up and clarify: I was asking more about testing different use cases and user flows, not unit testing. I totally agree that depending on what stage you're in and how quickly you're iterating, unit tests will help, but they might not be the most efficient focus.

Aaron Perrin Software Architect / Senior Developer

May 21st, 2013

Unfortunately, there's no silver bullet answer.  I've seen teams test a lot and still create products that are inherently unstable.  On the other hand, I've also seen teams with little testing create products that are quite successful. 

Testing (which I distinguish here from 'QA') is certainly a good practice and reduces the _risk_ of product failure. But, like many risk-mitigation strategies, it doesn't work 100% of the time. Plus, there are certainly diminishing returns.

The best thing to do is to have a team of skilled developers who have shipped and maintained products successfully, many times if possible. They will have the intuition to draw the line between too much testing and too little. The developers should certainly be doing most of the testing. At a minimum, they should have a good set of unit tests at all layers of the development stack. They should also have automated functional tests as well as integration tests. As the product is maintained, a similar set of regression tests should be produced and executed.

Ideally, they will want to automate testing as much as humanly possible. Testing is seriously time-consuming, and manual testing takes away precious time from value-added tasks. However, there will still be some 'sanity testing' toward the end, where you and/or the product manager (are you the product manager?) walk through the application in its 'test' environment.

This is a very deep topic.  But, I can quickly summarize some strategies and tactics: 
1. Test constantly throughout development.  Tests should be added in parallel with features in the feature branch.
2. Tests should be automated.  Use a continuous integration tool.  If your team is mature enough, consider continuous deployment, in some cases.
3. Use clean environments for testing. That is, each environment should mirror production; there should be separate development, testing, staging, and production environments.
4. Use bug tracking and regression tests.
5. Do acceptance tests continuously (3-4 times a week) if features aren't exactly specified.
6. Keep metrics of errors found in testing environments and production.  Many bugs _are not reported_ by users; metrics will make you disciplined and help get an understanding of how many bugs are actually in the wild (and possibly destroying your customers' opinion of you).
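Point 6 can start out very lightweight. A sketch of tracking where bugs are caught, with hypothetical record fields; the number you want to drive down is the share of bugs that escape to production:

```python
from collections import Counter

# Each record notes where the bug was caught. Fields are hypothetical;
# in practice these would come from your bug tracker.
bug_reports = [
    {"id": 101, "found_in": "testing"},
    {"id": 102, "found_in": "testing"},
    {"id": 103, "found_in": "production"},
]

by_env = Counter(r["found_in"] for r in bug_reports)

# Share of bugs that escaped past testing into production.
escape_rate = by_env["production"] / len(bug_reports)
print(by_env, f"escape rate: {escape_rate:.0%}")
```

Even a spreadsheet gives you the same signal; the discipline of recording every bug's origin is what matters.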



Michael Flynn Co Founder at Bootstrap Heroes

May 14th, 2013

I agree with Jake that you need someone to help just with testing. Email me and I can hook you up with my testing guy if you'd like. He's good and pretty cheap.

Stefanie Kraus Senior Product Designer at HotelTonight

May 14th, 2013

What I have found very efficient in small cross-functional teams is to block off some time (this could be several hours, a full day or more) where everyone does QA and files bugs. Ideally, there are flows and use cases that are prepared ahead of time, and everyone in the team uses the same task flows.

A great side effect of this method is that people feel ownership over the product and think of this as a team effort.

Joseph Moniz Software Engineer

May 14th, 2013

Hi Abby,

At the last company I worked for, I was part of a really small team: just me (an engineer), 2 other engineers, and our Product Manager. We didn't have any QA resources or any designers. Yet we had a really fast build-and-ship loop and practiced the somewhat controversial approach known as Continuous Delivery, which means we shipped to production multiple times a day.

We were able to ship to production about 3-4 times a day with high confidence that we weren't going to break anything, because we had a really good peer code-review process (via git pull requests) and a high level of code coverage via unit tests.

We used a system called Zombie.js (which my manager at the time built); it pretends to be a browser, runs through all the different user flows automatically, and lets us know if anything is broken. By taking the time to write these fairly simple tests and adding them to our test suite, we were able to add features to the site while doing a full browser regression on all our flows in a matter of seconds instead of the hours or days it would have taken by hand.
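Zombie.js is a Node library, but the shape of such a flow test translates to any stack. Below is a self-contained Python sketch: a throwaway in-process HTTP server stands in for the real app (its routes and responses are hypothetical), and the test walks a signup flow the same way a headless-browser test would.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class FakeApp(BaseHTTPRequestHandler):
    """Minimal stand-in for the real application; routes are hypothetical."""

    def do_GET(self):
        pages = {"/signup": b"signup form", "/welcome": b"welcome!"}
        body = pages.get(self.path, b"not found")
        self.send_response(200 if self.path in pages else 404)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


def run_flow_test():
    """Drive the signup flow end to end, headless-browser style."""
    server = HTTPServer(("127.0.0.1", 0), FakeApp)  # port 0 = pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = f"http://127.0.0.1:{server.server_port}"
    try:
        # Walk the user flow page by page, checking each response.
        assert b"signup form" in urllib.request.urlopen(base + "/signup").read()
        assert b"welcome" in urllib.request.urlopen(base + "/welcome").read()
        return True
    finally:
        server.shutdown()


assert run_flow_test()
```

A suite of these runs in seconds on every push, which is what makes shipping several times a day feel safe.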

Based on my past experience, having a good iterative flow boils down to "People, Process, and Tools".

You need motivated people who are self-starters and care enough about quality to go out of their way to achieve it. You need a process that surfaces issues as early in the pipeline as possible and gets working changes through as fast as possible. And you need tools that automate as much of this as possible, so you can spend your time on more profitable things.

- Joseph Moniz

"Wake up early, Stay up late, Change the world"