
Best practices for QA testing with a small team

Jill

May 14th, 2013

In a small team where there are only one or two engineers and designers and product/features are shipped 3 or 4 times a week, what are the best ways to do QA testing? As the designer, I'm currently doing most of the testing. Most of the flows are either complicated or have such tight turnaround times that it's hard to plan for outsourcing. Is it normal for the designer to do all the QA? Should engineers test their own features? What QA processes have you found most effective at letting the team keep iterating, shipping and building new things efficiently?

Jean Barmash Engineering Program Manager at Tradeshift

May 14th, 2013

It is important to consider which phase you are in. It sounds like things are fairly early and a lot is changing frequently, so you want to optimize for agility over stability. While you should not be in the QA role long-term, if it allows you as a team to deliver things faster, I'd suggest you continue doing it.

If things are changing frequently, then investing in automated testing is less valuable: when the flow changes next week, there is twice as much code to change (the product code and the test code).

Also, having a non-engineer do the testing brings a different perspective to using the product. If the engineer who built a feature misunderstood it in some way, they will carry that misunderstanding into their testing.

Depending on your load, another thought is to spread the work a bit so you are not the only person doing QA: allocate a few hours prior to each release for all of you to test different flows.

Engineers are likely testing the features as they develop them, but it would slow them down a lot to do full regression testing often.  

Later on (especially post-product-market fit), you will want a more sophisticated testing infrastructure, since the focus will shift toward stability. At that point it will make more sense to invest in automated testing.

One thing you could try is to identify the flows that are no longer changing much and outsource testing of at least those, to make sure you don't have regressions there.

Michael Flynn Co Founder at Bootstrap Heroes

May 14th, 2013

What framework are you using? And do you have any automated browser tests (e.g. Selenium)? Those would be a good way to prevent some major regression errors. Pushing that many times a week requires a significant amount of development-operations overhead (or someone who is always on edge, worrying and watching) to make sure bugs aren't being pushed. Sounds stressful. I've been there.
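
To make that concrete, here is a minimal sketch of a Selenium smoke test using the Python bindings. The staging URL, form field names and the "Welcome" check are all placeholders, not details from your actual app:

    # Minimal Selenium smoke test for one critical flow (hypothetical app).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://staging.example.com/signup")  # placeholder staging URL
        driver.find_element(By.NAME, "email").send_keys("qa@example.com")
        driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        # Crude success check: the post-signup page should greet the new user.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()

A handful of scripts like this, run before each push, catch the "whole flow is broken" class of regressions without anyone clicking through by hand.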

Anonymous

May 14th, 2013

The engineers should, at the very least, be writing unit tests. What kind of testing are you doing as the designer? Also, the type of QA infrastructure you need really depends on the product being tested. Is this a web application, a desktop application, a mobile application, etc.?
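
As a rough illustration of what is meant by unit tests here, the sketch below exercises a hypothetical pricing helper with pytest; the function and its rules are invented for the example:

    # A minimal pytest example against a hypothetical pricing helper.
    import pytest


    def apply_discount(total_cents: int, percent: int) -> int:
        """Return the total after a percentage discount, never below zero."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return max(total_cents - total_cents * percent // 100, 0)


    def test_apply_discount_happy_path():
        assert apply_discount(1000, 10) == 900


    def test_apply_discount_rejects_bad_percent():
        with pytest.raises(ValueError):
            apply_discount(1000, 150)

Tests like these are cheap to write alongside the feature and run in seconds, which is what makes them worthwhile even at a fast shipping cadence.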

Jimmy Jacobson Full Stack Developer and Cofounder at Wedgies.com

May 14th, 2013

This all really depends on the team size.  

For a founding team plus first employees (3-5), everyone needs to take ownership of and responsibility for new features and quality. Engineers should deliver polished work according to the spec/use cases that were decided on for the feature.

It doesn't make any sense to me to write automated browser tests for features that might be changed, scuttled or iterated on in the early stage of a startup while you are looking for product/market fit.

As the engineering team grows and features become more stable, it makes sense to hire out QA or write automated tests.

Jill

May 14th, 2013

These are great responses. I guess to follow up and clarify: I was asking more about testing different use cases and user flows, not unit testing per se. I totally agree that depending on what stage you're in and how quickly you are iterating, unit tests will help but might not be the most efficient use of time.

Aaron Perrin Software Architect / Senior Developer

May 21st, 2013

Unfortunately, there's no silver bullet answer.  I've seen teams test a lot and still create products that are inherently unstable.  On the other hand, I've also seen teams with little testing create products that are quite successful. 

Testing (which I distinguish here from 'QA') is certainly a good practice and reduces the _risk_ of product failure. But, like many risk-mitigation strategies, it doesn't work 100% of the time. Plus, there are certainly diminishing returns.

The best thing to do is to have a team of skilled developers who have shipped and maintained products successfully, many times if possible. They will have the intuition to draw the line between too much testing and too little. The developers should certainly be doing most of the testing. At the least, they should have a good set of unit tests at all layers of the development stack. They should also have automated functional tests as well as integration tests. As the product is maintained, a similar set of regression tests should be produced and executed.

Ideally, they will want to automate testing as much as humanly possible. Testing is seriously time-consuming, and manual testing takes precious time away from value-added tasks. However, there will still be some 'sanity testing' toward the end, where you and/or the product manager (are you the product manager?) walk through the application in its 'test' environment.

This is a very deep topic, but I can quickly summarize some strategies and tactics:
1. Test constantly throughout development. Tests should be added in parallel with features, in the feature branch.
2. Tests should be automated. Use a continuous integration tool. If your team is mature enough, consider continuous deployment in some cases.
3. Use clean environments for testing. That is, the environments should mirror each other, and there should be separate development, testing, staging, and production environments.
4. Use bug tracking and regression tests (one way to tie the two together is sketched after this list).
5. Do acceptance tests continuously (3-4 times a week) if features aren't precisely specified.
6. Keep metrics of errors found in testing environments and in production. Many bugs _are not reported_ by users; metrics will keep you disciplined and help you understand how many bugs are actually in the wild (and possibly destroying your customers' opinion of you).
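
To illustrate point 4, here is a hedged sketch of pinning a reported bug with a regression test named after its ticket, so the same bug can't silently come back; the bug, the ticket number and the parse_signup_date helper are all hypothetical:

    # Regression test tied to a tracked bug (hypothetical ticket #142).
    from datetime import date


    def parse_signup_date(raw: str) -> date:
        """Parse 'YYYY-MM-DD' strings submitted by the signup form."""
        year, month, day = (int(part) for part in raw.strip().split("-"))
        return date(year, month, day)


    def test_issue_142_signup_date_with_trailing_whitespace():
        # Reproduces the original bug report: dates pasted with a trailing
        # space used to crash the signup flow. Keep this test forever.
        assert parse_signup_date("2013-05-14 ") == date(2013, 5, 14)

Once a test like this exists and runs in CI, the ticket can be closed with confidence, and any regression shows up immediately as a named, failing test.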

Best,

Aaron

James Bond CTO at SupplyBetter

May 14th, 2013

Absolutely, engineers should test. But not primarily or exclusively by hand -- they should be creating automated tests (preferably developing with a specification-driven methodology, i.e. TDD/BDD). Don't aim for 100% coverage, but expect solid coverage of the more critical code paths (in a RoR app, I like to focus testing on models, which is where most of the business logic lives, and on requests, to exercise the app end to end, and mostly skip testing controllers and views).
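
The same idea in a rough Python/Flask sketch (only an analogue -- the advice above is about RoR, and the app, route and cart_total helper here are invented): cover the business logic heavily, add one end-to-end request test, and skip the view layer.

    # Hypothetical Flask app: test the "model" logic plus one request path.
    from flask import Flask, jsonify

    app = Flask(__name__)


    def cart_total(items):
        """Business-logic layer: the kind of code worth covering heavily."""
        return sum(price * qty for price, qty in items)


    @app.route("/cart/total")
    def total_endpoint():
        # In a real app the items would come from the session or database.
        return jsonify(total=cart_total([(500, 2), (250, 1)]))


    def test_cart_total_unit():
        assert cart_total([(500, 2), (250, 1)]) == 1250


    def test_cart_total_request():
        # End-to-end request test using Flask's built-in test client.
        response = app.test_client().get("/cart/total")
        assert response.status_code == 200
        assert response.get_json()["total"] == 1250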

You can also consider testing tools that allow non-programmers to write executable test specs (e.g. Cucumber, FitNesse) -- but only if that's really going to happen, i.e., a product owner writing the actual tests (I would recommend *against* programmers writing these; it just adds an extra translation step without providing any value).

Mike Mitchell Consultant: Technology Development and Management

May 29th, 2013

I've spent most of my career working in a similar environment.
1. Bake quality in and always test your own stuff. Unit and integration testing are integral to your development process.
2. It always pays to put one non-developer between the engineer and the public.  

I think there are a few reasons for this being effective:
1. Even if they find nothing, they will give your engineers one of two things: the confidence to move on, or enough resistance to stop them from "throwing garbage over the wall".
2. It's a classic visibility-inside-the-box problem. The human who has seen inside the box cannot unlearn what they know, and therefore can't evaluate whether what they built meets customer expectations.
3. Often you are not testing what the engineer wrote; you are testing their assumptions. The code could be the perfect answer to the wrong problem.


Jonathan Vanasco

May 26th, 2013

Jean Barmash and Jimmy Jacobson are correct. I just want to elaborate with my own experiences:

1. Integration and system testing are a requirement for large companies and stable products. For a small startup where you have two developers who are essentially rapid-prototyping, it is often an incredibly stupid idea and a complete waste of resources. Tests take time to write and manage, so you need to decide what is more important right now -- product features or automated tests? With one engineer, that tradeoff is going to have a huge impact on your velocity. If the product is changing 3-4x a week at the current velocity with 1-2 developers, you're looking at only 1-2 deployments a week if those engineers are also responsible for the tests. You're also looking at a few weeks at 0-1 deployments while the core set of tests is first written.

2. No matter what you automate, you should still have a human go through a visual + UI/UX QA pass. Automated integration tests rarely cover everything -- CSS will often break, and random buttons/links will 404.

3. Having engineers test their own work is generally a bad idea. Most will have "tired eyes" -- they'll look at the pages and simply not notice obvious mistakes. They've spent the past week so focused on the left side of the page that the right side could be completely missing. Then you have the "cultural shift," where something that is acceptable (or a great idea!) to an engineer is *entirely* unacceptable to a non-engineer.

As someone who has managed many development teams -- and who believes in frequent ships -- I find your characterization of the situation worrying. The notion of only 1-2 people being responsible for the product AND doing multiple weekly product releases is troubling. "Lean startup" methods and frequent ships are great -- but even with automated builds and deploys, there's a certain amount of admin overhead and oversight that should be going on. I'd be inclined to think that this engineer is either being overloaded by the greater team or biting off more than they can chew. Dropping additional testing requirements on them seems like an untenable situation. If something does break, you're taking a potentially frazzled engineer and throwing them into disaster-recovery mode.

If there are only 1-2 engineers responsible for the product, then there are probably 1-2 non-engineers who should be helping out to reduce the workload on the product development team. Personally, I would probably have them (i.e. you) start building out the Selenium tests, and I would target 1-2 product deployments a week (unless there are critical bug fixes, which are always ASAP). My rule of thumb has always been that you shouldn't target more than one deployment per engineer per week. (If you have a small, advanced team that has built out a robust testing and deployment infrastructure, this doesn't apply.)

I agree with all the testing recommendations and strategies that people have noted above. At 3+ engineers, this is a totally different story -- but at 1-2 engineers? From a resource-management perspective, it just worries me.

Michael Flynn Co Founder at Bootstrap Heroes

May 14th, 2013

I agree with Jake -- you need someone to help just with testing. Email me at mike600@gmail.com and I can hook you up with my testing guy if you'd like. He's good and pretty cheap.