Software Engineering · Engineering management

Should developers be testing their own code?

Rohit Mittal Data Scientist at Popsugar

January 21st, 2015

There is a lot of debate as to what degree of testing developers should do on their own code. On the one hand it can waste their time; on the other, it can waste others' time. Should they be responsible for testing their own code, and if so, how do you keep them accountable?

Jesse Tayler

Last updated on April 13th, 2017

It's not a one-size-fits-all proposition.

Testing is a technical vocation, performed by professional computer scientists throughout the software lifecycle.

Testing is assurance. It is the National Transportation Safety Board of software construction. It is there to prevent accidents, to determine probable cause, and to formulate safety recommendations.

Testing assures that software correctly achieves stated requirements and allows objective measures and assessments of the degree of this conformance. Critically, Testing provides a bedrock of assurance and reliability: the underpinnings of responsive and agile construction.

Testing can also grind software construction to a halt.

The potential to shut down progress is real. Testing has an extremely broad set of professional techniques each evolving within dramatically different environments and sharing only distantly related evolutionary pressures and influences. This breadth of technique is a reflection of the incredible spectrum of purpose for which software is applied, and the maturity of modern software construction practice.

Put simply, there are many more ways to apply the wrong technique than there are ways to correctly assemble a useful Testing strategy.

Think of Testing as the underwriter’s insurance of your project. Underwriting is the business of determining risk potential and quantifying the results of safety procedures.

If the one-and-only concern of the NTSB were safety, the investigative agency would simply ground all airplanes and close all roads.

Testing must consider the economy in which it operates.

Ted Rogers

January 21st, 2015

Developers should absolutely be testing. Testing your software is one of the most important steps in software development. Ideally, the developer would write unit tests as well as do manual testing.
As @David Matousek said, automated testing is great, but it's a luxury not everybody has the time or resources to set up. Your QA resource and/or Product Owner should also be testing, to make sure the features work like they are supposed to and to find bugs the developer did not.
Also, at a company that is just starting out, the software developers are likely some of the busiest people in the company, so they may need additional testing help to get MVPs working as quickly as possible.


February 27th, 2017

Developers should definitely be testing the code they write. However, this is true not only for developers, but for everybody: marketing, sales, operations, and engineering should all have metrics they are measured by.

The question is how much testing and what type of testing – these are business driven. For example, if a company values ‘time to market’ more than ‘quality’, it will do only very basic testing and check whether there is demand for the product. Only if there is demand will it invest more in the product.

Developers – in my opinion, they should ‘prove’ that the software meets the requirements (functional and system). These days that is easy to do: there are lots of techniques and tools.
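As a sketch of what ‘proving’ a functional requirement can look like, here is a tiny Python example using plain assertions. The discount rule, the function, and its tests are entirely hypothetical, invented for illustration:

```python
def discounted_price(price, percent):
    """Hypothetical requirement: apply a percentage discount (0-100),
    rejecting anything outside that range."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Each test encodes one clause of the (hypothetical) requirement.
def test_applies_discount():
    assert discounted_price(200.0, 25) == 150.0

def test_rejects_out_of_range_percent():
    try:
        discounted_price(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # requirement met: invalid input is refused

test_applies_discount()
test_rejects_out_of_range_percent()
```

In practice you would run such tests through a test runner rather than calling them by hand, but the idea is the same: each requirement clause becomes a check the code must pass.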

Art Yerkes Computer Software Professional

January 21st, 2015

Developers likely shouldn't be coming up with integration scenarios and carrying out in situ testing of the final product.  They should be writing real units and real unit tests for their own code, as this is the principal way in which the code can be exercised before it gets to production.  Designing software for testability also makes isolating failures and making later changes much easier, so I wholeheartedly recommend the practice of designing for testability and writing tests.

If your developers are having trouble writing at least basic tests, then they probably aren't designing testable code.
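To illustrate what designing for testability can look like, here is a minimal Python sketch (the date-checking function is a made-up example): logic that takes its inputs as parameters instead of reaching for global state can be pinned down directly by a unit test.

```python
import datetime

# Hard to test: reads the real clock internally, so the result
# depends on when the test happens to run.
# def is_weekend_now():
#     return datetime.date.today().weekday() >= 5

# Testable: the dependency (the date) is passed in.
def is_weekend(day: datetime.date) -> bool:
    return day.weekday() >= 5  # Monday is 0, Saturday is 5

# Unit tests can now use fixed inputs with known answers.
assert is_weekend(datetime.date(2015, 1, 24))      # a Saturday
assert not is_weekend(datetime.date(2015, 1, 21))  # a Wednesday
```

The same move (injecting the clock, the network, the database) is what makes most "untestable" code testable.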

Lalit Sarna Business & Technology Leader

January 21st, 2015

QA's job is to stress test; basic fidelity should always be the developer's responsibility.  It's a no-brainer on so many levels.

If a developer thinks that unit testing their code is a waste of time, it's best you give them a generous severance package and save your team.  Sorry if this sounds harsh.

Regarding accountability:

Culture is the key. Peer pressure and transparency are about the only things that sustain.
All other methods of enforcement will eventually fail.

Bad apples should not be tolerated. One B player is enough to spoil an entire team.

People slip from time to time, and that is OK. Give them plenty of room to make mistakes, with clear feedback on how to improve and what you expect.

If they are unable to improve, then help them find a place where they can thrive.


David Matousek Director, Mobile Development Manager at John Hancock Financial Services

January 21st, 2015

First of all, testing code is NOT a waste of time, no matter who is doing it.  The developers should be doing some testing themselves, but an automated test system is ideal.  Follow continuous delivery best practices.

In small startups, everyone should be using and testing the application as much as possible.

But the product owners and founders are responsible for ensuring that only quality products ship.  They should be testing!

Yvan Pearson

January 21st, 2015

Testing is not a waste of time. Finding bugs early is a lot cheaper than fixing them in the field. Manual and/or automated testing should be mandatory, including unit testing or black-box testing. Someone who believes testing isn't important shouldn't be writing software.


January 21st, 2015

Until you are large enough to get very specialized & make a conscious choice to employ specialist QA people (which there are good arguments against, BTW) then the answer to your first question is: of COURSE developers should test their code. In what other field would you consider a job "done" if the delivered artifact didn't work?

To your second question, how to hold them accountable: if you find that you need to hold them accountable, fire them and hire someone else. A small business can't afford to employ people who don't take responsibility for their work and/or don't want to do a quality job.

Kerry Davis

January 21st, 2015

The question is a bit vague because there are many forms of testing. There are three basic forms of software testing.

  1. Unit testing, where the code under test is a new feature or bug fix (aka a unit) within the entire body of code
  2. Integration testing, where you want to make sure the new feature / bug fix plays well with other parts of the code
  3. Regression/stress testing, which ensures the code is not only backward compatible across different operating environments but can also function well under extreme load or system delays.

Assuming you are talking about a single developer who is the only one on the project for the next release, then absolutely yes to all of the above. However, if you are referring to a team effort on an ongoing project expected to operate in many different environments (hardware, software, and network conditions), then it is quite possible that unit testing (and maybe integration testing) alone is the most efficient course for developers, and a separate release testing team should handle regression, stress, and integration testing.
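A rough Python sketch of the difference between the first two forms (all names here are hypothetical, invented for illustration):

```python
def parse_order(line: str) -> tuple:
    """Unit under test: parse a 'sku,qty' line into (sku, int(qty))."""
    sku, qty = line.split(",")
    return sku.strip(), int(qty)

class InMemoryStore:
    """A second piece the parser must cooperate with."""
    def __init__(self):
        self.orders = []
    def add(self, order):
        self.orders.append(order)

def ingest(lines, store):
    for line in lines:
        store.add(parse_order(line))

# 1. Unit test: exercises parse_order in isolation.
assert parse_order("ABC-1, 3") == ("ABC-1", 3)

# 2. Integration test: checks that the pieces play well together.
store = InMemoryStore()
ingest(["A,1", "B,2"], store)
assert store.orders == [("A", 1), ("B", 2)]
```

Regression/stress testing would then rerun suites like these across environments and under load, which is typically where a separate release testing team earns its keep.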

Michael Barnathan

January 21st, 2015

Developers should absolutely be responsible for their own code, and that includes testing it. For any nontrivial project, not writing tests wastes more time in the long run than writing them, especially if your alternative to a good automated test suite is more manual testing.

As for how to keep them accountable: other developers. Testing should be part of the development culture, and budgeted for in product estimates. A dedicated QA team is a supplement to developer testing, not a replacement for it.

You can use metrics such as code coverage, but I'd be wary of relying too much on them. I've seen lots of code "100% covered" by poorly written smoke tests that don't actually exercise any of the conditions they're supposedly checking. I've also seen tests become very pedantic ("check that the plus operator adds correctly") in an attempt to get full coverage.
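To illustrate the coverage caveat, here is a small hypothetical Python example. Both test styles execute every line of `clamp`, so both would report 100% line coverage, but only the second would catch a broken comparison:

```python
def clamp(value, low, high):
    """Return value limited to the range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# "Smoke test": runs every line, but asserts nothing, so a bug in
# either comparison would still pass.
def smoke_test():
    clamp(-1, 0, 10)
    clamp(11, 0, 10)
    clamp(5, 0, 10)

# Tests that actually exercise the conditions being checked.
def real_tests():
    assert clamp(-1, 0, 10) == 0    # below the range
    assert clamp(11, 0, 10) == 10   # above the range
    assert clamp(5, 0, 10) == 5     # inside the range

smoke_test()
real_tests()
```

Coverage tells you what the tests touched, not what they verified.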