What happens when you ask an experienced developer when you should write automated tests?

Sure, you'll get a few extremists.

😏 Only a peasant wouldn't always TDD with 100% test coverage.
😎 Us 1000x devs don't need no tests, bro!

But 9 times out of 10, you'll probably arrive at the universal experienced developer answer to everything.

It depends. ™

It depends on what?

Here is my mental framework for answering that question on a case-by-case basis.

📈 Specific vs 📉 vague requirements

The more development is like "filling in the spec", the more it benefits from techniques like TDD and testing in general.

When the "requirements" are being discovered through iteration, extensive testing can be a hindrance.

📈 Longer vs 📉 shorter term outlook

Tests are costly to write and slow to maintain. The scrappier the organization is (eg. a startup), the more the onus is on shipping now and paying down technical debt later (if ever).

Enterprises have more to lose, so they are more risk averse: willing to pay the premium and go slower, but do it "right" the first time around.

📈 Short vs 📉 long release cycle

The more often a product is shipped, the more extensive test coverage it needs to have (see "continuous deployment"). Products with multi-year development cycles sometimes end up relying entirely on manual QA (eg. video games).

📈 High vs 📉 low risk functionality

In case of bugs, would a space shuttle crash? Or would paying customers get pissed off? Or would some internal employee have to deal with a broken admin panel for a bit?

Different risk profiles call for different levels of investment in testing.

📈 Obscure vs 📉 widely used functionality

This is a bit paradoxical, but the more universal and far-reaching something is (eg. a base class, button component, login system), the less bespoke testing it needs. The users of the code (whether humans or other code) become its testers.

A custom monthly export used by only a few customers... now that's something that should be tested.
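
For instance, a bespoke export like this has no crowd of users to catch regressions for free, so a small test earns its keep (the format and function here are invented):

```python
import csv
import io

# Hypothetical monthly export used by a handful of customers.
def export_invoices_csv(invoices: list[dict]) -> str:
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["invoice_id", "total"])
    for invoice in invoices:
        writer.writerow([invoice["id"], f"{invoice['total']:.2f}"])
    return buffer.getvalue()

def test_export_has_header_and_formatted_totals():
    output = export_invoices_csv([{"id": "INV-1", "total": 12.5}])
    assert output.splitlines() == ["invoice_id,total", "INV-1,12.50"]
```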

📈 Background vs 📉 user-facing functionality

Eg. a weekly cleanup cron job vs a CRUD page.

The harder it is for manual QA to trigger and verify, the more automated testing is needed.
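
A rough sketch of why: a job like this is awkward to trigger and verify by hand, but trivial to assert in a test (the cleanup rule and function are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical weekly cleanup: drop sessions not seen for 30 days.
def purge_stale_sessions(sessions: list[dict], now: datetime) -> list[dict]:
    cutoff = now - timedelta(days=30)
    return [s for s in sessions if s["last_seen"] >= cutoff]

def test_purges_only_stale_sessions():
    now = datetime(2024, 6, 1)
    fresh = {"id": 1, "last_seen": now - timedelta(days=5)}
    stale = {"id": 2, "last_seen": now - timedelta(days=45)}
    # No waiting for the real cron schedule, no poking at production data.
    assert purge_stale_sessions([fresh, stale], now) == [fresh]
```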

📈 Clean vs 📉 entangled code

Globals, mutations, and external dependencies can make testing tricky. When inputs and outputs are clear, tests are easy. The easier something is, the more it will be done.

TDD and functional programming are useful techniques for maximizing these aspects of a codebase (at the cost of others).
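
A tiny, made-up contrast: the entangled version needs module state patched before you can test it, while the clean one is a one-line assertion.

```python
# Entangled: depends on module-level state, so tests need setup or patching.
DISCOUNT_RATE = 0.25

def discounted_price_global(amount):
    return amount * (1 - DISCOUNT_RATE)

# Clean: everything it needs comes in as arguments, everything it does comes out.
def discounted_price(amount: float, discount_rate: float) -> float:
    return amount * (1 - discount_rate)

def test_discounted_price():
    assert discounted_price(100.0, 0.25) == 75.0  # no globals to patch, no mocks
```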

📈 Branching vs 📉 linear code

The more ifs and loops there are, the more there is to test.

Abstractions like OOP polymorphism and AOP can help "straighten" the code, which reduces the need for testing.
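
A rough sketch of why branches multiply test cases (the shipping rules are invented):

```python
# Hypothetical shipping fee with three branches -> at least three cases to cover.
def shipping_fee(subtotal: float, is_member: bool) -> float:
    if is_member:
        return 0.0
    if subtotal >= 50:
        return 0.0
    return 4.99

def test_shipping_fee_covers_each_branch():
    assert shipping_fee(10.0, is_member=True) == 0.0   # member branch
    assert shipping_fee(60.0, is_member=False) == 0.0  # free-shipping threshold
    assert shipping_fee(10.0, is_member=False) == 4.99  # default branch
```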

📈 Alive vs 📉 one-off or static code

Code that is either very short-lived (eg. a migration) or long-lived (eg. a framework) tends to be written once, manually tested, shipped, and then forgotten. It doesn't benefit from automated testing as much as "alive" code that gets changed or swapped in and out regularly (eg. business logic).

📈 Bug fix vs 📉 feature work

A bug is an indicator of "bug-prone" code, which should be hardened with tests. Also, tests are a nice way to "prove" the bug has been fixed.

For fresh feature code, it is often unclear how long it will live and how well it will "behave". So tests can be postponed until later.
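
For the bug-fix case, a regression test pinned to the original report both proves the fix and keeps the bug from coming back. A made-up sketch:

```python
# Hypothetical fix for a reported off-by-one: the last day of a range was dropped.
def days_inclusive(start_day: int, end_day: int) -> int:
    return end_day - start_day + 1  # the buggy version was missing the "+ 1"

def test_regression_last_day_is_counted():
    # Reproduces the original report: Monday through Friday must count 5 days, not 4.
    assert days_inclusive(1, 5) == 5
```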

📈 Dynamically vs 📉 statically typed language

Ruby, Python, PHP, and JavaScript programmers often need to write tests that type checkers in other languages provide for "free" (with "free" being intentionally quoted).
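
As a rough illustration (the function is made up), this kind of test mostly re-states what a type checker would verify automatically:

```python
def parse_age(raw):
    # Nothing in a dynamic language stops callers from passing in,
    # or getting back, the wrong type.
    return int(raw)

def test_parse_age_returns_an_int():
    # In a statically typed language, a signature like
    # `parse_age(raw: str) -> int` would make this assertion redundant.
    assert isinstance(parse_age("42"), int)
```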

📈 Unchecked vs 📉 statically or runtime checked code

Linters and assertions that get triggered during startup can be cheaper alternatives to explicit tests (with their own costs on different axes).
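
A minimal sketch of the startup-assertion flavor (the config keys are invented): it fails fast on boot instead of needing a separate test run.

```python
import os

REQUIRED_ENV = ("DATABASE_URL", "SECRET_KEY")  # hypothetical required settings

def check_config() -> None:
    # Called once at application startup: the process crashes on boot if the
    # deployment is misconfigured, instead of failing later in an obscure path.
    missing = [name for name in REQUIRED_ENV if not os.environ.get(name)]
    assert not missing, f"Missing required environment variables: {missing}"
```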


My main point here is:

Tests are neither panacea nor useless.
They are just tools that grant certain benefits, at certain costs.
Circumstances, not ideology, should decide what to test and in what way.

TLDR: It depends.