This post is a response to Ovid's series about agility without testing.
I started to respond to the last post in the series and realized that my comment was long enough to be a blog post of its own.
First, let me say that I'm enjoying this series. Ovid and Abigail are both challenging conventional wisdom around technical debt and I think that's really healthy.
However, I note that Ovid's evidence in favor of emergent behavior is anecdotal, which is probably inevitable for this sort of thing, but dangerous. "It worked the handful of times I remember trying it" is subject to confirmation bias and carries no statistical significance.
We can't run a real experiment, but we can run a thought experiment: 100 teams of strict TDD vs 100 teams of the Ovid approach [which he really ought to brand somehow] from the same starting point (perhaps in parallel universes) for a few months of development.
What could we expect? Certainly, the TDD teams will spend more of their time on testing than the Ovid teams. So the Ovid teams will deliver more features and fix more bugs in the same period of time.
If one believes even a little of the Lean Startup hype, the Ovid teams will have more opportunities to see customer reactions; they will have a shorter OODA (observe, orient, decide, act) loop.
On the flip side, the TDD teams have less technical debt and a lower risk profile. I disagree with the idea that technical debt is merely an option, with no obligation to repay. I believe it does have an ongoing cost: future development becomes less efficient and more time-consuming, to at least some degree.
I call this "servicing" technical debt, which is just like paying only the interest on your credit card. You might never pay down any of the technical debt, but as you accumulate more, you'll pay more to service it.
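The thought experiment and the interest metaphor can be sketched as a toy simulation. Everything here is invented for illustration — the function, the parameter values, and the assumption that debt accrues in proportion to lightly-tested work — it only makes the "servicing" idea concrete, not any claim about real teams.

```python
import random

def simulate(months, test_effort, debt_per_feature, service_rate, seed=0):
    """Toy model (all numbers invented): each month a team turns effort
    into features; skipping tests is faster now but accrues debt, and
    servicing that debt taxes every later month like credit-card interest."""
    rng = random.Random(seed)
    features = debt = 0.0
    for _ in range(months):
        capacity = rng.uniform(0.8, 1.2)     # monthly effort, a bit noisy
        capacity -= service_rate * debt      # pay only the "interest"
        built = max(capacity, 0.0) * (1 - test_effort)
        features += built
        debt += built * debt_per_feature     # lightly-tested work accrues debt
    return features, debt

# Strict TDD: heavy test effort, very little new debt.
tdd_features, tdd_debt = simulate(
    6, test_effort=0.35, debt_per_feature=0.01, service_rate=0.05)
# "Ovid" style: light test effort, more debt per feature shipped.
ovid_features, ovid_debt = simulate(
    6, test_effort=0.05, debt_per_feature=0.10, service_rate=0.05)
```

With these made-up parameters the lighter-testing teams ship more over a few months while their servicing cost quietly grows; turn the drag parameters up or stretch the horizon out and the ordering can flip, which is exactly the maturity question that follows.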
It seems clear to me that which result you prefer depends quite a lot on the maturity of the product (possibly expressed in terms of expected growth rate) and the overall risk level.
For a brand-new startup, the risk of failure is already pretty high regardless of coding style. A faster OODA loop probably reduces risk more than improved tests do, because the bigger risk is building something customers don't want. And with such a high risk of failure, there's a chance that you'll simply be able to default on technical debt.
If I can riff on the financial crisis, a startup has subprime technical debt. Either it succeeds, in which case there will be growth sufficient to pay off the technical debt (if the risk/reward tradeoff justifies it), or it fails, in which case the debt is irrelevant. Rapid growth deflates technical debt.
For a mature business, however, it might well go the other way. Risk to an existing profit stream matters more, and technical debt has to be paid off or serviced (rather than defaulted on), which reduces future profitability in a way that growth might not sufficiently offset.
The quandary will be for businesses, or for products within an established business, that sit somewhere between infancy and maturity. There the "right" approach will depend heavily on risk tolerance and growth prospects.
Regardless, I tend to strongly favor TDD at the unit-test level, where I find it helps me better define what I want out of a particular piece of code and then be sure that subsequent changes don't break it. At the unit-test level, the effort put into testing can be pretty low and the benefit to my future coding productivity fairly high.
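To make the unit-level case concrete, here's a minimal TDD-style sketch in Python. The `slugify` helper and its spec are hypothetical (not from Ovid's series); the point is the order of operations: the test is written first to pin down exactly what I want, then just enough code to make it pass.

```python
import unittest

# Written first: the test defines the behavior I want from code
# that doesn't exist yet, and guards it against later changes.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("Don't Panic!"), "dont-panic")

# Written second: just enough implementation to make the tests pass.
def slugify(title):
    cleaned = "".join(c for c in title if c.isalnum() or c.isspace())
    return "-".join(cleaned.lower().split())

if __name__ == "__main__":
    unittest.main()
```

The effort here is minutes, and every future refactoring of `slugify` gets checked for free — which is the low-cost, high-payoff end of the testing spectrum.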
But as the effort of testing some piece of behavior increases, whether from external dependencies or other interaction effects, I'm more tempted to let go of TDD and rely on human observation of the emergent behavior, because I'd rather spend my time coding features than tests.
I think that puts me a little closer to the Ovid camp than the strict TDD camp, but not all the way.