Or at least start testing the right things.
Why I don't really like test-driven development
- Because the minute you add a label to an approach, within a week it becomes a "process", within a month someone will organise a conference and within six months it's just more dogma and doctrine. But that aside...
- There's a chain. At one end are the people somewhat pompously referred to as "the business". At the other end an assortment of developers and designers patronisingly referred to as "geeks" and "creatives". The people at "the business" end want to solve a problem; the people at the building-stuff end generally help to solve problems. The more links in the chain, the more noise gets introduced until you end up with requirements and "user stories" as Chinese whispers. Professionalising a class of people into business analysts and product managers doesn't stop Chinese whispers being Chinese whispers.
Usually, the business outcomes are too coarse-grained to be used to directly write software (where do you start coding when the outcome is "save 5% of my operating costs"?) so we need to define requirements at some intermediate level in order to get work done.
The point being that by the time any of this stuff hits the designer / developer it's usually passed through the hands of several intermediaries and been reduced to some requirements / user stories. But requirements don't matter. They're just an abstraction to make it easier to start writing code. What matters are the "business" objectives. Or, without wanting to sound too New Labour, the "outcomes".
The usual pattern is to explain the what to the developer / designer and leave the how to them. Which might be fine. But explaining the why is probably more important. Who knows, they might even have an opinion on the what. Stranger things have happened.
Anyway, the more you separate developers and designers from the "why" the more we head back to the bad old days of waterfall, with the people doing the work sat at the end of the process being drip-fed user stories and expected to lay golden feature eggs.
- Requirements are fine as a starting point for code, and using those requirements to generate tests for that code makes sense, but you're only testing the code against the requirements. You're not testing the service / product / let's-just-call-it-a-website against business objectives and outcomes.
Businesses have all kinds of ways of measuring performance. That's what the final slide of the boss people's presentation on "KPIs" is all about. And anything that can be measured can be tested. The main problem is they usually get measured six months after the fact.
The objective might be to get more registered users; the requirement might be a simplified registration process and / or the ability to authenticate with third-party accounts. The objective might be fewer abandoned shopping carts; the requirement simplified checkout and / or one-click purchase. You can measure any of these objectives / outcomes so you can test them. But software tests only test software against requirements and...
- ...code does not live in isolation. Until real code meets real data and real content and real copywriting and real design and real users with real needs (and probably a real marketing campaign) you can't measure the changes you make against real objectives.
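The "anything that can be measured can be tested" point can be sketched in code. A minimal, entirely hypothetical example: if each session in your analytics data records whether a cart was started and whether checkout completed, the abandoned-cart rate is just a function of that data, and it can be asserted against like any other value. The field names and numbers here are illustrative; a real version would query real usage data.

```python
# Sketch: measuring a business outcome (abandoned-cart rate) the same
# way a unit test measures code. All names and data are hypothetical.

def abandoned_cart_rate(sessions):
    """Fraction of sessions that started a cart but never checked out."""
    carts = [s for s in sessions if s["cart_started"]]
    if not carts:
        return 0.0
    abandoned = [s for s in carts if not s["checked_out"]]
    return len(abandoned) / len(carts)

# Illustrative event data: three carts, one abandoned.
sessions = [
    {"cart_started": True,  "checked_out": True},
    {"cart_started": True,  "checked_out": False},
    {"cart_started": True,  "checked_out": True},
    {"cart_started": False, "checked_out": False},
]

rate = abandoned_cart_rate(sessions)
print(f"abandoned-cart rate: {rate:.0%}")
```

The point isn't the arithmetic; it's that the assertion is about an outcome, not about whether a checkout function returns the right object.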
- It's fine to have those screens in the development corner that show regression tests passing and failing with green and red lights. But it would be good to see other screens showing real registration rate data, real close account rate data, real buy / play / consume button data, real abandoned shopping cart data, real inbound traffic from search engines or social media or whatever data.
- If you're measuring the impact of your work against real usage you can make tiny, tiny changes very, very quickly; isolate those changes from other changes in the system and see how they work for real people. Test code against requirements by all means but don't assume your tests tell you anything meaningful.
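As a sketch of what "isolate those changes and see how they work for real people" might look like: show the change to one group and not another, then compare the conversion rates with a standard two-proportion z-test. The counts below are made up for illustration, and this uses only the Python standard library.

```python
# Sketch: did a tiny change (e.g. a simplified sign-up form) move a
# real metric? Compare control vs variant conversion counts with a
# two-proportion z-test. The numbers are hypothetical.
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

# Hypothetical experiment: control converts 500 of 10,000 visitors,
# the variant converts 550 of 10,000.
p_value = two_proportion_p_value(500, 10_000, 550, 10_000)
print(f"p = {p_value:.3f}")
```

None of this replaces the requirement-level tests; it sits alongside them and answers the question they can't: did the change do anything for real people?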