It’s no secret that I’m a firm believer in the effectiveness of test-driven development. But I don’t practice TDD 100% of the time. One of those times is when I’m just getting started with a new technology or library.

For me, one of the hardest aspects of TDD is writing a test when I have no idea what code I want to test. This was a major hurdle for me as I began working with JavaScript, React and the rest of that stack. I simply had no idea how to even write a test.

Spikes are great for this, of course. Go off for a few hours, play around, do research, and come back with a better sense of what you’re dealing with. But still, writing that first test can be paralyzing.

What I realized during my initial few weeks of working with a new tech stack is that I did, in fact, know what test I wanted to write. I just didn’t know how to set the test up, execute it, or validate the results – pretty much every aspect of TDD 🙂

My “tests”

Let’s take a ubiquitous example to illustrate what I ended up doing – product search.

Syntax and usage of the actual test infrastructure are irrelevant for now – an implementation detail that I could deal with later as I learned more.

So I opened up a text editor and created a file called “product-search-tests.txt”.

And I added the first test name. This part was easy, and pretty much framework-agnostic anyway. “When search runs, and matching products exist for the given search term, return the matching products”.

Then the preconditions that I needed. “Product named keyboard is in stock. Product named keypad is out of stock. Product named fuzzy bunny is in stock.”

Of course, then the execution. Again, this was simple. “Run product search with search term key”.

Then the post-conditions. “Search results contain keyboard and do not contain keypad or fuzzy bunny”.
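Put together, the whole file looked something like this:

```text
When search runs, and matching products exist for the given search term,
return the matching products.

Preconditions:
  Product named keyboard is in stock.
  Product named keypad is out of stock.
  Product named fuzzy bunny is in stock.

Execution:
  Run product search with search term key.

Post-conditions:
  Search results contain keyboard.
  Search results do not contain keypad or fuzzy bunny.
```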

That was it. Text in Notepad++. No code. No frameworks. Heck, no IDE.

Unsticking myself

Doing this made me realize I didn’t need to care about how a React app was structured. I didn’t get caught up in the details of how to set up an in-memory product catalog (do I use Redux, Context, something else?) or how to query the results (do I need a button click event handler, what about an API?).

I was able to focus completely on the behavior.

That was enough for me to really get going. I ended up writing a bunch of “tests” for a lot of different scenarios. Ignoring the technology removed all of those constraints and kept my brain from trying to solve problems that may not even exist.

Making the tests real

Of course, a text document isn’t a test suite. It’s pretty much just a list of requirements. But it gave me a blueprint for the real test implementation, and I was able to gradually translate each test into code. Naturally, some tests changed. Some dropped off completely. I added others. As the code materialized, I split some tests out into more focused areas.
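To give a sense of what that translation looked like, here’s the first text “test” expressed as a Jest test. The searchProducts function and the shape of the catalog are illustrative assumptions on my part, not the real code – but notice how the sections of the text file map almost one-to-one onto the arrange/act/assert structure:

```js
// A sketch of the first plain-text test translated into Jest.
// searchProducts and the catalog shape are hypothetical stand-ins
// for whatever the real implementation ends up being.
const { searchProducts } = require('./product-search');

describe('product search', () => {
  test('returns matching products when matches exist for the search term', () => {
    // Preconditions: the in-memory catalog from the text file.
    const catalog = [
      { name: 'keyboard', inStock: true },
      { name: 'keypad', inStock: false },
      { name: 'fuzzy bunny', inStock: true },
    ];

    // Execution: run product search with search term "key".
    const results = searchProducts(catalog, 'key');

    // Post-conditions: keyboard matches the term and is in stock;
    // keypad matches but is out of stock; fuzzy bunny doesn't match.
    const names = results.map((product) => product.name);
    expect(names).toContain('keyboard');
    expect(names).not.toContain('keypad');
    expect(names).not.toContain('fuzzy bunny');
  });
});
```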

The appeal of test-after development

Over the years, I’ve recommended to some of the people I’ve mentored that they write their first tests with a pencil and a notepad. Developers typically want to explore the code and figure out the details of the test before they get started. That way they “know” what to test, so it’s “easier”.

I think this is the appeal of the more common test-after approach to unit testing. With it, you can figure everything out, run your manual tests to make sure it all works, then go back and write unit tests that end up proving that the code does what the developer made it do.

I think it’s been shown more than enough times that this approach is great for hitting code coverage metrics, but terrible for actually growing a code base organically through small steps. Much has been written about the fragility of test-after unit tests, so I won’t rehash it here.

Is this really TDD?

This is absolutely not red-green-refactor, “by the book” TDD. But I don’t really care, because I think it captures the intent of TDD: I think about the next chunk of behavior needed, write a test to validate a small increment of progress, and move on to the next increment. Of course, this approach defers the “validation” step until a better understanding of the problem emerges, but I’m OK with that.