Test-Driven Development

Test-Driven Development (TDD) is a development practice where you write a failing test before writing the production code that satisfies it. The cycle is Red → Green → Refactor: write a test that fails (Red), write the minimum code to make it pass (Green), then clean up the design without breaking the test (Refactor). TDD is not primarily about test coverage — it is a design technique. Writing the test first forces you to define the public interface and expected behavior before implementation, which consistently produces smaller, more focused units with clearer contracts.

The Red-Green-Refactor Loop

Red   → Write a test for the next small behavior. It must fail (proves the test is real).
Green → Write the simplest code that makes the test pass. No gold-plating.
Refactor → Improve the code (extract methods, rename, remove duplication) while keeping tests green.

Each iteration should take 2–10 minutes. If a cycle takes longer, the step is too large — split it.

Concrete Example

Implementing a PriceCalculator that applies a discount for orders over $100:

// Step 1 — RED: write the test first
public class PriceCalculatorTests
{
    [Fact]
    public void AppliesDiscountWhenOrderExceedsThreshold()
    {
        var calc = new PriceCalculator(discountRate: 0.10m, threshold: 100m);
        decimal result = calc.Calculate(120m);
        Assert.Equal(108m, result); // 120 * 0.90
    }

    [Fact]
    public void NoDiscountBelowThreshold()
    {
        var calc = new PriceCalculator(discountRate: 0.10m, threshold: 100m);
        decimal result = calc.Calculate(80m);
        Assert.Equal(80m, result);
    }
}

The tests don't compile yet — PriceCalculator doesn't exist. That's the Red state.

// Step 2 — GREEN: minimum code to pass
public sealed class PriceCalculator(decimal discountRate, decimal threshold)
{
    public decimal Calculate(decimal amount) =>
        amount > threshold ? amount * (1 - discountRate) : amount;
}

Both tests pass. Now Refactor: the logic is already clean, so nothing to change. Move to the next behavior.
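To make "move to the next behavior" concrete, here is one plausible next Red step. The requirement (rejecting negative amounts) is hypothetical, invented for illustration; the PriceCalculator is repeated from the Green step above so the sketch stands alone.

```csharp
using System;
using Xunit;

// PriceCalculator from the Green step above, repeated for context.
public sealed class PriceCalculator(decimal discountRate, decimal threshold)
{
    public decimal Calculate(decimal amount) =>
        amount > threshold ? amount * (1 - discountRate) : amount;
}

public class PriceCalculatorValidationTests
{
    // Hypothetical next requirement: negative amounts are invalid.
    // This test FAILS against the implementation above (it returns -10m),
    // which is exactly the Red state the next cycle starts from.
    [Fact]
    public void RejectsNegativeAmounts()
    {
        var calc = new PriceCalculator(discountRate: 0.10m, threshold: 100m);
        Assert.Throws<ArgumentOutOfRangeException>(() => calc.Calculate(-10m));
    }
}
```

The Green step for this cycle would be a one-line guard clause in Calculate; the two existing tests stay green throughout.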

What TDD Improves (and What It Doesn't)

Improves:

Interface design — the test is the first consumer of the API, so awkward signatures and unclear contracts surface immediately.
Refactoring safety — a green suite makes restructuring low-risk.
Scope discipline — "the simplest code that passes" discourages speculative features.

Does not improve:

Requirement correctness — a test encodes your understanding of the requirement, right or wrong.
Integration confidence — unit tests say nothing about wiring, configuration, or real infrastructure.
Coverage as a quality signal — coverage numbers alone say nothing about design (see Pitfalls below).

Pitfalls

Testing Implementation Instead of Behavior

What goes wrong: tests assert on internal state (private fields, method call counts) rather than observable outcomes. When you refactor the implementation, tests break even though behavior is unchanged.

Why it happens: writing tests after the fact often leads to "white-box" tests that mirror the implementation structure.

Mitigation: test through the public interface only. Assert on return values and observable side effects (e.g., what was written to the DB), not on how the code achieves them. If you need to verify a dependency was called, use a mock sparingly and only for the interaction that matters.
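A sketch of what "assert on observable outcomes" looks like in practice. OrderService, IOrderRepository, and the fake are hypothetical names invented for this example:

```csharp
using System.Collections.Generic;
using Xunit;

public record Order(decimal Total);
public interface IOrderRepository { void Save(Order order); }

// Hypothetical service used only for illustration.
public sealed class OrderService(IOrderRepository repo)
{
    public Order PlaceOrder(decimal total)
    {
        var order = new Order(total);
        repo.Save(order);
        return order;
    }
}

// Hand-rolled fake: records what was saved so the test can assert on
// the observable outcome instead of on call counts or call order.
public sealed class FakeOrderRepository : IOrderRepository
{
    public readonly List<Order> Saved = new();
    public void Save(Order order) => Saved.Add(order);
}

public class OrderServiceTests
{
    [Fact]
    public void SavesOrderWithGivenTotal()
    {
        var repo = new FakeOrderRepository();
        new OrderService(repo).PlaceOrder(42m);

        // Survives any refactor that preserves "an order with this
        // total ends up in the repository".
        Assert.Equal(42m, Assert.Single(repo.Saved).Total);
    }
}
```

Contrast this with verifying that Save was called exactly once with a strict mock: that version breaks the moment the implementation batches, retries, or reorders its internal calls, even though the behavior is identical.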

Over-Mocking

What goes wrong: every dependency is mocked, so tests pass but the real wiring is never exercised. A bug in the DI configuration or a wrong interface implementation goes undetected.

Why it happens: mocking is easy and makes tests fast. It becomes the default even when a real implementation (in-memory DB, fake clock) would be more reliable.

Mitigation: mock at the boundary of your system (HTTP clients, external queues, email senders). Use real or in-memory implementations for internal dependencies (repositories, domain services). Complement unit tests with integration tests that use real infrastructure.
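The fake clock mentioned above is the classic example of a real implementation beating a mock: it is a tiny real object, so there is nothing to set up or verify. IClock, FixedClock, and HappyHourPricing are hypothetical names for this sketch:

```csharp
using System;
using Xunit;

// Minimal clock abstraction; production code would register a
// system-clock implementation in DI.
public interface IClock { DateTime UtcNow { get; } }

public sealed class FixedClock(DateTime now) : IClock
{
    public DateTime UtcNow => now;
}

// Hypothetical domain service: 10% off between 17:00 and 19:00 UTC.
public sealed class HappyHourPricing(IClock clock)
{
    public decimal Apply(decimal amount) =>
        clock.UtcNow.Hour is >= 17 and < 19 ? amount * 0.90m : amount;
}

public class HappyHourPricingTests
{
    [Fact]
    public void DiscountsDuringHappyHour()
    {
        var pricing = new HappyHourPricing(
            new FixedClock(new DateTime(2024, 1, 1, 18, 0, 0, DateTimeKind.Utc)));
        Assert.Equal(90m, pricing.Apply(100m));
    }
}
```

The same test written against a mocked clock would assert the same thing with more ceremony; the fake keeps the test focused on the pricing rule.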

Writing Tests After the Fact and Calling It TDD

What goes wrong: tests are written after implementation to hit a coverage target. The design benefits of TDD are lost — the code was already shaped by implementation thinking, not by the test-first interface design.

Why it happens: deadlines, habit, or misunderstanding TDD as "having tests" rather than "test-first design."

Mitigation: TDD is a discipline, not a metric. Coverage numbers don't distinguish test-first from test-after. The signal is whether the test forced a design decision.

Tradeoffs

Strict TDD (test-first every line)
    Strengths:   Best design feedback, high confidence
    Weaknesses:  Slow for exploratory/spike work, overhead for trivial code
    When to use: Core domain logic, complex business rules, public library APIs

Test-after (write code, then tests)
    Strengths:   Faster initial development
    Weaknesses:  Weaker design signal, tests often mirror implementation
    When to use: Prototypes, scripts, UI glue code

No tests
    Strengths:   Fastest short-term
    Weaknesses:  Regressions accumulate, refactoring becomes risky
    When to use: Never in production code

Decision rule: apply strict TDD to domain logic and anything with non-trivial branching. For infrastructure glue (DI wiring, config parsing), write integration tests after. For throwaway spikes, skip tests entirely — but delete the spike before it becomes production code.
