When “doing it right” starts slowing everything down

There was a time when Clean Architecture felt unquestionably right.

Clear layers. Clean boundaries. Business logic isolated from frameworks.
The code looked professional. Reviews were smoother. Tests were easier to reason about.

And for a while, it worked.

Until the system grew just enough to expose the cost of being clean everywhere.

Not a big failure.
Not a dramatic collapse.

Just a slow, uncomfortable drag.

After six to twelve months, we started noticing:

  • Small changes taking longer than they should
  • Debugging production issues requiring too much layer-hopping
  • Junior engineers understanding Django, but not understanding our system

Nothing was obviously broken.
But progress was quietly slowing down.

At that point, the question stopped being
“Are we applying Clean Architecture correctly?”

And became something harder:

Which parts of Clean Architecture are actually helping us —
and which parts are just making us feel safer?


The context: Django, FastAPI, and a system that has lived

This wasn’t theoretical.

We were running:

  • Django / Django REST Framework for core APIs
  • A few FastAPI services for async or internal workloads
  • PostgreSQL, Redis, background workers
  • A small backend team (3–6 engineers) with an on-call rotation
  • A codebase that had survived multiple refactors and feature cycles

We applied Clean Architecture seriously:

  • Domain, use cases, infrastructure, adapters
  • Repository interfaces for most models
  • Controllers calling use cases, never touching the ORM directly

On paper, it looked hard to argue with.
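
A rough sketch of what that layering looked like (all names here are illustrative, not our actual code):

```python
# domain/orders.py: plain business objects, no framework imports
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Order:
    id: int
    status: str


# The "port" the use case depends on; a Django-backed adapter implements it elsewhere
class OrderRepository(Protocol):
    def get(self, order_id: int) -> Optional[Order]: ...
    def save(self, order: Order) -> None: ...


# use_cases/cancel_order.py: application logic, unaware of HTTP, serializers, or the ORM
def cancel_order(order_id: int, repo: OrderRepository) -> Order:
    order = repo.get(order_id)
    if order is None:
        raise ValueError(f"Order {order_id} not found")
    order.status = "cancelled"
    repo.save(order)
    return order
```

The Django view (or FastAPI route) only wired a concrete repository into the use case and translated the result back into a response.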


What was worth keeping — because it paid for itself

We didn’t abandon Clean Architecture.
We stopped applying it mechanically.

Dependency direction is not negotiable

One principle consistently proved its value:

Business logic should not depend on frameworks.

We kept:

  • Domain code free of Django ORM imports
  • Use cases unaware of HTTP, serializers, or request objects

Not for purity’s sake.
But because this boundary:

  • Reduced long-term coupling
  • Made refactoring survivable
  • Prevented business logic from dissolving into plumbing

It protected the parts that would be expensive to fix later.
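
One lightweight way to keep that boundary honest is a guard test that fails the build when framework imports creep into domain code. A minimal sketch, assuming the domain package lives at app/domain (the path and the forbidden list are assumptions, not our real layout):

```python
# tests/test_architecture.py: fails if domain modules import a web framework or the ORM
import ast
import pathlib

import pytest

DOMAIN_DIR = pathlib.Path("app/domain")          # hypothetical location of domain code
FORBIDDEN = {"django", "rest_framework", "fastapi"}


@pytest.mark.parametrize("path", sorted(DOMAIN_DIR.rglob("*.py")), ids=str)
def test_domain_is_framework_free(path):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            modules = [node.module or ""]
        else:
            continue
        for module in modules:
            assert module.split(".")[0] not in FORBIDDEN, f"{path} imports {module}"
```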


Clear boundaries around “expensive” logic

Not all parts of the system carry the same weight.

Pricing rules, permissions, multi-step workflows —
these are expensive to change, and costly to get wrong.

Those stayed behind explicit boundaries.

Clean Architecture worked best when it helped us
invest complexity where it actually mattered.
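
For logic like that, the boundary mostly means keeping the rule explicit and trivially testable. A sketch of the shape this took, with invented pricing details:

```python
# domain/pricing.py: costly to get wrong, so it stays a pure function with no framework code
from dataclasses import dataclass
from decimal import Decimal


@dataclass(frozen=True)
class LineItem:
    unit_price: Decimal
    quantity: int


def order_total(items: list[LineItem], loyalty_discount: Decimal = Decimal("0")) -> Decimal:
    """Easy to unit-test exhaustively; impossible to break by editing a view or serializer."""
    if not Decimal("0") <= loyalty_discount <= Decimal("0.5"):
        raise ValueError("loyalty_discount must be between 0 and 0.5")
    subtotal = sum((item.unit_price * item.quantity for item in items), Decimal("0"))
    return (subtotal * (Decimal("1") - loyalty_discount)).quantize(Decimal("0.01"))
```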


What we stopped doing — because the cost wasn’t worth it

The problem wasn’t Clean Architecture itself.
The problem was applying it everywhere.

Repository interfaces for trivial CRUD

At one point, we had repository interfaces for almost every model.

In theory, they added abstraction.
In practice, they added work.

  • Adding a field meant touching multiple layers
  • Simple queries were wrapped in pass-through methods
  • Tests mocked repositories with no real behavior

Eventually, we had to say it out loud:

If your repository just forwards calls to the ORM,
the abstraction isn’t protecting you — it’s slowing you down.

For simple CRUD, we stopped pretending the ORM was the enemy.
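
The contrast looked roughly like this (Tag and myapp are made up; the "before" class is the kind of repository we deleted):

```python
from rest_framework import serializers, viewsets

from myapp.models import Tag  # hypothetical Django model


# Before: a repository that only forwarded calls to the ORM (abstraction without behavior)
class TagRepository:
    def get_by_id(self, tag_id):
        return Tag.objects.get(pk=tag_id)

    def list_for_user(self, user_id):
        return Tag.objects.filter(user_id=user_id)


# After: trivial CRUD goes straight through DRF and the queryset
class TagSerializer(serializers.ModelSerializer):
    class Meta:
        model = Tag
        fields = ["id", "name"]


class TagViewSet(viewsets.ModelViewSet):
    serializer_class = TagSerializer

    def get_queryset(self):
        return Tag.objects.filter(user_id=self.request.user.id)
```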


Overdoing DTO and mapping layers

Entity → DTO → schema → response.
Input → DTO → domain object.

The diagrams looked great.
Production debugging did not.

When things broke, the question stopped being:

“Where is the business rule wrong?”

And became:

“Which layer transformed this value incorrectly?”

The second-order effects were real:

  • Higher cognitive load
  • Slower debugging
  • More time navigating code than understanding it

We kept mapping only where it bought us real flexibility.
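
Concretely, "real flexibility" usually meant one explicit schema at a boundary that is an actual contract, such as a public API response, instead of a relay of internal DTOs. A sketch in FastAPI/Pydantic style, with invented field names:

```python
# One mapping, at the edge that is a real contract: the public API response.
from pydantic import BaseModel


class OrderResponse(BaseModel):
    id: int
    status: str
    total: str  # serialized explicitly, so internal types never leak into the contract

    @classmethod
    def from_domain(cls, order) -> "OrderResponse":
        return cls(id=order.id, status=order.status, total=f"{order.total_cents / 100:.2f}")


# Inside the service, domain objects travel as-is; no Entity -> DTO -> schema relay.
```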


Treating every module like core domain

This was the biggest mistake.

Admin features. Reporting endpoints. Internal tools.
We “cleaned” all of them.

The system didn’t need equal protection everywhere.

Once we asked:

“Which parts of this system will still matter in three years?”

The answer was obvious.


The turning point: changing the question

The shift happened when we stopped debating architecture
and started talking about cost of change.

We asked:

  • What actually breaks if we remove this abstraction?
  • How often does this part change?
  • Is this protecting a real risk — or an imagined one?

From there, Clean Architecture stopped being a default structure.
It became a selective tool.


The decision we landed on

We keep Clean Architecture where change is expensive —
and relax it where change is cheap.

In practice:

  • Core domain → strong boundaries
  • Peripheral features → simpler structure
  • CRUD-heavy areas → readability over purity
  • ORM → a tool, not a liability

This wasn’t a step backward.
It was a step toward mature engineering judgment.
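
In the codebase, that means the two styles now sit side by side, and which one a module gets depends on its cost of change rather than on a blanket rule. A sketch, reusing the hypothetical names from earlier:

```python
# Core domain: checkout and cancellation still go through explicit use cases and a repository port.
#   order = cancel_order(order_id, repo=DjangoOrderRepository())

# Peripheral feature: an internal report is just a thin Django view over the ORM.
from django.contrib.auth.models import User
from django.db.models import Count
from django.http import JsonResponse


def signups_by_month(request):
    rows = (
        User.objects
        .values("date_joined__year", "date_joined__month")
        .annotate(count=Count("id"))
        .order_by("date_joined__year", "date_joined__month")
    )
    return JsonResponse(list(rows), safe=False)
```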


The second-order effects we didn’t expect

After the change:

  • Features shipped faster
  • Production debugging became less draining
  • Junior engineers onboarded more quickly
  • Reviews focused on intent, not structure

Most importantly:

  • Clean Architecture stopped being something we defended
  • And became something we used deliberately

That shift alone was worth it.


Before adding another abstraction, ask yourself

  • Is this core business logic?
  • How often will this change?
  • What’s the real cost if this is wrong?
  • How painful will production debugging be?
  • Who maintains this a year from now?

If the answers don’t justify the abstraction, don’t add it.


Conclusion

Clean Architecture isn’t wrong.
Blind consistency is.

Architecture exists to manage cost over time,
not to signal correctness.

When you stop asking “Is this clean?”
and start asking “What does this protect us from?”
you stop following architecture —
and start practicing it.


Personal note

This isn’t an argument to abandon Clean Architecture.
It’s a reminder to own your architectural decisions,
instead of inheriting them.

If this post made you pause before adding another abstraction,
it has already done its job.
