How to Vibe Code Without Creating Tech Debt

Community Blogs
6 min read

Simple principles that make AI-assisted coding reliable in real codebases.

 

Vibe coding is now part of daily engineering work: describe intent, let an AI assistant generate code, iterate fast, ship faster.

We did that too. And it worked—at first.

Then we started noticing patterns repeating across changes. Each one looked small. Together, they quietly added tech debt at the speed of generation.

This post is a practical playbook for getting better results from vibe coding—without losing speed.

Starter kit (template + examples): https://github.com/saneroen/vibe-coding-starter-kit.git

 

What we noticed when we vibe coded without structure

 

1) “It compiled” became the definition of “done”

 

The assistant would place code wherever it fit. The change worked, but the repo became harder to understand. Over time, the codebase lost its shape.

 

2) Business logic leaked into infrastructure—and stayed there

 

Handlers started doing validation, persistence, retries, error mapping, and domain decisions in one place. Changing a business rule meant editing route code, DB code, and vendor logic together.

 

3) Every feature introduced a new pattern

 

One module used env vars directly. Another used a config dict. Another introduced a wrapper. Same story for database calls and error handling. Soon, refactoring felt risky because there was no “one way” to do things.

 

4) Provider choices turned into “string soup”

 

We saw "postgres", "pg", "postgresql" appear in the same codebase over time. It wasn’t malice—it was what happens when speed wins and contracts don’t exist.

 

5) Testing became expensive

 

When business logic was tangled with infrastructure, tests required real databases, real network calls, and brittle setup. The result: fewer tests and slower releases.

At that point, vibe coding wasn’t making us faster anymore.

 

What changed: we followed a few simple principles

 

We didn’t over-engineer. We added guardrails that made the repo predictable—for humans and for AI.

The results were immediate:

 

  • the assistant stopped guessing where things belong,
  • changes became smaller and more targeted,
  • tests became cheaper to generate,
  • swapping technology stopped threatening business logic.

 

Here are the principles.

 

Principle 1: Keep business logic independent of technology choices

 

Tech changes constantly: databases, SDKs, observability stacks, queues, identity providers, model providers.

If business logic depends directly on those choices, every change becomes a rewrite.

 

So we enforced one rule:

 

Business logic must not depend on infrastructure details.

 

Treat infrastructure as replaceable.

This single rule is the foundation for portability and long-term speed.
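As a minimal sketch of the rule (names here are illustrative, not from the starter kit), compare a use case that imports a vendor SDK directly with one that only sees an abstraction it owns:

```python
from dataclasses import dataclass

@dataclass
class Note:
    id: str
    archived: bool = False

# Coupled (what we stopped doing): the use case imports a database driver
# and opens connections itself, so swapping databases means rewriting the rule.
#
#   import psycopg2
#   def archive_note(note_id):
#       conn = psycopg2.connect(...)  # infrastructure detail inside business logic
#       ...

# Decoupled: the business rule depends only on a repository contract.
class ArchiveNote:
    def __init__(self, repo):
        self.repo = repo  # any object with get/save; wired in at the edge

    def execute(self, note_id: str) -> Note:
        note = self.repo.get(note_id)
        note.archived = True  # the actual business decision lives here
        self.repo.save(note)
        return note
```

The use case never learns which database is behind `repo`, so replacing the infrastructure never touches the rule.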

 

Principle 2: Make “where code goes” obvious (so AI doesn’t guess)

 

Most vibe-coding mistakes happen because the assistant doesn’t know where a change belongs.

 

Use a predictable layout with clear boundaries:

 

  • Domain: business concepts and rules
  • Application: use cases + contracts
  • Infrastructure: DBs, vendors, SDKs, integrations
  • Presentation: API/CLI/UI entrypoints and adapters

 

And enforce one rule:

 

Dependencies point inward.

 

Infrastructure depends on application/domain, not the other way around.

Once this shape exists, AI stops spraying logic everywhere and starts following the structure.
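One possible layout for those boundaries (directory names are illustrative; your repo or the starter kit may differ):

```
src/
  domain/          # business concepts and rules; no external imports
  application/     # use cases + interfaces (contracts)
  infrastructure/  # DB adapters, vendor SDK wrappers, integrations
  presentation/    # API routes, CLI commands, UI adapters
tests/
```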

 

Principle 3: Write interfaces first (contracts beat guesses)

 

When you ask an assistant to “add persistence” or “integrate service X,” it often takes the shortest path:

 

  • direct SDK calls inside handlers or use cases,
  • queries sprinkled everywhere,
  • inconsistent retries and error handling.

 

Instead:

 

All external interactions should go through interfaces defined in the application layer.

 

Examples:

 

  • UserRepository
  • NoteRepository
  • ObjectStorage
  • NotificationClient
  • LLMProvider

 

Now the assistant has a contract to implement. Your use cases remain clean. Your tests become simple.

 

Principle 4: Use enums for explicit provider selection

 

Without enums, provider selection becomes string soup:

 

  • "postgres", "pg", "postgresql"
  • "openai", "OpenAI", "oai"

 

Enums force a finite, explicit set of choices:

 

  • DatabaseProvider.POSTGRES
  • DatabaseProvider.SQLITE
  • StorageProvider.S3
  • StorageProvider.LOCAL

 

This reduces config drift, improves code reviews, and makes AI output more reliable because the assistant can see valid options.

 

A practical pattern:

 

  1. Enum defines valid providers
  2. Factory maps enum → implementation
  3. Use cases depend only on interfaces
  4. Infrastructure implements interfaces per provider
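The four steps above can be sketched as follows (the adapter classes are hypothetical placeholders):

```python
from enum import Enum

# 1. Enum defines the finite set of valid providers.
class DatabaseProvider(Enum):
    POSTGRES = "postgres"
    SQLITE = "sqlite"

# 4. Infrastructure implements the repository interface per provider
#    (bodies omitted; these stand in for real adapters).
class PostgresNoteRepository: ...
class SqliteNoteRepository: ...

# 2. Factory maps enum -> implementation; the only place that knows both.
_FACTORY = {
    DatabaseProvider.POSTGRES: PostgresNoteRepository,
    DatabaseProvider.SQLITE: SqliteNoteRepository,
}

def make_note_repository(provider: DatabaseProvider):
    try:
        return _FACTORY[provider]()
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}")

# Config strings are validated once, at the edge; "pg" and "postgresql"
# fail loudly instead of leaking into the codebase.
def provider_from_config(value: str) -> DatabaseProvider:
    return DatabaseProvider(value)  # raises ValueError for unknown strings
```

Step 3 (use cases depend only on interfaces) is covered by the previous principle: `make_note_repository` is called at the composition root, never inside a use case.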

 

Principle 5: Add agent.md so the repo is AI-operable

 

Clean structure helps humans.

 

agent.md helps AI assistants behave consistently inside your repo.

 

Without it, every assistant session becomes a fresh interpretation of your codebase. With it, you get predictable output:

 

  • where code should go,
  • what rules must be followed,
  • what patterns to copy,
  • what to avoid.

 

If you adopt only one habit from this post, make it this:

 

Every repo should include agent.md.

 

What to include in agent.md

 

Keep it short and opinionated:

 

  • repo goal (what it is / isn’t)
  • boundaries + dependency rule
  • “where code goes” mapping
  • interface rules (contracts in application layer)
  • provider selection rules (enums + factory)
  • observability rules (don’t couple use cases to a vendor logger/tracer)
  • testing rules (mock interfaces in unit tests)
  • naming conventions + examples to follow
  • guardrails (don’t refactor unrelated files unless asked)

 

What life looked like after these principles

 

Changes became smaller

 

A feature mostly touched:

 

  • a use case,
  • an interface (if needed),
  • an infrastructure adapter (if needed),
  • a thin presentation entry.

 

The assistant got more accurate

 

Instead of inventing new patterns, it followed the repo shape:

 

  • domain stayed pure,
  • use cases orchestrated via interfaces,
  • infrastructure contained SDK details.

 

Provider swaps stopped being scary

 

Instead of rewriting business logic, we implemented a new adapter and updated config.

 

Testing became cheap

 

Unit tests mocked interfaces. Integration tests focused on adapters only.
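With contracts in place, a unit test needs no database or network: a `unittest.mock.Mock` (or a hand-rolled fake) stands in for each adapter. The use case below is a hypothetical example, not code from the starter kit:

```python
from unittest.mock import Mock

# Hypothetical use case: send a welcome message via injected contracts.
class SendWelcome:
    def __init__(self, users, notifier):
        self.users = users        # satisfies a UserRepository contract
        self.notifier = notifier  # satisfies a NotificationClient contract

    def execute(self, user_id: str) -> bool:
        user = self.users.get_by_id(user_id)
        if user is None:
            return False
        self.notifier.send(user["email"], "Welcome!")
        return True

def test_sends_welcome_to_known_user():
    users = Mock()
    users.get_by_id.return_value = {"email": "a@b.c"}
    notifier = Mock()

    assert SendWelcome(users, notifier).execute("u1") is True
    notifier.send.assert_called_once_with("a@b.c", "Welcome!")

def test_skips_unknown_user():
    users = Mock()
    users.get_by_id.return_value = None
    notifier = Mock()

    assert SendWelcome(users, notifier).execute("u1") is False
    notifier.send.assert_not_called()
```

No setup scripts, no containers: the tests run in milliseconds, so the assistant can generate and verify them cheaply.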

That’s when vibe coding started making us faster again—because we weren’t creating tech debt with every iteration.

 

The “Vibe Coding Rules” checklist (copy/paste)

 

  • Don’t import vendor SDKs in domain/application
  • Add interfaces before implementations
  • Use enums for provider selection (no magic strings)
  • Keep use cases pure: orchestration only
  • Infrastructure implements interfaces, nothing else
  • Presentation adapts inputs/outputs, doesn’t hold business logic
  • Unit tests mock interfaces
  • Integration tests validate adapters
  • Update agent.md when patterns evolve
  • Don’t introduce new patterns without aligning repo rules

 

Starter kit

 

If you want a working baseline you can clone and use immediately:

 

https://github.com/saneroen/vibe-coding-starter-kit.git

 

It includes:

 

  • clean architecture layout,
  • interfaces + enums + factory wiring,
  • a sample use case,
  • a CLI entrypoint,
  • unit tests,
  • and a reusable agent.md.

 

TL;DR

 

Vibe coding is not the problem. Unstructured vibe coding is.
A little structure goes a long way: clear boundaries, explicit contracts, safe configuration, and repo-local instructions (agent.md).
