
Vibe Check: Software Engineering with AI

By Eric Christianson

Software engineering is changing fast in the age of AI, and the way we design, build, and ship software is evolving with it. This multi-part series dives into practical techniques for using AI to boost productivity, keep code quality high, and help your teams move faster—without losing control of your architecture or standards.

Introduction

Teams are already using AI to build real features in real codebases—not just experiments or demos. The open question is how to make that work predictable and sustainable as systems grow more complex.

This post focuses on three things:

  • The vision: a development cycle where AI can carry a business requirement all the way to production-ready code.
  • The transition phase we’re in now: AI as a powerful assistant that still needs structure, context, and guardrails.
  • The practical foundation: treating AI as a junior developer, and capturing how we work together in a few small but important documents.

By the end, you should have a clear mental model for AI’s role on the team and a concrete starting point for WORKING-TOGETHER.md and CODING-STANDARDS.md in your own coding project.

The Vision: From Requirements to Production-Ready Code

The industry’s ambitious goal is clear: transform a business requirement into production-ready code through AI automation—a development cycle where AI carries work from idea to implementation with minimal human intervention.

But there’s a critical insight that unlocks this vision: the same principles that make human developers successful apply to AI systems.

Just as vague requirements lead to failed projects and overly verbose specifications cause confusion, AI coding assistants need well-crafted context to deliver reliable results. The breakthrough isn’t in the AI’s capabilities alone, but in learning how to communicate requirements with the right balance of clarity and detail.

When we provide AI with well-structured context—clear objectives, specific constraints, relevant examples, and architectural guidance—we dramatically reduce guesswork and hallucination. Instead of improvising, the AI follows a clear path from business need to deployable feature.

The future of development isn’t just about smarter AI; it’s about becoming better requirement architects who can design the right inputs for automated code generation.

AI-Assisted Software Engineering: Beyond Vibe Coding

Until that vision is realized, humans are still responsible for building and maintaining high-quality software assets. The question becomes: how do we work with today’s AI tools in a way that is reliable, repeatable, and worth the investment?

The difference between effective AI assistance and “vibe coding” lies in having a systematic approach rather than letting the AI rely on intuitive guessing.

When developers provide AI with:

  • Access to technical documentation and requirements.
  • Clear examples of existing code patterns.
  • Explicit architectural constraints and boundaries.

…the AI becomes a reliable implementer instead of a creative guesser.

This is where the mental model of AI as a junior developer is powerful. In this model:

  • The AI excels at execution: reading documentation, following established patterns, and implementing solutions precisely.
  • The human acts as senior architect, making design decisions, clarifying requirements, and reviewing outcomes.

Crucially, the AI should not:

  • Make architectural decisions on its own.
  • Fill in missing requirements with assumptions.
  • Redesign systems because it “seems cleaner” without explicit direction.

Thinking of the current phase as a transition period helps:

  • Acknowledge the gap between our long-term vision and today’s reality.
  • Carry solid software design principles directly into AI-assisted code creation.
  • Frame AI as an effective tool that, with proper guidance and review, can significantly improve feature velocity.

The rest of this post focuses on the practical pieces you can put in place now—project-level collaboration rules and coding standards—that make this AI-as-junior-developer model work in real codebases.

Markdown is the language AI speaks.

It’s easy for humans to read and easy for AI to parse. Think of your markdown files as the instruction manual that tells AI how to behave in your project.

Project-Level AI Collaboration Rules

Most teams have coding standards. Very few have explicit collaboration rules for how they’ll work with AI.

Without those rules written down, you get the worst of both worlds:

  • AI improvises when it should ask questions.
  • Engineers get surprised by changes they didn’t intend.
  • Everyone spends more time managing miscommunication than building features.

The fix is to treat these rules as a small, versioned artifact in your project—written in markdown—so both humans and AI can see and follow them. You might keep them in a dedicated file (such as WORKING-TOGETHER.md), or you might embed them into a tool-specific configuration file (like CLAUDE.md or a rules file). The filename is flexible; the content is what matters.

Four parts go a long way.

  1. Clear Roles – You’re the Architect, AI Is the Coder

You’re the architect, AI is the coder. AI doesn’t get to make design decisions or guess what you want. It asks questions and implements what you tell it to do.

Here’s one way to express that as a markdown prompt block for your AI tool:

# Our Pair Programming Team

- **You are the senior architect** – provide guidance, control commits, make decisions.
- **I am the implementer** – follow your guidance, ensure architectural alignment.
- **We use git** – you control all commits, I implement and explain changes.
- **Incremental development** – you request features step-by-step, I implement exactly what's asked.
- **One step at a time** – I won't do more than requested.
- **Report discoveries** – if directions are unclear or I find issues, I report and wait for input.
- **Present decision points** – I show options and wait for your decision.

You can also experiment with a more concise variant:

“AI generates code and suggestions, but humans own commits and architecture decisions. AI should not introduce new frameworks, services, or patterns without an explicit request.”

Both formats communicate the same idea; some tools respond better to a short, high-signal rule, others to a more explicit list.

  2. Structured Communication – No More Guessing Games

AI works best when it knows how you expect it to communicate. Instead of ad hoc back-and-forth, give it a script you can paste directly into your rules file.

For example:

## Communication Rules

- **STOP** when stuck, confused, or discovering issues.
- **REPORT** findings with full context and options.
- **WAIT** for your input before proceeding.
- **DOCUMENT** everything for transparency.

You can extend that with more explicit analysis steps:

- **ANALYZE** – restate the problem and evidence.
- **EXPLAIN** – outline what you plan to change and why.
- **GET APPROVAL** – wait for an explicit "go ahead" before larger changes.
- **ASK, don’t assume** – when something is unclear, ask targeted questions instead of guessing.

Alternative concise rule:

“For any non-trivial change, AI must restate the problem, propose an approach, and wait for confirmation before editing files.”

This turns AI from a black box into a collaborator who talks through the plan instead of silently changing things.

  3. Validation Checkpoints – Catch Problems Early

Small course corrections beat large surprises. A few simple checkpoints keep you in control of what AI is doing.

You can express your implementation flow as a prompt like this:

## Implementation Flow

- **Before** – confirm understanding, verify architecture fit, get approval.
- **During** – one step at a time, report issues immediately, document changes.
- **After** – summarize what was done, explain integration, report next TODOs.

Alternative concise rule:

“At the end of each step, AI summarizes the files changed, the behavior modified, and any follow-up TODOs before we move on.”

These checkpoints are cheap, and they prevent the “I have no idea what just changed” feeling that kills trust.

  4. Quality Standards – Follow the Playbook

Finally, you can set expectations for code quality so AI doesn’t “cut corners” in ways your team wouldn’t.

For example, your high-level prompt might look like:

## Code Quality Standards

- **Maintain consistency** – use established project patterns and conventions.
- **Quality first** – clean, readable code that follows team standards.
- **Link to coding standards** – follow the rules described in our coding standards document.

An example of how to link to another document in a prompt:

- **Coding Standards** – see `docs/CODING-STANDARDS.md` for patterns

You can extend that with more detailed DO/DON’T guidance in the same markdown rules file or in a separate coding standards doc that you link to:

- **DO** follow patterns in the repo and respect existing architecture and module boundaries.
- **DO** add or update tests when changing behavior.
- **DO** update relevant docs when introducing new patterns.
- **DON’T** modify unrelated code "for cleanliness" without explicit agreement.
- **DON’T** introduce new architectural patterns without discussion.
- **DON’T** skip explanations for non-trivial changes.

Alternative concise rule:

“Any change to business logic must include at least one test update (unit or integration) and a brief explanation of the behavior change.”

Writing these rules in markdown gives you a concrete contract for how you’ll collaborate. You can paste them into a collaboration rules file or integrate them into whatever AI tooling your team uses.

Coding Standards as the Team Playbook

If collaboration rules describe how you and AI work together, coding standards describe how the code itself should look and behave.

AI is extremely sensitive to examples. If your codebase is inconsistent, your prompts are vague, and your standards are scattered across Confluence, you’ll get equally inconsistent results from AI. A single, markdown-based coding standards document gives AI a concrete playbook to follow.

Before diving into specifics, it helps to name the key concepts your coding standards should cover:

  • Principle priorities – a clear hierarchy when DRY, SOLID, KISS, and YAGNI conflict (no guessing which matters more).
  • Formatting rules – pointers to config files (Prettier, ESLint, TypeScript) instead of hardcoded, stale docs.
  • Architectural patterns – component structure, naming conventions, file organization; follow existing patterns, don’t invent new ones.
  • Quality gates – testing requirements, error handling, documentation standards; non-negotiable minimums.

The following sections show how to turn each of these areas into markdown prompts that AI can follow.

  1. Core Principles

Start by making your core principles explicit. This gives AI a shared vocabulary with the team.

## Core Principles

- **DRY** – extract shared utilities, components, and constants.
- **SOLID** – single responsibility, focused interfaces, dependency injection.
- **KISS** – simple, readable code with clear names.
- **YAGNI** – only implement what's needed now.

This mirrors what you’d expect from human engineers and gives AI a list of principles to reference when making tradeoffs.
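
To make the vocabulary concrete, here is a minimal TypeScript sketch (hypothetical names, not from any specific codebase) of single responsibility and dependency injection working together:

```typescript
// A focused interface: callers depend on this abstraction, not on a
// concrete email or SMS client (focused interfaces + dependency injection).
interface Notifier {
  send(message: string): Promise<void>
}

// OrderService has a single responsibility (placing orders) and receives
// its Notifier via the constructor, so implementations can be swapped freely.
class OrderService {
  constructor(private readonly notifier: Notifier) {}

  async placeOrder(orderId: string): Promise<void> {
    // ...persist the order (omitted for brevity)...
    await this.notifier.send(`Order ${orderId} placed`)
  }
}
```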

  2. Principle Conflict Resolution → Stop AI From Overthinking

When coding principles conflict, AI needs to know which one wins. Otherwise it can spend a lot of effort trying to satisfy everything perfectly.

You can encode your priorities like this:

## Principle Conflict Resolution

**Priority Order**: YAGNI > KISS > SOLID > DRY

- **Early Development**: YAGNI > DRY (avoid premature abstraction).
- **Complex Features**: KISS > SOLID (favor simple, understandable designs).
- **Mature Code**: SOLID > DRY (good design over aggressive deduplication).

**Apply DRY only when**: 3+ instances AND stable requirements.

Optional concise variant:

“When DRY, SOLID, KISS, and YAGNI conflict, prioritize YAGNI > KISS > SOLID > DRY. Apply DRY only after 3+ stable repetitions, and favor simple, well-factored code over perfect abstraction.”
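
To see the DRY threshold in practice, consider this small TypeScript sketch (hypothetical helpers):

```typescript
// Two similar formatters: with only two instances, YAGNI > DRY says
// leave them separate rather than abstracting prematurely.
const formatPrice = (cents: number): string => `$${(cents / 100).toFixed(2)}`
const formatRefund = (cents: number): string => `-$${(cents / 100).toFixed(2)}`

// Once a third stable instance appears, extracting a shared utility pays off:
const formatCents = (cents: number, sign: '' | '-' = ''): string =>
  `${sign}$${(Math.abs(cents) / 100).toFixed(2)}`
```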

  3. Code Formatting → Point to Truth Sources

Formatting rules in documentation go stale quickly. Instead of duplicating them, point AI to the actual config files that your tooling already uses.

## Code Formatting

- **Prettier** – see `.prettierrc` for current rules (`npm run prettier:fix`).
- **ESLint** – auto-fix on save, organize imports (see `.vscode/settings.json`).
- **TypeScript** – strict mode, path aliases (see `tsconfig.json` paths).

This keeps the markdown light while still giving AI the right pointers into your real sources of truth.
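
For example, with a hypothetical `@components/*` alias defined in `tsconfig.json`, AI-generated imports stay stable no matter where a file lives:

```typescript
// With the alias in place, deep relative paths disappear from generated code:
import { Button } from '@components/Button'
// instead of:
// import { Button } from '../../../components/Button'
```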

  4. Component Structure → Follow the Pattern

For UI work especially, structure matters. Show AI exactly how components should look and ask it to copy that pattern instead of inventing its own.

## Component Structure

```typescript
import React, { useCallback, useState } from 'react'
// Assumes TanStack Query; adjust to your project's data-fetching library.
import { useQuery } from '@tanstack/react-query'
// Spinner, ErrorMessage, and fetchComponentData stand in for existing project code.
import { ErrorMessage } from './ErrorMessage'
import { Spinner } from './Spinner'
import { fetchComponentData } from './api'

interface ComponentProps {
  // Props interface first
  prop1: string
  prop2: number
}

export const Component: React.FC<ComponentProps> = ({ prop1, prop2 }) => {
  // Hooks at the top
  const [state, setState] = useState('')
  const { data, isLoading, error } = useQuery({
    queryKey: ['component', prop1],
    queryFn: () => fetchComponentData(prop1),
  })

  // Event handlers with useCallback
  const handleClick = useCallback(() => {
    setState(`selected ${prop2}`)
  }, [prop2])

  // Early returns for loading/error states
  if (isLoading) return <Spinner />
  if (error) return <ErrorMessage />

  // Main render
  return <div onClick={handleClick}>{/* JSX rendering data and state */}</div>
}
```

Over time, this makes AI-generated components blend into the surrounding codebase instead of feeling like they came from a different project.
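
Used from a parent view, a component written this way drops in like any other. A tiny, hypothetical usage sketch:

```typescript
import React from 'react'
import { Component } from './Component'

// Because every component follows the same shape, callers and reviewers
// always know where to find props, hooks, and render logic.
export const Dashboard: React.FC = () => <Component prop1="orders" prop2={10} />
```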

  5. Testing and Quality Gates

Finally, set clear expectations for tests, error handling, and documentation. These are your quality gates—what must be true before code is considered ready.

## Testing and Quality Gates

- **Testing patterns** – link to your testing patterns document (see the example below).
- **Coverage** – use Jest (`npm test`), aim for meaningful tests over raw coverage metrics.
- **Separate concerns** – keep data fetching separate from presentation.
- **Focused interfaces** – prefer small, specific prop interfaces.
- **Clear naming** – use descriptive variables and function names.
- **Error handling** – provide graceful fallbacks and user-friendly messages.
- **Appropriate logging** – have log levels and actionable messages/data for triage.
- **Reuse existing patterns** – follow established project conventions.

An example of how to link to another document in a prompt:

- **Testing patterns** – see `docs/TEST-PATTERNS.md` for mocking, MSW integration, and component testing.
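
Put together, a test that satisfies these gates might look like the following sketch (assuming Jest, React Testing Library, and jest-dom; `Greeting` is a hypothetical component):

```typescript
import React from 'react'
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Greeting } from './Greeting' // hypothetical component under test

describe('Greeting', () => {
  it('greets the user after the button is clicked', async () => {
    render(<Greeting name="Ada" />)

    // Assert on visible behavior with a clear, descriptive expectation.
    await userEvent.click(screen.getByRole('button', { name: /greet/i }))
    expect(screen.getByText('Hello, Ada!')).toBeInTheDocument()
  })
})
```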

Together, these prompts give AI a concrete sense of your coding style, structure, and quality expectations. Like your collaboration rules, you can evolve this document as your codebase and practices mature.

Tool Callout: Where These Prompts Live

So far, we’ve treated collaboration rules and coding standards as markdown text. Different tools have different ways to load that text and turn it into behavior, but the pattern is the same: write the rules once, let the tool read them automatically.

  • Claude Code (Anthropic) – uses CLAUDE.md files as instructions. A CLAUDE.md file at the project root can include your collaboration rules and coding standards for the entire project. Additional CLAUDE.md files in subdirectories can scope rules to specific areas (for example, frontend/CLAUDE.md or services/payments/CLAUDE.md). A personal CLAUDE.md in your home directory can capture individual preferences.
  • Windsurf – uses a project-specific rules configuration to define how AI behaves in that workspace. You can include your markdown prompts for collaboration and coding standards there so the assistant reads them automatically whenever you work in that project.
  • OpenHands – uses policies and configuration to define what the agent is allowed to do and how it should behave. The same markdown rules can be translated into OpenHands policies, guiding its actions and guardrails instead of relying on ad hoc prompts.

In each case, the goal is the same: move expectations out of one-off conversations and into versioned markdown that lives with your code.

Looking Ahead: Practical Tool Applications, Managing Feature Additions, Markdown, and Design

In this post, we focused on why naive AI usage fails and how to reset expectations by treating AI as a junior developer operating inside clear working agreements and coding standards.

That’s the foundation. But once you put AI into your daily workflow, a new set of questions appears:

  • How do you keep the AI’s plan visible instead of buried in a chat history?
  • How do you use markdown TODOs and feature requirement files to direct AI step-by-step instead of giving it huge, ambiguous tasks?
  • Does memory management for AI coding agents matter?
  • How do you standardize test patterns so AI writes valuable tests instead of just more tests?
  • How do you use markdown and lightweight design artifacts to keep humans and AI aligned on architecture?

Those are the problems the second part of this series will tackle.

About the Author

Eric Christianson is a Staff Software Engineer at Clearwater Analytics (CWAN) with over 30 years of experience developing innovative technology solutions for leading companies. He focuses on creating user-friendly experiences and fostering collaboration across teams to solve complex challenges. Eric is currently leading efforts to build customer-facing applications developed with AI coding agents and providing AI-assisted workflows.