
Beyond Vibe Coding: Software Engineering with AI in Practice

By Eric Christianson
Introduction

In the first post of this series, we looked at why naive “just ask the AI to code” approaches feel unpredictable, and how clear collaboration rules and coding standards make AI behave more like a reliable junior developer than a creative guesser.

This post goes beyond vibe coding and shows practical ways to work with AI coding agents to produce better-engineered software products in real projects.

As a senior engineer, you still own design, requirements, and architectural decisions. AI is most effective when you use chat to explore design options and clarify incomplete requirements, then capture the agreed plan in markdown. From there, you validate each step with incremental changes and tests to prevent regressions so progress stays both transparent and correct. The practical question becomes:

How do you make AI’s plan visible, keep it aligned with your design, and ensure tests and documentation stay integrated into your day-to-day workflow?

We’ll focus on three levers you control directly:

  • Markdown feature files and TODOs as your steering wheel for AI execution.
  • Testing guardrails that keep AI writing tests that actually protect your system.
  • Design artifacts and tool memory that help you and AI stay aligned on architecture over time.

By the end, you’ll have a practical playbook for working with AI coding agents: how to turn vague tickets into clear feature definitions and visible markdown TODO lists, structure work as small reviewed commits, and use testing-as-you-go to prevent regressions. You’ll also see how to visualize architecture with lightweight diagrams so design, implementation, and tests stay aligned as the codebase evolves.

1. The Transparency Problem

Most AI coding tools do a surprising amount of planning that you never see.

They break problems into subtasks, choose an order of operations, and quietly maintain their own internal “todo list” as they work. Even when they show you a few bullet points in the chat window, that’s only a thin slice of the actual reasoning happening in the background.

That creates three problems:

  • Hidden internal planning – You can’t inspect or edit the real plan the AI is following.
  • Partial visibility – When the AI shows todos, you don’t know what it left out or silently changed.
  • Session isolation – Whatever plan did exist evaporates as soon as the tab closes or the context window resets.

If you’ve ever had an AI “helpfully” tweak a few extra lines of code or string content you didn’t mention, you’ve already met this hidden plan. Somewhere, a little invisible project manager decided those changes were part of the sprint—it just forgot to loop you in.

Claude Code’s Plan Mode

Anthropic’s Claude Code agent exposes this hidden planning explicitly through an optional plan mode for larger tasks. Instead of diving straight into edits, it first generates a step-by-step plan you can review and adjust, then executes against that plan.

If you’re using Claude Code, it’s worth experimenting with plan mode whenever you ask for non-trivial changes (new features, refactors, multi-file edits). You still want the final plan written into your feature markdown file, but plan mode can give you a clearer starting point and make it easier to catch misunderstandings before code changes happen.

For humans, keeping the whole plan in your head leads to forgotten requirements and missed steps. AI runs into the same problem when the real plan only lives in its own working memory or a single session—you have less opportunity to review or correct it before code changes start.

The fix is moving the plan out of the AI’s head and into shared, persistent artifacts—markdown files, checklists, and test plans that live next to your code.

2. Team Project Management vs Personal TODOs

Team project management tools already do their job well.

  • Jira, Linear, Azure DevOps, and similar tools handle priorities, dependencies, and releases.
  • They often hold the official requirements, but those requirements aren’t engineering plans; they describe what to build, not how to build it.
  • Product and project leaders use them to track what should happen across the team and when.

What they don’t give you is a step-by-step execution plan for how one engineer and one AI assistant will actually build a specific feature.

That’s where personal TODOs come in.

  • Team tools answer: What work exists? What’s the status?
  • Personal TODOs answer: What’s the implementation plan? What are the concrete steps I want the AI to execute? These are generally in a specific order and can be as detailed as pseudocode.

When you drop a Jira story directly into an AI chat and say “implement this,” you’re asking the AI to invent that execution plan in its own private memory. Instead, you want to:

  • Translate the story into a sequence of small, explicit steps.
  • Make that sequence visible in a markdown file that both you and the AI can see.
  • Treat that list as the steering wheel: you decide what’s on it and in what order; the AI helps you execute.
Using AI to Refine Requirements

When you paste a ticket’s requirements into an AI coding agent, don’t stop at “understood.” Ask it to identify what’s unclear, list assumptions it would otherwise make, and propose follow-up questions. Iterate on that dialog until you both have a shared, concrete understanding. From there, you can work together to turn the clarified requirements into a step-by-step plan in a feature markdown file.

You don’t stop using Jira or your existing processes. You add a layer underneath them: per-feature TODOs and checklists that make the AI’s plan transparent and controllable.

 

The key insight: you and AI work together to turn requirements into an explicit implementation plan. That plan becomes the shared artifact you both execute against.

In the next section, we’ll put those TODOs into feature markdown files so both you and the AI are always working from the same shared plan.

3. Practical Feature Markdown Planning

Once you’ve used chat to clarify the requirements for a feature, you need a place to capture:

  • Refined requirements
  • Clarified questions and answers
  • Step-by-step instructions (TODOs)
  • Test plans and checkpoints
  • Design notes and diagrams

A feature markdown file is the natural home for all of that.

3.1 From Jira Story to Feature File

A typical flow looks like this:

 


 

  1. Start from a Jira (or Linear/Azure DevOps) story.
  2. Create a feature file in your repo, for example:
    • docs/features/feature-101.md
  3. Paste the story into the top of the file under a ## Requirements section.
  4. Use a short series of interactive prompts with the AI to:
    • Surface questions about unclear requirements and capture your answers in the feature file.
    • Propose an initial implementation plan as a checklist of TODOs, then refine and reorder it based on your feedback.
    • Recommend early validation and testing checkpoints, and update them as the design becomes clearer.

Here’s the kind of prompt you might give the AI:

Create a feature file `docs/features/feature-101.md` for this feature.

Here’s the Jira story to seed your understanding:
[paste story details]

Structure the file with these sections:

  1. **Requirements** – restate the story in your own words and list open questions.
  2. **Implementation Plan** – `[ ]` checkbox TODOs for tracking progress.
  3. **Implementation Workflow** – the steps you should follow for each TODO (for example: show which files you plan to change, explain your approach, wait for my approval, then run or update tests).
  4. **Checkpoints** – where we should pause and validate behavior.
  5. **Test Plan** – initial list of tests we’ll need (we’ll refine using TEST-PATTERNS.md later).

This will be an interactive process: start by asking what requirements need clarification, then iterate as you propose and refine the TODO list and testing checkpoints based on my answers.

You’re not asking the AI to disappear and “handle it.” You’re asking it to build the working surface you’ll both use.

3.2 Treating the Checklist as the Steering Wheel

Once you have the feature file and initial TODOs:

  • You review and edit the TODO list.
  • You explicitly choose which [ ] item to work on next.
  • You ask the AI to focus on one checkbox at a time.

For example:

“Let’s work on the next TODO in docs/features/feature-101.md.”

As you and the AI complete work:

  • The AI updates [ ] → [x] in the feature file.
  • You both see the live progress and remaining scope.
  • The plan survives beyond a single chat session because it lives in versioned markdown.

There’s also a small but real benefit: humans get the satisfaction of checking boxes, and AI tools get a clearer definition of “done” than a vague “keep going” in chat.

Commit Progress Frequently

Consider committing code after each meaningful TODO is completed. Small, reviewed commits make it easier to see exactly what the AI changed, spot unintended edits, and connect each change back to a specific checkbox in your feature file.

In the rest of this post, we’ll keep using this feature-file idea as the hub. We’ll plug in:

  • A feature-level test plan section inside docs/features/feature-101.md that lists the concrete tests for this change.
  • Team-wide testing guardrails from TEST-PATTERNS.md so every feature uses the same testing philosophy.
  • Design and architecture diagrams via Mermaid.
  • Tool memory as a helper around these markdown artifacts.

Transparent incremental steps lead to less developer frustration and more architecturally sound, testable code.

4. Markdown as the Language of Design and Tests

If feature files are the hub, markdown is the road surface everything runs on.

Markdown hits a sweet spot:

  • Human-readable – easy for engineers to review and edit.
  • Machine-friendly – easy for AI to parse and for tools to transform.
  • Flexible – supports text, checklists, code snippets, and diagrams in one place.

In practice, a feature file might include:

  • ## Requirements – restated story, edge cases, and clarifications.
  • ## Implementation Plan – the checkbox TODOs you and the AI are working through.
  • ## Implementation Workflow – agreed steps the AI should follow for each TODO.
  • ## Test Plan – initial unit, integration, and E2E test ideas.
  • ## Design & Diagrams – Mermaid diagrams for flow, architecture, or sequences.
  • ## Decisions – tradeoffs you made and why.
  • ## Open Questions – unresolved items to revisit.

Over time, this file becomes the single story of the feature—from problem to design to tests and implementation. AI isn’t just responding to whatever you said in the last prompt; it’s working inside a stable context that you can edit and refine.

# Feature 101 – Short, descriptive title

## Requirements
- Restated story in your own words
- Edge cases and constraints
- Open questions to resolve

## Implementation Plan
- [ ] Step 1 – small, concrete change
- [ ] Step 2 – next change
- [ ] Step 3 – follow-up work

## Implementation Workflow
For each TODO above, the AI should:
1. Show which files it plans to change.
2. Explain the approach.
3. Wait for approval.
4. Apply the change.
5. Run or update tests.

## Test Plan
- Unit tests: what behaviors will be covered
- Integration/E2E tests: which workflows should be verified

## Design & Diagrams
- Mermaid diagrams for data flow, component relationships, or sequences

## Decisions
- Key tradeoffs and why they were chosen

## Open Questions
- Items that still need clarification or follow-up

In Blog 1, we said: “Markdown is the language AI speaks.” In this post, we’re putting that into practice. You’re not just pasting prompts into chat; you’re curating a set of markdown documents—feature files, coding standards, testing patterns—that define how AI collaborates with your team.

From Feature File to Merge Request

Because the feature file captures requirements, TODOs, test plans, and decisions, AI can also use it to draft merge request descriptions. Instead of trying to remember everything that changed, you can ask the AI to summarize docs/features/feature-101.md into a concise GitLab MR description that explains the intent, key changes, tests added or updated, and any known follow-ups.

5. Test Patterns: TEST-PATTERNS.md

Your feature file’s test plan describes what you’ll test for a specific change. A TEST-PATTERNS.md document describes how your team writes tests in general: mocking strategy, file and naming patterns, unit vs integration strategy, coverage expectations, and any tool- or framework-specific guidance (for example, that you use Jest or prefer MSW for API mocking).

Without this kind of shared reference, AI tends to optimize for test metrics instead of writing high-value tests that protect the customer experience. It’s easy for it to generate:

  • Large numbers of shallow tests.
  • Over-mocked tests that assert implementation details.
  • Tests that pass even when user-facing behavior is broken.

The metrics can say 100% code coverage while your gut says 0% confidence that the code actually works.

A TEST-PATTERNS.md document gives AI a testing knowledge base it can use to build consistent test scaffolding:

  • Where different types of tests live and how they’re named.
  • What to mock and what to test for real.
  • How to focus on externally visible behavior.
  • What makes a test meaningful rather than just more coverage.
5.1 Test Organization – Structure Tests Predictably

AI needs clear rules about where tests go and how to organize them. Without this, tests end up scattered and inconsistent.

## Test Organization Guidelines
- **Unit tests** – test individual components/functions in isolation.
- **Integration tests** – test how components work together.
- **Test location** – say whether tests live side-by-side with code or in a `__tests__` directory (e.g., `Component.tsx` and `Component.test.tsx` in the same folder).
- **File naming** – define consistent patterns that make tests easy to find.
- **Test grouping** – keep related tests organized together logically.

You don’t have to adopt a particular layout, but you should give AI clear project-level patterns so tests stay consistent and easy to discover for both humans and tools.

Example file structure (one possible pattern):

src/components/ComponentName/
├── Component.tsx
├── Component.test.tsx      # Unit tests with mocked dependencies
└── Component.msw.test.tsx  # Integration tests with MSW (if you use it)
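The “test grouping” guideline is easier for AI to follow when it can copy a concrete shape. Here is a minimal, hypothetical sketch (the component, its props, and the behaviors are placeholders rather than real project code) showing one describe block per component with nested describes per behavior area:

```tsx
// Component.test.tsx – hypothetical grouping pattern: one describe per component,
// nested describes per behavior area, one behavior per test
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Component } from './Component'

describe('Component', () => {
  describe('rendering', () => {
    it('shows an empty state when there is no data', () => {
      render(<Component data={[]} onRefresh={() => {}} />)
      expect(screen.getByText('No data available')).toBeInTheDocument()
    })
  })

  describe('interactions', () => {
    it('calls onRefresh when the refresh button is clicked', async () => {
      const onRefresh = jest.fn()
      render(<Component data={[]} onRefresh={onRefresh} />)
      await userEvent.click(screen.getByRole('button', { name: /refresh/i }))
      expect(onRefresh).toHaveBeenCalledTimes(1)
    })
  })
})
```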

One powerful way to do this is to include project-level code patterns directly in TEST-PATTERNS.md so AI can copy them when generating new tests.

Example integration test with MSW (if your project uses it):

import { render, screen } from '@testing-library/react'
import { server } from './mocks/server' // wherever your MSW server setup lives
import { Component } from './Component'

describe('Component with API', () => {
  beforeAll(() => server.listen())
  afterEach(() => server.resetHandlers())
  afterAll(() => server.close())

  it('loads and displays data', async () => {
    render(<Component />)
    await screen.findByText('loaded data')
  })
})

By giving AI a concrete pattern like this, you’re not just telling it “use MSW”—you’re showing exactly how MSW tests should look in your project so it can generate consistent scaffolding.
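The test above assumes a shared MSW `server`. If your project uses MSW, it helps to show that setup in TEST-PATTERNS.md as well. Here is a minimal sketch, assuming MSW v2 and a hypothetical `/api/data` endpoint that returns the text the test looks for (adjust handlers and paths to your project):

```typescript
// src/mocks/server.ts – hypothetical MSW setup shared by integration tests
import { setupServer } from 'msw/node'
import { http, HttpResponse } from 'msw'

export const server = setupServer(
  // Intercept the network boundary and return predictable data for tests
  http.get('/api/data', () => HttpResponse.json({ message: 'loaded data' })),
)
```

Keeping the handlers in one shared module also reinforces the next section’s guidance: mock at the network boundary, not inside your components.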

Using consistent structure and examples keeps tests uniform and cohesive across the project. It gives AI (and humans) a better chance of understanding existing patterns, finding tests that already exist, and implementing new tests in the right place.

5.2 Mocking Philosophy – Test the Right Level

Most AI coding tools learn how to write tests by analyzing your existing code and documentation. They infer patterns: where tests live, how they’re named, what they assert. A TEST-PATTERNS.md file makes it much more likely that those inferred patterns match what you actually want.

Out of the box, many tools can generate unit, functional, and basic regression tests on their own. Where they often need more guidance is choosing the right boundaries for API and integration testing. AI thrives on structured tasks and defined success criteria; without that structure, it’s easy for it to treat “more coverage” as the primary goal instead of writing tests that exercise meaningful behavior.

A mocking philosophy section in TEST-PATTERNS.md gives the AI a concrete target:

## Mocking Guidelines
- **Mock external dependencies** – APIs, third-party services, file system, database.
- **Don’t mock what you’re validating** – keep the component or function under test real.
- **Mock at the boundary** – intercept network calls and infrastructure, not domain logic.
- **Keep mocks simple** – return predictable data; avoid re-implementing business rules in mocks.
- **Test behavior, not implementation** – assert outcomes that matter to users or callers.

Instead of inventing its own approach, the assistant can align test generation with these rules.

// ✅ DO: Mock external API hooks or clients
jest.mock('@services/api', () => ({
  useGetDataQuery: jest.fn(),
  useUpdateDataMutation: jest.fn(),
}))

// ❌ DON'T: Mock the code you're trying to validate
jest.mock('./Component', () => ({
  internalHelper: jest.fn(),
}))

In your feature file or prompts, you can tie this together:

“When adding or updating tests for this feature, follow the Mocking Guidelines section in docs/TEST-PATTERNS.md.

Mock only external boundaries like network calls or storage; keep the component or function under test real, and focus assertions on observable behavior.”

That combination—clear test patterns plus explicit prompts—helps the AI aim coverage at the layers you care about instead of just increasing the number of tests.

5.3 Testing Approach – Test What Matters

Given just a codebase, many AI tools will happily generate more tests. What they don’t automatically know is which behaviors actually matter: which flows are riskier, which edge cases are important, and which parts of the system are already covered.

A Testing Approach section in TEST-PATTERNS.md gives the assistant a checklist for where to look and what to prioritize when proposing or adding tests:

## Testing Approach
- **Start from behavior** – derive tests from user stories, feature files, and public APIs.
- **Test public interfaces** – functions, components, and endpoints that external callers use.
- **Test expected outcomes** – returned values, state changes, rendered output, or side effects that matter.
- **Cover edge cases** – invalid inputs, error conditions, and boundary scenarios called out in requirements.
- **Avoid implementation testing** – don’t assert on private helpers, internal state, or specific hook usage.
- **Prefer fewer, meaningful tests** – focus on scenarios that would hurt users if they broke.

You can then point the AI at both your feature file and this section when asking for tests:

“For this feature, read docs/features/feature-101.md and the Testing Approach section in docs/TEST-PATTERNS.md.

Propose tests that cover the main workflows and edge cases described there, focusing on public interfaces and observable behavior.”

Example:

// ✅ DO: Test behavior and outcomes
it('shows a loading indicator while data is fetching', () => {
  // mockQuery is the mocked data-fetching hook (e.g., useGetDataQuery as jest.Mock)
  mockQuery.mockReturnValue({ isLoading: true, data: null, error: null })
  render(<Component />)
  expect(screen.getByTestId('loading-spinner')).toBeInTheDocument()
})

// ❌ DON'T: Test internal implementation details
it('calls useReportQuery hook three times', () => {
  const spy = jest.spyOn(apiHooks, 'useReportQuery')
  render(<Component />)
  expect(spy).toHaveBeenCalledTimes(3) // fragile and tied to implementation
})

By combining:

  • Feature files (what this change is supposed to do), and
  • Testing approach guidelines (how your team prefers to validate behavior),

you give the AI a much clearer target, so its proposed tests are more likely to follow your intended patterns and focus on behavior that matters.

5.4 Quality Standards – Make Tests Meaningful

The same software engineering principles that guide your coding standards should also guide your tests. Clear intent, single responsibility, and reliable behavior matter just as much in the test suite as they do in production code.

You can capture those expectations directly in TEST-PATTERNS.md so they become part of the context AI sees when it generates or updates tests:

## Quality Standards
- **Clear test names** – describe the behavior being verified in plain language.
- **Single responsibility** – each test should verify one specific behavior or scenario.
- **Reliable assertions** – tests should fail when behavior breaks, and pass when it works.
- **Independent tests** – avoid coupling tests to each other or to run order.
- **Quality threshold** – define a minimum standard (for example, coverage or critical paths) so tests focus on the areas that matter most.
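Here is a small hypothetical example of what those standards look like in practice (the `formatCurrency` utility is a placeholder, not from a real codebase): clear names, one behavior per test, reliable assertions, and no dependence on test order.

```typescript
// formatCurrency.test.ts – hypothetical example of the quality standards above
import { formatCurrency } from './formatCurrency' // placeholder utility under test

describe('formatCurrency', () => {
  // Clear name, single responsibility, assertion that fails if formatting breaks
  it('formats whole-dollar amounts with a dollar sign and two decimals', () => {
    expect(formatCurrency(1200)).toBe('$1,200.00')
  })

  // Independent of the test above: no shared state, no required run order
  it('returns an empty string when the value is null', () => {
    expect(formatCurrency(null)).toBe('')
  })
})
```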

Referencing these patterns in your prompts gives the AI clear context for how to write the tests:

“When adding or updating tests, follow the Quality Standards section in docs/TEST-PATTERNS.md. Favor clear names, single responsibility, reliable assertions, and independence between tests.”

That way, the assistant is working from the same quality bar you would apply if you were writing the tests by hand.

6. Tool Memory and Persistent Context

Modern AI tools often include some form of memory—a way to remember preferences, patterns, or past work across sessions. In Windsurf, for example, memories persist within a workspace and can be created automatically by the assistant or explicitly by the user.

That’s powerful, but there are important limits: in current tools, memories are usually tied to a single developer and workspace or project, they only capture slices of past context, and they can drift out of sync with the real codebase over time.

6.1 What Tool Memory Is Good At

Within a single developer’s workspace, memory can help with:

  • Remembering your personal workflow patterns (how you like components structured, your preferred libraries).
  • Recalling frequently used prompts or coding patterns.
  • Keeping track of small preferences that don’t belong in team-wide docs.

Under the hood, tools usually:

  • Store short text snippets or files as memories that describe preferences, patterns, or project-specific notes.
  • Select a few relevant memories for each request based on simple heuristics or search, then include them alongside your prompt.
  • Use those memories as extra context, not as a guarantee—if a memory is missing, vague, or outdated, the tool will still respond, just with less accurate guidance.
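To make the “extra context, not a guarantee” point concrete, here is a deliberately simplified sketch of that idea. It is purely illustrative (not how Windsurf, Claude Code, or any specific tool actually implements memory selection): score stored memories against the prompt, keep the top few, and prepend them as context.

```typescript
// Purely illustrative: naive keyword-overlap selection of memories for a prompt.
interface Memory {
  id: string
  text: string
}

// Count how many words from the prompt appear in the memory text.
function scoreMemory(prompt: string, memory: Memory): number {
  const words = prompt.toLowerCase().split(/\W+/).filter(Boolean)
  const text = memory.text.toLowerCase()
  return words.filter((word) => text.includes(word)).length
}

// Keep the top-k memories and prepend them to the request as extra context.
function buildContext(prompt: string, memories: Memory[], k = 3): string {
  const selected = [...memories]
    .sort((a, b) => scoreMemory(prompt, b) - scoreMemory(prompt, a))
    .slice(0, k)
    .map((memory) => `- ${memory.text}`)
  return `Relevant memories:\n${selected.join('\n')}\n\nRequest:\n${prompt}`
}
```

If the right memory was never stored, or the heuristic misses it, the request still goes through with whatever context happened to be selected. That is why the durable rules belong in markdown the tool is always pointed at.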
6.2 Why Memory Isn’t Enough for Teams

For team collaboration, memory has real limitations:

  • Individual vs team context – your memories aren’t shared with teammates.
  • Staleness – memories can become outdated as the codebase and practices evolve.
  • Opacity – others can’t see or review what’s stored in your personal memory.

That’s why the core, team-wide rules still belong in versioned markdown documents in the repo, such as:

  • WORKING-TOGETHER.md – collaboration rules for humans and AI.
  • CODING-STANDARDS.md – principle priorities, formatting guidance, patterns.
  • TEST-PATTERNS.md – everything from the previous section.
  • Feature files like docs/features/feature-101.md.
  • Project-level rules files (for example, .windsurf/rules or CLAUDE.md).
6.3 Tool Examples: Windsurf and Claude Code

Both Windsurf and Claude Code support persistent context, but they do it in different ways.

Windsurf (Cascade Memories and Rules)

  • Auto and explicit memories – Cascade can auto-generate memories during a conversation, or you can explicitly ask it to create one.
  • Workspace-scoped – memories are tied to a specific workspace; they’re not shared across workspaces or between teammates by default.
  • UI-managed – you can browse, edit, and delete memories in the Memories & Rules panel.
  • Complemented by rules files – global and workspace-level markdown rules (global_rules.md, .windsurf/rules) give Cascade project-wide behavior that lives in versioned files.

Claude Code (File-Based Project Memories)

  • Markdown-first memory – Claude Code loads memory files (such as CLAUDE.md and any imported markdown) automatically into the project’s context when the workspace starts.
  • Hierarchical loading – files higher in the directory tree are loaded first, with more specific files overriding or extending them.
  • Versioned with code – these memory files live alongside your source, can be code-reviewed, and evolve through normal git workflows.
  • Quick additions with # – inside Claude Code, you can press #, type a short note, and hit Enter to append that memory into the appropriate CLAUDE.md file so it’s available in future sessions.

Both approaches reinforce the same theme: use the tool’s memory features as a convenience layer, but keep the durable rules and patterns in human-readable, versioned markdown that your team controls.

6.4 Best Practices for Memory and Markdown

Tool memories are great at accumulating fragments of recent work—what you’ve been doing in this workspace or project—but they don’t automatically give you a complete architectural picture.

A practical split looks like this:

  • Use tool memory for:
    • Iterative, task-level context (recent features, commands, and patterns the agent keeps revisiting).
    • Personal workflow preferences and frequently repeated snippets that help you move faster.
  • Use markdown for:
    • Team decisions and architectural patterns that should outlive any single session.
    • Project-wide coding standards and testing practices.
    • Feature design documents that capture requirements, TODOs, and design notes while a change is in progress.

Many teams find it useful to add an ARCHITECTURE.md at the root of the repo to capture high-level system design, key flows, and major dependencies in one place that both humans and AI can reference.
And in daily work:

  • Explicitly point the AI to the right markdown docs at the start of a session.
  • Periodically clean up memories so stale entries don’t contradict code changes that happened between sessions.
  • Consider adding a project-level rules file (like .windsurf/rules in Windsurf or CLAUDE.md in Claude Code) that tells the tool where to find WORKING-TOGETHER.md, CODING-STANDARDS.md, TEST-PATTERNS.md, and your feature files.

Feature requirements docs don’t have to be permanent artifacts; many teams keep them in lowercase files scoped to a single feature or ticket and clean them up once the work is complete.

When you describe patterns, processes, designs, and feature requirements in markdown, you give AI a consistent, shareable context that works across tools and across the whole team—not just within one person’s memory for a single session.

7. Visual Design with Mermaid and AI

Architecture diagrams are one of the fastest ways to align humans and AI on how a system works—or how it should work.

The good news is that AI is surprisingly good at turning code into diagrams and using diagrams as design input to guide code changes. Once again, markdown is the medium.

7.1 Code to Diagrams – Understand Existing Systems

When you point AI at a feature or subsystem and ask for diagrams, you’re asking it to:

  • Analyze the code.
  • Identify key components and relationships.
  • Present that structure visually.

Helpful prompts include:

  • “Document the current [feature] architecture.”
  • “Create a Mermaid diagram showing component relationships for the reporting module.”
  • “What are the responsibilities of each component in this flow?”
  • “Show the data flow through this system from API to UI.”

The assistant can respond with Mermaid diagrams like:

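The sketch below is a hypothetical example for a reporting module (the component names are placeholders, not generated from any real codebase):

```mermaid
graph TD
    %% Hypothetical component relationships for a reporting module
    UI[Reporting page UI] --> Hook[useReportQuery hook]
    Hook --> Client[Reports API client]
    Client --> Service[Reporting service]
    Service --> Store[(Reports data store)]
```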

Use cases:

  • Onboarding new developers.
  • Understanding legacy systems you didn’t build.
  • Documenting previously undocumented code.
  • Preparing for refactoring work.
  • Preparing for larger code reviews.

If you’re using VS Code, installing a markdown preview extension that supports Mermaid (such as Markdown Preview Enhanced) lets you render these diagrams directly from your feature files.

7.2 Diagrams to Better Code – Design Enhancement

Diagrams also help before and during implementation:

  • For new features – create diagrams before coding to validate the approach.
  • For code reviews – generate architecture docs to explain complex changes.
  • For refactors – visualize current vs target state to identify improvement opportunities.

You might ask:

  • “Propose a high-level architecture diagram for adding CSV export to the reporting page.”
  • “Show a sequence diagram for our new authentication flow, based on this feature file.”
  • “Compare the current dependency graph to the proposed one and highlight risk areas.”

 

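As a hypothetical illustration, using the CSV-export example from the prompts above (flows and names are illustrative only), the kind of sequence diagram you might get back looks like this:

```mermaid
sequenceDiagram
    participant User
    participant UI as Reporting page
    participant API as Reports API
    User->>UI: Click "Export CSV"
    UI->>API: Request CSV export
    API-->>UI: CSV payload
    UI-->>User: Browser downloads the file
```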

 

Fifteen minutes of diagramming with AI—grounded in your code and feature files—can save hours of review time and misunderstandings later.

8. Putting It All Together

In the first post of this series, we focused on why naive AI usage fails and how to reset expectations by treating AI as a junior developer operating inside clear working agreements and coding standards.
In this post, we turned that philosophy into a concrete workflow for senior engineers:

  • Make AI’s plan visible – use feature markdown files and checkbox TODOs to keep the execution plan out of the AI’s head and in your repo.
  • Give AI testing guardrails – use TEST-PATTERNS.md to define where tests live, what to mock, what to test, and what “good” looks like.
  • Anchor work in markdown – make sure key requirements, designs, tests, and decisions live in docs your team can see. Short‑lived TODOs can stay in feature requirements docs or personal lists; long‑lived features get shared, versioned docs.
  • Use tool memory wisely – lean on it for personal preferences, but put team rules and feature context in shared documents.
  • Visualize architecture with AI – generate and refine Mermaid diagrams to understand existing systems and design new ones.

A chat‑only workflow makes every feature feel like a new conversation; a markdown‑anchored workflow makes it feel like you and the AI are picking up the same notebook every time.

A simple end-to-end flow might look like this:

  1. A Jira ticket is created for adding CSV export to a reporting page.
  2. You create docs/features/reporting-csv-export.md and paste the story into a ## Requirements section.
  3. With AI, you build a checklist of TODOs and a preliminary test plan in that file.
  4. You link in WORKING-TOGETHER.md, CODING-STANDARDS.md, and TEST-PATTERNS.md so the assistant knows how to behave.
  5. You work through TODOs one at a time, updating tests according to TEST-PATTERNS.md.
  6. You ask AI to generate a sequence diagram to document the new data flow.
  7. When the feature ships, you can either keep the feature requirements doc as a record, or fold its outcomes into longer-lived design docs (for example, updating ARCHITECTURE.md or a module-level design document).

These techniques optimize AI collaboration across the complete development lifecycle—architecture design, implementation, testing, documentation, and workflow optimization—while keeping you in the senior engineer role. Markdown is the language AI speaks; using it intentionally turns AI from an unpredictable partner into a reliable implementer working inside your standards and design.

The difference is simple: less vibe, more engineering.

About the Author

Eric Christianson is a Staff Software Engineer at Clearwater Analytics (CWAN) with over 30 years of experience developing innovative technology solutions for leading companies. He focuses on creating user-friendly experiences and fostering collaboration across teams to solve complex challenges. Eric is currently leading efforts to build customer-facing applications developed with AI coding agents and providing AI-assisted workflows.
