Webinar | CWAN GenAI Office Hours for Insurance: Live Session 3
Take your investment operations to the next level with the third session in our CWAN GenAI Office Hours for Insurance series.
Software engineering is changing fast in the age of AI, and the way we design, build, and ship software is evolving with it. This multi-part series dives into practical techniques for using AI to boost productivity, keep code quality high, and help your teams move faster—without losing control of your architecture or standards.
Teams are already using AI to build real features in real codebases, not just experiments or demos. The open question is how to make that work predictable and sustainable as systems grow more complex.
This post focuses on three things:

- why naive AI usage fails, and why "AI as a junior developer" is a better mental model,
- project-level collaboration rules that define how you and AI work together, and
- coding standards that tell AI how the code itself should look and behave.
By the end, you should have a clear mental model for AI’s role on the team and a concrete starting point for WORKING-TOGETHER.md and CODING-STANDARDS.md in your own coding project.
The industry’s ambitious goal is clear: transform a business requirement into production-ready code through AI automation—a development cycle where AI carries work from idea to implementation with minimal human intervention.
But there’s a critical insight that unlocks this vision: the same principles that make human developers successful apply to AI systems.
Just as vague requirements lead to failed projects and overly verbose specifications cause confusion, AI coding assistants need well-crafted context to deliver reliable results. The breakthrough isn’t in the AI’s capabilities alone, but in learning how to communicate requirements with the right balance of clarity and detail.
When we provide AI with well-structured context—clear objectives, specific constraints, relevant examples, and architectural guidance—we dramatically reduce guesswork and hallucination. Instead of improvising, the AI follows a clear path from business need to deployable feature.
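For example, a well-structured task prompt with all four elements (everything in this sketch, including the file name, is hypothetical) might look like:

```markdown
## Task: add CSV export to the positions page
- **Objective** – users can download the current positions table as a CSV file.
- **Constraints** – reuse the existing export service; no new dependencies.
- **Example** – follow the pattern in `src/export/pdfExport.ts`.
- **Architecture** – keep export logic out of the React components.
```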
The future of development isn’t just about smarter AI; it’s about becoming better requirement architects who can design the right inputs for automated code generation.
Until that vision is realized, humans are still responsible for building and maintaining high-quality software assets. The question becomes: how do we work with today’s AI tools in a way that is reliable, repeatable, and worth the investment?
The difference between effective AI assistance and “vibe coding” lies in having a systematic approach rather than letting the AI rely on intuitive guessing.
When developers provide AI with:

- clear objectives,
- specific constraints,
- relevant examples, and
- architectural guidance
…the AI becomes a reliable implementer instead of a creative guesser.
This is where the mental model of AI as a junior developer is powerful. In this model:

- you are the senior architect: you provide guidance, make design decisions, and control commits,
- AI is the implementer: it follows your guidance, implements exactly what is asked, and reports what it finds, and
- work proceeds incrementally, one well-defined step at a time.
Crucially, the AI should not:

- make architecture or design decisions on its own,
- introduce new frameworks, services, or patterns without an explicit request, or
- do more than was asked in the current step.
Thinking of the current phase as a transition period helps: it sets realistic expectations about what AI can own today, and it justifies investing now in durable artifacts, like collaboration rules and coding standards, that keep paying off as the tools improve.
The rest of this post focuses on the practical pieces you can put in place now—project-level collaboration rules and coding standards—that make this AI-as-junior-developer model work in real codebases.
Markdown is the language AI speaks.
It’s easy for humans to read and easy for AI to parse. Think of your markdown files as the instruction manual that tells AI how to behave in your project.
Most teams have coding standards. Very few have explicit collaboration rules for how they’ll work with AI.
Without those rules written down, you get the worst of both worlds:

- AI guesses at conventions it was never told about and produces inconsistent results, and
- humans repeat the same corrections in every session instead of capturing them once.
The fix is to treat these rules as a small, versioned artifact in your project—written in markdown—so both humans and AI can see and follow them. You might keep them in a dedicated file (such as WORKING-TOGETHER.md), or you might embed them into a tool-specific configuration file (like CLAUDE.md or a rules file). The filename is flexible; the content is what matters.
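For example, a minimal tool-specific entry point might do nothing more than point at the shared documents (the file paths here are one possible convention, not a requirement):

```markdown
# CLAUDE.md
- Read and follow `docs/WORKING-TOGETHER.md` for how we collaborate.
- Read and follow `docs/CODING-STANDARDS.md` before writing any code.
```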
Four parts go a long way: roles and responsibilities, communication rules, implementation checkpoints, and code quality expectations.
You’re the architect, AI is the coder. AI doesn’t get to make design decisions or guess what you want. It asks questions and implements what you tell it to do.
Here’s one way to express that as a markdown prompt block for your AI tool:
```markdown
# Our Pair Programming Team
- **You are the senior architect** – provide guidance, control commits, make decisions.
- **I am the implementer** – follow your guidance, ensure architectural alignment.
- **We use git** – you control all commits, I implement and explain changes.
- **Incremental development** – you request features step-by-step, I implement exactly what's asked.
- **One step at a time** – I won't do more than requested.
- **Report discoveries** – if directions are unclear or I find issues, I report and wait for input.
- **Present decision points** – I show options and wait for your decision.
```
You can also experiment with a more concise alternative rule:
“AI generates code and suggestions, but humans own commits and architecture decisions. AI should not introduce new frameworks, services, or patterns without an explicit request.”
Both formats communicate the same idea; some tools respond better to a short, high-signal rule, others to a more explicit list.
AI works best when it knows how you expect it to communicate. Instead of ad hoc back-and-forth, give it a script you can paste directly into your rules file.
For example:
```markdown
## Communication Rules
- **STOP** when stuck, confused, or discovering issues.
- **REPORT** findings with full context and options.
- **WAIT** for your input before proceeding.
- **DOCUMENT** everything for transparency.
```

You can extend that with more explicit analysis steps:

```markdown
- **ANALYZE** – restate the problem and evidence.
- **EXPLAIN** – outline what you plan to change and why.
- **GET APPROVAL** – wait for an explicit "go ahead" before larger changes.
- **ASK, don’t assume** – when something is unclear, ask targeted questions instead of guessing.
```
Alternative concise rule:
“For any non-trivial change, AI must restate the problem, propose an approach, and wait for confirmation before editing files.”
This turns AI from a black box into a collaborator who talks through the plan instead of silently changing things.
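As a purely illustrative sketch (the function name and options are invented), a report that follows these rules might look like:

```markdown
**STOP** – found an issue while implementing step 2.
**REPORT** – `parsePositions` assumes USD amounts, but the sample file contains EUR rows.
**Options** – (a) skip non-USD rows, (b) add a currency conversion step, (c) fail with a clear error.
**WAIT** – which option should I implement?
```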
Small course corrections beat large surprises. A few simple checkpoints keep you in control of what AI is doing.
You can express your implementation flow as a prompt like this:
```markdown
## Implementation Flow
- **Before** – confirm understanding, verify architecture fit, get approval.
- **During** – one step at a time, report issues immediately, document changes.
- **After** – summarize what was done, explain integration, report next TODOs.
```
Alternative concise rule:
“At the end of each step, AI summarizes the files changed, the behavior modified, and any follow-up TODOs before we move on.”
These checkpoints are cheap, and they prevent the “I have no idea what just changed” feeling that kills trust.
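For example, an end-of-step summary might look like this (file and function names are hypothetical):

```markdown
**Step complete** – added retry logic to the positions API client.
- **Files changed** – `src/api/positionsClient.ts`, `src/api/positionsClient.test.ts`
- **Behavior** – `fetchPositions` now retries twice with backoff on 5xx responses.
- **TODOs** – make the retry count configurable; document the new behavior.
```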
Finally, you can set expectations for code quality so AI doesn’t “cut corners” in ways your team wouldn’t.
For example, your high-level prompt might look like:
```markdown
## Code Quality Standards
- **Maintain consistency** – use established project patterns and conventions.
- **Quality first** – clean, readable code that follows team standards.
- **Link to coding standards** – follow the rules described in our coding standards document.
```
An example of how to link to another document in a prompt:

- **Coding Standards** – see the patterns in `docs/CODING-STANDARDS.md`
You can extend that with more detailed DO/DON’T guidance in the same markdown rules file or in a separate coding standards doc that you link to:
```markdown
- **DO** follow patterns in the repo and respect existing architecture and module boundaries.
- **DO** add or update tests when changing behavior.
- **DO** update relevant docs when introducing new patterns.
- **DON’T** modify unrelated code "for cleanliness" without explicit agreement.
- **DON’T** introduce new architectural patterns without discussion.
- **DON’T** skip explanations for non-trivial changes.
```
Alternative concise rule:
“Any change to business logic must include at least one test update (unit or integration) and a brief explanation of the behavior change.”
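As a hypothetical TypeScript/Jest illustration of that rule (the function and values are invented), a behavior change and its paired test update might look like:

```typescript
import { describe, expect, it } from '@jest/globals'

// Hypothetical behavior change: net value now subtracts liabilities.
export function netAssetValue(
  holdings: { marketValue: number }[],
  liabilities: number
): number {
  const gross = holdings.reduce((sum, h) => sum + h.marketValue, 0)
  return gross - liabilities
}

// The paired test update that documents the behavior change.
describe('netAssetValue', () => {
  it('subtracts liabilities from gross market value', () => {
    expect(netAssetValue([{ marketValue: 100 }, { marketValue: 50 }], 30)).toBe(120)
  })
})
```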
Writing these rules in markdown gives you a concrete contract for how you’ll collaborate. You can paste them into a collaboration rules file or integrate them into whatever AI tooling your team uses.
If collaboration rules describe how you and AI work together, coding standards describe how the code itself should look and behave.
AI is extremely sensitive to examples. If your codebase is inconsistent, your prompts are vague, and your standards are scattered across Confluence, you’ll get equally inconsistent results from AI. A single, markdown-based coding standards document gives AI a concrete playbook to follow.
Before diving into specifics, it helps to name the key concepts your coding standards should cover:

- core engineering principles (DRY, SOLID, KISS, YAGNI),
- how to resolve conflicts between those principles,
- code formatting and tooling,
- structural patterns such as component structure, and
- testing and quality gates.
The following sections show how to turn each of these areas into markdown prompts that AI can follow.
Start by making your core principles explicit. This gives AI a shared vocabulary with the team.
```markdown
## Core Principles
- **DRY** – extract shared utilities, components, and constants.
- **SOLID** – single responsibility, focused interfaces, dependency injection.
- **KISS** – simple, readable code with clear names.
- **YAGNI** – only implement what's needed now.
```
This mirrors what you’d expect from human engineers and gives AI a list of principles to reference when making tradeoffs.
When coding principles conflict, AI needs to know which one wins. Otherwise it can spend a lot of effort trying to satisfy everything perfectly.
You can encode your priorities like this:
```markdown
## Principle Conflict Resolution
**Priority Order**: YAGNI > KISS > SOLID > DRY
- **Early Development**: YAGNI > DRY (avoid premature abstraction).
- **Complex Features**: KISS > SOLID (favor simple, understandable designs).
- **Mature Code**: SOLID > DRY (good design over aggressive deduplication).

**Apply DRY only when**: 3+ instances AND stable requirements.
```
An optional, more concise principle priority rule:
“When DRY, SOLID, KISS, and YAGNI conflict, prioritize YAGNI > KISS > SOLID > DRY. Apply DRY only after 3+ stable repetitions, and favor simple, well-factored code over perfect abstraction.”
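As a hypothetical TypeScript sketch of that threshold (all names invented), the rule plays out like this:

```typescript
// Two similar mappers: under YAGNI > DRY they stay separate, because only
// two instances exist and the requirements may still diverge.
function toBondRow(bond: { cusip: string; par: number }) {
  return { id: bond.cusip, amount: bond.par }
}

function toEquityRow(equity: { ticker: string; shares: number }) {
  return { id: equity.ticker, amount: equity.shares }
}

// Once a third stable instance appears, extracting a shared helper satisfies DRY:
function toRow<T>(item: T, id: (item: T) => string, amount: (item: T) => number) {
  return { id: id(item), amount: amount(item) }
}
```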
Formatting rules in documentation go stale quickly. Instead of duplicating them, point AI to the actual config files that your tooling already uses.
```markdown
## Code Formatting
- **Prettier** – see `.prettierrc` for current rules (`npm run prettier:fix`).
- **ESLint** – auto-fix on save, organize imports (see `.vscode/settings.json`).
- **TypeScript** – strict mode, path aliases (see `tsconfig.json` paths).
```
This keeps the markdown light while still giving AI the right pointers into your real sources of truth.
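For instance, the TypeScript pointer above might resolve to a `tsconfig.json` along these lines (an illustrative excerpt, not your actual config):

```jsonc
// tsconfig.json (illustrative excerpt)
{
  "compilerOptions": {
    "strict": true, // strict mode, as the standards require
    "baseUrl": ".",
    "paths": {
      "@components/*": ["src/components/*"] // path aliases AI should use in imports
    }
  }
}
```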
For UI work especially, structure matters. Show AI exactly how components should look and ask it to copy that pattern instead of inventing its own.
## Component Structure
```typescript
import React, { useCallback, useState } from 'react'

// useQuery, Spinner, and ErrorMessage stand in for your project's own
// data-fetching hook and UI components.

interface ComponentProps {
  // Props interface first
  prop1: string
  prop2: number
}

export const Component: React.FC<ComponentProps> = ({ prop1, prop2 }) => {
  // Hooks at the top
  const [state, setState] = useState<string>()
  const { data, loading, error } = useQuery()

  // Event handlers with useCallback
  const handleClick = useCallback(() => {
    // Implementation
  }, [/* dependencies */])

  // Early returns for loading/error states
  if (loading) return <Spinner />
  if (error) return <ErrorMessage />

  // Main render
  return <div>{/* JSX */}</div>
}
```
Over time, this makes AI-generated components blend into the surrounding codebase instead of feeling like they came from a different project.
Finally, set clear expectations for tests, error handling, and documentation. These are your quality gates—what must be true before code is considered ready.
```markdown
## Testing and Quality Gates
- **Testing patterns** – link to your testing patterns document.
- **Coverage** – use Jest (`npm test`), aim for meaningful tests over raw coverage metrics.
- **Separate concerns** – keep data fetching separate from presentation.
- **Focused interfaces** – prefer small, specific prop interfaces.
- **Clear naming** – use descriptive variables and function names.
- **Error handling** – provide graceful fallbacks and user-friendly messages.
- **Appropriate logging** – have log levels and actionable messages/data for triage.
- **Reuse existing patterns** – follow established project conventions.
```
An example of how to link to another document in a prompt:
- **Testing patterns** – see `docs/TEST-PATTERNS.md` for mocking, MSW integration, and component testing.
Together, these prompts give AI a concrete sense of your coding style, structure, and quality expectations. Like your collaboration rules, you can evolve this document as your codebase and practices mature.
Tool Callout: Where These Prompts Live
So far, we’ve treated collaboration rules and coding standards as markdown text. Different tools have different ways to load that text and turn it into behavior, but the pattern is the same: write the rules once and let the tool read them automatically. Claude Code, for example, automatically reads a CLAUDE.md file at the repository root; Cursor supports project rules files; other assistants let you paste the markdown into their custom instructions.
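One possible repository layout, using the file names from this post:

```text
repo/
├── CLAUDE.md                 # tool entry point that links to the docs below
└── docs/
    ├── WORKING-TOGETHER.md   # collaboration rules
    ├── CODING-STANDARDS.md   # coding standards
    └── TEST-PATTERNS.md      # testing patterns
```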
In each case, the goal is the same: move expectations out of one-off conversations and into versioned markdown that lives with your code.
In this post, we focused on why naive AI usage fails and how to reset expectations by treating AI as a junior developer operating inside clear working agreements and coding standards.
That’s the foundation. But once you put AI into your daily workflow, a new set of questions appears, and those are the problems the second part of this series will tackle.
Eric Christianson is a Staff Software Engineer at Clearwater Analytics (CWAN) with over 30 years of experience developing innovative technology solutions for leading companies. He focuses on creating user-friendly experiences and fostering collaboration across teams to solve complex challenges. Eric is currently leading efforts to build customer-facing applications developed with AI coding agents and providing AI-assisted workflows.