Those wrong guesses have a habit of slipping into production if you're moving fast.
Guardrails aren't about slowing AI down—they're about keeping you in control.
Step 1: Define the Sandbox
Before you start prompting, decide what kinds of tasks AI is allowed to help with—and what's off-limits. For me:
Allowed:
- Boilerplate, scaffolding, syntax corrections
- Documentation, refactoring known-safe patterns
Banned:
- Credentials, production configs
- Unvetted architectural designs
- Sensitive environment details
Conditional:
- Infrastructure definitions if they're based on a pre-agreed pattern and fully reviewed by a human
Having these rules ahead of time keeps you from overstepping in the heat of "this is going really fast."
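One way to keep the sandbox from being purely aspirational is to encode it, so a task can be checked against the rules before anyone starts prompting. A minimal sketch in Python; the category names and the "default to caution" behavior are my illustrative assumptions, not a standard:

```python
# Sketch: encode the sandbox rules so a task can be checked before prompting.
# Category names are illustrative, not exhaustive.

ALLOWED = {"boilerplate", "scaffolding", "syntax", "documentation", "refactoring"}
BANNED = {"credentials", "production-config", "architecture-design", "environment-details"}
CONDITIONAL = {"infrastructure"}  # allowed only with a pre-agreed pattern + human review

def classify_task(task_type: str) -> str:
    """Return 'allowed', 'banned', or 'needs-review' for a task category."""
    if task_type in BANNED:
        return "banned"
    if task_type in CONDITIONAL:
        return "needs-review"
    if task_type in ALLOWED:
        return "allowed"
    return "needs-review"  # anything unclassified defaults to caution

print(classify_task("scaffolding"))   # allowed
print(classify_task("credentials"))   # banned
```

The useful design choice is the fallthrough: a task nobody has classified yet gets flagged for review instead of silently allowed.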
Step 2: Control the Inputs
Everything you feed AI is an input you can't un-share. Even if the vendor swears it's safe, treat it like an untrusted third-party service.
- Strip out proprietary details.
- Swap real names for placeholders (company-prod-db → example-db).
- Share only the minimum context needed to get the answer.
If you wouldn't paste it in a public GitHub repo, don't paste it into AI.
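The placeholder-swapping step is easy to automate. A minimal sanitizer sketch in Python; the regex patterns and placeholder names are assumptions you'd tune to your own naming conventions:

```python
import re

# Sketch: strip identifying details before pasting context into an AI tool.
# Patterns and placeholders are illustrative -- tune them to your org.
REDACTIONS = [
    (re.compile(r"\b[\w-]*-prod-[\w-]*\b"), "example-resource"),   # prod resource names
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "<AWS_ACCESS_KEY_ID>"),  # AWS access key IDs
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP_ADDRESS>"),  # IPv4 addresses
]

def sanitize(text: str) -> str:
    """Replace sensitive identifiers with safe placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(sanitize("Connect to company-prod-db at 10.0.4.17"))
# Connect to example-resource at <IP_ADDRESS>
```

A real setup would lean on a dedicated secret scanner, but even a ten-line filter like this catches the obvious leaks before they happen.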
Step 3: Keep AI in the "Scaffolding Zone"
I never ask AI to design the entire system. Instead:
"I don't ask it for complete solutions, but I do ask it for simple scaffolding of agreed-upon solutions."
That might mean:
- A Terraform skeleton for a known module type
- A Helm chart stub following an existing standard
- A CI/CD workflow outline with placeholders for secrets and environment-specific steps
The final wiring—the sensitive parts, the business-specific logic—comes from me or my team.
Step 4: Bake in Review Points
Guardrails aren't just about what you ask AI to do; they're about what happens after it does it.
Mandatory peer review:
Every AI-generated change gets the same code review as human-written code.
Static checks:
Linting, policy-as-code, security scanning before merge.
Golden paths:
Pre-approved templates AI must follow, reducing the chance of rogue patterns.
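The static-check gate can start very small. Here's a sketch of a pre-merge scan over a diff's added lines; the two policy patterns are illustrative, and in practice you'd use dedicated tools (linters, policy-as-code engines, secret scanners) rather than hand-rolled regexes:

```python
import re

# Sketch of a pre-merge check for AI-generated changes: scan the lines a
# diff adds for patterns that should never ship. Patterns are illustrative.
POLICY_VIOLATIONS = {
    "hardcoded secret": re.compile(r"(?i)(password|secret|token)\s*=\s*['\"][^'\"]+['\"]"),
    "unpinned action": re.compile(r"uses:\s*\S+@(main|master|latest)\b"),
}

def check_diff(diff_text: str) -> list[str]:
    """Return policy violations found in the added (+) lines of a diff."""
    findings = []
    for line in diff_text.splitlines():
        if not line.startswith("+"):
            continue  # only inspect lines the change adds
        for name, pattern in POLICY_VIOLATIONS.items():
            if pattern.search(line):
                findings.append(f"{name}: {line.lstrip('+').strip()}")
    return findings

diff = '+password = "hunter2"\n+uses: actions/checkout@main\n-removed line'
for finding in check_diff(diff):
    print(finding)
```

Wired into CI, a check like this turns "every AI-generated change gets reviewed" from a promise into a gate.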
Step 5: Build and Share Prompt Recipes
Reusable prompts keep outputs consistent and predictable.
Examples I use:
Terraform Module Skeleton:
"Generate a Terraform module for an S3 bucket with variables for name, tags, and versioning. Do not set IAM policies. Use AWS provider v5.x syntax."
Reusable Workflow:
"Create a GitHub Actions workflow for Node.js 20 with pnpm caching, tests, and artifact upload. No deploy step. Pin all actions by SHA."
Helm Chart Starter:
"Generate a Helm chart for a stateless app with configurable replicas and service type. Leave all values empty for customization."
These are safe, bounded, and predictable—because they've been tested before.
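Recipes like these are easy to share as parameterized templates, so every teammate sends the same bounded request instead of improvising. A sketch using Python's standard `string.Template`; the recipe names and parameters are my own illustration:

```python
from string import Template

# Sketch: store prompt recipes as shared, parameterized templates.
# Recipe names and parameters are illustrative.
RECIPES = {
    "terraform-module": Template(
        "Generate a Terraform module for $resource with variables for "
        "$variables. Do not set IAM policies. Use AWS provider v5.x syntax."
    ),
    "helm-starter": Template(
        "Generate a Helm chart for a stateless app named $app with "
        "configurable replicas and service type. Leave all values empty."
    ),
}

def render(recipe: str, **params: str) -> str:
    """Fill a recipe's placeholders; raises KeyError on an unknown recipe."""
    return RECIPES[recipe].substitute(**params)

print(render("terraform-module",
             resource="an S3 bucket",
             variables="name, tags, and versioning"))
```

The constraints ("Do not set IAM policies", "Leave all values empty") live in the template, so nobody has to remember to add them.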
Step 6: Love It, Hate It, Use It Anyway
I'll be honest—AI is in everything I do now, from client work to my personal projects. And I love it. I love that it's fast, tireless, and willing to grind through the boring stuff without complaint. I love that I can throw half-baked ideas at it and get something workable in minutes.
But it also drives me insane. It's overconfident. It makes tiny wrong turns that spiral into big detours. It forgets context you just gave it. It's like working with a hyperactive junior engineer who's read every manual but never shipped production code.
That's why guardrails matter. They're not there to "protect" AI from making mistakes—they're there to protect me from accepting them without realizing it.
The Bottom Line
Guardrails aren't just "write better prompts." They're an agreement—between you, your team, and the AI—about what it can and can't touch, how it should deliver work, and how that work gets reviewed.
AI is a fantastic tool in the hands of engineers who set the rules. Without those rules, it's just another fast way to get into trouble.
Next in the Series
The "InfraGPT" wish list—what a DevOps-focused GPT would need to actually understand infrastructure at an expert level.