Credentials, Secrets, and Permissions
I don't care how "secure" the prompt box claims to be—credentials and secrets never touch it. Same goes for writing IAM roles, RBAC configs, or anything that controls who can do what in my systems. AI can produce examples, sure, but it doesn't understand my org's security model, least privilege principles, or compliance requirements.
And even if it did, I wouldn't trust it to get every nuance right. A single overly permissive wildcard in an IAM policy can undo years of security hardening.
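That wildcard risk is one of the few places where a dumb, deterministic check beats trusting any generator. As a minimal sketch (not a real policy analyzer — tools like AWS IAM Access Analyzer also reason about conditions, `NotAction`, and ARN patterns), here's the kind of lint you can run over a generated policy before it ever gets applied:

```python
import json

def find_wildcards(policy: dict) -> list[str]:
    """Flag Allow statements that grant wildcard actions or resources.

    A toy lint, not a substitute for real policy review.
    """
    warnings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        for a in actions:
            if a == "*" or a.endswith(":*"):
                warnings.append(f"Statement {i}: wildcard action {a!r}")
        if "*" in resources:
            warnings.append(f"Statement {i}: wildcard resource '*'")
    return warnings

# The kind of "looks fine" policy a generator happily emits:
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
  ]
}
""")
for w in find_wildcards(policy):
    print(w)
```

Two warnings come back for that one statement — which is exactly the point: the policy parses, deploys, and works, and it's still wrong.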
Networking Topology
This is another no-go. AI doesn't know the quirks of my VPN setup, peering connections, or the dozens of historical reasons why that one subnet exists. Networking is one of those "looks fine on paper" problems—until latency spikes, packets vanish, or you discover your prod traffic is taking the scenic route through a dev network.
AI can suggest generic topologies, but the real work is about trade-offs, constraints, and history it can't possibly know.
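One concrete example of "looks fine on paper": overlapping CIDR blocks. A quick mechanical check catches the collision, but only a human knows *why* that subnet is there and whether it can move. A sketch using the standard library (the addresses are hypothetical, and this is nowhere near a topology validator — no route tables, peering, NAT, or firewall rules):

```python
import ipaddress

def overlapping_subnets(cidrs: list[str]) -> list[tuple[str, str]]:
    """Return every pair of CIDR blocks that overlap."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    pairs = []
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            if nets[i].overlaps(nets[j]):
                pairs.append((cidrs[i], cidrs[j]))
    return pairs

# Hypothetical plan: prod VPC, dev VPC, and a proposed new subnet.
print(overlapping_subnets(["10.0.0.0/16", "10.1.0.0/16", "10.0.4.0/24"]))
# → [('10.0.0.0/16', '10.0.4.0/24')]
```

The proposed /24 lands inside the prod /16 — exactly the quiet collision that sends traffic on the scenic route. The check is trivial; deciding which network yields is the part AI can't do for you.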
Resource Provisioning Strategy
Yes, GPT can write Terraform, Pulumi, or CloudFormation. But deciding what to provision, how to size it, and where to place it? That's strategy—part technical, part business, part tribal knowledge.
Provisioning isn't just about spinning up resources; it's about cost models, scaling patterns, failover design, and compliance zones. AI can't weigh those factors without human context, and "good enough" guesses can turn into expensive mistakes.
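To make the sizing point concrete, here's a back-of-the-envelope sketch. Every number in it is a placeholder — the instance sizes, prices, and capacities are invented for illustration, not real cloud pricing:

```python
import math

# Hypothetical hourly prices and per-instance capacities -- placeholders only.
PRICE_PER_HOUR = {"small": 0.05, "medium": 0.10, "large": 0.38}
CAPACITY_RPS = {"small": 180, "medium": 450, "large": 2000}

def monthly_cost(size: str, peak_rps: int, hours: int = 730) -> float:
    """Cost of the smallest fleet of one instance size covering peak_rps."""
    count = math.ceil(peak_rps / CAPACITY_RPS[size])
    return count * PRICE_PER_HOUR[size] * hours

for size in PRICE_PER_HOUR:
    print(size, round(monthly_cost(size, peak_rps=1500), 2))
```

Even in this toy model, the cheapest unit price doesn't win: rounding up to whole instances, headroom for failover, and how spiky your traffic is all change the answer. Those inputs live in your head and your billing history, not in the model's training data.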
The Common Thread
If the decision has lasting impact on security, stability, or cost, I keep AI in an advisory role. It can draft, suggest, and validate—but I'm making the final call. In other words: AI can write the scaffolding, but the blueprint is still mine.
Next in the Series
We'll look at how to build guardrails around AI use — prompt templates, fenced patterns, and safe workflows that let you capture the benefits without the blowups.