Specs Are Now Code
Patterns and Practices for AI-Driven Development
Here's what's been making the rounds in my network: a growing consensus that specifications are becoming the new code when building with AI. These conversations keep coming from seasoned technologists who want AI's speed without abandoning the engineering practices they've earned the hard way. The debate splits between full ATDD/BDD approaches and lighter formats that let anyone on the team contribute.
Today I'm digging into learnings from the AI Engineer World's Fair (which I regret missing in person). I'll share insights about parallelizing AI coding agents, automated guard rails, and the AI Development patterns I've been testing and refining.
Key Takeaways from the AI Engineer World's Fair
Patrick Debois surfaced several insights from the conference:
AI coding agents are table stakes now. Every major AI coding tool has an "agent mode," and the conference dedicated an entire track to software-engineering agents, complete with live demos.
Your six-month-old workflows are obsolete. Tool vendors have completely revamped their interfaces. If you're still using yesterday's approaches, you're chasing your tail.
Specifications are now code. With markdown "rules.md" files now standard in AI editors, more formal specs can replace throwaway prompts. These persist across environments and become executable through tests. As one keynote speaker noted: "In the near future, the person who communicates best becomes the programmer." Check out OpenAI's Model Spec on GitHub for a concrete example.
Agents run everywhere—from IDE to cloud. Long-running agents execute in secure, sandboxed environments with strict permission controls. These aren't your typical serverless functions—they're purpose-built for AI workloads.
Parallel execution unlocks parallel exploration. Cloud containers spin up concurrent agent instances, each on its own git branch, exploring multiple implementations simultaneously. The best results merge back into the IDE.
AI is shifting quality left—way left. AI-driven QA tools integrate directly into pull requests and developer environments. We're catching issues at the moment of creation, not days later.
Reality check on productivity claims. Those 5× or 10× gains? They're real, but only when task complexity, codebase maturity, and language popularity align. Set realistic expectations based on your specific context.
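The "specs become executable through tests" point above can be sketched in a few lines. This is a minimal illustration, not any specific tool's workflow: the spec sentence, the `acronym_from` helper, and the test are all hypothetical, showing how a persistent specification statement can be pinned down by an executable check rather than living in a throwaway prompt.

```python
# Hypothetical sketch: a spec statement made executable as a test.
# Neither the spec text nor acronym_from() comes from a real tool;
# they illustrate the "specs as code" idea only.

SPEC = "An acronym is the upper-cased first letter of each word in a phrase."

def acronym_from(phrase: str) -> str:
    """Implementation that the spec's test pins down."""
    return "".join(word[0].upper() for word in phrase.split())

def test_spec_acronym():
    # The assertion restates the spec; when it passes, the spec is "executable."
    assert acronym_from("application programming interface") == "API"

test_spec_acronym()
```

Because the spec lives next to the test, it persists across environments the same way a rules.md file does, and a failing test flags drift between what was specified and what was built.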
For those interested in Specification-Driven Development for AI, check out David Vydra's new LinkedIn group: SX: Specification Experience.
Implementing Parallel AI Coding Agents
Patrick Debois has it right: we've evolved from single-threaded AI prompts to orchestrated agent fleets. These agents decompose specifications into independent tasks, executing in isolated environments—git worktrees, containers, cloud sandboxes. They return results for human review, accelerating delivery while enabling exploratory development.
But here's the rub: this introduces complexity around merging outputs, managing risk boundaries, and consolidating knowledge. Emerging orchestrators help, but you need solid practices around task decomposition and result consolidation. Debois suggests we're entering the "Continuous Imagination" era—specification-driven, parallel AI workflows that blur the lines between ideation and implementation.
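The fan-out/fan-in shape of this workflow can be sketched without any real agent infrastructure. In the sketch below, `run_agent` and `score` are hypothetical stand-ins: in practice the agent call would execute in a sandbox or git worktree, and the scoring step would be tests, linters, or human review rather than a toy function.

```python
# Minimal sketch of parallel agent exploration, assuming each "agent"
# is a callable that turns a task spec into a candidate result.
# run_agent() and score() are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str, variant: int) -> dict:
    # Placeholder for a sandboxed agent run (e.g., on its own branch).
    return {"task": task, "variant": variant, "result": f"{task}-v{variant}"}

def score(candidate: dict) -> int:
    # Placeholder review step; real pipelines rank via tests and review.
    return candidate["variant"]

def explore(task: str, n_variants: int = 3) -> dict:
    """Fan out N concurrent agent runs for one task, keep the best candidate."""
    with ThreadPoolExecutor(max_workers=n_variants) as pool:
        candidates = list(pool.map(lambda v: run_agent(task, v), range(n_variants)))
    return max(candidates, key=score)
```

The consolidation problem Debois raises lives in that final `max` call: deciding which of several plausible implementations merges back is where task decomposition and review practices earn their keep.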
Automated Guardrails: Making "Vibe Coding" Safe
Andy Rea's SecurityBoulevard article demonstrates that "vibe coding" without controls leads to technical debt and security nightmares. His Acronym Creator demo shows the fix: embed AI-friendly guardrails throughout your pipeline. Pre-commit hooks for secrets detection, formatting, and linting. CI pipelines with SonarCloud, Semgrep, and history-wide secret scanning. An 80% coverage gate as your quality backstop.
This isn't about slowing down—it's about going fast safely. Automate the checks, make them frictionless, and let developers focus on creating value. When an AI assistant hits a failed check, it uses that feedback to iterate and improve. It's teaching these systems to play by the rules while maintaining velocity.
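To make the guardrail idea concrete, here is a hedged sketch of the kind of check a pre-commit secrets hook performs. The regex patterns are simplified heuristics of my own, not Rea's implementation; production pipelines should use dedicated scanners such as the ones named above rather than hand-rolled patterns.

```python
# Minimal sketch of a pre-commit secrets check using simple regex
# heuristics. The patterns are illustrative assumptions, not a real
# scanner's rule set.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access-key-id shape
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),  # hard-coded API key
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that matches a known secret pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

A hook built on this would block the commit when `find_secrets` returns anything, and that failure message is exactly the feedback an AI assistant can use to correct itself before the code ever reaches CI.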
Patterns for AI-Assisted Development
I've been documenting the AI development patterns emerging across teams. The collection spans Foundation, Development, and Operations patterns—over thirty patterns including Rules as Code, Specification-Driven Development, and Policy-as-Code Generation. Each pattern includes prerequisites, implementation guidance, and integration points with existing practices. You can read a post describing the patterns here.
These patterns are based on my experiences and patterns I've used and observed, and I fully expect them to evolve as the community discovers better approaches. They're organized by maturity level with clear dependencies. The patterns address coding standards, workflows, testing, security, deployment, and monitoring—forming a structured progression for scaling AI-driven development.
The Bottom Line
"Vibe Coding" now has its own Wikipedia page. That tells you everything about where we are in this journey. But here's what matters: the most effective teams are not abandoning engineering discipline for AI speed. They're evolving their practices to harness AI while maintaining, and often improving, quality, security, and maintainability.
The key insight? Specifications are becoming executable code, not just documentation. When you combine this with proper guardrails and parallel execution, you get a development approach that's both fast and reliable. No more dog-and-pony shows about 10× productivity—just solid engineering practices adapted for the AI era.


