AI-assisted workflows for translating business requirements into infrastructure code
- What is Specification-Driven Development?
- About this project
- Get Started
- Cloud Provider Examples
- Tips for Best Results
- Supported AI Agents
- IaC Specify CLI Reference
- Infrastructure Architecture Section
- Environment Variables
- Core Philosophy
- Development Phases
- Experimental Goals
- Prerequisites
- Learn More
- Detailed Process
- Contributing
- Future Ideas
- Support
- Acknowledgements
- License
Specification-Driven Development (SDD) is an emerging methodology where detailed specifications are created before code. The specification becomes your single source of truth, guiding AI agents to generate implementation plans and production-ready code. This approach clarifies intent upfront, reduces misalignment, and enables iterative refinement through living documents that evolve with your project.
Learn more: Red Hat on SDD • Martin Fowler on SDD • ThoughtWorks Radar
Potential benefits for infrastructure:
- Supports starting with high-level requirements that AI agents can use to generate detailed specifications
- Encourages separating requirements (what you need) from implementation details (specific cloud services)
- Provides templates and commands to help AI agents translate specs into IaC configurations
- Enables specification updates that can inform changes to plans and code
- Facilitates team alignment through explicit, reviewable specifications
IaC Spec Kit is a specialized implementation of the GitHub Spec Kit toolkit, adapted for Infrastructure as Code workflows with Terraform and cloud providers. As SDD is an emerging trend, this project explores how specification-driven approaches can improve infrastructure development—an experimental field where we're discovering what's possible with AI-assisted infrastructure provisioning.
See the complete workflow in action (2 minutes):
This visual guide shows the end-to-end process of using IaC Spec Kit to generate infrastructure specifications, plans, and Terraform code for a WordPress deployment on IBM Cloud.
- Infrastructure command namespace: All commands use the `/iac.` prefix (`/iac.principles`, `/iac.specify`, `/iac.plan`, `/iac.tasks`, `/iac.implement`)
- IaC-centric templates: Templates designed for cloud resources, networking, security, and compliance. The toolkit is slightly geared towards Terraform, but you can use any IaC tool.
- Multi-cloud support: Works with any cloud provider - AWS, Azure, GCP, IBM Cloud, Oracle Cloud, and more
- Infrastructure principles: Governance frameworks for cloud infrastructure, security standards, and cost management
IaC Spec Kit is designed to work with any cloud provider. The toolkit encourages AI agents to separate generic infrastructure requirements (what you need) from cloud-specific implementation (how to build it):
- Principles and Specifications use generic infrastructure terms - IaC Spec Kit encourages describing requirements using terms like "managed database", "object storage", "encryption key management" rather than cloud-specific service names
- Plans and Implementation are cloud-specific - IaC Spec Kit helps AI agents translate generic requirements into specific services like AWS RDS, Azure Database, Cloud SQL, or IBM Databases for MySQL (see the sketch below)
This separation is intended to:
- Help you focus on what you need rather than how to build it, with templates that guide AI agents through cloud-specific implementation details
- Make specifications more accessible to team members less familiar with a specific cloud provider
- Support switching cloud providers by re-running `/iac.plan` with a different cloud
- Enable deploying the same specification to multiple clouds
- Facilitate comparing cloud provider options before committing
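To make the separation concrete: a specification might call for a "managed MySQL database with automated backups", and only the plan and implementation pin that requirement to a provider. The sketch below is purely illustrative (hypothetical resource names and sizing, not generated output); re-running `/iac.plan` against a different cloud would swap the resource for an equivalent such as `ibm_database` or `google_sql_database_instance`.

```hcl
# Spec (generic): "managed MySQL database with automated backups"
# Plan/implementation (AWS translation): a hedged sketch, not a complete deployable configuration.
resource "aws_db_instance" "app_db" {
  identifier              = "app-mysql"     # hypothetical name
  engine                  = "mysql"
  instance_class          = "db.t3.medium"  # sizing chosen during /iac.plan
  allocated_storage       = 20
  backup_retention_period = 7               # satisfies "automated backups"
}
```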
IaC Spec Kit is an experimental project exploring how Specification-Driven Development can improve infrastructure as code workflows. Contributions help refine templates, expand compatibility, and discover what works best with AI-assisted infrastructure provisioning. See CONTRIBUTING.md for guidelines on how to participate, and IDEAS.md for potential areas to explore.
Prerequisites: This CLI tool requires uv - a fast Python package installer. If you don't have it yet, install it with:
```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

Choose your preferred installation method:
Install once and use everywhere:
```bash
uv tool install iac-specify-cli --from git+https://github.com/ibm/iac-spec-kit.git
```

Then use the tool directly:

```bash
iac-specify init <PROJECT_NAME>
iac-specify check
```

To upgrade iac-specify, run:

```bash
uv tool install iac-specify-cli --force --from git+https://github.com/ibm/iac-spec-kit.git
```

Run directly without installing:

```bash
uvx --from git+https://github.com/ibm/iac-spec-kit.git iac-specify init <PROJECT_NAME>
```

Benefits of persistent installation:
- Tool stays installed and available in PATH
- No need to create shell aliases
- Better tool management with `uv tool list`, `uv tool upgrade`, and `uv tool uninstall`
- Cleaner shell configuration
Launch your AI assistant in the project directory. The /iac.* commands are available in the assistant.
Use the /iac.principles command to create your project's governing principles and development guidelines. IaC Spec Kit uses these principles to guide AI agents through all subsequent development phases.
/iac.principles This is a development environment. Keep it simple, focus on basic security, and keep costs low. Use Terraform.
Use the /iac.specify command to describe what you want to build. Focus on the what and why. IaC Spec Kit guides AI agents to avoid tech stack details at this stage.
/iac.specify I need to deploy WordPress for my small business website. Should handle a few thousand visitors per day, needs to be secure with automated backups. Budget is around $500/month. Use the official WordPress Docker image.
Use the /iac.plan command to provide your tech stack and architecture choices. IaC Spec Kit helps AI agents translate your generic requirements into cloud-specific services.
/iac.plan Deploy in us-south. Use Code Engine for containers, Databases for MySQL, Cloud Object Storage for media.
Use /iac.tasks to create an actionable task list from your implementation plan.
/iac.tasks
Use /iac.implement to execute all tasks and build your feature according to the plan.
/iac.implement
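For the WordPress example above, the implementation step emits Terraform files for the services chosen in the plan. A rough sketch of the kind of resources you might see (resource names and arguments here are illustrative assumptions, not actual generated output):

```hcl
# Illustrative excerpt only; the real output depends on the plan and provider versions.
resource "ibm_code_engine_project" "wordpress" {
  name = "wordpress-project" # hypothetical name
}

resource "ibm_database" "wordpress_db" {
  name     = "wordpress-mysql"
  service  = "databases-for-mysql"
  plan     = "standard"
  location = "us-south"
}

resource "ibm_resource_instance" "media_storage" {
  name     = "wordpress-media"
  service  = "cloud-object-storage"
  plan     = "standard"
  location = "global"
}
```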
For the original Spec-Driven Development methodology, see the GitHub Spec Kit documentation.
See what IaC Spec Kit generates: wordpress-ibm-cloud - A complete WordPress deployment on IBM Cloud created using the workflow below, showing the generated specifications, plans, tasks, and Terraform code.
Get started with your cloud provider:
| Cloud Provider | Example Workflows |
|---|---|
| IBM Cloud | Simple VPC • Static Website • WordPress • Landing Zone • Three-Tier Web App • Data Pipeline • Microservices |
| AWS | Simple VPC • Static Website • WordPress • Landing Zone • Three-Tier Web App • Data Pipeline • Microservices |
| Azure | Simple VPC • Static Website • WordPress • Landing Zone • Three-Tier Web App • Data Pipeline • Microservices |
| GCP | Simple VPC • Static Website • WordPress • Landing Zone • Three-Tier Web App • Data Pipeline • Microservices |
Explore examples: See examples/ for complete workflows showing how the same requirements deploy to different cloud providers.
Learn more: Read Writing Tech-Agnostic Infrastructure Specifications to understand how to write specifications using generic infrastructure terms instead of cloud-specific service names.
The latest frontier models from leading AI providers typically offer improved reasoning capabilities and better understanding of infrastructure patterns. When configuring your AI assistant, consider selecting the most advanced model available for your use case.
For optimal results, configure your AI tool with appropriate Model Context Protocol (MCP) servers before starting your project. MCP servers provide your AI assistant with direct access to cloud provider APIs, Terraform registries, and other infrastructure tools, which can help improve accuracy and reduce hallucinations by providing real-time, verified information.
Recommended MCP servers by cloud provider:
| Cloud Provider | MCP Server | Description |
|---|---|---|
| IBM Cloud | TIM (Terraform IBM Modules) | Focused on IBM Cloud module discovery and intelligent IaC generation |
| AWS | AWS Terraform MCP Server | Prioritizes AWSCC provider with security scanning and best-practice automation |
| Azure | Azure Terraform MCP Server | Best-practices guidance with built-in validation for Azure resources |
| Azure | HashiCorp Terraform MCP Server | Multi-cloud support with Terraform Registry integration |
| Google Cloud | GCP Tools MCP Server | Automates Google Cloud Platform infrastructure setup via gcloud and Terraform |
| Google Cloud | HashiCorp Terraform MCP Server | Multi-cloud support with Terraform Registry integration |
| Multi-Cloud | HashiCorp Terraform MCP Server | Open-source, supports major providers via registry introspection and policy checks |
For AI tools that do not currently support web search (no built-in research agent), we strongly recommend adding a search MCP server such as Brave Search MCP or Perplexity MCP. This can be particularly helpful during the /iac.plan phase when the AI needs to research current cloud services, Terraform provider versions, and best practices.
Without search capabilities, AI tools tend to rely on inherent knowledge from their training data, which can lead to hallucinations or outdated information. Search-enabled agents can verify current service offerings, pricing, and technical specifications in real-time.
You can start a new chat with an empty context between slash commands to save tokens and reduce costs. The IaC Spec Kit approach of storing outputs as markdown files (principles.md, spec.md, plan.md, tasks.md) acts as persistent memory that subsequent chats can pull from. The AI assistant reads these files as needed, so you don't need to maintain long conversation histories.
This is particularly useful for:
- Long-running projects where context windows become expensive
- Switching between different aspects of your infrastructure
- Collaborating with team members who can start fresh with the same context files
If your AI agent supports custom modes or configurations, create one that enables all necessary tools for infrastructure work. At minimum, ensure your mode allows:
- File read/write operations - Essential for creating and updating specification files
- MCP server calls - Required for cloud provider and Terraform integrations
- Command execution - Needed for running Terraform validation commands
For example, in agents like Claude Code or IBM Bob, you can create a custom "Infrastructure" mode that pre-enables these capabilities, streamlining your workflow and reducing the need to grant permissions repeatedly.
| Agent | Support | Notes |
|---|---|---|
| Claude Code | ✅ | |
| GitHub Copilot | ✅ | |
| Gemini CLI | ✅ | |
| Cursor | ✅ | |
| Qwen Code | ✅ | |
| opencode | ✅ | |
| Windsurf | ✅ | |
| Kilo Code | ✅ | |
| Auggie CLI | ✅ | |
| CodeBuddy CLI | ✅ | |
| Roo Code | ✅ | |
| Codex CLI | ✅ | |
| Amazon Q Developer CLI | ⚠️ | Does not support custom arguments for slash commands. |
| Amp | ✅ | |
| IBM Bob | ✅ | IDE-based agent with slash command support |
The iac-specify command supports the following options:
| Command | Description |
|---|---|
| `init` | Initialize a new IaC Specify project from the latest template |
| `check` | Check for installed tools (git, claude, gemini, code/code-insiders, cursor-agent, windsurf, qwen, opencode, codex, bob) |
| Argument/Option | Type | Description |
|---|---|---|
| `<project-name>` | Argument | Name for your new project directory (optional if using `--here`, or use `.` for current directory) |
| `--ai` | Option | AI assistant to use: `claude`, `gemini`, `copilot`, `cursor-agent`, `qwen`, `opencode`, `codex`, `windsurf`, `kilocode`, `auggie`, `roo`, `codebuddy`, `amp`, `q`, or `bob` |
| `--script` | Option | Script variant to use: `sh` (bash/zsh) or `ps` (PowerShell) |
| `--ignore-agent-tools` | Flag | Skip checks for AI agent tools like Claude Code |
| `--no-git` | Flag | Skip git repository initialization |
| `--here` | Flag | Initialize project in the current directory instead of creating a new one |
| `--force` | Flag | Force merge/overwrite when initializing in the current directory (skip confirmation) |
| `--skip-tls` | Flag | Skip SSL/TLS verification (not recommended) |
| `--debug` | Flag | Enable detailed debug output for troubleshooting |
| `--github-token` | Option | GitHub token for API requests (or set the `GH_TOKEN`/`GITHUB_TOKEN` environment variable) |
```bash
# Basic project initialization
iac-specify init my-infrastructure

# Initialize with specific AI assistant (example: IBM Bob)
iac-specify init my-infrastructure --ai bob

# Initialize with PowerShell scripts (Windows/cross-platform)
iac-specify init my-infrastructure --ai copilot --script ps

# Initialize in current directory
iac-specify init . --ai bob
# or use the --here flag
iac-specify init --here --ai bob

# Force merge into current (non-empty) directory without confirmation
iac-specify init . --force --ai bob
# or
iac-specify init --here --force --ai bob

# Skip git initialization
iac-specify init my-infrastructure --ai bob --no-git

# Enable debug output for troubleshooting
iac-specify init my-infrastructure --ai bob --debug

# Use GitHub token for API requests (helpful for corporate environments)
iac-specify init my-infrastructure --ai bob --github-token ghp_your_token_here

# Check system requirements
iac-specify check
```

After running `iac-specify init`, your AI coding agent will have access to these slash commands for structured development:
Essential commands for the Spec-Driven Development workflow:
| Command | Description |
|---|---|
| `/iac.principles` | Create or update project governing principles and development guidelines |
| `/iac.specify` | Define what you want to build (requirements and user stories) |
| `/iac.plan` | Create technical implementation plans with your chosen tech stack |
| `/iac.tasks` | Generate actionable task lists for implementation |
| `/iac.implement` | Execute all tasks to build the feature according to the plan |
Additional commands for enhanced quality and validation:
| Command | Description |
|---|---|
| `/iac.clarify` | Clarify underspecified areas (recommended before `/iac.plan`) |
| `/iac.analyze` | Cross-artifact consistency & coverage analysis (run after `/iac.tasks`, before `/iac.implement`) |
| `/iac.checklist` | Generate custom quality checklists that validate requirements completeness, clarity, and consistency (like "unit tests for English") |
When using /iac.plan for infrastructure projects, your plan.md will include an Infrastructure Architecture section with:
- Cloud Provider Selection: Which provider and why (AWS, Azure, GCP, etc.)
- Compute Resources: VMs, containers, serverless, load balancers
- Data Storage: Databases, object storage, caching layers
- Networking: VPCs, subnets, security groups, routing
- Security: IAM roles, encryption, secrets management
- Environment Configuration: Development, staging, production settings
- State Management: Terraform backend configuration, workspace strategy
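For the State Management item in particular, the plan usually ends up expressed as a Terraform backend block. A minimal sketch, assuming an S3-compatible backend (bucket, key, and region are placeholders; IBM Cloud Object Storage, Azure Storage, or GCS backends follow the same pattern):

```hcl
# Hypothetical remote-state configuration; all values are placeholders.
terraform {
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "wordpress/terraform.tfstate"
    region = "us-east-1"
  }
}

# A workspace strategy (also captured in plan.md) might map workspaces to environments:
#   terraform workspace new staging
#   terraform workspace select production
```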
```
# 1. Establish principles
/iac.principles This is a development environment. Keep it simple, focus on basic security, and keep costs low. Use Terraform.

# 2. Create infrastructure specification (using generic infrastructure terms)
/iac.specify I need to deploy WordPress for my small business website. Should handle a few thousand visitors per day, needs to be secure with automated backups. Budget is around $500/month.

# 3. Create technical plan (specify cloud provider and services)
/iac.plan Deploy in us-south. Use Code Engine for containers, Databases for MySQL, Cloud Object Storage for media.

# 4. Generate tasks (includes terraform validation checkpoints)
/iac.tasks

# 5. Implement (AI generates Terraform .tf files)
/iac.implement
```
Important: IaC Spec Kit is designed to generate infrastructure as code (Terraform, Pulumi, Ansible, Kubernetes manifests). Actual provisioning (`terraform apply`, `kubectl apply`) is a manual step you control; it is outside the scope of IaC Spec Kit.
| Variable | Description |
|---|---|
| `SPECIFY_FEATURE` | Override feature detection for non-Git repositories. Set to the feature directory name (e.g., `001-vpc-infrastructure`) to work on a specific feature when not using Git branches. Must be set in the context of the agent you're working with prior to using `/iac.plan` or follow-up commands. |
Specification-Driven Development emphasizes intent-driven development, rich specification creation, and multi-step refinement. For the complete philosophy, see the GitHub Spec Kit documentation.
IaC Spec Kit applies these principles to infrastructure provisioning with additional focus on:
- Cloud resource specifications: Infrastructure requirements using generic terms (avoid cloud-specific service names)
- Terraform module design: Reusable, composable infrastructure components (sketched below)
- Security and compliance: Built-in governance and policy validation
- Multi-cloud patterns: Portable infrastructure specifications across cloud providers
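As a sketch of the Terraform module design point above, generated code can group related resources into reusable modules and compose them from a root configuration. The module paths, variables, and outputs here are hypothetical, not part of any template:

```hcl
# Hypothetical root configuration composing reusable modules.
variable "environment" {
  type    = string
  default = "dev"
}

module "network" {
  source = "./modules/network" # reusable, composable component

  environment = var.environment
  cidr_block  = "10.0.0.0/16"
}

module "database" {
  source = "./modules/database"

  environment = var.environment
  subnet_ids  = module.network.private_subnet_ids # modules composed via outputs
}
```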
| Phase | Focus | Key Activities |
|---|---|---|
| 0-to-1 Development ("Greenfield") | Generate from scratch | |
| Creative Exploration | Parallel implementations | |
| Iterative Enhancement ("Brownfield") | Brownfield modernization | |
| Infrastructure-as-Code | Infrastructure provisioning | |
As SDD is an emerging trend, this implementation explores several areas in the context of Infrastructure as Code:
- Create infrastructure using diverse cloud providers
- Validate the hypothesis that Spec-Driven Development is a process not tied to specific cloud platforms, IaC tools, or frameworks
- Support multi-cloud and hybrid cloud scenarios
- Demonstrate mission-critical infrastructure development
- Incorporate organizational constraints (cloud providers, compliance requirements, engineering practices)
- Support enterprise security standards and compliance requirements
- Build infrastructure for different workload types and requirements
- Support various development approaches (from manual provisioning to fully automated IaC)
- Provide robust iterative infrastructure development workflows
- Extend processes to handle upgrades and modernization tasks
- Linux/macOS/Windows
- A supported AI coding agent
- uv for package management
- Python 3.11+
- Git
If you encounter issues with an agent, please open an issue so we can refine the integration.
- GitHub Spec Kit - Original Spec-Driven Development methodology and documentation
- Detailed Walkthrough - Step-by-step implementation guide for infrastructure projects
Click to expand the detailed step-by-step walkthrough
You can use the Specify CLI to bootstrap your project, which will bring in the required artifacts in your environment. Run:
```bash
iac-specify init <project_name>
```

Or initialize in the current directory:

```bash
iac-specify init .
# or use the --here flag
iac-specify init --here

# Skip confirmation when the directory already has files
iac-specify init . --force
# or
iac-specify init --here --force
```

You will be prompted to select the AI agent you are using. You can also proactively specify it directly in the terminal:
```bash
iac-specify init <project_name> --ai bob
iac-specify init <project_name> --ai claude
iac-specify init <project_name> --ai copilot

# Or in current directory:
iac-specify init . --ai bob
iac-specify init . --ai codex
# or use --here flag
iac-specify init --here --ai bob
iac-specify init --here --ai codex

# Force merge into a non-empty current directory
iac-specify init . --force --ai bob
# or
iac-specify init --here --force --ai bob
```

The CLI will check if you have Bob, Claude Code, Gemini CLI, Cursor CLI, Qwen CLI, opencode, Codex CLI, or Amazon Q Developer CLI installed. If you do not, or you prefer to get the templates without checking for the right tools, use `--ignore-agent-tools` with your command:

```bash
iac-specify init <project_name> --ai claude --ignore-agent-tools
```

Go to the project folder and run your AI agent. In our example, we're using bob.
You will know that things are configured correctly if you see the /iac.principles, /iac.specify, /iac.plan, /iac.tasks, and /iac.implement commands available.
The first step should be establishing your project's governing principles using the /iac.principles command. IaC Spec Kit uses these principles to guide AI agents toward consistent decision-making throughout all subsequent development phases:
/iac.principles This is an enterprise landing zone for a regulated industry. Security and compliance are critical. Multiple environments need strong isolation. Use Terraform.
This step creates or updates the .specify/memory/principles.md file with your project's foundational guidelines. IaC Spec Kit helps AI agents reference these principles during specification, planning, and implementation phases.
With your project principles established, you can now create the functional specifications. Use the /iac.specify command and then provide the concrete requirements for the infrastructure you want to develop.
[!IMPORTANT] Be as explicit as possible about what you are trying to build and why. For best results, do not focus on describing the details of cloud services at this point. IaC Spec Kit guides AI agents to use generic infrastructure terms instead.
An example prompt:
/iac.specify I need an enterprise landing zone for our organization. Requirements: separate account groups for production, staging, development, and shared services. Centralized networking with hub-and-spoke topology. All logs aggregated to security account. Policy-based guardrails to enforce compliance. Cost tracking by environment. We need to comply with SOC 2 and Financial Services Cloud requirements.
After this prompt is entered, you should see your AI agent kick off the planning and spec drafting process. IaC Spec Kit's commands and templates guide the agent through this process, and the agent will also trigger some of the built-in scripts to set up the repository.
Once this step is completed, you should have a new branch created (e.g., 001-landing-zone), as well as a new specification in the specs/001-landing-zone directory.
The produced specification should contain a set of infrastructure requirements and functional requirements, as defined in the template.
At this stage, your project folder contents should resemble the following:
```
└── .specify
    ├── memory
    │   └── principles.md
    ├── scripts
    │   ├── check-prerequisites.sh
    │   ├── common.sh
    │   ├── create-new-feature.sh
    │   ├── setup-plan.sh
    │   └── update-agent-context.sh
    ├── specs
    │   └── 001-landing-zone
    │       └── spec.md
    └── templates
        ├── plan-template.md
        ├── spec-template.md
        └── tasks-template.md
```
With the baseline specification created, you can go ahead and clarify any requirements that were not captured properly in the first attempt.
You should run the structured clarification workflow before creating a technical plan to reduce rework downstream.
Preferred order:
- Use `/iac.clarify` (structured): sequential, coverage-based questioning that records answers in a Clarifications section.
- Optionally follow up with ad-hoc free-form refinement if something still feels vague.
If you intentionally want to skip clarification (e.g., spike or exploratory prototype), explicitly state that so the agent doesn't block on missing clarifications.
Example free-form refinement prompt (after /iac.clarify if still needed):
For the database services, we need PostgreSQL 14+ with point-in-time recovery enabled. Backup retention should be 30 days for production and 7 days for non-production environments. The database should be deployed in a private subnet with no direct internet access.
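A refinement like this flows into the plan and, eventually, the generated Terraform. As a hedged IBM Cloud sketch of where those details could land (arguments and values are illustrative and vary by provider version; retention targets would need to be validated against what the service actually supports):

```hcl
# Illustrative only; not generated output.
resource "ibm_database" "app_postgres" {
  name     = "app-postgresql" # hypothetical name
  service  = "databases-for-postgresql"
  plan     = "standard"
  location = "us-south"
  version  = "14" # PostgreSQL 14+ per the clarified spec

  service_endpoints = "private" # no direct internet access

  # Backup retention (30 days production / 7 days non-production) and
  # point-in-time recovery are recorded in the plan and checked against
  # the service's actual capabilities.
}
```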
You should also ask your AI agent to validate the Review & Acceptance Checklist, checking off the things that are validated/pass the requirements, and leave the ones that are not unchecked. The following prompt can be used:
Read the review and acceptance checklist, and check off each item in the checklist if the infrastructure spec meets the criteria. Leave it empty if it does not.
It's important to use the interaction with your AI agent as an opportunity to clarify and ask questions around the specification - do not treat its first attempt as final.
You can now be specific about the tech stack and other technical requirements. You can use the /iac.plan command that is built into the project template with a prompt like this:
/iac.plan We'll use IBM Cloud Enterprise account groups. Transit Gateway for networking. Security and Compliance Center for compliance. Activity Tracker and Log Analysis for centralized logging.
The output of this step will include a number of implementation detail documents, with your directory tree resembling this:
```
.
├── memory
│   └── principles.md
├── scripts
│   ├── check-prerequisites.sh
│   ├── common.sh
│   ├── create-new-feature.sh
│   ├── setup-plan.sh
│   └── update-agent-context.sh
├── specs
│   └── 001-landing-zone
│       ├── contracts
│       │   └── terraform-outputs.md
│       ├── data-model.md
│       ├── plan.md
│       ├── quickstart.md
│       ├── research.md
│       └── spec.md
└── templates
    ├── plan-template.md
    ├── spec-template.md
    └── tasks-template.md
```
Check the research.md document to ensure that the right tech stack is used, based on your instructions. You can ask your AI agent to refine it if any of the components stand out, or even have it check the locally-installed version of Terraform or cloud provider CLI tools.
Additionally, you might want to ask your AI agent to research details about the chosen tech stack if it's something that is rapidly changing (e.g., Kubernetes versions, Terraform provider versions), with a prompt like this:
I want you to go through the implementation plan and implementation details, looking for areas that could benefit from additional research as IBM Cloud services and Terraform providers are rapidly changing. For those areas that you identify that require further research, I want you to update the research document with additional details about the specific versions that we are going to be using in this infrastructure and spawn parallel research tasks to clarify any details using research from the web.
During this process, you might find that your AI agent gets stuck researching the wrong thing - you can help nudge it in the right direction with a prompt like this:
I think we need to break this down into a series of steps. First, identify a list of tasks that you would need to do during implementation that you're not sure of or would benefit from further research. Write down a list of those tasks. And then for each one of these tasks, I want you to spin up a separate research task so that the net result is we are researching all of those very specific tasks in parallel. What I saw you doing was it looks like you were researching IBM Cloud services in general and I don't think that's gonna do much for us in this case. That's way too untargeted research. The research needs to help you solve a specific targeted question.
[!NOTE] Your AI agent might be over-eager and add components that you did not ask for. Ask it to clarify the rationale and the source of the change.
With the plan in place, you should have your AI agent run through it to make sure that there are no missing pieces. You can use a prompt like this:
Now I want you to go and audit the implementation plan and the implementation detail files. Read through it with an eye on determining whether or not there is a sequence of tasks that you need to be doing that are obvious from reading this. Because I don't know if there's enough here. For example, when I look at the core implementation, it would be useful to reference the appropriate places in the implementation details where it can find the information as it walks through each step in the core implementation or in the refinement.
This helps refine the implementation plan and helps you avoid potential blind spots that your AI agent missed in its planning cycle. Once the initial refinement pass is complete, ask your AI agent to go through the checklist once more before you can get to the implementation.
You can also ask your AI agent (if you have the GitHub CLI installed) to go ahead and create a pull request from your current branch to main with a detailed description, to make sure that the effort is properly tracked.
[!NOTE] Before you have the agent implement it, it's also worth prompting your AI agent to cross-check the details to see if there are any over-engineered pieces (remember - it can be over-eager). If over-engineered components or decisions exist, you can ask your AI agent to resolve them. Ensure that your AI agent follows the principles as the foundational piece that it must adhere to when establishing the plan.
With the implementation plan validated, you can now break down the plan into specific, actionable tasks that can be executed in the correct order. Use the /iac.tasks command to have IaC Spec Kit guide AI agents in generating a detailed task breakdown from your implementation plan:
/iac.tasks
This step creates a tasks.md file in your feature specification directory that contains:
- Task breakdown organized by infrastructure component - Each component becomes a separate implementation phase with its own set of tasks
- Dependency management - Tasks are ordered to respect dependencies between components (e.g., networking before compute, compute before databases)
- Parallel execution markers - Tasks that can run in parallel are marked with `[P]` to optimize development workflow
- File path specifications - Each task includes the exact file paths where Terraform configuration should be created
- Validation checkpoints - Each infrastructure component phase includes checkpoints to validate independent functionality (for example with `terraform validate`, `terraform fmt`, and `tflint`)
The generated tasks.md provides a clear roadmap for the /iac.implement command. IaC Spec Kit helps AI agents ensure systematic implementation that maintains infrastructure quality and allows for incremental delivery of infrastructure components.
Once ready, use the /iac.implement command to execute your implementation plan:
/iac.implement
The /iac.implement command helps AI agents:
- Validate that all prerequisites are in place (principles, spec, plan, and tasks)
- Parse the task breakdown from `tasks.md`
- Execute tasks in the correct order, respecting dependencies and parallel execution markers
- Generate IaC configuration, such as Terraform configuration files (.tf)
- Provide progress updates and handle errors appropriately
[!IMPORTANT] The AI agent will execute local CLI commands (such as `terraform`, `ibmcloud`, `aws`, etc.) - make sure you have the required tools installed on your machine.
Once the implementation is complete, review the generated Terraform code and run validation commands:
```bash
terraform init
terraform validate
terraform fmt -check
tflint
```

Resolve any validation errors by providing feedback to your AI agent. Remember that `terraform apply` is a manual step you control - review the plan carefully before applying changes to your infrastructure.
IaC Spec Kit is an experimental project exploring how Specification-Driven Development improves infrastructure as code workflows with AI assistance. Contributions are welcome in any area—template refinement, documentation, examples, validation improvements, cloud provider support, AI agent compatibility, or entirely new ideas.
See CONTRIBUTING.md for detailed guidelines on development setup, testing workflow, and PR submission.
Potential areas for exploration include template refinement, validation improvements, AI agent compatibility, automation tooling, extensibility mechanisms, and patterns for other infrastructure domains. See IDEAS.md for the complete list. All ideas are open for anyone to pick up and explore.
For support, please open a GitHub issue. Bug reports, feature requests, and questions about using Spec-Driven Development for Infrastructure as Code are welcome.
This project is built upon the GitHub Spec Kit toolkit created by:
- John Lam
- Den Delimarsky
- The GitHub Spec Kit community
We are grateful for their foundational work in creating tools and patterns for Specification-Driven Development. This implementation adapts their toolkit specifically for Infrastructure as Code workflows.
This project is licensed under the terms of the MIT open source license. Please refer to the LICENSE file for the full terms.