Terraform_Infrastructure
Infrastructure as Code (IaC)
Overview
Infrastructure as Code (IaC) is the practice of using code to manage and provision computing infrastructure, replacing manual configuration methods. With IaC, developers and operations teams describe their desired infrastructure in configuration files that can be versioned, tested, and managed like application code.
This approach supports DevOps practices, enabling CI/CD pipelines, automation of infrastructure management, and closer collaboration between development and operations.
How IaC Works
- Code as Infrastructure: Similar to how software code defines app behavior, IaC defines infrastructure components like servers, networks, storage, and operating systems.
- Configuration Files: These files act as source code, enabling reproducibility and standardization.
- Tools & Languages: Different IaC tools support multiple specification languages (e.g., Terraform with HCL, Ansible with YAML, Pulumi with Python/C#/Go).
- Development Environment: IaC can be written and tested in IDEs with error detection and linting.
- Version Control: IaC scripts can be maintained in Git, with commits tracking every change for consistency, traceability, and rollback.
Benefits of IaC
- Consistency & Reproducibility: Eliminates "works on my machine" issues by standardizing environments.
- Versioning & Auditability: Infrastructure changes are tracked, enabling rollbacks and accountability.
- Automation: Reduces manual setup, accelerates deployments, and lowers error rates.
- Scalability: Easily scale infrastructure up or down by updating configuration code.
- Cost Efficiency: Automating provisioning optimizes resource usage and reduces operational costs.
Using IaC to Deploy Resources
Virtual Machines (VMs)
- Deploy across multiple cloud providers with consistent configurations.
- Example: Define VM size, image, and networking in IaC scripts.
Networks
- Define and deploy:
- Virtual networks
- Subnets
- Routing tables
- Security groups
- Example: Terraform can configure a full AWS VPC (with subnets, route tables, security rules).
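As an illustrative sketch of that idea (resource names and CIDR ranges here are assumptions, not from the original), a minimal VPC with one subnet and a security group rule might look like:

```hcl
# Minimal illustrative AWS VPC: one subnet plus a security group
# allowing inbound HTTPS. All names and CIDRs are placeholders.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id   # reference creates an implicit dependency
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```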
Load Balancers
- Automate deployment and configuration to ensure high availability and traffic distribution.
Connection Topologies
- Deploy VPNs, peering connections, or direct connects for secure and reliable cloud or hybrid networking.
Storage
- Automate creation of resources such as:
- Storage accounts
- S3 buckets
- Databases
Identity & Access Management (IAM)
- Use IaC to define roles, policies, and permissions consistently across environments.
Containers & Orchestration
- Manage Kubernetes clusters or Dockerized applications declaratively with IaC tools.
Key Takeaways
- IaC transforms infrastructure management by enabling automation, consistency, scalability, and cost-efficiency.
- It supports DevOps and CI/CD workflows, bridging development and operations.
- Resources like VMs, networks, load balancers, IAM, storage, and containers can all be provisioned declaratively.
- As cloud environments grow more complex, IaC remains essential for streamlining operations and accelerating digital transformation.
Cloud Infrastructure Automation
Learning Objective
After completing this video, you will be able to identify how to use scripts and infrastructure automation tools to create network infrastructures, resources, and services.
Key Concepts
- Cloud Infrastructure Automation: The process of using software tools and scripts to automatically manage, configure, and provision cloud resources without manual intervention.
- Infrastructure as Code (IaC): Define and manage infrastructure using code (e.g., JSON, YAML) for version control, consistency, and reproducibility.
- Auto Scaling: Dynamically adjust resources up or down based on demand for performance and cost efficiency.
- CI/CD Pipelines: Integrate automation with continuous integration/continuous deployment for seamless application and infrastructure delivery.
- Configuration Management: Maintain consistent settings across resources, ensuring compliance with organizational policies.
Cloud vs On-Premises Automation
- Both share similar tools and processes.
- Cloud automation focuses on virtual infrastructure and services.
- On-premises environments often deal with physical hardware.
- Manual management in cloud environments limits scalability and efficiency, making automation essential.
Uses of Cloud Infrastructure Automation
- Resource Provisioning: Rapidly provision servers, storage, and networking using templates or policies.
- Application Deployment: Consistent, automated deployments across environments.
- Disaster Recovery: Automated backups and recovery plans ensure data integrity and business continuity.
- Security & Compliance: Automated checks, audits, and policy enforcement to maintain a secure cloud environment.
Benefits
- Efficiency: Reduces time and effort for routine tasks.
- Consistency: Minimizes errors by ensuring uniform configurations.
- Cost Optimization: Dynamically manage resources to avoid unnecessary costs.
- Faster Time-to-Market: Speeds up app and service delivery.
- Enhanced Security: Continuous monitoring and automated controls strengthen security posture.
Challenges
- Complexity: Designing and maintaining automation in complex environments requires expertise.
- Initial Investment: High upfront costs in tools, training, and setup.
- Security Risks: Misconfigured automation can introduce vulnerabilities.
- Skill Gap: Shortage of professionals skilled in automation tools and practices.
The Terraform Infrastructure as Code System
This section introduces Terraform, a leading open-source Infrastructure as Code (IaC) tool developed by HashiCorp. Terraform allows developers and operations teams to define, provision, and manage infrastructure across multiple cloud providers using a declarative language. It is a key component in modern DevOps practices, ensuring consistent, automated, and scalable infrastructure management.
Why Terraform?
Traditionally, setting up servers (Windows or Linux) required manual tasks like adjusting settings, running scripts, and configuring storage and networking. As infrastructures grew into hundreds or thousands of servers, this became error-prone and inefficient. Terraform and IaC solve these challenges by:
- Automating infrastructure provisioning.
- Ensuring consistency and repeatability.
- Reducing human error through automation.
- Supporting drift management (keeping infrastructure aligned with its declared state).
Declarative vs Imperative Approaches
- Imperative: Define how to achieve results (step-by-step commands).
- Declarative (Terraform): Define what the end state should be. Terraform figures out the steps.
Terraform configurations are idempotent: applying the same configuration multiple times will not create unintended changes. Terraform automatically resolves resource dependencies (e.g., ensures an EC2 instance exists before attaching a load balancer).
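Dependency resolution can be seen in a minimal sketch (the AMI ID and resource names below are illustrative placeholders):

```hcl
# Terraform infers that the Elastic IP depends on the instance because it
# references aws_instance.app.id, so the instance is created first without
# any explicit ordering directives.
resource "aws_instance" "app" {
  ami           = "ami-0c55b159cbfafe1f0"  # illustrative AMI ID
  instance_type = "t2.micro"
}

resource "aws_eip" "app_ip" {
  instance = aws_instance.app.id  # implicit dependency on the instance
}
```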
Core Features
- Multi-cloud support: AWS, Azure, GCP, and many more.
- Declarative syntax: Infrastructure is described in HashiCorp Configuration Language (HCL) or JSON.
- State management: Maintains a state file to track current infrastructure vs. desired state.
- Modularity: Encourages reusable modules to standardize infrastructure.
- Execution plans: Previews changes before applying (safe, predictable updates).
Common Use Cases
- Provisioning cloud resources: Quickly create servers, databases, load balancers.
- Complex multi-tier applications: Define interdependent resources across services.
- Resource scheduling: Manage container workloads (e.g., Kubernetes, Nomad).
- Disposable environments: Spin up and tear down development, staging, or testing setups.
- Multi-cloud deployments: Manage redundancy and optimize costs across providers.
- Policy as Code: Enforce compliance and organizational policies.
Terraform Architecture
- Terraform Core (CLI)
  - The main binary and command-line tool for interacting with Terraform.
  - Open-source and available on GitHub.
- Providers
  - Connect Terraform to the APIs of cloud platforms or services.
  - Thousands exist (AWS, Azure, GCP, Kubernetes, etc.).
  - Found on the Terraform Registry.
- State File
  - JSON file storing the current infrastructure state and dependencies.
  - Used to reconcile declared vs. actual resources.
  - Crucial for planning safe updates.
- Configuration Files
  - Written in HCL, defining the desired state of resources.
- Modules
  - Self-contained packages of configuration for code reuse and scalability.
- Resources
  - Actual infrastructure objects (VMs, networks, databases, etc.).
- Data Sources
  - Fetch external information for use in configurations.
- Variables & Outputs
  - Variables: Parameterize configs for flexibility.
  - Outputs: Share useful results or values across modules.
- Backends
  - Define how state is stored and operations are executed (e.g., remote backends for collaboration).
- Provisioners
  - Run scripts locally or remotely during the resource lifecycle (used sparingly; not best practice).
Benefits of Using Terraform
- Consistency: Infrastructure is provisioned identically each time.
- Version Control: Infrastructure code can be tracked, reviewed, and rolled back.
- Collaboration: Teams can adopt software development practices (branching, merging, reviews).
- Automation: Reduces manual intervention and speeds up deployment.
- Documentation: Configurations serve as living documentation for infrastructure.
Terraform Language Concepts and Syntax
Introduction
This section introduces the Terraform language and the syntax used to provision resources. Terraform primarily uses HashiCorp Configuration Language (HCL), which is human-readable and machine-friendly. JSON can also be used, but HCL is preferred for its readability and flexibility.
Terraform configurations are typically written across multiple .tf files, but a minimal setup can live in a single file. These configurations are composed of blocks and arguments (attributes). Blocks represent infrastructure components, while arguments define their properties.
Key Terraform Language Elements
Blocks
Blocks are the foundation of Terraform syntax. Each block encapsulates attributes and functions. Common block types include:
- terraform block → Configures Terraform itself (versions, providers, backends).
- provider block → Defines which provider Terraform interacts with (AWS, Azure, GCP, etc.).
- resource block → Defines infrastructure objects (VMs, storage, networking).
- variable block → Defines parameters for flexible and reusable configurations.
- data block → Fetches data from existing resources.
- module block → Encapsulates reusable configurations.
- output block → Exposes values after resources are created.
- locals block → Defines reusable values or expressions scoped to a module.
- lifecycle block → Customizes resource creation, updates, and deletions.
- provisioners → Run commands/scripts on local or remote resources (last resort, used sparingly).
Arguments (Attributes)
Arguments are key-value pairs inside a block. They configure properties like region, AMI, instance type, etc.
Example 1: Terraform, Provider, and Resource Blocks
terraform {
required_providers {
aws = {
version = ">= 5.0.0"
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = "us-west-2"
}
resource "aws_instance" "my_server" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
- terraform block → Configures the AWS provider (requires v5.0.0+).
- provider block → Sets the region to `us-west-2`.
- resource block → Provisions an EC2 instance named `my_server` with a specific AMI and instance type.
Example 2: Variables and Data Sources
variable "aws_region" {
default = "us-west-1"
}
provider "aws" {
region = var.aws_region
}
data "aws_availability_zones" "available" {}
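Here the variable block parameterizes the region and the data block queries the provider for available availability zones. As an illustrative continuation (the subnet, VPC ID, and CIDR below are assumptions, not from the original), the fetched data can be referenced elsewhere in the configuration:

```hcl
# Illustrative use of the data source: place a subnet in the first
# available zone. vpc_id and cidr_block are placeholder values.
resource "aws_subnet" "example" {
  vpc_id            = "vpc-123456"
  cidr_block        = "10.0.1.0/24"
  availability_zone = data.aws_availability_zones.available.names[0]
}
```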
Example 3: Modules
module "network" {
source = "./modules/network"
vpc_id = "vpc-123456"
}
- module block → Reuses a network module stored locally.
- Encourages modularity and reusability of infrastructure code.
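A module like this typically declares its own inputs and outputs. A minimal sketch of what `./modules/network` might contain (entirely illustrative, not from the original):

```hcl
# Hypothetical contents of ./modules/network/main.tf
variable "vpc_id" {
  description = "ID of the VPC the module operates on"
  type        = string
}

resource "aws_subnet" "main" {
  vpc_id     = var.vpc_id
  cidr_block = "10.0.0.0/24"  # placeholder CIDR
}

output "subnet_id" {
  value = aws_subnet.main.id  # consumable by the caller as module.network.subnet_id
}
```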
Example 4: Outputs, Lifecycle, Locals, and Provisioners
resource "aws_instance" "web" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
}
provisioner "remote-exec" {
inline = [
"sudo apt-get update -y",
"sudo apt-get install -y nginx"
]
}
tags = {
Name = local.full_name
}
}
output "instance_ip" {
value = aws_instance.web.public_ip
}
locals {
first_name = "Dev"
last_name = "Ops"
full_name = "${local.first_name}-${local.last_name}"
}
- lifecycle block → Ensures new instances are created before old ones are destroyed.
- provisioner (remote-exec) → Installs Nginx after provisioning.
- output block → Exposes the public IP of the EC2 instance.
- locals block → Defines reusable expressions (here used for naming tags).
Key Takeaways
- Terraform configurations are written in HCL, which is modular, reusable, and human-friendly.
- Blocks define infrastructure, arguments configure details.
- Variables, data sources, and modules make configurations scalable and reusable.
- Outputs and locals improve readability and reusability.
- Provisioners should be avoided unless absolutely necessary (prefer using cloud-init or configuration management tools).
With these concepts, you can confidently read and write Terraform configurations that manage cloud infrastructure effectively.
The Core Terraform Workflow
Introduction
The core Terraform workflow can be summarized as Write → Init → Plan → Apply → (Destroy). These steps guide you through defining, provisioning, and managing infrastructure via the Terraform CLI. While often summarized as write, plan, apply, the full workflow includes more detail and best practices.
Workflow Steps
1. Write
- Create or update Terraform configuration files in HCL or JSON.
- Define:
- resources
- variables
- outputs
- modules (for reusability and maintainability).
- Organize code for scalability and collaboration.
2. Initialize
Run:
terraform init
- Downloads and installs provider plugins.
- Initializes the backend for state storage.
- Fetches required modules.
- Must be run when using new configurations or adding modules/providers.
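As a sketch of what `terraform init` resolves, a terraform block can pin the providers and Terraform version to install (version constraints here are illustrative):

```hcl
# Provider requirements that `terraform init` downloads and locks.
terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```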
3. Plan
Run:
terraform plan
- Creates an execution plan by comparing the configuration with the current state.
- Shows which resources will be created, updated, or destroyed before any change is made.
4. Apply
Run:
terraform apply
- Executes the actions proposed in the plan.
- Prompts for confirmation (unless `-auto-approve` is used).
- Updates the state file with the latest infrastructure state.
- Outputs values defined in output blocks.
5. Destroy (Optional)
Run:
terraform destroy
- Destroys all resources defined in the configuration.
- Used when tearing down infrastructure.
Best Practices
- Version Control: Store Terraform configs in Git for collaboration.
- Workspaces: Manage multiple environments (e.g., dev, staging, prod).
- State Management:
- Use remote backends (e.g., S3, Terraform Cloud).
- Back up and secure state files.
- CI/CD Integration: Automate Terraform runs in pipelines for consistency.
- Security: Manage sensitive data securely (avoid hardcoding secrets).
Key Takeaways
- The Terraform workflow is more than just write, plan, apply.
- Initialization (`init`) is required before running `plan` or `apply`.
- State management is critical for correctness and collaboration.
- Destroy provides full lifecycle management when infrastructure is no longer needed.
- Following best practices ensures security, scalability, and team efficiency.
State Management in Terraform
Overview
Terraform state is a central part of Terraform’s Infrastructure as Code model. It serves as the source of truth about what Terraform manages—whether resources are in the cloud, on-premises, or with another provider. Understanding how state works and how to manage it is crucial for reliability, collaboration, and secure deployments.
What is Terraform State?
- Terraform state is a file (`terraform.tfstate`) that keeps a mapping between:
  - Your configuration files (what you declare in `.tf`).
  - The real-world infrastructure (what actually exists).
- Stored in JSON format, it includes:
- Managed resources and their attributes.
- Dependencies between resources.
- Used to determine which resources need to be:
- Created
- Updated
- Destroyed
Why is Terraform State Important?
- Change detection – Allows Terraform to compare desired vs. actual state.
- Dependency management – Ensures resources are handled in the correct order.
- Performance – Reduces the need to query cloud APIs repeatedly.
- Tracking – Helps Terraform understand what it controls across multiple providers.
Default Storage
- By default, Terraform stores state locally in `terraform.tfstate` within the working directory.
- This is fine for individual use but problematic for teams, where shared access and locking are required.
Best Practices for State Management
- Use remote backends (instead of local storage):
- Examples: AWS S3, Azure Storage, Terraform Cloud.
- Benefits: shared access, state locking, versioning.
- Separate states by environment:
- Example: one state file for dev, one for staging, one for production.
- Prevents cascading errors across environments.
- Secure state files:
- Never commit to version control.
- Encrypt at rest and in transit.
- Enable state locking:
- Prevents concurrent modifications and corruption.
- Supported by most remote backends.
- Use version control for configs, not state:
  - Store `.tf` files in Git.
  - Keep state files separate.
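The remote-backend practices above can be sketched in a terraform block. This example assumes an existing S3 bucket and DynamoDB lock table; all names are placeholders:

```hcl
# Illustrative remote backend: shared state with encryption and locking.
terraform {
  backend "s3" {
    bucket         = "my-company-tf-state"   # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                    # encrypt state at rest
    dynamodb_table = "terraform-locks"       # enables state locking
  }
}
```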
Terraform Commands for State
- `terraform state list` → Lists all resources in the current state.
- `terraform state show <resource_address>` → Displays detailed attributes of a specific resource.
- `terraform state rm <resource_address>` → Removes a resource from state without deleting it. ⚠️ On the next `terraform apply`, Terraform will attempt to recreate it.
- `terraform import <resource_address> <resource_id>` → Imports an existing resource into Terraform’s state. Useful for bringing manually created infrastructure under Terraform management.
Workspaces
- Workspaces allow you to manage multiple states from the same configuration.
- Each workspace gets its own state file.
- Useful for switching between environments without modifying `.tf` files.
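Inside a configuration, the current workspace name is available as `terraform.workspace`, which can drive per-environment naming from a single set of `.tf` files. An illustrative sketch (AMI ID and resource names are placeholders):

```hcl
# Tag resources with the active workspace (e.g., dev, staging, prod).
resource "aws_instance" "app" {
  ami           = "ami-0c55b159cbfafe1f0"  # illustrative AMI ID
  instance_type = "t2.micro"

  tags = {
    Name        = "app-${terraform.workspace}"
    Environment = terraform.workspace
  }
}
```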
Key Takeaways
- Terraform state is the single source of truth for infrastructure management.
- Use remote, secure, and isolated state files in team environments.
- Follow best practices for locking, encryption, and separation of environments.
- Learn and use Terraform’s state commands for advanced state manipulation.
- Treat state as critical infrastructure data: it’s as important as the `.tf` files themselves.
Terraform Errors and Rollbacks
Overview
Terraform rollback refers to reverting infrastructure to a previously known good state. This is often required when a deployment fails or introduces unexpected issues. While Terraform CLI does not have a native rollback command, rollbacks can be achieved through multiple approaches depending on the setup and tools in use.
Rollback Methods in Terraform
1. Manual Rollback with State Files
- Terraform state (`terraform.tfstate`) should always be backed up.
- To roll back:
- Replace the current state file with a previous backup.
- Apply again to restore infrastructure alignment.
2. Remote State Versioning
- Many backends (e.g., AWS S3 with versioning, HCP Terraform) support state history.
- To roll back:
- Retrieve a previous state version.
- Promote it as the current state.
- Provides safety and audit trails.
3. Git-Based Configuration Rollback
- Roll back `.tf` files to a previous stable commit in version control.
- Run `terraform apply` to align infrastructure with the rolled-back configuration.
4. Custom Scripts and Tools
- Automate rollback by:
- Downloading a prior state version.
- Uploading it as the new current state.
- Useful for complex CI/CD pipelines.
Third-Party Platforms Supporting Rollback
Several platforms extend Terraform with rollback and state management features:
- Harness → Built-in rollback step to last successful state.
- Spacelift → Advanced IaC workflow automation with rollback support.
- env0 → Provides rollback as part of IaC management features.
- Scalr → Terraform automation with versioning and rollback options.
- Atlantis → Open-source; relies on Git for rollback (not direct rollback support).
Rollbacks in HCP Terraform
HCP Terraform (formerly Terraform Cloud) provides robust rollback capabilities:
- Maintains multiple state versions per workspace.
- Rollback is performed via an API call:
- Workspace is locked before rollback.
- A new state is created as a copy of the chosen previous version.
- Becomes the new current state.
- Benefits:
- Full state history.
- API integration for automation in CI/CD pipelines.
- Guaranteed consistency of rolled-back state.
- Locking prevents conflicts.
- Flexible rollback to any previous version.
Considerations and Risks
- Rollbacks can cause drift between real infrastructure and state if resources were changed outside Terraform.
- Always:
- Maintain regular backups of state.
- Document infrastructure changes clearly.
- Test rollbacks in non-production environments when possible.
Key Takeaways
- Terraform CLI does not include a direct rollback command.
- Rollback can be performed through:
- Manual backups
- Remote state versioning
- Git commits
- Third-party tools
- HCP Terraform provides the most reliable rollback process with API-driven version control.
- Rollbacks must be approached cautiously to avoid state/infrastructure mismatches.
Managing an AWS SSO Admin Account in IAM Identity Center
Overview
Configuring AWS IAM Identity Center (formerly AWS Single Sign-On) with an admin account provides secure, short-term credentials for managing AWS environments. This approach follows best practices by reducing reliance on long-lived IAM user credentials and enforcing centralized identity management with MFA.
Steps to Configure an Admin Account
1. Create an AWS Organization
- Sign in with the root user account.
- Open AWS Organizations.
- Select Create an organization.
2. Enable IAM Identity Center
- Navigate to IAM Identity Center.
- Click Enable.
- You’ll be redirected to the Identity Center Dashboard.
3. Customize Access Portal
- In Settings → Identity source, expand Actions.
- Select Customize AWS access portal URL.
- Enter a subdomain (e.g., `terraform-on-aws`) and confirm.
- Save changes → The portal URL is updated.
4. Configure Authentication & Management
- Authentication tab → Enable MFA for added security.
- Management tab → Optionally set a Delegated administrator account.
- (Optional) Apply Tags for easier resource tracking.
5. Create a Permission Set
- Go to Permission sets → Create permission set.
- Choose Predefined permission set.
- Select AdministratorAccess.
- Adjust Session duration (default 1 hr → set to 12 hrs).
- Review and create → Permission set is available.
6. Add a New User
- Navigate to Users → Add user.
- Provide:
- Username
- Email address
- First/Last name
- Leave "Send email with password setup instructions" checked.
- Review and add the user → Notification confirms success.
7. Create a Group
- Go to Groups → Create group.
- Name it (e.g., `IaCAdmins`).
- Add the previously created user.
- Create group → Success confirmation appears.
8. Assign Admin Permissions
- Go to Multi-account permissions → AWS accounts.
- Select the management account (`terraform-on-aws`).
- Choose Assign users or groups.
- Select IaCAdmins group → Next.
- Attach AdministratorAccess permission set → Next.
- Review and submit → Success confirmation.
9. User Sign-In & MFA Setup
- New user receives an invitation email → Accept invitation.
- Set password → Sign in with username & password.
- Register MFA device (Authenticator app, security key, or built-in authenticator).
- MFA is confirmed.
10. Access Admin Account
- User signs in to the AWS access portal.
- Under Accounts, expand `terraform-on-aws`.
- Admin sees:
- AdministratorAccess permission set.
- Access keys for temporary credentials.
Key Benefits
- Secure Access: Uses short-term credentials instead of long-lived IAM keys.
- Centralized Control: Permissions managed via IAM Identity Center.
- Scalable: Groups like `IaCAdmins` simplify multi-user access control.
- Strong Authentication: MFA ensures compliance and security.
✅ With this setup, an AWS SSO account is successfully configured with Administrator privileges in IAM Identity Center.
Configuring Multi-Factor Authentication (MFA) for IAM Identity Center Users
This section explains how to configure Multi-Factor Authentication (MFA) for IAM Identity Center users in AWS. MFA is a security best practice that adds an additional layer of protection beyond just a username and password.
Accessing IAM Identity Center
- Sign in to the AWS Access Portal.
- Expand the available account and choose a role (e.g., `AdministratorAccess`).
- This opens the AWS Management Console with the selected credentials.
- From the console, navigate to IAM Identity Center (via Recently visited services or search).
Configuring MFA in IAM Identity Center
- In IAM Identity Center, go to Settings from the left navigation panel.
- Under Settings, open the Authentication tab.
- Scroll down to the Multi-factor authentication (MFA) section.
- Select Configure to open the MFA configuration page.
MFA Settings Options
- Prompt users for MFA
  - Only when sign-in context changes: Prompts users only when circumstances (e.g., device, network) differ.
  - Every time they sign in: Always prompts users (highest security).
  - Never: Disables MFA (not recommended).
- Authentication methods supported: Users can register and authenticate with:
  - Security keys and built-in authenticators (e.g., Windows Hello PIN, Apple TouchID).
  - Authenticator apps such as Google Authenticator or Microsoft Authenticator.
- If no MFA device is registered, options include:
  - Require registration of an MFA device at sign-in (recommended).
  - Send a one-time password (OTP) by email.
  - Block sign-in entirely.
  - Allow sign-in without MFA (least secure).
- MFA device management: Administrators can decide who is allowed to manage MFA devices.
Practical Example: Using Windows Hello as MFA
- On a Windows machine, open Settings → Accounts → Sign-in options.
- Available authentication methods may include:
  - Facial recognition (requires camera).
  - Fingerprint recognition (requires biometric hardware).
  - PIN (via Windows Hello).
  - Security keys.
- In this demo, a PIN is set up:
  - Select Set up under PIN (Windows Hello).
  - Authenticate with the account password.
  - Enter and confirm a new PIN.
Once configured, the PIN can be used as an MFA factor when signing into AWS via Windows Hello.
Key Takeaways
- MFA significantly enhances security and is strongly recommended by AWS.
- IAM Identity Center provides flexibility in how often MFA is required, what devices are supported, and how unregistered users are handled.
- Built-in authenticators like Windows Hello can serve as MFA devices, making it easier for users to comply without needing external hardware.
- Organizations should configure policies to require MFA registration at first sign-in to ensure consistency across all accounts.
Installing and Configuring the AWS CLI for Remote Access with Short-Term Credentials
This guide explains how to install and configure the AWS Command Line Interface (CLI) for remote access using short-term credentials via AWS IAM Identity Center (formerly AWS SSO).
1. Accessing AWS IAM Identity Center
- Log in to the AWS access portal using the URL provided in the IAM Identity Center invitation email. Example: https://terraform-on-aws.awsapps.com/start
- In the portal, expand Access keys under your AWS account. This provides two important values:
  - SSO Start URL
  - SSO Region
- These will be needed later during CLI configuration.
2. Installing the AWS CLI
- Navigate to AWS Documentation.
- Either scroll to Developer Resources → AWS CLI or search for: install aws cli v2
- Select Get started with the AWS CLI → Install/Update.
- Choose your operating system. For Windows:
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
- Run the command in PowerShell or a terminal.
- Follow the install wizard (accept defaults).
- Confirm with Yes if User Account Control appears.
- Select Finish to complete.
3. Preparing the Development Environment
- Example project structure (Terraform-based):
TERRAFORM/
├── projects/
│ ├── mobileapp/
│ │ └── environment/
│ └── webapp/
│ └── environment/
├── examples/
└── modules/
- Recommended VS Code extensions:
- HashiCorp HCL
- HashiCorp Terraform
- A `README.md` file can be created for documentation and previewed in VS Code.
4. Configuring AWS CLI with SSO
In the terminal:
aws configure sso
Steps:
- Enter an SSO session name: `my-session`
- Enter the SSO start URL (copied from the AWS access portal).
- Enter the SSO region, e.g.: `us-west-1`
- Accept the default SSO registration scopes (`sso:account:access`).
- A browser window opens → confirm the Authorization request.
- Approve the request → return to the terminal.
- Enter the CLI default client region: `us-east-1`
- For the default output format, press Enter (default = `json`).
- The CLI will generate a profile name (e.g., including the account number). Press Enter.
5. Editing the AWS Config File
Open the config file at `.aws/config` and update the profile to be the default:
[default]
sso_start_url = https://terraform-on-aws.awsapps.com/start
sso_region = us-west-1
sso_account_id = <your-account-id>
sso_role_name = AdministratorAccess
region = us-east-1
output = json
Save the file.
6. Verifying AWS CLI Setup
Run the following to test connectivity:
aws sts get-caller-identity
Expected JSON output:
{
"UserId": "AROAXXXXXXXXXXXXX",
"Account": "123456789012",
"Arn": "arn:aws:sts::123456789012:assumed-role/AdministratorAccess/<sso-user>"
}
This confirms:
- AWS CLI is installed correctly.
- CLI is authenticated via IAM Identity Center.
- Short-term credentials are working for secure remote access.
✅ Result: AWS CLI is now configured for remote access using temporary credentials through IAM Identity Center.
Installing and Preparing the Terraform Core CLI
This guide explains how to install the Terraform core CLI on Windows and prepare it for provisioning infrastructure resources.
1. Downloading Terraform
- Open your browser and navigate to Terraform by HashiCorp.
- Click the Download button, which redirects to the HashiCorp developer page.
- On the Install Terraform page:
- Expand the version dropdown and select the latest version (e.g., 1.9.6).
- Under Windows, select 64-bit (AMD64) and click Download.
- Once downloaded, open the
.zipfile to accessterraform.exe.
2. Installing Terraform (Manual Method)
- Drag `terraform.exe` to a folder of your choice (e.g., `C:\terraform`).
- Add this folder to your system `PATH` environment variable:
  - Open Start → type `environment variables` → select Edit the system environment variables.
  - Click Environment Variables.
  - Under System variables, locate Path → Edit → New → add `C:\terraform`.
  - Click OK to save changes.
- This allows you to run `terraform` from any terminal.
3. Installing Terraform via WinGet (Recommended)
- Open Visual Studio Code and open the TERMINAL pane.
- Search for HashiCorp packages:
winget search hashicorp
- Copy the Id of Terraform (`Hashicorp.Terraform`) and run:
winget install Hashicorp.Terraform
- Restart VS Code if necessary so the `terraform` command is recognized.
4. Verifying Installation
- Check Terraform version
terraform -version
Expected output (the version number will match the release you installed), for example:
Terraform v1.12.2
- Access CLI help:
terraform -help
Shows usage:
terraform [global options] <subcommands> [args]
- Help for a specific command:
terraform init -help
5. Key Terraform Commands
| Command | Description |
|---|---|
| `init` | Initialize a Terraform working directory. |
| `validate` | Validate Terraform configuration files. |
| `plan` | Show changes that Terraform will make. |
| `apply` | Apply the planned changes to infrastructure. |
| `destroy` | Remove resources defined in configuration. |
✅ Result: Terraform CLI is installed and ready to provision resources on your system.
Provisioning an EC2 Instance in AWS Using Terraform
This guide demonstrates how to deploy an EC2 instance on AWS using Terraform.
1. Project Structure
Assuming the Terraform project structure is as follows:
webapp/
│
├── main.tf
├── variables.tf
└── terraform.tfstate (generated after apply)
2. main.tf
Defines the Terraform configuration, AWS provider, and the EC2 resource.
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = var.aws_region
}
resource "aws_instance" "web_server" {
ami = var.ami_us-east-1_linux2
instance_type = "t2.micro"
tags = {
Environment = var.environment
Name = "web_server"
Terraform = "true"
}
}
- ami: Amazon Machine Image ID for the instance (Amazon Linux 2).
- instance_type: t2.micro (eligible for free tier).
- tags: Key-value pairs for resource identification.
3. variables.tf
Defines variables referenced in main.tf:
variable "ami_us-east-1_linux2" {
description = "The Amazon Linux 2 AMI in us-east-1"
type = string
default = "ami-0e54eba7c51c234f6"
}
variable "aws_region" {
description = "AWS region to deploy resources"
type = string
default = "us-east-1"
}
variable "environment" {
description = "Deployment environment"
type = string
default = "development"
}
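The project could also expose useful attributes of the instance after deployment. The following `outputs.tf` is a hypothetical addition (it is not part of the project structure above) sketching how that might look:

```hcl
# outputs.tf (hypothetical addition to the webapp project)
output "instance_id" {
  description = "ID of the EC2 instance"
  value       = aws_instance.web_server.id
}

output "public_ip" {
  description = "Public IP address of the EC2 instance"
  value       = aws_instance.web_server.public_ip
}
```

After `terraform apply`, these values are printed in the CLI and can be read later with `terraform output public_ip`.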
4. Deploying the EC2 Instance
- Navigate to the project directory:
cd webapp
- Initialize Terraform:
terraform init
- Downloads provider plugins.
- Creates the `.terraform` directory and the `.terraform.lock.hcl` file.
- Preview the deployment plan:
terraform plan
- Shows which resources will be created (a `+` sign indicates new resources).
- Apply the configuration:
terraform apply
- Terraform prompts for confirmation. Type `yes` to proceed.
- Creates the EC2 instance.
5. Verify Deployment
- Sign in to the AWS Management Console.
- Navigate to EC2 → Instances in the selected region (`us-east-1`).
- You should see the instance `web_server`:
  - Instance type: t2.micro
  - Tags:
    - Name: `web_server`
    - Environment: `development`
    - Terraform: `true`
✅ Result: EC2 instance is deployed and managed by Terraform. Future updates or deletions should be done via Terraform to maintain Infrastructure as Code consistency.
Modifying and Applying Changes to an Existing EC2 Instance Using Terraform
This guide explains how to update an existing Terraform-managed EC2 instance on AWS.
1. Background
- Terraform keeps track of your infrastructure using the `terraform.tfstate` file.
- Some changes can be applied in place (like tags), while others require recreating the resource (like AMI changes).
- Always review the execution plan before applying changes.
2. Updating a Tag In-Place
Step 1: Edit the resource in main.tf
resource "aws_instance" "web_server" {
ami = var.ami_us-east-1_linux2
instance_type = "t2.micro"
tags = {
Environment = var.environment
Name = "web_application_server" # Updated Name tag
Terraform = "true"
}
}
- Changing tags does not require destroying the instance.
- Save the file.
Step 2: Review the execution plan
terraform plan
- Terraform indicates in-place updates with a `~` prefix.
Step 3: Apply the change
terraform apply
- Type `yes` to confirm.
- The EC2 instance remains running; the tag is updated.
3. Changing the AMI (Requires Replacement)
Step 1: Update the AMI in main.tf
resource "aws_instance" "web_server" {
ami = var.ami_us-east-1_linux2023 # New AMI
instance_type = "t2.micro"
tags = {
Environment = var.environment
Name = "web_application_server"
Terraform = "true"
}
}
- Terraform knows that AMI cannot be changed in place, so the instance must be destroyed and recreated.
Step 2: Review the execution plan
terraform plan
- Terraform will display:
  - `-` = destroy old instance
  - `+` = create new instance
Step 3: Apply the change
terraform apply
- Type `yes` to confirm.
- The old instance is terminated, and a new instance is created with the updated AMI.
- The resource name in Terraform remains `aws_instance.web_server`.
4. Key Takeaways
- In-place updates: Tags, security groups, and other mutable attributes can often be updated without destroying the resource.
- Replacement required: Immutable attributes like the AMI require Terraform to destroy and recreate the instance.
- Execution plan: Always review `terraform plan` output before applying changes to understand what Terraform will do.
- Resource names: The Terraform resource identifier (e.g., `aws_instance.web_server`) stays the same even if the underlying resource is replaced.
✅ Result: You can safely modify attributes of EC2 instances using Terraform while understanding which changes are in-place versus destructive.
Destroying an EC2 Instance Using Terraform
This guide explains how to safely destroy a Terraform-managed EC2 instance in AWS.
1. Why Destroy Infrastructure?
- Eliminates unnecessary costs.
- Reduces exposure to security risks.
- Cleans up resources no longer in use.
2. Recommended Workflow
- Review the destruction plan before executing.
- Use `terraform plan -destroy` to preview resources that will be destroyed.
- Confirm the destruction when running `terraform destroy`.
3. Previewing Destruction
Step 1: Generate a destroy plan
terraform plan -destroy -out="destroy.tfplan"
- `-destroy` tells Terraform to create a destruction plan.
- `-out` saves the plan to a file for review and collaboration.
Step 2: Review the plan
terraform show .\destroy.tfplan
- Red `-` signs indicate resources to be destroyed.
- Confirms which resources are targeted.
4. Destroying the Resource
Step 1: Execute the destroy command
terraform destroy
Step 2: Confirm the destruction
- Type `yes` when prompted.
- Terraform will destroy the resource(s) in the proper order, respecting dependencies.
5. Verifying Destruction
- Refresh the EC2 Instances page in AWS Console.
- The instance state should progress from Shutting-down to Terminated.
6. Key Takeaways
- `terraform destroy` safely removes all resources managed by the configuration.
- Plan before destroying to avoid accidental deletions.
- Terraform automatically determines the correct order for resource destruction based on dependencies.
- Even simple resources, like a single EC2 instance, follow the same workflow as complex configurations.
✅ Result: Your EC2 instance and any dependent Terraform-managed resources are fully destroyed, freeing up costs and reducing security exposure.
Managing Terraform State Using the CLI
Terraform stores the current status of managed infrastructure in a state file. This file is essential for tracking resources and ensuring Terraform can make accurate updates. The CLI provides commands to inspect, manage, and manipulate this state.
1. Default State File
- By default, Terraform stores state locally in `terraform.tfstate` (JSON format).
- A backup file, `terraform.tfstate.backup`, is automatically created during updates.
- Always ensure the state is up to date before running Terraform commands.
2. Viewing the State
Show the entire state
terraform show
- Displays the current state of resources in a human-readable format.
- Can also specify a file path:
terraform show demo.tfstate
List all resources
terraform state list
- Outputs all resources in the state.
- Can filter by resource type and name:
terraform state list aws_instance.web_server
Show details for a single resource
terraform state show -state="demo.tfstate" aws_instance.web_server
- Provides detailed attributes, metadata, and computed values for a specific resource.
3. Modifying the State
Move a resource
terraform state mv -state="demo.tfstate" aws_instance.web_server aws_instance.web_application_server
- Renames or moves a resource within the state.
- Useful for refactoring without destroying the resource.
- Terraform automatically creates a backup before the move.
Remove a resource
terraform state rm -state="demo.tfstate" aws_instance.web_application_server
- Removes a resource from the state without deleting it from the actual infrastructure.
- After this, Terraform will no longer manage the resource.
4. Best Practices
- Backup state files before making manual modifications.
- Use `terraform show` and `terraform state list` to inspect state before changes.
- Use `terraform state mv` and `terraform state rm` for safe, controlled modifications.
- Avoid directly editing the JSON of the state file unless absolutely necessary.
✅ Outcome: Using the CLI, you can inspect, move, or remove resources from Terraform state files safely, keeping your infrastructure and state synchronized.
Running terraform plan and terraform apply in the CLI
Terraform allows you to preview and apply changes to your infrastructure using the CLI. The plan and apply commands form the core workflow for safely managing infrastructure.
1. Terraform Plan
The terraform plan command compares your configuration files with the current state and outputs an execution plan. It does not make any changes—it only shows what will happen if you apply the configuration.
Basic Usage
terraform plan
- Shows any changes to be made (additions, modifications, or deletions).
- No changes are applied yet.
Plan Options
- Destroy Mode: Preview resources that would be destroyed.
terraform plan -destroy
- Refresh-Only Mode: Preview how Terraform would update the state file to match real infrastructure, without proposing changes.
terraform plan -refresh-only
- Output Plan to File: Save the plan for later application.
terraform plan -out="testing.tfplan"
- Specify Variables: Override variables at the CLI.
terraform plan -var="environment=staging"
- Use TFVars File:
terraform plan -var-file="terraform.tfvars"
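As a sketch of what such a file might contain: a `terraform.tfvars` file holds only variable assignments. The variable names below assume definitions like those shown earlier in this guide:

```hcl
# terraform.tfvars (illustrative values; variable names assumed from this guide)
aws_region  = "us-east-1"
environment = "staging"
```

Values in this file override the variables' defaults without editing `variables.tf`.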
2. Terraform Apply
The terraform apply command applies changes to match your configuration. It can apply directly from the CLI or from a saved plan file.
Basic Usage
terraform apply
- Prompts for confirmation before applying changes.
- Shows which resources are added, changed, or destroyed.
Apply with Variables
terraform apply -var="environment=staging"
- Inline variables override those in `variables.tf` or `.tfvars` files.
Apply a Saved Plan File
terraform apply "testing.tfplan"
- Ensures that the exact reviewed plan is applied.
Common Options
- `-auto-approve`: Skip interactive approval.
- `-backup=path`: Specify a custom state backup location.
- `-lock=false`: Disable state locking for special cases.
- `-state=path`: Specify a custom state file path.
3. Best Practices
- Always run `terraform plan` first to review changes.
- Save plans with `-out` for review or approval before applying.
- Use variables to keep configuration flexible and reusable.
- Review CLI output carefully before applying changes to prevent unintended destruction.
✅ Outcome: Using terraform plan and terraform apply, you can safely preview and implement changes to your infrastructure, with options for review, variable overrides, and automation.
Troubleshooting Terraform Errors When Deploying to AWS
When deploying resources to AWS using Terraform, errors can arise from multiple sources. Understanding these sources and how to troubleshoot them is key to maintaining reliable infrastructure.
1. Types of Terraform Errors
- Language Errors
- Syntax mistakes, typos, or misconfigured Terraform code.
- State Errors
- Occur when Terraform's state file is out of sync with real-world resources.
- Can cause resources to be incorrectly updated, recreated, or destroyed.
- Core Errors
- Bugs in Terraform's internal logic that handle operations, state, or resource dependencies.
- Provider Errors
- Occur in provider plugins (e.g., AWS provider) due to authentication, API issues, or invalid resource mappings.
2. Example: Permission-Based Error
Scenario
- An EC2 instance creation succeeds.
- Modifying an attribute (e.g., `instance_type`) fails due to permission restrictions.
Root Cause
- An inline IAM policy denied the `ec2:ModifyInstanceAttribute` action.
- Terraform generates the following error:
operation error EC2: ModifyInstanceAttribute, response error StatusCode: 403
api error UnauthorizedOperation
with an explicit deny in an identity-based policy
Key Insight
- Terraform logs and error messages indicate exactly which API call failed and why.
- Some actions (like instance creation) may succeed even when permissions for modifications are denied.
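For illustration, an inline deny policy like the one described could itself be expressed in Terraform. This is a hedged sketch; the resource names and the attached role are hypothetical, not taken from the scenario's actual configuration:

```hcl
# Hypothetical inline policy reproducing the explicit deny described above
resource "aws_iam_role_policy" "deny_modify" {
  name = "deny-modify-instance-attribute"
  role = aws_iam_role.terraform_deployer.id # assumed role managed elsewhere

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "ec2:ModifyInstanceAttribute"
      Resource = "*"
    }]
  })
}
```

Because an explicit Deny overrides any Allow in IAM evaluation, instance creation can succeed while modification fails, exactly as in the scenario above.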
3. Troubleshooting Steps
- Check Error Messages
  - Look for API operation and status codes (e.g., 403 Unauthorized).
  - Identify explicit denies in IAM or permission-based policies.
- Adjust Configuration or Permissions
  - Comment out problematic lines in `main.tf`.
  - Ensure IAM policies grant required actions for planned changes.
# instance_type = "t2.small" # Disabled to prevent ModifyInstanceAttribute error
- Refresh Terraform State
- Keep state synchronized with the real-world environment:
terraform refresh
- Enable Detailed Logging
- Set environment variables to capture verbose logs.
# PowerShell example
$env:TF_LOG="DEBUG"
$env:TF_LOG_PATH="log.txt"
terraform apply
- `TF_LOG` levels: TRACE, DEBUG, INFO, WARN, ERROR.
- `TF_LOG_PATH`: file path to persist logs.
- Analyze Logs
  - Inspect `log.txt` for detailed information about the sequence of operations and API calls.
  - Logs can also be stored in CloudWatch, S3, or analyzed with Athena for large-scale auditing.
- Remove Restrictive Policies
- Delete any temporary inline IAM policies that caused errors.
4. Best Practices for Avoiding Configuration Errors
- Validate IAM permissions before deploying resources.
- Run `terraform plan` to preview changes before applying.
- Synchronize state files when making out-of-band changes.
- Collaborate and review policies to prevent accidental denials.
✅ Outcome: With detailed error messages, state refresh, and logging, Terraform provides a clear path to diagnose and fix configuration or permission-related errors in AWS deployments.
Terraform Modules: Overview, Components, and Best Practices
1. What Are Terraform Modules?
A Terraform module is a container for multiple resources that are used together in your Infrastructure as Code (IaC).
- A module is essentially a collection of `.tf` files in the same directory.
- Every Terraform configuration is part of a module:
- Root module: the directory where Terraform commands are executed.
- Child modules: modules called from the root or other modules.
Modules are fundamental for:
- Organizing complex infrastructure.
- Reusing configurations across projects and teams.
- Enforcing consistency and reducing errors.
2. Why Use Modules?
Challenges Without Modules
- Difficult navigation and readability in large configurations.
- Risk of accidental changes affecting unrelated parts.
- Duplication of configurations across environments (dev, staging, prod).
- Collaboration issues—hard to share and reuse configurations.
- Increased complexity and maintenance overhead.
Benefits of Modules
- Organization: group related configurations for easier navigation.
- Encapsulation: isolate parts of infrastructure, minimizing errors and conflicts.
- Reusability: share modules across environments, teams, or projects.
- Consistency: enforce standards and reduce misconfigurations.
- Accessibility: empower teams to use pre-approved modules without deep Terraform knowledge.
- Maintainability: apply updates consistently across all environments.
3. Components of a Terraform Module
A well-structured module typically includes:
- `main.tf` → primary configuration for resources.
- `variables.tf` → input definitions to make modules reusable.
- `outputs.tf` → essential values exposed for use by other modules.
- `versions.tf` → specifies Terraform and provider versions for compatibility.
- `providers.tf` → defines required providers and their settings.
- `README.md` → documentation of purpose, usage, inputs, outputs, and dependencies.
- Submodules → optional; break down large modules into smaller reusable pieces.
Not every module needs all these files. Simple modules may only include main.tf.
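As an illustration of such a single-file module, the sketch below packs a variable, a resource, and an output into one `main.tf` (the module name, bucket resource, and variable are hypothetical):

```hcl
# modules/s3-bucket/main.tf — a minimal one-file module (illustrative)
variable "bucket_name" {
  description = "Name of the S3 bucket to create"
  type        = string
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

output "bucket_arn" {
  description = "ARN of the created bucket"
  value       = aws_s3_bucket.this.arn
}
```

Even without separate `variables.tf` and `outputs.tf` files, this directory is a complete, callable module.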
4. Best Practices for Working with Modules
- Encapsulation & Reuse
  - Design modules for use across multiple environments/projects.
  - Avoid duplication by abstracting reusable logic.
- Version Control
  - Pin module versions to ensure stable, reproducible deployments.
- Variables Over Hard-Coding
  - Use variables for flexibility and portability.
  - Avoid hard-coded values that limit reusability.
- Minimal Outputs
  - Only expose necessary values.
  - Too many outputs complicate the interface.
- Documentation
  - Always include a `README.md`.
  - Document purpose, inputs, outputs, prerequisites, and dependencies.
- Single Responsibility
  - Keep modules focused on one function or resource type.
  - Break down large modules into smaller submodules.
- Testing
  - Test modules independently before integrating into production.
- Use Community Modules
  - Leverage the Terraform Registry for well-maintained, community-supported modules.
- Naming Conventions
  - Adopt clear, consistent naming for modules and resources.
- Formatting
  - Use `terraform fmt` to enforce consistent code style and readability.
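The "variables over hard-coding" practice can be sketched as a typed, documented, and validated input. The variable name, default, and allowed values below are illustrative assumptions:

```hcl
# Illustrative input variable: typed, documented, and validated
variable "instance_type" {
  description = "EC2 instance type for the module's servers"
  type        = string
  default     = "t2.micro"

  validation {
    condition     = contains(["t2.micro", "t3.micro", "t3.small"], var.instance_type)
    error_message = "The instance_type must be one of t2.micro, t3.micro, or t3.small."
  }
}
```

Callers can then override the default per environment instead of editing the module's resources directly.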
5. Key Takeaway
Terraform modules simplify, standardize, and scale infrastructure management. By structuring code into modules and following best practices, teams achieve:
- Cleaner organization.
- Easier collaboration.
- Reusable and maintainable IaC.
- Reliable deployments across all environments.
Terraform Module Blocks
Overview
This section explains how module blocks are used to call other modules in Terraform. A module in Terraform is a collection of resources managed together. Every configuration has at least one module: the root module.
- Root Module:
  - The top-level directory where `terraform apply` is executed.
  - It's an implicit module and acts as the entry point of the configuration.
  - You don't call it using a `module` block.
  - All resources, data sources, and outputs in the root directory belong to it.
- Child Modules:
- Called from the root (or another child module) using module blocks.
- Can be reused within the same config or across multiple configs.
- Allow you to package and reuse resource setups.
Calling Child Modules
To call (invoke) a child module, you use a module block inside another module.
Example:
module "app-servers" {
source = "./app-servers"
servers = 3
}
- Label (`app-servers`): Local name to reference the child module instance.
- Block body: Defines behavior using arguments.
  - `source` (required): Specifies the location of the module. Can be local or remote.
  - Other arguments: Input variables defined in the child module.
⚠️ Note: The Terraform style guide recommends underscores for identifiers, but hyphens are common for module names, especially shared/public ones.
Module Sources
- Local directory: Path to a folder with the module’s config.
- Remote sources: GitHub, Terraform Registry, or private registries.
- Multiple uses: The same `source` can be used in multiple module blocks to create separate instances with different variable values.
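For example, the same local module source could be instantiated twice with different inputs. The module names, path, and `servers` input below are hypothetical:

```hcl
# Two instances of one module source; names and inputs are illustrative
module "app_servers_dev" {
  source  = "./app-servers"
  servers = 2
}

module "app_servers_prod" {
  source  = "./app-servers"
  servers = 6
}
```

Each block creates an independent set of resources from the same module code.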
Whenever you add, remove, or update module blocks, rerun:
terraform init
- By default, this does not upgrade existing modules.
- To upgrade:
terraform init -upgrade
Versioning
- Applies only to registry-based modules (e.g., Terraform Registry).
- Use the `version` argument to set constraints.
Example:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "4.1.2"
}
- Ensures consistency and avoids unexpected updates.
- Local path modules do not support versioning.
Meta-Arguments for Modules
Terraform supports optional meta-arguments for finer control:
- `count` – Create multiple instances of a module with a single block.
- `for_each` – Similar to `count` but provides more flexibility for mapping resources.
- `providers` – Pass specific provider configs to a child module. Defaults to the parent's provider if not defined.
- `depends_on` – Define explicit dependencies between a module and other resources.
Outputs from Modules
Child modules can expose specific values to the calling module using outputs.
Example:
If app-servers defines an output lb_dns, the parent module can access it as:
module.app-servers.lb_dns
This allows controlled sharing of data between modules.
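A sketch of how that output might be wired up: the child module declares it in its own `outputs.tf`, and the parent consumes it. The load balancer and DNS resources below are hypothetical, assumed only for illustration:

```hcl
# Child module (app-servers/outputs.tf) — exposes the load balancer DNS name
output "lb_dns" {
  description = "DNS name of the load balancer"
  value       = aws_lb.app.dns_name # assumes an aws_lb resource named "app"
}

# Parent module — consumes the child's output, e.g. in a DNS record
resource "aws_route53_record" "app" {
  zone_id = var.zone_id # assumed variable
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [module.app-servers.lb_dns]
}
```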
Refactoring & Resource Movement
When moving resources between modules:
- Terraform treats them as new resources (old ones destroyed, new ones created).
- To avoid disruptions, use the `moved` block:
moved {
from = aws_instance.old
to = module.new.aws_instance.new
}
This remaps addresses so Terraform recognizes resources as moved rather than replaced.
Replacing Resources
Sometimes, a resource inside a module must be replaced (e.g., hardware issues).
Use the -replace option during planning:
terraform plan -replace=module.app-servers.aws_instance[0]
- Replacement is explicit.
- Cannot replace all instances at once; must be specified individually.
Removing Modules
- To remove: delete the module block from the config.
- By default, Terraform will destroy resources managed by that module.
- To keep resources intact but remove them from state:
removed {
from = module.app-servers
}
This detaches the resources from Terraform management without destroying them.
Key Takeaways
- Every Terraform project has a root module.
- Module blocks are used to call reusable child modules.
- Always define a `source` when calling modules.
- Use version constraints for registry-based modules to ensure stability.
- Meta-arguments like `count`, `for_each`, and `depends_on` give flexibility in module usage.
- Use outputs to pass values between modules.
- Use `moved` blocks to refactor without destroying resources.
- Use `-replace` for controlled replacements.
- Use `removed` blocks to detach resources without deletion.
Terraform Module Source Argument
The source argument in a Terraform module block specifies where Terraform should find the source code for the desired child module.
It is evaluated during the terraform init step, where Terraform downloads or references the module code so that it can be used in subsequent commands like plan and apply.
You can reuse the same source address in multiple module blocks, each with its own input variables, allowing you to create different instances of infrastructure from a single reusable module.
Why the source Argument Matters
- Defines location: Points Terraform to where module code lives (local path, registry, GitHub repo, S3, etc.).
- Reusability: The same module can be used across multiple environments (e.g., `dev` vs `prod`) with different variable inputs.
- Versioning: Many sources support explicit versioning (`ref` for Git, `version` for registry).
- Automation: Ensures teams consistently reference the same module codebase across deployments.
Supported Module Sources
1. Local Paths
- Begin with `./` or `../`.
- No download step required — Terraform immediately uses the latest content on disk.
- Example:
module "network" {
source = "./modules/vpc"
}
2. Terraform Registry
- Primary way to share/distribute modules.
- Format: `NAMESPACE/NAME/PROVIDER`
- Supports versioning with the `version` argument.
- Example:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.13.0"
}
- Namespace: organization or user (e.g., `terraform-aws-modules`).
- Name: module type (e.g., `vpc`).
- Provider: target platform (e.g., `aws`).
3. GitHub & Git Repositories
- Both HTTPS and SSH protocols supported.
- Specify a branch, tag, or commit with `?ref=`.
module "vpc" {
source = "github.com/your-org/vpc-module.git?ref=v1.0.0"
}
- Works with Bitbucket and generic Git repos as well.
- For Mercurial repositories, prefix with `hg::`.
4. Archives via HTTP(S)
- Supports direct `.zip` archives over HTTPS.
module "storage" {
source = "https://example.com/modules/storage.zip"
}
5. Amazon S3
- Prefix with `s3::`.
- Module must be in `.zip` format.
- Terraform uses AWS credentials for authentication.
module "logging" {
source = "s3::https://my-bucket.s3.amazonaws.com/logging-module.zip"
}
6. Google Cloud Storage
- Prefix with `gcs::`.
- Also requires `.zip` format.
- Uses Google Cloud SDK credentials.
module "compute" {
source = "gcs::https://storage.googleapis.com/my-bucket/compute-module.zip"
}
7. Subdirectories
- Use double slashes `//` to target a subdirectory inside a repo or archive.
module "frontend" {
source = "github.com/your-org/infra-modules.git//frontend?ref=v1.2.0"
}
Real-World Example: Multi-Environment Deployment
A single module can be reused to deploy development and production VPCs with different configurations.
module "vpc-development" {
source = "github.com/your-org/vpc-module.git?ref=v1.0.0"
vpc_name = "dev_vpc"
cidr_block = "10.0.0.0/16"
public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
private_subnets = ["10.0.3.0/24", "10.0.4.0/24"]
environment = "development"
}
module "vpc-production" {
source = "github.com/your-org/vpc-module.git?ref=v1.0.0"
vpc_name = "prod_vpc"
cidr_block = "10.1.0.0/16"
public_subnets = ["10.1.1.0/24", "10.1.2.0/24"]
private_subnets = ["10.1.3.0/24", "10.1.4.0/24"]
environment = "production"
}
Key Notes:
- Both use the same GitHub module (`vpc-module`).
- Version is locked at `v1.0.0`.
- Different CIDR blocks and subnets ensure no IP conflicts.
- The `environment` variable helps with tagging, logging, or conditional logic inside the module.
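Inside the module, such an `environment` input might drive tags or conditional settings. This is a hedged sketch of what the (hypothetical) vpc-module could do internally, not its actual implementation:

```hcl
# Illustrative use of var.environment inside a module
resource "aws_vpc" "this" {
  cidr_block = var.cidr_block

  # Enable DNS hostnames only in production (illustrative condition)
  enable_dns_hostnames = var.environment == "production"

  tags = {
    Name        = var.vpc_name
    Environment = var.environment
  }
}
```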
Best Practices
- Keep main module code in the root directory of a repo for easy consumption.
- Lock module versions (`version` or `ref`) to prevent breaking changes.
- Namespace modules properly when publishing (organization or team-based).
Terraform Module Meta-Arguments
Terraform module meta-arguments provide additional control over module behavior, enabling scalability, dependency management, and configuration flexibility. Common meta-arguments include providers, depends_on, count, and for_each.
Providers
- Defines which provider configurations from the parent module should be passed into a child module.
- By default, child modules inherit the parent’s default provider.
- Use cases:
- Managing resources across multiple regions/accounts.
- Assigning different provider configs to different child modules.
- Example:
module "example" {
source = "./example-module"
providers = {
aws = aws.us_west
}
}
depends_on
- Ensures explicit dependencies between modules/resources when Terraform cannot infer them automatically.
- Introduced in Terraform 0.13.
- Must be a list of specific resources or module references.
- Best practice: use only when necessary to avoid overly cautious plans.
- Example:
module "example" {
source = "./example-module"
depends_on = [module.other]
}
count
- Creates multiple instances of a module from a single configuration.
- Uses `count.index` to assign unique values per instance.
- Best for identical/near-identical instances.
- Cannot be used with `for_each` in the same block.
- Example:
module "ec2-instances" {
count = 3
source = "./ec2-instance-module"
instance_type = "t2.micro"
ami = "ami-0c55b159cbfafe1f0"
availability_zone = var.availability_zones[count.index]
}
variable "availability_zones" {
type = list(string)
default = ["us-west-1a", "us-west-1b", "us-west-1c"]
}
for_each
- Iterates over a map or set of strings to create module instances with distinct configurations.
- Introduced for modules in Terraform 0.13.
- Provides more flexibility than `count`.
- Cannot be used with `count` in the same block.
- Example:
variable "environments" {
type = map(string)
default = {
dev = "10.0.0.0/16"
prod = "10.1.0.0/16"
}
}
module "vpc" {
for_each = var.environments
source = "./modules/vpc"
cidr_block = each.value
environment = each.key
}
Key Takeaways
- providers: Control provider configurations per module.
- depends_on: Explicitly manage module/resource dependencies.
- count: Scale modules with identical configurations.
- for_each: Scale modules with distinct configurations using maps/sets.
Terraform Standard Module Structure
Terraform recommends a standard file and directory layout for creating reusable modules. While only the root module is strictly required, following the standard structure ensures better documentation, automation, and ease of use, especially when modules are shared publicly or across teams.
Root Module (Required)
The root module is the entry point for Terraform. It must reside in the root directory of the repository. Typical files:
- `main.tf` → Primary entry point where all resources are defined.
- `variables.tf` → Contains input variable declarations. Provide descriptions for clarity and documentation.
- `outputs.tf` → Contains output declarations, also with descriptions.
- `README.md` → Explains the module's purpose and usage. May include:
  - Overview of the infrastructure created
  - Examples of integration with other resources
  - Optional diagrams for complex environments
Tip: Terraform tooling automatically generates inputs/outputs documentation, so the README should focus on module purpose and usage examples.
License
- A `LICENSE` file is recommended for both public and private modules.
- Public modules must have one, as organizations require clear licensing terms.
Examples Directory
- An `examples/` directory should exist at the root.
- Contains usage examples showing how to consume the module.
- Each example should include:
- A README.md describing its purpose.
- Terraform code blocks demonstrating usage.
- Use source URLs (not relative paths) in module calls within examples, since these are often copied into external repositories.
Nested Modules
- Located in a `modules/` subdirectory.
- Help break down complex infrastructure into smaller, reusable pieces.
- Each nested module may include:
  - `main.tf`, `variables.tf`, `outputs.tf`, `README.md`
- Convention:
- Nested module with a README → intended for external use.
- Without a README → intended for internal use.
Best Practices
- Reference nested modules with relative paths so they are treated as part of the same package.
- Avoid deep nesting chains (module calls another, which calls another, etc.).
- Makes maintenance, customization, and troubleshooting harder.
- Keep modules more independent for better usability.
Extended Structures
Some repositories include extra directories depending on organizational needs:
- `environments/` → For environment-specific variables/settings (e.g., `dev`, `staging`, `prod`).
- `scripts/` → Utility scripts for testing or automation.
- `test/` → Integration or unit test configurations.
Minimal Module Structure (Recommended)
root-module/
├── main.tf
├── variables.tf
├── outputs.tf
├── README.md
└── LICENSE
Complete Module Structure (Example)
root-module/
├── main.tf
├── variables.tf
├── outputs.tf
├── README.md
├── LICENSE
├── modules/
│ ├── vpc/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ ├── outputs.tf
│ │ └── README.md
│ └── iam/
│ ├── main.tf
│ ├── variables.tf
│ ├── outputs.tf
│ └── README.md
└── examples/
├── simple/
│ ├── main.tf
│ └── README.md
└── production/
├── main.tf
└── README.md
Key Takeaways
- Root module is the only required component.
- Minimal module structure: `main.tf`, `variables.tf`, `outputs.tf`, `README.md`.
- Nested modules go in `modules/`; avoid deep nesting.
- Examples should be in `examples/` with proper READMEs and external-friendly source references.
- Add a LICENSE for clarity and compliance.
Providers Within Terraform Modules
This section explains how providers are defined, managed, and passed within Terraform modules. Providers are essential in Terraform because they manage all operations on resources (create, update, destroy, refresh). Every resource must be tied to a provider configuration, and these configurations are generally defined in the root module.
Key Concepts
1. Provider Configurations in Root Module
- Provider blocks should only be defined in the root module.
- Child modules must not include their own provider blocks, as this makes them harder to reuse and manage.
- Example root module with two AWS providers:
# root module
provider "aws" {
region = "us-west-2"
alias = "west"
}
provider "aws" {
region = "us-east-1"
alias = "east"
}
Here, two provider configurations (aws.west and aws.east) are defined.
2. Passing Providers to Child Modules
- Providers are passed implicitly (inherited) or explicitly using the `providers` argument inside the module block.
# call child modules
module "west_region" {
source = "./path-to-module"
providers = { aws = aws.west }
}
module "east_region" {
source = "./path-to-module"
providers = { aws = aws.east }
}
This ensures each child module gets the correct provider configuration.
3. Child Module Provider Declaration
- Child modules declare which providers they require but do not configure them.
# child module
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0.0"
}
}
}
resource "aws_instance" "example" {
ami = "ami-12345678"
instance_type = "t2.micro"
}
- The provider configuration (e.g., AWS region) is passed from the root module.
4. Provider Version Constraints
- Each module must declare its own required_providers block.
- This lets Terraform select a single provider version compatible with all modules.
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0.0"
}
}
}
- Example: if one module requires AWS >= 5.0.0 and another requires >= 5.2.0, Terraform will choose a version compatible with both (i.e., >= 5.2.0).
5. Provider Aliases in Modules
- Use configuration_aliases inside required_providers to declare the additional (aliased) provider configurations a module works with:
# root module
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0.0"
configuration_aliases = [ aws.east ]
}
}
}
provider "aws" {
region = "us-west-1"
}
provider "aws" {
alias = "east"
region = "us-east-1"
}
- Passing alias configuration to child module:
# root module
module "my-ec2" {
source = "./modules/my-ec2"
providers = { aws = aws.east }
}
- Inside the child module:
# child module (modules/my-ec2/main.tf)
resource "aws_instance" "web_server" {
provider = aws
ami = "ami-12345678"
instance_type = "t2.micro"
tags = {
Name = "Web Server"
}
}
The child module references only aws, keeping it reusable.
6. Legacy Modules and Compatibility Issues
- Before Terraform v0.11, modules often included their own provider blocks.
- This caused issues when removing modules, as resources and providers were deleted together.
- Terraform v0.13 introduced module meta-arguments (for_each, count, depends_on) that are incompatible with modules containing internal provider blocks.
- Best practice: move all provider configurations to the root module and pass them explicitly via providers in module blocks.
Best Practices
- Define provider blocks only in the root module.
- Always use required_providers in every module to declare dependencies and version constraints.
- Use provider aliases to manage multiple configurations (e.g., multiple AWS regions).
- Avoid deep nesting of providers and keep child modules provider-agnostic.
- When upgrading from older Terraform versions, refactor modules to remove internal provider blocks.
Terraform Provider Requirements
Terraform uses providers (plugins) to manage connections with remote systems like AWS, Azure, GCP, and many others. Every Terraform module needs to specify the providers it depends on so Terraform can automatically install and manage them.
This is achieved with the required_providers block, which is placed inside the terraform block.
Defining Required Providers
Example:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
Breakdown
- aws: the local name for the provider within this module. Must be unique per module.
- source: where Terraform downloads the provider from (here: HashiCorp's public registry).
- version: the version constraint. ~> is the pessimistic constraint operator: it allows patch/minor upgrades but avoids major breaking changes (e.g., ~> 5.0 permits 5.1 but not 6.0).
When is a terraform Block Required?
While simple Terraform configurations might not need a terraform block, you should define one if:
- You want to lock provider versions to avoid unexpected breaking changes.
- You are using a custom provider source (not the hashicorp namespace on the public registry).
- You need to enforce consistency across environments by pinning versions.
Components of a Provider Requirement
1. Local Names
- Defined within a module.
- Used throughout your config when referring to the provider.
- Best practice: use the provider's preferred local name, which usually matches the resource prefix (e.g., aws for aws_instance).
2. Source Address
- A provider’s unique global identifier (where Terraform downloads it from).
- Format: HOSTNAME/NAMESPACE/TYPE
  - HOSTNAME: optional, defaults to registry.terraform.io
  - NAMESPACE: the organization/publisher (e.g., hashicorp)
  - TYPE: the platform or system name (e.g., aws)
- Examples:
  - Shorthand: hashicorp/random
  - Full: registry.terraform.io/hashicorp/random
3. Handling Local Name Conflicts
If two providers share the same preferred name:
terraform {
required_providers {
hashicorp-random = {
source = "hashicorp/random"
version = ">= 3.0"
}
myorg-random = {
source = "myorg/random"
version = ">= 1.0"
}
}
}
- Use a compound local name (namespace + type, separated by a hyphen).
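In resources, the compound local names are then selected with the provider meta-argument. A minimal sketch (the resource names are illustrative, and myorg/random is a hypothetical provider):

```hcl
# Each resource selects its provider by compound local name.
resource "random_pet" "official" {
  provider = hashicorp-random # from hashicorp/random
}

resource "random_pet" "internal" {
  provider = myorg-random # from the hypothetical myorg/random
}
```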
Provider Version Constraints
- Always specify provider versions to avoid unpredictable behavior.
- Without constraints, Terraform may select any available version.
Best practices:
- For reusable modules: specify a minimum version (>= 3.0).
- For root modules: add an upper limit as well (~> 5.0).
- Use the pessimistic constraint (~>) for safe upgrades without major breaking changes.
- Child modules should only define minimum versions; the root module enforces upper limits.
Dependency Lock File
- Terraform maintains a lock file: .terraform.lock.hcl
- It is automatically created/updated when running terraform init.
- It ensures consistent provider versions across teams/environments.
- It should be committed to version control for review and reproducibility.
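For reference, a lock file entry looks roughly like this (the version and hash values below are placeholders, not real):

```hcl
# Illustrative .terraform.lock.hcl entry
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
  hashes = [
    "h1:placeholder-hash-value=",
  ]
}
```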
Additional Considerations
- Built-in provider: Terraform includes a built-in provider with source terraform.io/builtin/terraform. You don't need to declare it, but it may appear in error messages.
- Private providers: organizations can host private providers in local directories or private registries, following the same source address structure.
- Compatibility: provider source addresses were introduced in v0.13. For backward compatibility with older versions:
  - Use lowercase provider names.
  - Ensure the local name matches the type used in the source.
✅ In Summary:
- Use the terraform block with required_providers to define dependencies.
- Pin provider versions to ensure consistency and stability.
- Handle conflicts with compound local names.
- Commit .terraform.lock.hcl to enforce version consistency.
- Follow best practices for root vs. child module version constraints.
Terraform Provider Configuration
Presented by: Joseph Khoury
Learning Objective: Configure a provider in a root module
Overview
Providers in Terraform are plugins that allow interaction with cloud platforms, SaaS products, and other APIs. Some providers require configuration such as endpoint URLs or regions before use. Provider configurations are defined in the root module of your Terraform setup.
Example: Passing Provider Configurations
provider "aws" {
region = "us-west-2"
profile = "my-profile"
}
- region → Specifies where resources will be provisioned.
- profile → References a credentials profile stored in the AWS CLI configuration (~/.aws/config), avoiding hardcoded access keys.
This aligns with best practices by leveraging short-term credentials (e.g., AWS SSO) instead of static keys.
Provider Configuration in the Root Module
- Provider configurations should be declared in the root module.
- Child modules can inherit these configurations.
- Root modules can also specify which configurations to use when calling child modules.
Meta-Arguments
- alias → Used for multiple configurations of the same provider (e.g., different regions).
- version → Deprecated in provider blocks. Instead, define constraints in required_providers.
Example: Required Providers with Aliases
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0.0"
configuration_aliases = [ aws.east ]
}
}
}
provider "aws" {
region = "us-west-1"
}
provider "aws" {
alias = "east"
region = "us-east-1"
}
Here, the default provider targets us-west-1, and the alias (east) targets us-east-1.
Passing Provider Configurations to Child Modules
Root modules can pass provider configurations when calling child modules:
# root module
module "my-ec2" {
source = "./modules/my-ec2"
providers = {
aws = aws.east
}
}
Inside the child module:
# child module (modules/my-ec2/main.tf)
resource "aws_instance" "web" {
provider = aws
ami = "ami-12345678"
instance_type = "t2.micro"
tags = {
Name = "WebServerInstance"
}
}
- The child module references aws without knowing the details of aws.east.
- This makes the module flexible and reusable.
Key Takeaways
- Provider configurations belong in the root module.
- Use aliases for multiple provider configurations.
- Pass provider configurations from root to child modules for flexibility and reusability.
- Always follow best practices: use short-term credentials, version control, and avoid hardcoding secrets.
Publishing and Refactoring Terraform Modules
Publishing Modules
Terraform modules can be published on the Terraform Registry, which provides:
- Versioning support
- Auto-generated documentation
- Parsed details (name, provider, inputs, outputs, dependencies)
- Support for submodules and examples
Requirements for Publishing
- Host on a public GitHub repository
- Naming convention: terraform-<PROVIDER>-<NAME>
- One-sentence description of the module
- Standard Terraform module structure
- Semantic version tags for releases (x.y.z, e.g., 1.0.1 or v1.0.1)
How Versioning Works
- Create and push a semantic version tag → triggers a webhook → syncs the new version to the registry.
- If sync fails → use Re-Sync Module option in the registry.
Removing Modules
- Owners can remove versions or entire modules.
- Discouraged unless there’s a critical issue.
Refactoring Modules
Refactoring improves structure without changing external behavior. As configurations grow, you may need to:
- Reorganize modules
- Rename resources
- Adjust module inputs
Moved Blocks (Terraform v1.1+)
The moved block feature makes refactoring safe by updating Terraform’s state without destroying infrastructure.
Syntax:
moved {
from = aws_instance.old
to = aws_instance.new
}
- from → original resource location
- to → new resource location
- Ensures Terraform updates the state file without re-creating resources
Refactoring Use Cases
- Renaming resources to follow conventions
- Enabling count or for_each to scale resources
- Splitting monolithic modules into multiple submodules
- Reorganizing modules for better structure
Terraform can chain multiple moved blocks for complex refactorings.
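For instance, a rename followed by adoption of count can be expressed as two chained moved blocks (the resource addresses here are illustrative):

```hcl
# Step 1: the resource was renamed.
moved {
  from = aws_instance.old
  to   = aws_instance.new
}

# Step 2: the renamed resource was later scaled with count.
moved {
  from = aws_instance.new
  to   = aws_instance.new[0]
}
```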
Best Practices
- Always use moved blocks when renaming/moving resources
- Document changes clearly in code
- Refactor in small, manageable steps
- Don’t remove moved blocks too early → keep them until all environments have applied the refactored code
Key Takeaways
- Publishing requires GitHub hosting, naming convention, and semantic versioning.
- Refactoring with moved blocks preserves infrastructure state during structural changes.
- Regular, well-documented refactoring ensures clean, scalable, and reliable IaC.
Exploring the Terraform Registry
Overview
This demo explores how to identify and research modules in the official Terraform Registry, focusing on determining their use and compatibility for infrastructure needs. It also introduces Terraform’s built-in functions from the official documentation.
Accessing the Terraform Registry
- Navigate to registry.terraform.io.
- Options available:
- Browse Providers
- Browse Modules
- Browse Policy Libraries
- Browse Run Tasks
Since the focus is on modules, select Browse Modules.
Filtering Modules
- On the left pane, filter by Provider (e.g., select AWS).
- Common namespace: terraform-aws-modules (e.g., iam, vpc).
Exploring a Module (Example: VPC)
Module Page Contents
- Description: Explains purpose and architecture.
- Version Selector: Defaults to the latest stable version, but older versions are available.
- Published Details: Publisher name, published date, and repository link.
- Example Usage: Code snippets to integrate the module into configurations.
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.13.0"
}
- Best Practice: Always specify a version to avoid unexpected changes when modules update.
Module Documentation Tabs
- Readme – Overview, architecture, and usage notes.
- Inputs – Optional and required arguments (e.g., 230 supported input arguments for VPC).
- Each includes name, description, type, and default (if any).
- Outputs – Values returned after provisioning (e.g., VPC ID, availability zones).
- Example: 111 outputs for the VPC module.
- Dependency – Lists required modules or providers (e.g., AWS provider).
- Resources – Lists AWS resources created (e.g., VPC, Subnets).
Terraform Documentation Reference
Navigate to developer.hashicorp.com → Terraform → Configuration Language.
- Expand Functions for a categorized list:
- Numeric
- String
- Collection
- IP Network
- And more.
These functions can be used in any configuration, not just registry modules.
Example Functions
- Collection Function: slice
- Extracts portions of lists.
- IP Network Function: cidrsubnet
- Calculates subnet addresses.
These functions are useful when defining or manipulating data within Terraform configurations.
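As a rough sketch of how these two functions behave (the values assume a 10.0.0.0/16 base CIDR and example AZ names):

```hcl
# slice(list, start, end) returns elements from index start up to
# (but not including) end:
#   slice(["a", "b", "c", "d"], 0, 2)  => ["a", "b"]
#
# cidrsubnet(prefix, newbits, netnum) carves subnet number netnum out of
# prefix, extending the mask by newbits:
#   cidrsubnet("10.0.0.0/16", 8, 0)    => "10.0.0.0/24"
#   cidrsubnet("10.0.0.0/16", 8, 100)  => "10.0.100.0/24"
locals {
  first_two_azs = slice(["us-east-1a", "us-east-1b", "us-east-1c"], 0, 2)
  private_cidrs = [cidrsubnet("10.0.0.0/16", 8, 0), cidrsubnet("10.0.0.0/16", 8, 1)]
}
```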
Key Takeaways
- Use Terraform Registry to find reusable, community-vetted modules.
- Always check:
- Inputs (arguments)
- Outputs (values returned)
- Dependencies (modules/providers required)
- Resources (what gets provisioned)
- Follow best practices:
- Always pin module versions.
- Review module documentation thoroughly before use.
- Leverage Terraform functions to manipulate and manage configuration data efficiently.
Configuring Terraform Modules in HCL
This video demonstrates how to incorporate Terraform Registry modules into an HCL configuration, including setting required variables and referencing module outputs.
Key Steps
1. Define Terraform Requirements (terraform.tf)
- Specify provider and version requirements.
- Example: use the hashicorp/aws provider (~> 5.0) and Terraform version >= 1.9.0.
2. Configure AWS Provider (main.tf)
- Define the provider block with:
  - region (from var.aws_region)
  - default_tags for consistent tagging
3. Add the VPC Module
- Source: terraform-aws-modules/vpc/aws (version 5.13.0).
- Set key inputs:
  - name → from var.vpc_name
  - cidr → from var.vpc_cidr
  - azs → generated with built-in functions: slice(tolist(local.az_names), 0, min(3, length(local.az_names)))
- Configure subnets using cidrsubnet:
  - Private subnets: 10.0.0.0/24, 10.0.1.0/24
  - Public subnets: 10.0.100.0/24, 10.0.101.0/24
- Enable NAT Gateway for private subnet Internet access.
- Apply tags and configure public subnet settings.
4. Add the EC2 Instances Module
- Source: Terraform Registry EC2 instance module.
- Key inputs:
  - count = 2 → launch two instances
  - name = "web_server-${count.index + 1}"
  - ami and instance_type from variables
  - Security group and subnet IDs reference VPC module outputs: module.vpc.default_security_group_id, module.vpc.public_subnets[0]
5. Use Data Sources and Locals
- Data source: aws_availability_zones → queries available AZs.
- Locals block: local.az_names = data.aws_availability_zones.available.names (shorthand for repeated references).
6. Define Variables (variables.tf)
- Centralize configuration values.
- Example: vpc_cidr = "10.0.0.0/16"
7. Define Outputs (outputs.tf)
- Expose useful information after terraform apply.
- Example: VPC ID, subnet IDs, or instance details.
Summary
By combining provider definitions, Terraform Registry modules, variables, and outputs, this configuration builds a modular, reusable AWS infrastructure. Modules simplify provisioning by encapsulating best practices while still allowing customization through variables and outputs.
Deploying Infrastructure in AWS with Terraform Registry Modules
This section demonstrates deploying AWS infrastructure using Terraform Registry modules and the standard workflow of terraform init, terraform plan, and terraform apply.
Configuration Files
1. terraform.tf
Defines the Terraform settings:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
required_version = ">= 1.9.0"
}
- Ensures an AWS provider version in the 5.x series is used.
- Requires Terraform CLI >= 1.9.0.
2. main.tf
Contains provider, modules, and supporting blocks.
Provider block:
provider "aws" {
region = var.aws_region
default_tags {
tags = {
Environment = var.environment
Terraform = "true"
}
}
}
VPC Module (from Terraform Registry):
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.13.0"
name = var.vpc_name
cidr = var.vpc_cidr
azs = slice(tolist(local.az_names), 0, min(3, length(local.az_names)))
private_subnets = [
cidrsubnet(var.vpc_cidr, 8, 0),
cidrsubnet(var.vpc_cidr, 8, 1)
]
public_subnets = [
cidrsubnet(var.vpc_cidr, 8, 100),
cidrsubnet(var.vpc_cidr, 8, 101)
]
enable_nat_gateway = true
map_public_ip_on_launch = true
tags = {
Name = var.vpc_name
}
public_subnet_tags = {
Name = "${var.vpc_name}-public-subnet"
}
private_subnet_tags = {
Name = "${var.vpc_name}-private-subnet"
}
}
- Automates subnetting and NAT gateway creation.
- Ensures best practices for high availability.
EC2 Instances Module (from Terraform Registry):
module "ec2-instances" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "5.5.0"
count = 2
name = "web_server-${count.index + 1}"
ami = var.ami_us-east-1_linux2023
instance_type = var.instance_type
vpc_security_group_ids = [module.vpc.default_security_group_id]
subnet_id = module.vpc.public_subnets[0]
tags = {
Name = "web_server-${count.index + 1}"
}
}
- Launches two EC2 instances in the VPC’s first public subnet.
- Uses outputs from the VPC module for networking.
Supporting Blocks:
data "aws_availability_zones" "available" {}
locals {
az_names = data.aws_availability_zones.available.names
}
3. variables.tf
Defines reusable inputs (e.g., aws_region, vpc_cidr, vpc_name, instance_type, ami).
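A sketch of what this variables.tf might contain (the defaults are assumptions, not taken from the demo):

```hcl
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "vpc_name" {
  description = "Name tag for the VPC"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "instance_type" {
  description = "EC2 instance type for the web servers"
  type        = string
  default     = "t2.micro"
}
```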
4. outputs.tf
Exposes important values after deployment:
output "vpc_id" {
value = module.vpc.vpc_id
}
output "public_subnet_ids" {
value = module.vpc.public_subnets
}
output "nat_gateway_public_ips" {
value = module.vpc.nat_public_ips
}
output "ec2_instance_public_ips" {
value = module.ec2-instances.public_ip
}
Workflow: init → plan → apply
- Initialize modules & providers
terraform init
- Downloads providers into .terraform/providers/.
- Downloads modules into .terraform/modules/.
- Generates modules.json to track module metadata.
- Preview changes
terraform plan
- Shows an execution plan.
- Example: 25 resources to add.
- Apply configuration
terraform apply
- Provisions AWS infrastructure.
- At completion, displays the values from outputs.tf.
- Destroy resources (when done)
terraform destroy
- Cleans up resources to avoid costs.
Key Takeaways
- Terraform Registry modules encapsulate best practices and simplify resource provisioning.
- Outputs help integrate module results into future workflows.
- The standard Terraform workflow (init → plan → apply → destroy) ensures safe, predictable infrastructure deployment.
Creating a Local Terraform Module for Static Website Hosting with S3
This section demonstrates how to create a local Terraform module to manage AWS S3 buckets for hosting static websites.
Root Module (webapp)
Files
- main.tf → calls the local module
- variables.tf → defines aws_region and environment
- outputs.tf → exposes website-related outputs
- README.md & LICENSE
Example: Calling the Module
module "website-s3-bucket" {
source = "./modules/s3-static-website"
bucket_name = "jk-website-bucket" # must be globally unique
destroy_bucket = true
index_suffix = "index.html"
error_key = "error.html"
bucket_tags = {
Owner = "development team"
}
}
Root Module Outputs
output "website_url" {
value = "http://${module.website-s3-bucket.website_endpoint}"
}
Other outputs include:
- website_bucket_arn
- website_bucket_name
- website_bucket_domain
- website_bucket_tags
Child Module (modules/s3-static-website)
Files
- main.tf → resources for the S3 bucket + website hosting
- variables.tf → defines required/optional inputs
- outputs.tf → shares bucket and website details
- README.md, LICENSE
- www/ → contains index.html and error.html
Key Resources in main.tf
- S3 Bucket
resource "aws_s3_bucket" "s3_bucket" {
bucket = var.bucket_name
force_destroy = var.destroy_bucket
tags = var.bucket_tags
}
- Website Configuration
resource "aws_s3_bucket_website_configuration" "s3_bucket" {
bucket = aws_s3_bucket.s3_bucket.id
index_document { suffix = var.index_suffix }
error_document { key = var.error_key }
}
- S3 Objects (website files)
- index.html
- error.html
- Each includes: bucket, key, source, content_type = "text/html", and etag = filemd5(...) (tracks file changes)
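Following the pattern of the sibling resources, the index object might be declared roughly like this (a sketch; the file path and resource name are assumptions):

```hcl
resource "aws_s3_object" "index" {
  bucket       = aws_s3_bucket.s3_bucket.id
  key          = var.index_suffix
  source       = "${path.module}/www/index.html"
  content_type = "text/html"
  etag         = filemd5("${path.module}/www/index.html") # re-upload on change
}
```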
- Public Access Block
resource "aws_s3_bucket_public_access_block" "public_access_block" {
bucket = aws_s3_bucket.s3_bucket.id
block_public_acls = true
block_public_policy = false
ignore_public_acls = true
restrict_public_buckets = false
}
- Bucket Policy (public read access)
resource "aws_s3_bucket_policy" "s3_bucket" {
bucket = aws_s3_bucket.s3_bucket.id
policy = jsonencode({
Statement = [{
Sid = "PublicReadGetObject"
Effect = "Allow"
Principal = "*"
Action = "s3:GetObject"
Resource = "${aws_s3_bucket.s3_bucket.arn}/*"
}]
})
}
Child Module Outputs
- bucket_arn
- bucket_name
- bucket_domain
- bucket_tags
- website_endpoint → used in the root module
Workflow: init → plan → apply → test
- Initialize
terraform init
- Prepares providers and local modules.
- Plan
terraform plan
- Shows 6 resources to be created.
- Apply
terraform apply
- Provisions the bucket, objects, and policy.
- Outputs the full website URL.
- Test the website
- Visit http://<website_endpoint>
- Displays the homepage (index.html)
- Clean up
terraform destroy
- Removes all resources.
Key Takeaways
- Local modules improve reusability and organization.
- bucket_name is required; the other inputs are optional.
- Static website hosting requires:
  - S3 bucket
  - Website configuration
  - Public read policy
  - Uploaded content (index + error pages)
- Outputs simplify referencing website details across modules.
Customizing Modules Using Attributes
In this video, you’ll learn how to refactor a module by incorporating an object to define the module’s attributes.
Traditionally, multiple input variables are defined separately (strings, maps, booleans, etc.). By switching to a single object variable, related attributes can be grouped together, simplifying the module interface and making it more modular.
Key Points
- Object variables allow grouping multiple attributes into a single structure.
- Individual attributes can be defined as optional with default values.
- The optional() modifier:
  - First argument = attribute type (e.g., string, bool, map(string)).
  - Second argument = default value (if provided).
- Required attributes (e.g., bucket_name) remain mandatory.
- Referencing object attributes uses dot notation:
  - Example: var.config.bucket_name, var.config.destroy_bucket.
Refactoring Steps
- Replace multiple variables with one object variable in variables.tf:
variable "config" {
description = "Values for the configuration of a static website"
type = object({
bucket_name = string
destroy_bucket = optional(bool, false)
index_suffix = optional(string, "index.html")
error_key = optional(string, "error.html")
bucket_tags = optional(map(string), {})
})
}
- Update references in the child module's main.tf:
bucket        = var.config.bucket_name
force_destroy = var.config.destroy_bucket
tags          = var.config.bucket_tags
suffix        = var.config.index_suffix
key           = var.config.error_key
- Update the root module to pass the config object:
config = {
bucket_name = "jk-website-bucket"
destroy_bucket = true
index_suffix = "index.html"
error_key = "error.html"
bucket_tags = {
Owner = "development team"
}
}
Validation
- Run:
terraform init
terraform plan
terraform apply
- The module provisions 6 resources successfully, and the website deploys as expected.
✅ This approach improves modularity, flexibility, and maintainability by consolidating related variables into a structured object.
Setting up an HCP Account
In this video, you’ll learn how to create a HashiCorp Cloud Platform (HCP) Terraform account and prepare the platform for managing configurations.
Key Steps
1. Authenticate with AWS
- Sign in to the AWS Identity Center access portal.
- In Visual Studio Code, run:
aws sso login
- Confirm and allow access in the browser.
- From the AWS account portal, retrieve the access keys (Access Key ID, Secret Access Key, and Session Token); these are required later for HCP Terraform.
2. Create an HCP Terraform Account
- In the browser, navigate to app.terraform.io.
- Select Free account and provide a username, email, and password.
- Confirm the email to activate the account.
3. Create an Organization
- After logging in, navigate to Organizations → Create organization.
- Enter a name (e.g., jk-demo-org) and email.
- Select Create organization.
4. Create a Workspace
- From the Create a new Workspace page, choose CLI-Driven Workflow.
- Name the workspace (e.g., webapp) and create it.
- The workspace provides a cloud block code snippet to add to your Terraform configuration.
Example:
terraform {
cloud {
organization = "jk-demo-org"
workspaces {
name = "webapp"
}
}
}
5. Configure Variable Sets
- In the organization settings, select Variable sets → Create variable set.
- Name it (e.g., AWS Environment Variables).
- Apply it globally.
- Add three environment variables (mark as Sensitive and Environment variable):
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
  - AWS_SESSION_TOKEN
6. Handle Dependencies in Code
- In main.tf, use depends_on to ensure AWS resources (like S3 public access blocks) are fully propagated before applying policies.
- This avoids false permission errors when running Terraform remotely via HCP.
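A minimal sketch of that dependency, reusing the resource names from the earlier S3 module (local.public_read_policy is a hypothetical local holding the policy JSON):

```hcl
resource "aws_s3_bucket_policy" "s3_bucket" {
  bucket = aws_s3_bucket.s3_bucket.id
  policy = local.public_read_policy # hypothetical local with the policy body

  # Ensure the public access block settings propagate before the policy applies.
  depends_on = [aws_s3_bucket_public_access_block.public_access_block]
}
```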
7. Authenticate with Terraform Cloud
- In the terminal, run:
terraform login
- Generate a user token in the HCP Terraform UI, copy it, and paste it into the terminal.
8. Initialize, Plan and Apply
- Run:
terraform init
terraform plan
terraform apply
- Terraform initializes with HCP as the backend.
- Remote runs are triggered in the workspace, with logs streaming to the terminal.
- After
apply, the static website deploys successfully.
9. Review Runs and State
- In HCP Terraform, under the workspace:
- Runs: shows history of all Terraform runs.
- States: displays JSON state representation of the infrastructure.
10. Cleanup
- When done, destroy the resources with:
terraform destroy
Why it matters
By setting up HCP Terraform with an AWS backend, you enable:
- Centralized state management (avoiding local file conflicts).
- Remote execution for collaboration and reliability.
- Secure handling of credentials via variable sets.
- Improved workflow control using organizations and workspaces.
Configuring an OAuth Connection for GitHub in HCP Terraform
This guide explains how to establish an OAuth connection between an HCP Terraform organization and a GitHub account. Doing so provides secure access and seamless integration for Terraform workflows that rely on repositories hosted in GitHub.
Prerequisites
- A GitHub account (new or existing).
- Admin access to both:
- The GitHub account/organization.
- The HCP Terraform organization where integration will occur.
- Multi-factor authentication enabled on GitHub (recommended).
Step 1: Prepare GitHub Account
- Go to github.com and create/sign in to an account.
- Complete CAPTCHA verification and confirm email.
- (Optional) Set up multi-factor authentication for added security.
Step 2: Navigate HCP Terraform
- Sign in to HCP Terraform.
- From the Organizations page, select your organization.
- In the left navigation menu, go to:
- Settings → Version Control → Providers.
- Select Add a VCS provider.
Step 3: Add GitHub as a Provider
- On the Add VCS Provider page, choose:
- GitHub → GitHub.com (Custom).
- Leave this page open — you will return here to enter credentials from GitHub.
Step 4: Register an OAuth App in GitHub
- Click Register a new OAuth Application (link provided by Terraform).
- On GitHub, the Register a new OAuth app page opens with pre-populated settings. Required fields include:
- Application name (e.g., HCP Terraform Integration).
- Homepage URL (from HCP Terraform).
- Authorization callback URL (from HCP Terraform).
- Click Register application.
Step 5: Customize OAuth App (Optional)
- Upload an HCP Terraform logo to help identify requests from this integration.
- Save changes and confirm.
Step 6: Generate Credentials
- On the OAuth app settings page, locate:
- Client ID → copy this value.
- Client Secret → click Generate new client secret and copy it. ⚠️ Note: you cannot view this secret again later.
Step 7: Enter Credentials in HCP Terraform
- Return to the Add VCS Provider page in HCP Terraform.
- Paste:
- Client ID into the Client ID field.
- Client Secret into the Client Secret field.
- Click Connect and continue.
Step 8: Authorize the Connection
- On GitHub, approve the authorization request for HCP Terraform.
- Confirm access permissions such as:
- Repositories.
- Webhooks.
- User data (if required).
Step 9: Finalize Setup
- Back in HCP Terraform, move to the Advanced Settings step.
- (Optional) Configure scopes like workspaces, modules, SSH keys, or policies.
- For this demo, skip advanced settings and click Finish.
✅ Result
You have successfully established an OAuth connection between your HCP Terraform organization and GitHub account. This enables Terraform to securely pull source code, track repositories, and automate workflows with GitHub as your VCS provider.
Additional Context
- OAuth vs. Personal Access Tokens (PAT): OAuth provides a more secure, revocable, and organization-wide method compared to user-scoped PATs.
- Best Practices:
- Restrict OAuth scopes to the minimum required.
- Rotate and regenerate credentials if you suspect compromise.
- Use separate GitHub organizations for production and test environments.
- Next Steps:
- Connect Terraform workspaces to specific repositories.
- Define policies for repository access control.
- Enable CI/CD pipelines with GitHub Actions for automated Terraform runs.
Managing Private Modules from Terraform Registry in HCP Terraform
This guide walks through importing and using private modules from the Terraform Registry to provision AWS resources in an HCP Terraform environment.
Prerequisites
- An HCP Terraform account with an existing organization.
- AWS credentials stored in a variable set.
- A GitHub account connected to HCP Terraform via OAuth.
Step 1: Update AWS Access Keys
- In HCP Terraform, go to Settings → Variable sets.
- Select your AWS environment variable set.
- Edit and update the AWS access key and secret if expired.
- Save the variable changes.
Step 2: Create a Private Module Repository
- In GitHub, create a repository following the format terraform-<PROVIDER>-<NAME> (e.g., terraform-aws-s3-static-website).
- Upload your module code (e.g., S3 static website files).
- Commit with the message initial commit.
- Create a release with a tag (e.g., v1.0.0).
- HCP Terraform requires a tagged release for publishing.
Step 3: Publish the Module to Private Registry
- In HCP Terraform, go to Registry → Publish → Module.
- Select GitHub.com (Custom) as the VCS provider.
- Choose the module repo (terraform-aws-s3-static-website).
- Confirm and publish the module.
- Review module details including:
- Usage instructions
- Inputs and Outputs
- Resources to be provisioned
Step 4: Create an Application Repository
- In GitHub, create another repo (e.g., s3-webapp-root).
- This contains the Terraform configuration that consumes the module.
- Upload configuration files.
- Commit with the message initial commit.
Step 5: Create a Workspace in HCP Terraform
- In HCP Terraform, go to Workspaces → New → Workspace.
- Select Version Control Workflow.
- Connect to GitHub.com (Custom) and choose the s3-webapp-root repo.
- Name the workspace and create it.
Step 6: Configure Workspace Variables
- HCP Terraform will detect required input variables.
- For example, provide a unique S3 bucket name: myinitials-website-bucket
- Save the variables.
Step 7: Run the Plan and Apply
- Start a new run.
- Review the plan summary (e.g., 6 resources to be created).
- Confirm and apply.
- Once applied, Terraform outputs include the website URL.
- Open the URL to verify the S3 static website is deployed.
Step 8: Destroy the Resources
- From the workspace, go to Settings → Destruction and Deletion.
- Select Queue destroy plan.
- Enter the workspace name to confirm.
- Confirm and apply the destroy plan.
- Terraform deletes all previously created resources.
✅ Result
You have successfully:
- Created and published a private module in HCP Terraform.
- Provisioned AWS resources using the module.
- Cleaned up by destroying the resources after testing.
Additional Notes
- The module naming convention is critical: terraform-<PROVIDER>-<NAME>.
- Tagged releases (e.g., v1.0.0) are required for publishing modules.
- Private modules streamline reusability and standardization across teams.
- Best practice: use separate repos for modules and applications consuming them.
Adding Public Providers and Modules in HCP Terraform Registry
This session demonstrates how to curate public providers and modules into an organization's private HCP Terraform Registry. This capability allows organizations to reuse community-maintained Terraform code securely while still enforcing internal governance.
Navigating to Registry
- Sign in to HCP Terraform and go to the Organizations page.
- Select your organization.
- From the left-hand navigation menu, choose Registry.
The Registry provides options to search the public registry for both modules and providers.
Adding a Public Module (Example: AWS VPC Module)
- From the Registry, select Search public registry.
- In the search field, type
vpc. - Under Modules, select
terraform-aws-modules/vpc.
Module Details Page
- Shows the module name, public badge, description, and publishing organization.
- Includes version selector (defaulting to the latest stable version).
- Displays the repository link, which follows the terraform-<PROVIDER>-<NAME> naming format.
- Provides example usage code for Terraform configurations.
- Contains tabs for:
- Readme – documentation
- Inputs/Outputs – variables and return values
- Dependencies/Resources – related components
- Shows submodules and examples for quick integration.
- Right-hand panel includes:
- Add to HCP Terraform button
- Example usage snippet
- Download statistics
Adding to Private Registry
- Select Add to HCP Terraform.
- Confirm in the Add module to organization dialog.
- The module is now available in your private registry.
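Once curated, the module can be consumed through the private registry address rather than the public one. A hedged sketch, with a placeholder organization, version constraint, and inputs taken from the VPC module's documented variables:

```hcl
module "vpc" {
  # Private registry address for the curated copy of terraform-aws-modules/vpc.
  source  = "app.terraform.io/my-org/vpc/aws"
  version = "~> 5.0"

  name = "demo-vpc"
  cidr = "10.0.0.0/16"

  azs            = ["us-east-1a", "us-east-1b"]
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
}
```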
Adding a Public Provider (Example: AWS Provider)
- In the Registry, switch to the Providers tab.
- Select Search public registry.
- Search for aws and choose the AWS provider by HashiCorp.
Provider Details Page
- Shows provider name, public badge, description, and owning organization (e.g., HashiCorp).
- Version dropdown allows selecting specific versions.
- Displays publishing date (e.g., 6 days ago → actively maintained).
- Shows usage statistics (higher provisions = wider adoption).
- Includes source code repository link.
- Tabs:
- Overview – details and summary
- Documentation – official usage guides
Adding to Private Registry
- Click Add to HCP Terraform.
- Confirm in the Add public provider dialog.
- The provider is added to your private registry.
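Configurations reference the curated provider through the usual required_providers block; a minimal sketch, with the region chosen arbitrarily for illustration:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```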
Removing Modules and Providers
To maintain governance, you can remove public assets:
- Go to the Registry.
- Select the provider or module.
- Open the Manage Provider/Module dropdown.
- Choose Remove from organization.
- Confirm removal by typing remove.
After removal:
- The Registry shows only your private modules and no longer lists the removed public ones.
Key Takeaways
- Curating Public Assets: Teams can safely bring in well-maintained community modules and providers into a controlled private environment.
- Governance & Security: Central admins can remove unused or risky modules/providers.
- Efficiency: Developers reuse proven Terraform code (e.g., AWS VPC module) instead of building from scratch.
- Visibility: Metadata such as publishing dates, maintainers, and download counts help assess reliability and community adoption.
