Lab 0: Getting Started
Objective
Set up your development environment, configure AWS credentials, install required tools, and deploy a simple S3 bucket to verify your setup.
Estimated Time
3.5-4.5 hours
Prerequisites
- Personal AWS account (or AWS Academy sandbox access)
- GitHub account
- Command-line terminal access
Tasks
Part 1: Install Required Tools (30 minutes)
1.1 Install Terraform
Important: You need Terraform 1.9.0 or later for S3 native state locking support.
macOS (using Homebrew):

```shell
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
```

Linux:

```shell
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
```

Windows (using Chocolatey):

```shell
choco install terraform
```

Verify the installation (the reported version must be 1.9.0 or later):

```shell
terraform version
```

1.2 Install AWS CLI
Follow the official AWS CLI installation guide for your operating system.
Verify installation:

```shell
aws --version
```

1.3 Install Infracost
macOS/Linux:

```shell
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh
```

Windows:

```shell
choco install infracost
```

Register for an Infracost API key, then configure it:

```shell
infracost configure set api_key YOUR_API_KEY
```

1.4 Install VS Code and Extensions (Recommended)
- Download VS Code
- Install extensions:
- HashiCorp Terraform
- AWS Toolkit
Part 2: Configure AWS Credentials (30 minutes)
2.1 Create IAM User
- Log in to AWS Console
- Navigate to IAM → Users → Create User
- Create user with programmatic access
- Attach policy: AdministratorAccess (for learning purposes only)
- Save the access key ID and secret access key
2.2 Configure AWS CLI
Run:

```shell
aws configure
```

Enter:
- AWS Access Key ID
- AWS Secret Access Key
- Default region: us-east-1
- Default output format: json
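For reference, `aws configure` writes these values to two plain-text files in your home directory; if a later command fails with credential errors, these are the files to inspect (the values shown are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1
output = json
```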
Verify:

```shell
aws sts get-caller-identity
```

Part 3: Fork and Clone Repository (10 minutes)
3.1 Fork the Repository
- Navigate to https://github.com/shart-cloud/labs_terraform_course
- Click "Fork" to create your own copy
- Clone your fork:

```shell
git clone https://github.com/YOUR-USERNAME/labs_terraform_course.git
cd labs_terraform_course
```

Note: Replace YOUR-USERNAME with your actual GitHub username.
Part 4: Set Up Billing Alerts with Terraform (20 minutes)
The billing setup has been pre-configured for you in the common/billing-setup/ directory.
4.1 Navigate to Billing Setup Directory
```shell
cd common/billing-setup
```

4.2 Create Your Configuration File
Copy the example file and edit it with your information:

```shell
cp terraform.tfvars.example terraform.tfvars
```

Edit terraform.tfvars using your preferred text editor:

```hcl
student_name         = "your-github-username"
alert_email          = "your-email@example.com"
monthly_budget_limit = "20"
```

4.3 Deploy Your Billing Budget
```shell
# Initialize Terraform
terraform init

# Review what will be created
terraform plan

# Create the budget
terraform apply
```

Type yes when prompted.
4.4 Confirm Email Subscription
Important: Check your email for an SNS subscription confirmation message. Click the "Confirm subscription" link to activate budget alerts.
4.5 Verify Budget Creation
```shell
# View the outputs (including next steps)
terraform output

# Optionally verify with the AWS CLI
aws budgets describe-budgets --account-id $(aws sts get-caller-identity --query Account --output text)
```

Part 4.5: Understanding the Terraform Workflow (10 minutes)
Before writing code, understand the typical Terraform workflow:
```
Write → Init → Plan → Apply → Verify
  ↑                               ↓
  └── Modify ← Review ← Destroy ──┘
```
Core Commands:
- terraform init
  - Downloads provider plugins
  - Initializes the backend (state storage)
  - Run once per directory, or whenever providers change
- terraform fmt
  - Automatically formats your code to HCL standards
  - Run before committing code
  - Use the -check flag to verify without modifying
- terraform validate
  - Checks syntax and internal consistency
  - Doesn't check whether resources can actually be created
  - Runs offline (no API calls)
- terraform plan
  - Creates an execution plan
  - Shows what will change: + create, - destroy, ~ modify
  - Makes API calls to check current state
  - Always run before apply!
- terraform apply
  - Executes the plan
  - Creates/modifies/deletes real infrastructure
  - Asks for confirmation (type yes)
  - Updates the state file
- terraform destroy
  - Removes all infrastructure
  - Use for cleanup
  - Also asks for confirmation
- terraform output
  - Displays output values
  - Can be run anytime after infrastructure is created

Best Practice Workflow:

```shell
terraform fmt && terraform validate && terraform plan
```

Run this before every apply to catch errors early.
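To avoid retyping that chain, you could wrap it in a small shell function; a sketch, and the name `tf_check` is our own choice, not a Terraform command. Add it to your `~/.bashrc` or `~/.zshrc`:

```shell
# Convenience wrapper: runs the pre-apply checks in order and
# stops at the first failure (thanks to &&).
tf_check() {
  terraform fmt -check &&
  terraform validate &&
  terraform plan
}
```

Then run `tf_check` in any lab directory before `terraform apply`.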
Part 5: Deploy Test Infrastructure (60 minutes)
Navigate to your student work directory:

```shell
cd week-00/lab-00/student-work
```

In this section, you'll build your first Terraform configuration step by step. Instead of copying code all at once, you'll learn what each piece does and why it's needed.
5.1 Understanding HCL (HashiCorp Configuration Language)
Terraform uses HCL to describe infrastructure. HCL is declarative - you describe the desired state, not the steps to get there.
Key HCL syntax concepts:
- Blocks: Containers for configuration (e.g., resource, provider)
- Arguments: Assign values (e.g., region = "us-east-1")
- Expressions: Reference values (e.g., aws_s3_bucket.test_bucket.id)
- Comments: Use # for single-line or /* */ for multi-line
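All four concepts appear in this small illustrative snippet (the resource and values are hypothetical examples, not part of this lab):

```hcl
# A single-line comment
/* A multi-line
   comment */
resource "aws_s3_bucket" "example" { # "resource" opens a block
  bucket = "example-bucket-name"     # an argument assigning a value
}

output "example_id" {
  value = aws_s3_bucket.example.id   # an expression referencing the resource
}
```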
5.2 Step 1: Configure Terraform and AWS Provider
Every Terraform configuration needs two things:
- Terraform block: Specifies required Terraform version and providers
- Provider block: Configures the cloud platform you're using
Create main.tf and add:
```hcl
# Terraform block - defines version requirements
terraform {
  required_version = ">= 1.9.0" # Minimum version needed for S3 native locking

  required_providers {
    aws = {
      source  = "hashicorp/aws" # Where to download the AWS provider
      version = "~> 5.0"        # Use any 5.x version (but not 6.0)
    }
  }
}

# Provider block - configures AWS
provider "aws" {
  region = "us-east-1" # AWS region where resources will be created
}
```

What this does:
- The terraform block is metadata about your configuration
- required_version ensures team members use compatible Terraform versions
- required_providers tells Terraform which plugins to download
- The provider block configures authentication and default settings for AWS

Test it:

```shell
terraform init
```

You should see Terraform download the AWS provider. This creates a .terraform directory with provider plugins.
5.3 Step 2: Create Your First Resource - A Basic S3 Bucket
Now add a resource block to main.tf. Resources are the core of Terraform - they represent infrastructure objects.
Resource block syntax:

```hcl
resource "PROVIDER_TYPE" "LOCAL_NAME" {
  argument1 = value1
  argument2 = value2
}
```

Add this to your main.tf (below the provider block):

```hcl
# Resource block - creates an S3 bucket
resource "aws_s3_bucket" "test_bucket" {
  bucket = "terraform-lab-00-YOUR-GITHUB-USERNAME" # Replace with your GitHub username
}
```

Understanding this resource:
- aws_s3_bucket - The resource type (from the AWS provider)
- test_bucket - The local name used to reference this resource elsewhere
- bucket - The globally unique name for your S3 bucket
Important: S3 bucket names must be globally unique across all AWS accounts. Use your GitHub username or student ID to avoid conflicts.
Test it:

```shell
terraform fmt       # Format your code
terraform validate  # Check for syntax errors
terraform plan      # Preview what will be created
```

The plan command shows you what Terraform will do without actually doing it. You should see that it wants to create 1 resource.
Do NOT apply yet - we need to add required tags first!
5.4 Step 3: Add Required Tags
All resources in this course require specific tags for tracking and cost management. Tags are key-value pairs that help organize and identify resources.
Understanding Required Tags:
- Name: Human-readable resource identifier
- Environment: Deployment context (e.g., "Learning", "Production")
- ManagedBy: Shows the infrastructure is managed by Terraform
- Student: Your GitHub username, for resource ownership tracking
- AutoTeardown: Set to "8h" to trigger automatic cleanup after 8 hours (prevents unexpected AWS charges)
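As an aside (not required for this lab), the AWS provider supports a default_tags block that applies a set of tags to every taggable resource it creates, so you don't repeat them on each resource. A sketch:

```hcl
provider "aws" {
  region = "us-east-1"

  # Applied automatically to all taggable resources from this provider;
  # resource-level tags are merged on top of these defaults.
  default_tags {
    tags = {
      Environment  = "Learning"
      ManagedBy    = "Terraform"
      Student      = "your-github-username"
      AutoTeardown = "8h"
    }
  }
}
```

For this lab, follow the per-resource tags shown in the instructions.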
Update your aws_s3_bucket resource to include tags:
```hcl
resource "aws_s3_bucket" "test_bucket" {
  bucket = "terraform-lab-00-YOUR-GITHUB-USERNAME" # Replace with your GitHub username

  tags = {
    Name         = "Lab 0 Test Bucket"
    Environment  = "Learning"
    ManagedBy    = "Terraform"
    Student      = "your-github-username" # Replace with your GitHub username
    AutoTeardown = "8h"
  }
}
```

What's new:
- The tags argument takes a map (key-value pairs) in curly braces
- Each tag is specified as key = "value"
- Tags are metadata - they don't affect functionality, but they help with organization
💡 Pro Tip: Use Variables for Easier Configuration
Instead of manually replacing YOUR-GITHUB-USERNAME everywhere, you can use Terraform variables. Create a file called variables.tf:

```hcl
variable "student_name" {
  description = "Your GitHub username"
  type        = string
  default     = "YOUR-GITHUB-USERNAME" # Replace this once
}
```

Then create terraform.tfvars to set the value:

```hcl
student_name = "your-actual-github-username"
```

Update your main.tf to use the variable:

```hcl
resource "aws_s3_bucket" "test_bucket" {
  bucket = "terraform-lab-00-${var.student_name}" # Uses the variable

  tags = {
    Name         = "Lab 0 Test Bucket"
    Environment  = "Learning"
    ManagedBy    = "Terraform"
    Student      = var.student_name # Uses the variable
    AutoTeardown = "8h"
  }
}
```

Benefits:
- Change your username in ONE place (terraform.tfvars)
- Reuse the value across all resources
- Easier to maintain

Important: Make sure .gitignore includes *.tfvars so you don't commit it!
For this lab, using variables is OPTIONAL - you can hardcode your username if you prefer.
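If you do use the variable, you can optionally add a validation block so Terraform rejects a placeholder or S3-incompatible username at plan time. The rule below is our own suggestion, not a lab requirement:

```hcl
variable "student_name" {
  description = "Your GitHub username"
  type        = string

  validation {
    # Lowercase letters, digits, and hyphens keep the bucket name S3-legal
    condition     = can(regex("^[a-z0-9-]+$", var.student_name))
    error_message = "student_name must contain only lowercase letters, digits, and hyphens."
  }
}
```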
Test your changes:
```shell
terraform fmt
terraform validate
terraform plan
```

5.5 Step 4: Deploy Your Basic S3 Bucket
Now let's actually create the bucket:

```shell
terraform apply
```

Terraform will show you the plan again and ask for confirmation. Type yes to proceed.
What just happened:
- Terraform compared your desired state (the code) to the actual state (what exists in AWS)
- It created an execution plan
- It called AWS APIs to create the bucket
- It saved the current state to terraform.tfstate
Verify in AWS:

```shell
aws s3 ls | grep terraform-lab-00
```

Or check the AWS Console: https://s3.console.aws.amazon.com/s3/buckets
5.6 Step 5: Add Versioning (Using a Separate Resource)
In Terraform, S3 bucket features like versioning are configured as separate resources. This follows AWS best practices for fine-grained control.
Add this new resource to main.tf:

```hcl
# Enable versioning on the S3 bucket
resource "aws_s3_bucket_versioning" "test_bucket_versioning" {
  bucket = aws_s3_bucket.test_bucket.id # Reference to our bucket

  versioning_configuration {
    status = "Enabled"
  }
}
```

Understanding references:
- aws_s3_bucket.test_bucket.id references the bucket we created earlier
- This creates a dependency: Terraform knows it must create the bucket before enabling versioning
- .id is an attribute exported by the aws_s3_bucket resource

Nested blocks:
- versioning_configuration is a nested block (a block within a block)
- It groups related settings together
Apply the change:
```shell
terraform plan   # See that only versioning will be added
terraform apply
```

Notice that Terraform only modifies what changed - it doesn't recreate the bucket.
5.7 Step 6: Add Encryption
Add encryption to protect your data at rest:
```hcl
# Enable server-side encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "test_bucket_encryption" {
  bucket = aws_s3_bucket.test_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256" # AWS-managed encryption
    }
  }
}
```

Understanding nested blocks:
- This resource has multiple levels of nesting: rule → apply_server_side_encryption_by_default
- Each level groups related configuration
- AES256 uses AWS-managed encryption keys (no extra cost)
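For comparison only (do NOT add this alongside the AES256 resource above), a bucket can instead be encrypted with a customer-managed KMS key. This sketch assumes an aws_kms_key.example resource that this lab does not create, and KMS keys carry a monthly charge plus per-request costs, so stick with AES256 here:

```hcl
# Hypothetical alternative: SSE-KMS instead of SSE-S3 (AES256)
resource "aws_s3_bucket_server_side_encryption_configuration" "kms_example" {
  bucket = aws_s3_bucket.test_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.example.arn # assumed to exist elsewhere
    }
  }
}
```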
Apply the change:
```shell
terraform plan
terraform apply
```

5.8 Step 7: Add Outputs
Outputs let you extract information from your infrastructure. They're printed after apply and can be queried later.
Create a new file outputs.tf:
```hcl
# Output the bucket name
output "bucket_name" {
  description = "Name of the S3 bucket"
  value       = aws_s3_bucket.test_bucket.id
}

# Output the bucket ARN
output "bucket_arn" {
  description = "ARN of the S3 bucket"
  value       = aws_s3_bucket.test_bucket.arn
}
```

Understanding outputs:
- output blocks expose values from your infrastructure
- description documents what the output represents
- value can reference resource attributes
- An ARN (Amazon Resource Name) is a unique identifier for an AWS resource
Apply and see the outputs:

```shell
terraform apply
```

You should see the outputs printed at the end.
Query outputs anytime:

```shell
terraform output
terraform output bucket_name # Get a specific output
```

5.9 Your Complete Configuration
At this point, your main.tf should look like this:
```hcl
# Terraform block - defines version requirements
terraform {
  required_version = ">= 1.9.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Provider block - configures AWS
provider "aws" {
  region = "us-east-1"
}

# Resource block - creates an S3 bucket
resource "aws_s3_bucket" "test_bucket" {
  bucket = "terraform-lab-00-YOUR-GITHUB-USERNAME" # Replace with your GitHub username

  tags = {
    Name         = "Lab 0 Test Bucket"
    Environment  = "Learning"
    ManagedBy    = "Terraform"
    Student      = "your-github-username" # Replace with your GitHub username
    AutoTeardown = "8h"
  }
}

# Enable versioning on the S3 bucket
resource "aws_s3_bucket_versioning" "test_bucket_versioning" {
  bucket = aws_s3_bucket.test_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Enable server-side encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "test_bucket_encryption" {
  bucket = aws_s3_bucket.test_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```

And your outputs.tf:
```hcl
# Output the bucket name
output "bucket_name" {
  description = "Name of the S3 bucket"
  value       = aws_s3_bucket.test_bucket.id
}

# Output the bucket ARN
output "bucket_arn" {
  description = "ARN of the S3 bucket"
  value       = aws_s3_bucket.test_bucket.arn
}
```

5.10 Run Infracost
Before considering your infrastructure "done", always check the cost:

```shell
infracost breakdown --path .
```

Expected cost: ~$0.50/month for minimal storage
Understanding the output:
- S3 storage cost is typically $0.023 per GB/month
- Data transfer out has costs (but transfers in are free)
- Your bucket with no data has minimal cost
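You can sanity-check Infracost's numbers with quick arithmetic; at the ~$0.023/GB-month rate quoted above, 10 GB of standard storage comes to about $0.23/month:

```shell
# Back-of-the-envelope S3 standard storage cost (rate assumed from above)
gb=10
awk -v gb="$gb" 'BEGIN { printf "%.2f\n", gb * 0.023 }'
# → 0.23
```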
5.11 Final Verification
Verify your configuration is clean:
```shell
terraform fmt -check  # Should show no changes needed
terraform validate    # Should show "Success!"
```

Verify in the AWS Console:
- Navigate to: https://s3.console.aws.amazon.com/s3/buckets
- Find your bucket (it should be named terraform-lab-00-YOUR-USERNAME)
- Click on the bucket name
- Go to the Properties tab
- Scroll down to Bucket Versioning - it should show "Enabled"
- Check Default encryption - it should show "Enabled" with SSE-S3 (AES-256)
- Go to the Tags tab - verify all 5 required tags are present
Verify with AWS CLI:
```shell
# Check the bucket exists
aws s3 ls | grep terraform-lab-00

# Check versioning status
aws s3api get-bucket-versioning --bucket terraform-lab-00-YOUR-USERNAME

# Check encryption
aws s3api get-bucket-encryption --bucket terraform-lab-00-YOUR-USERNAME
```

5.12 Key Terraform Concepts You Just Learned
1. Infrastructure as Code (IaC)
- Your infrastructure is defined in version-controlled files
- Changes are reviewable and repeatable
- No more clicking in consoles or running manual scripts
2. Declarative vs. Imperative
- Declarative: "I want a bucket with versioning" (what you want)
- Imperative: "Create bucket, then enable versioning" (how to do it)
- Terraform figures out the "how" for you
3. Resource Dependencies
- Terraform automatically determines the order in which to create resources
- When you referenced aws_s3_bucket.test_bucket.id, you created an implicit dependency
- Explicit dependencies can be set with depends_on (you'll learn this later)
4. State Management
- The terraform.tfstate file tracks what Terraform created
- Never manually edit this file
- It's how Terraform knows what exists vs. what you want
- In future labs, you'll store state remotely in S3
5. Idempotency
- Running terraform apply multiple times with the same code produces the same result
- If nothing changed in your code, Terraform won't modify anything
- Try it: run terraform apply again - it should say "No changes"
6. Resource Addressing
- Each resource has an address: TYPE.NAME (e.g., aws_s3_bucket.test_bucket)
- Use this to reference resources elsewhere in your code
- It's also used in commands: terraform state show aws_s3_bucket.test_bucket
Try these commands to explore:
```shell
# Show the state of a specific resource
terraform state show aws_s3_bucket.test_bucket

# List all resources in state
terraform state list

# Show all outputs
terraform output

# See the state file (but don't edit it!)
cat terraform.tfstate | jq '.resources' # Requires jq installed
```

Part 5.5: Set Up Remote State Storage (30 minutes)
CRITICAL: Before you submit your work via Git, you need to migrate your state file to remote storage. If you commit and push with only a local state file, you won't be able to manage your infrastructure later!
Why Remote State Matters
Right now, your infrastructure state is stored in a local file: terraform.tfstate
Problems with local state:
- ❌ If you lose the file, Terraform "forgets" what it created
- ❌ Can't manage resources from multiple computers
- ❌ Can't collaborate with team members
- ❌ Easy to accidentally commit sensitive data to Git
- ❌ After you push to GitHub, you can't easily destroy resources from another location
Solution: Remote state in S3
- ✅ State stored in AWS S3 bucket
- ✅ Accessible from anywhere with AWS credentials
- ✅ Encrypted and versioned
- ✅ Native locking prevents conflicts (Terraform 1.9+)
- ✅ Never committed to Git
Step 1: Create a State Bucket
First, create a dedicated S3 bucket to store your Terraform state files. This bucket will be used for ALL your labs.
💡 Pro Tip: Use a variable to make this easier:
```shell
# Get your AWS account ID and save it to a variable
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

# Verify it was set
echo $AWS_ACCOUNT_ID
# Should output something like: 123456789012
```

Create the state bucket:

```shell
# Using the variable (recommended)
aws s3 mb s3://terraform-state-$AWS_ACCOUNT_ID --region us-east-1

# Or manually replace YOUR-ACCOUNT-ID:
# aws s3 mb s3://terraform-state-YOUR-ACCOUNT-ID --region us-east-1
```

Example output:

```
make_bucket: terraform-state-123456789012
```

Enable versioning (protects against accidental deletions):
```shell
# Using the variable
aws s3api put-bucket-versioning \
  --bucket terraform-state-$AWS_ACCOUNT_ID \
  --versioning-configuration Status=Enabled

# Or manually:
# aws s3api put-bucket-versioning --bucket terraform-state-YOUR-ACCOUNT-ID --versioning-configuration Status=Enabled
```

Enable encryption:
```shell
# Using the variable
aws s3api put-bucket-encryption \
  --bucket terraform-state-$AWS_ACCOUNT_ID \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }]
  }'

# Or manually replace YOUR-ACCOUNT-ID in the bucket name above
```

Verify the bucket was created:

```shell
aws s3 ls | grep terraform-state
# Should show something like:
# 2025-11-13 12:34:56 terraform-state-123456789012
```

Step 2: Configure Backend in Your Lab
Now tell Terraform to use this bucket for state storage.
Create a new file backend.tf in week-00/lab-00/student-work/:
```hcl
# Backend configuration for remote state storage
terraform {
  backend "s3" {
    bucket       = "terraform-state-YOUR-ACCOUNT-ID" # Replace with your actual account ID
    key          = "week-00/lab-00/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true # Native S3 locking (Terraform 1.9+)
  }
}
```

Understanding the backend block:
- bucket - The S3 bucket you just created
- key - The path within the bucket (organizes state files by lab)
- region - The AWS region where the bucket exists
- encrypt - Encrypts state at rest
- use_lockfile - Uses S3's native locking to prevent concurrent modifications
💡 Quick way to get your bucket name:
```shell
# If you still have the variable set from Step 1:
echo "terraform-state-$AWS_ACCOUNT_ID"

# Or get it fresh:
echo "terraform-state-$(aws sts get-caller-identity --query Account --output text)"
```

Copy the output and paste it as your bucket value in backend.tf.
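An optional alternative, if you'd rather not hardcode the account ID: Terraform supports partial backend configuration, where settings are left out of the backend block and supplied at init time with -backend-config. A sketch (the exact split of settings is our choice):

```hcl
# backend.tf - bucket intentionally omitted
terraform {
  backend "s3" {
    key          = "week-00/lab-00/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
  }
}
```

Then pass the bucket when initializing: `terraform init -backend-config="bucket=terraform-state-$AWS_ACCOUNT_ID"`. For this lab, the fully hardcoded backend.tf is simpler.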
Example backend.tf with real account ID:
```hcl
terraform {
  backend "s3" {
    bucket       = "terraform-state-123456789012"
    key          = "week-00/lab-00/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
  }
}
```

Step 3: Migrate State to S3
Now migrate your existing local state to S3:
```shell
# Re-initialize Terraform with the new backend
terraform init -migrate-state
```

What happens:
- Terraform detects that you changed from the local to the S3 backend
- It asks: "Do you want to copy existing state to the new backend?"
- Type yes
- Your local state is uploaded to S3
- Terraform now uses S3 for all operations
Verify the migration:

```shell
# Check that the state is in S3
aws s3 ls s3://terraform-state-YOUR-ACCOUNT-ID/week-00/lab-00/
# Should show: terraform.tfstate
```

Test that it works:

```shell
terraform plan
```

It should show "No changes" - Terraform successfully read the state from S3!
Step 4: Clean Up Local State (Optional)
After successful migration, you can remove the local state files:
```shell
# These are no longer needed
rm terraform.tfstate
rm terraform.tfstate.backup  # If it exists
```

Important: Do this ONLY after confirming the state is in S3!
Step 5: Update .gitignore
Make sure your .gitignore prevents accidentally committing state files.
Check if there's a .gitignore in your lab directory:
```shell
ls -la week-00/lab-00/student-work/.gitignore
```

If it doesn't exist, create one:

```shell
cat > week-00/lab-00/student-work/.gitignore << 'EOF'
# Terraform files
.terraform/
*.tfstate
*.tfstate.*
*.tfvars
.terraform.lock.hcl

# Sensitive files
terraform.rc
.terraformrc
EOF
```

Verify that state files won't be committed:

```shell
cd week-00/lab-00/student-work
git status
```

You should NOT see terraform.tfstate in the list of files to be committed.
Understanding What You Committed
After setting up remote state, your Git repository will contain:
- ✅ main.tf - Your infrastructure code
- ✅ outputs.tf - Output definitions
- ✅ backend.tf - Backend configuration (no secrets)
- ✅ .gitignore - Protects sensitive files
- ❌ terraform.tfstate - NOT committed (lives in S3)
- ❌ .terraform/ - NOT committed (provider plugins)
This is the correct setup! Code is in Git, state is in S3.
Test Remote State Works
To prove remote state works, try this:
```shell
# See your current directory
pwd

# Move to your home directory
cd ~

# Try to run Terraform from a different location
cd ~/git/shart-cloud-gh/terraform-course/week-00/lab-00/student-work
terraform init
terraform plan
```

It should work, because the state is in S3 and not tied to your local directory!
Troubleshooting Remote State
"Error: Failed to get existing workspaces"
- Double-check the bucket name in backend.tf
- Verify the bucket exists: aws s3 ls | grep terraform-state
- Check AWS credentials: aws sts get-caller-identity
"Error: Error loading state: AccessDenied"
- Your IAM user needs S3 permissions
- Verify: aws s3 ls s3://terraform-state-YOUR-ACCOUNT-ID
"Backend initialization required"
- Run terraform init whenever you change the backend configuration
- Use the -migrate-state flag to move existing state
Want to move back to local state?

```shell
# Remove or comment out the backend block in backend.tf, then run:
terraform init -migrate-state
```

Part 6: Submit Your Work (20 minutes)
Before submitting: Make sure you've completed all steps in Part 5 AND Part 5.5:
- ✅ A working main.tf with bucket, versioning, and encryption
- ✅ An outputs.tf with bucket name and ARN outputs
- ✅ A backend.tf with S3 remote state configured
- ✅ Created the state storage bucket in S3
- ✅ Migrated state to S3 (terraform init -migrate-state)
- ✅ Verified the state is in S3 (not local)
- ✅ .gitignore prevents committing state files
- ✅ Ran terraform fmt to format your code
- ✅ Ran terraform validate to check for errors
- ✅ Successfully applied your configuration (terraform apply)
- ✅ Verified the bucket exists in the AWS Console
- ✅ Ran infracost breakdown --path . and reviewed the costs
6.1 Set Up GitHub Secrets (First Time Only)
Before creating your first PR, you need to configure GitHub Actions secrets in your fork. These secrets allow the automated grading workflows to run.
Method 1: Using GitHub Web UI (Recommended)
1. Go to your fork: https://github.com/YOUR-USERNAME/labs_terraform_course
2. Click Settings (top navigation bar)
3. In the left sidebar, expand Secrets and variables → click Actions
4. Click the New repository secret button
5. Add each of the following secrets:
Secret 1: AWS_ACCESS_KEY_ID
- Name: AWS_ACCESS_KEY_ID
- Value: Your AWS access key (from the aws configure setup)

Secret 2: AWS_SECRET_ACCESS_KEY
- Name: AWS_SECRET_ACCESS_KEY
- Value: Your AWS secret access key

Secret 3: INFRACOST_API_KEY
- Name: INFRACOST_API_KEY
- Value: Get it by running infracost configure get api_key
Method 2: Using GitHub CLI (Alternative)
If you prefer command line and have gh CLI installed with proper permissions:
```shell
# Make sure you're authenticated with the workflow scope
gh auth login --scopes repo,workflow

# Set secrets
gh secret set AWS_ACCESS_KEY_ID -R YOUR-USERNAME/labs_terraform_course
# (Paste your AWS access key when prompted)

gh secret set AWS_SECRET_ACCESS_KEY -R YOUR-USERNAME/labs_terraform_course
# (Paste your AWS secret key when prompted)

gh secret set INFRACOST_API_KEY -R YOUR-USERNAME/labs_terraform_course
# (Paste your Infracost API key when prompted)
```

Common Issues:
- "HTTP 403: Resource not accessible" - Use the web UI method instead. Codespaces tokens don't have the workflow scope.
- "Multiple remotes detected" - Use the -R flag: gh secret set SECRET_NAME -R YOUR-USERNAME/labs_terraform_course
- Can't find Settings? - Make sure you're on YOUR fork, not the original repo
Security notes:
- These secrets are only accessible to workflows in YOUR fork
- Never commit credentials to Git
- Secrets are encrypted and not visible after creation
See STUDENT_SETUP.md for detailed instructions.
6.2 Commit and Push Your Work
```shell
# Create a branch (recommended)
git checkout -b week-00-lab-00

# Add your files (main.tf, outputs.tf, backend.tf, .gitignore)
git add week-00/lab-00/student-work/

# Verify state files are NOT being committed
git status
# You should see:
#   main.tf
#   outputs.tf
#   backend.tf
#   .gitignore
# You should NOT see terraform.tfstate or .terraform/

# Commit
git commit -m "Week 0 Lab 0 - Your Name"

# Push to your fork
git push origin week-00-lab-00
```

6.3 Create Pull Request in Your Fork
CRITICAL: Create the PR within YOUR fork, not to the main repository! PRs to the main repo won't have access to your secrets.
Method 1: Using GitHub CLI (Recommended)
```shell
# Create the PR in YOUR fork (base and head both in your fork)
gh pr create --repo YOUR-USERNAME/labs_terraform_course \
  --base main \
  --head week-00-lab-00 \
  --title "Week 0 Lab 0 - Your Name" \
  --body "Completing Lab 0: S3 bucket setup with remote state"

# Example:
# gh pr create --repo jsmith/labs_terraform_course --base main --head week-00-lab-00 --title "Week 0 Lab 0 - John Smith" --body "Lab 0 submission"
```

Method 2: Using GitHub Web UI
1. Go to your fork: https://github.com/YOUR-USERNAME/labs_terraform_course
2. Click Pull requests → New pull request
3. Important: Click "compare across forks" if needed, then set:
   - Base repository: YOUR-USERNAME/labs_terraform_course, base: main
   - Head repository: YOUR-USERNAME/labs_terraform_course, compare: week-00-lab-00
4. Title: Week 0 Lab 0 - [Your Name]
5. Fill out the PR template
6. Click Create pull request
Verify Your PR is Correct:
- The PR URL should be: https://github.com/YOUR-USERNAME/labs_terraform_course/pull/X
- NOT: https://github.com/shart-cloud/labs_terraform_course/pull/X
- Both base and head should show YOUR username
Why this matters:
- PRs within your fork can access the secrets you configured
- PRs to the main repo cannot access secrets (security feature)
- The grading workflow needs AWS credentials from your secrets
6.4 Wait for Automated Grading
The grading workflow will automatically run and:
- ✅ Check code formatting (terraform fmt)
- ✅ Validate the configuration (terraform validate)
- ✅ Run lab-specific tests
- ✅ Generate cost estimates (Infracost)
- ✅ Perform security scanning (Checkov)
- ✅ Calculate your grade (0-100 points)
- ✅ Post detailed results as a PR comment
Expected grade breakdown:
- Code Quality: 25 points
- Functionality: 30 points
- Cost Management: 20 points
- Security: 15 points
- Documentation: 10 points
6.5 Review Your Grade and Iterate
- Check the automated comment on your PR for your grade
- If you need to improve your score:
- Fix the issues mentioned in the feedback
- Commit and push your changes
- The workflow will automatically re-run and update your grade
- Once satisfied, tag your instructor with @shart-cloud in a PR comment
Part 7: Cleanup (10 minutes)
After your PR is reviewed and graded:
7.1 Destroy S3 Infrastructure
Because you set up remote state, you can destroy your resources from any location:
```shell
cd week-00/lab-00/student-work

# Initialize (pulls state from S3)
terraform init

# Destroy resources
terraform destroy
```

Type yes to confirm.
Benefits of remote state:
- You can run this from any computer (as long as you have AWS credentials)
- Even if you cloned your repo to a new machine, Terraform knows what resources exist
- The state in S3 tracks everything you created
Alternative: Wait 8 hours for the auto-teardown GitHub Action to destroy resources automatically (based on the AutoTeardown = "8h" tag).
7.2 (Optional) Clean Up State Bucket
Important: Keep your state bucket active if you have more labs to complete!
The state bucket you created (terraform-state-YOUR-ACCOUNT-ID) will be used for ALL labs in this course. Only remove it at the end of the semester:
```shell
# ONLY do this after completing ALL labs and destroying ALL infrastructure

# First, verify that the remaining state files are for destroyed resources
aws s3 ls s3://terraform-state-YOUR-ACCOUNT-ID/ --recursive

# Delete all state files
aws s3 rm s3://terraform-state-YOUR-ACCOUNT-ID/ --recursive

# Delete the bucket
aws s3 rb s3://terraform-state-YOUR-ACCOUNT-ID
```

Cost: S3 storage for state files costs less than $0.01/month, so keeping the bucket is fine.
7.3 Keep or Remove Billing Budget
Important: The billing budget should typically be kept active throughout the course to monitor costs. However, if you need to remove it:
```shell
cd ../../common/billing-setup
terraform destroy
```

Note: We recommend keeping the billing budget active for the entire semester.
Deliverables
See SUBMISSION.md for the complete checklist.
Troubleshooting
Common Terraform Errors
"Error: Invalid block definition"
- Check your HCL syntax - look for missing braces {} or brackets
- Ensure blocks are properly closed
- Run terraform fmt to auto-fix formatting issues
"Error: Reference to undeclared resource"
- You're referencing a resource that doesn't exist
- Check the spelling: aws_s3_bucket.test_bucket (not test_buckets)
- Ensure the resource is actually defined somewhere in your configuration
"Error: Duplicate resource block"
- You have two resources with the same type and name
- Each resource must have a unique combination of type + name
- Example: you can't have two resource "aws_s3_bucket" "test_bucket" blocks
"Error: Missing required argument"
- A resource is missing a required field
- Check the Terraform AWS provider documentation
- Example: aws_s3_bucket requires the bucket argument
Terraform Init Fails
- Check your internet connection
- Verify Terraform is properly installed
- Try removing the .terraform directory and re-running terraform init
- Check that your terraform block has correct required_providers syntax
AWS Authentication Errors
- Verify aws configure was run correctly
- Check credentials with aws sts get-caller-identity
- Ensure the IAM user has proper permissions (AdministratorAccess for learning)
- Check that credentials aren't expired (AWS Academy credentials expire)
S3 Bucket Name Conflicts
"Error: BucketAlreadyExists" or "Error: InvalidBucketName"
- S3 bucket names must be globally unique across ALL AWS accounts
- Use your student ID or GitHub username in the bucket name
- Bucket names must be lowercase, no spaces, 3-63 characters
- Valid: `terraform-lab-00-johnsmith`
- Invalid: `Terraform Lab 00` (uppercase and spaces), `my_bucket` (underscore); `lab00` is syntactically valid but so generic it's almost certainly taken
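As a quick local sanity check before running `terraform apply`, you can test a candidate name against the lowercase/digit/hyphen rule. This is a simplified pattern (real S3 naming rules also allow dots and carry a few extra restrictions):

```shell
# Simplified check: starts and ends with a lowercase letter or digit,
# contains only lowercase letters, digits, and hyphens, 3-63 chars total
name="terraform-lab-00-johnsmith"
if echo "$name" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'; then
  echo "name looks valid"
else
  echo "name breaks S3 naming rules"
fi
```

Running the same check against `my_bucket` or `Terraform Lab 00` reports a rule violation, since underscores, spaces, and uppercase letters all fall outside the allowed character set.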
"Error: error creating S3 bucket ... Access Denied"
- Your AWS credentials don't have S3 permissions
- Verify IAM user has
AmazonS3FullAccessorAdministratorAccess - Run
aws s3 lsto test S3 access
Infracost Errors
- Ensure you've run `infracost auth login` or configured an API key
- Check the API key is valid: `infracost configure get api_key`
- If using Infracost for the first time, sign up at https://www.infracost.io/
State File Issues
"Error: state snapshot was created by Terraform vX.Y.Z"
- Your Terraform version is older than the one that created the state file
- Upgrade Terraform: `brew upgrade terraform` (macOS) or reinstall
- Never downgrade Terraform after creating state
"Error: acquiring state lock"
- Another Terraform process is running (maybe in another terminal?)
- Wait for it to finish or find and kill the process
- Check for stale lock files in S3:
aws s3 ls s3://terraform-state-YOUR-ACCOUNT-ID/week-00/lab-00/ - Last resort:
terraform force-unlock LOCK_ID(use carefully!)
"Error: Failed to get existing workspaces: NoSuchBucket"
- The state bucket doesn't exist
- Check bucket name in
backend.tfmatches what you created - Verify bucket exists:
aws s3 ls | grep terraform-state - Recreate bucket if needed:
aws s3 mb s3://terraform-state-YOUR-ACCOUNT-ID
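For reference, the bucket name in `backend.tf` must match the real bucket exactly. A backend block for this lab might look like the sketch below; the `key`, `region`, and locking flag are illustrative and depend on your setup and Terraform version:

```hcl
terraform {
  backend "s3" {
    bucket       = "terraform-state-YOUR-ACCOUNT-ID" # must match the bucket you created
    key          = "week-00/lab-00/terraform.tfstate"
    region       = "us-east-1"
    use_lockfile = true # S3 native state locking
  }
}
```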
"Error: Backend initialization required, please run 'terraform init'"
- You changed backend configuration
- Run
terraform initto reconfigure - Use
-migrate-stateif moving from local to S3 or vice versa
"Error: Error loading state: AccessDenied"
- Your IAM user can't access the state bucket
- Verify:
aws s3 ls s3://terraform-state-YOUR-ACCOUNT-ID/ - Your IAM user needs S3 read/write permissions
Accidentally committed state to Git?

```bash
# Stop tracking the state files (do this before pushing!)
git rm --cached terraform.tfstate
git rm --cached terraform.tfstate.backup
# Make sure .gitignore is set up
echo "*.tfstate*" >> .gitignore
git add .gitignore
git commit -m "Remove state files and update .gitignore"
```

Lost your state file completely?
- If it's in S3: just run `terraform init` to download it
- If you deleted the S3 bucket: you'll need to import resources manually or recreate them
- This is why remote state is critical!
Verify Before Asking for Help
Run these diagnostic commands:

```bash
# Check Terraform version
terraform version
# Check AWS credentials
aws sts get-caller-identity
# Check Terraform syntax
terraform fmt -check
terraform validate
# See detailed error output
TF_LOG=DEBUG terraform plan 2>&1 | less
```

Learning Outcomes Checklist
After completing this lab, you should be able to:
Tool Setup:
- ✅ Install and verify Terraform 1.9.0+, AWS CLI, and Infracost
- ✅ Configure AWS credentials and verify access
- ✅ Set up billing alerts and budgets using Terraform
Terraform Fundamentals:
- ✅ Understand HCL syntax (blocks, arguments, expressions)
- ✅ Explain the difference between terraform, provider, resource, and output blocks
- ✅ Write a terraform block with version constraints
- ✅ Configure an AWS provider
- ✅ Create resources with proper syntax
- ✅ Use resource references to create dependencies
- ✅ Add tags to AWS resources
- ✅ Define and use output values
Terraform Workflow:
- ✅ Use `terraform init` to initialize a configuration
- ✅ Use `terraform fmt` to format code
- ✅ Use `terraform validate` to check syntax
- ✅ Use `terraform plan` to preview changes
- ✅ Use `terraform apply` to create infrastructure
- ✅ Use `terraform output` to query values
- ✅ Use `terraform destroy` to clean up resources
State Management:
- ✅ Understand why remote state is critical
- ✅ Create and configure an S3 bucket for state storage
- ✅ Configure Terraform backend for S3
- ✅ Migrate local state to remote S3 backend
- ✅ Understand state locking with S3 native locking (Terraform 1.9+)
- ✅ Protect state files with .gitignore
AWS & Cost Management:
- ✅ Create an S3 bucket with versioning and encryption
- ✅ Run Infracost to estimate infrastructure costs
- ✅ Understand AWS resource tagging for cost tracking
Development Practices:
- ✅ Submit work via Git and GitHub pull requests
- ✅ Follow infrastructure as code best practices
Next Steps
Proceed to Week 1, Lab 1 where you'll deploy more complex infrastructure with multiple resources.
Support
- Office hours: [Schedule TBD]
- Discussion forum: [Link TBD]
- Email instructor for urgent issues