Amazon Elastic Container Service (ECS) is a fully managed container orchestration service that simplifies deploying, managing, and scaling containerized applications. AWS Fargate, a serverless compute engine for containers, eliminates the need to manage underlying infrastructure. ECS Exec enables secure command execution inside running containers for debugging and troubleshooting. This article demonstrates how to create an ECS service using Fargate, enable ECS Exec, and provision it with Terraform.
Prerequisites
- AWS account with appropriate permissions
- Terraform installed (version 1.5.0 or later)
- AWS CLI configured with credentials
- Basic knowledge of Docker, ECS, and Terraform
Step 1: Define the Terraform Configuration
Create a Terraform configuration to provision the necessary AWS resources, including a VPC, ECS cluster, task definition, and service with Fargate launch type and ECS Exec enabled.
Directory Structure
ecs-fargate-terraform/
├── main.tf
├── variables.tf
├── outputs.tf
└── provider.tf
Provider Configuration
Define the AWS provider and required Terraform version.
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.region
}
Variables
Define input variables for reusability and flexibility.
variable "region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "cluster_name" {
  description = "Name of the ECS cluster"
  type        = string
  default     = "my-ecs-cluster"
}

variable "service_name" {
  description = "Name of the ECS service"
  type        = string
  default     = "my-ecs-service"
}

variable "container_image" {
  description = "Docker image for the container"
  type        = string
  default     = "amazon/amazon-ecs-sample"
}
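If you want to override these defaults without editing the configuration, one option is a terraform.tfvars file alongside the other files; the values below are placeholders, not requirements:

# terraform.tfvars (example values only)
region          = "us-west-2"
cluster_name    = "demo-cluster"
service_name    = "demo-service"
container_image = "amazon/amazon-ecs-sample"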
Main Configuration
Create the VPC, ECS cluster, task definition, and service with ECS Exec enabled.
# VPC Configuration
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "ecs-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["${var.region}a", "${var.region}b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true
}
ECS Cluster
resource "aws_ecs_cluster" "this" {
  name = var.cluster_name
}
IAM Role for ECS Task Execution
resource "aws_iam_role" "ecs_task_execution_role" {
  name = "ecsTaskExecutionRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ecs-tasks.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_policy" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
IAM Task Role for ECS Exec
ECS Exec runs over SSM Session Manager, so the SSM permissions must be granted to the task role (the role the container assumes at runtime), not the task execution role.
resource "aws_iam_role" "ecs_task_role" {
  name = "ecsTaskRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ecs-tasks.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy" "ecs_exec_policy" {
  name = "ecsExecPolicy"
  role = aws_iam_role.ecs_task_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ]
      Resource = "*"
    }]
  })
}
ECS Task Definition
resource "aws_ecs_task_definition" "this" {
  family                   = "my-task"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  task_role_arn            = aws_iam_role.ecs_task_role.arn

  container_definitions = jsonencode([{
    name      = "my-container"
    image     = var.container_image
    essential = true
    portMappings = [{
      containerPort = 80
      hostPort      = 80
    }]
    linuxParameters = {
      initProcessEnabled = true
    }
  }])
}
ECS Service
resource "aws_ecs_service" "this" {
  name            = var.service_name
  cluster         = aws_ecs_cluster.this.id
  task_definition = aws_ecs_task_definition.this.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = module.vpc.private_subnets
    security_groups  = [aws_security_group.ecs_service.id]
    assign_public_ip = false
  }

  enable_execute_command = true
}
Security Group for ECS Service
resource "aws_security_group" "ecs_service" {
  name   = "ecs-service-sg"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Outputs
Define outputs to retrieve important resource IDs.
output "ecs_cluster_id" {
  description = "ID of the ECS cluster"
  value       = aws_ecs_cluster.this.id
}

output "ecs_service_name" {
  description = "Name of the ECS service"
  value       = aws_ecs_service.this.name
}
Step 2: Initialize and Apply Terraform
- Initialize the Terraform working directory:
terraform init
- Preview the changes:
terraform plan
- Apply the configuration:
terraform apply
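Once the apply completes, you can confirm that the service has reached a steady state. For example, the following AWS CLI call (assuming the default names from variables.tf; the --query expression is optional) reports the service status and task counts:

aws ecs describe-services \
  --cluster my-ecs-cluster \
  --services my-ecs-service \
  --query "services[0].{status: status, desired: desiredCount, running: runningCount}"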
Step 3: Using ECS Exec
Once the service is running, use ECS Exec to access the container for debugging.
- Enable ECS Exec: The enable_execute_command = true setting in the aws_ecs_service resource enables ECS Exec.
- Run a command:
aws ecs execute-command --cluster my-ecs-cluster \
--task <task-id> \
--container my-container \
--command "/bin/sh" \
--interactive
Replace <task-id> with the ID of a running task, which you can find using the AWS CLI (see the example after this list) or the Console.
- Inside the container, you can run commands like ls, cat, or any other debugging tools available in the image.
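One way to look up the <task-id> of a running task, assuming the default cluster and service names from variables.tf, is:

aws ecs list-tasks \
  --cluster my-ecs-cluster \
  --service-name my-ecs-service

Each entry in the returned taskArns list ends with the task ID to pass to execute-command.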
Step 4: Cleaning Up
To avoid unnecessary costs, destroy the resources when done:
terraform destroy
Best Practices
- Security: Restrict the security group ingress rules to specific CIDR blocks or use a load balancer.
- Logging: Enable AWS CloudWatch Logs in the task definition for better monitoring (see the sketch after this list).
- Scaling: Configure auto-scaling policies for the ECS service based on CPU/memory metrics.
- Secrets Management: Use AWS Secrets Manager for sensitive data instead of hardcoding in the task definition.
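To illustrate the logging recommendation, here is a minimal sketch of sending container output to CloudWatch Logs with the awslogs driver. The log group name, retention period, and stream prefix are illustrative assumptions, not values from the configuration above:

# Hypothetical log group; name and retention are illustrative choices.
resource "aws_cloudwatch_log_group" "ecs" {
  name              = "/ecs/my-task"
  retention_in_days = 14
}

# Add a logConfiguration entry to the container definition in
# aws_ecs_task_definition.this, alongside portMappings:
#
#   logConfiguration = {
#     logDriver = "awslogs"
#     options = {
#       "awslogs-group"         = aws_cloudwatch_log_group.ecs.name
#       "awslogs-region"        = var.region
#       "awslogs-stream-prefix" = "my-container"
#     }
#   }

The managed AmazonECSTaskExecutionRolePolicy already includes the logs:CreateLogStream and logs:PutLogEvents permissions the awslogs driver needs, so no extra IAM changes are required for this sketch.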
Conclusion
This guide demonstrated how to create an AWS ECS service using Fargate, enable ECS Exec for debugging, and provision it with Terraform. By following these steps, you can deploy containerized applications efficiently and troubleshoot them using ECS Exec.