In Parts 1-3, we created a static website using Hugo, generated static HTML files with the “hugo” command, and copied those files into a Docker image, which we now plan to deploy to AWS Fargate.
Aside from basic SOA and NS records we create manually in AWS Route53, everything else in our stack is created with Terraform, from files in the “build” directory at the root of the app.
In a nutshell, we are taking the container we built in Part 3 and running it behind an Application Load Balancer (ALB) in AWS’s Fargate flavor of its Elastic Container Service (ECS). Fargate runs containers serverlessly, meaning we have no Elastic Compute Cloud (EC2) instances to manage. We’re letting AWS handle DNS through Route53, and our SSL certificate through AWS Certificate Manager (ACM).
Our Terraform templates create an IAM execution role, DNS records, an SSL certificate, our VPC network (subnets, route tables, and security groups), PrivateLink VPC endpoints, the Fargate service itself, and an Application Load Balancer. We’ll walk through each file below, calling out the tricky parts as we go.
We begin by creating a Route53 hosted zone manually, with its default NS and SOA records. It’s certainly possible to let Terraform create the zone, but once created, we don’t want to allow it to be destroyed: each time the zone is recreated, AWS chooses new nameservers for us, and then we have to enter those new nameservers with our registrar.
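For reference, if we did let Terraform manage the zone, a lifecycle block could guard against accidental destruction. This is a hypothetical sketch, not part of our actual templates:

```hcl
# Hypothetical: managing the hosted zone in Terraform, with prevent_destroy
# so a "terraform destroy" (and the resulting new nameservers) errors out
# instead of deleting the zone.
resource "aws_route53_zone" "mcelfresh_info" {
  name = "mcelfresh.info"

  lifecycle {
    prevent_destroy = true
  }
}
```

With prevent_destroy set, any plan that would delete the zone fails outright, which is exactly the protection we get from creating the zone outside Terraform.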
Everything else in this project is created by Terraform; here is a rundown of our templates:
# config.tf
provider "aws" {
  region  = "us-west-2"
  profile = "tfuser"
}

terraform {
  required_version = ">= 1.0"
  # The "backend" block cannot reference variables, so values are hardcoded here
  backend "s3" {
    bucket  = "cwmcelfresh-terraform"
    key     = "terraform.tfstate"
    region  = "us-west-2"
    profile = "tfuser"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.69.0"
    }
  }
}
The idea here is to give ECS a role that allows it to do only what our task requires. Amazon provides a managed policy, AmazonECSTaskExecutionRolePolicy, for just this purpose.
# iam.tf

# This is the role under which ECS will execute our task. This role becomes more important
# as we add integrations with other AWS services later on.

# The assume_role_policy field works with the following aws_iam_policy_document to allow
# ECS tasks to assume this role we're creating.
resource "aws_iam_role" "charlie_blog_task_execution_role" {
  name               = "charlie-blog-task-execution-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_assume_role.json
}

data "aws_iam_policy_document" "ecs_task_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

# The managed policy we need to execute this task
data "aws_iam_policy" "ecs_task_execution_role" {
  arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# Attach the above policy to the execution role.
resource "aws_iam_role_policy_attachment" "ecs_task_execution_role" {
  role       = aws_iam_role.charlie_blog_task_execution_role.name
  policy_arn = data.aws_iam_policy.ecs_task_execution_role.arn
}
# variables.tf
# These variables are available to any .tf file in this directory.
# Often, variables have defaults, but that is not useful here.
# These variables are set by exporting TF_VAR_<variable_name> as
# environment variables, i.e. TF_VAR_charlie_blog_aws_region and
# TF_VAR_charlie_blog_image

variable "charlie_blog_image" {
  type = string
}

variable "charlie_blog_aws_region" {
  type = string
}
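Since these variables have no defaults, a typo in the environment only surfaces once Terraform tries to use the value. If we wanted earlier, clearer failures, Terraform’s validation blocks (available since 0.13) could help; a sketch, with an illustrative regex:

```hcl
# Hypothetical stricter version of the region variable; the regex is a rough
# illustrative pattern, not an exhaustive check of valid AWS regions.
variable "charlie_blog_aws_region" {
  type = string

  validation {
    condition     = can(regex("^[a-z]{2}-[a-z]+-\\d$", var.charlie_blog_aws_region))
    error_message = "charlie_blog_aws_region must look like an AWS region, e.g. us-west-2."
  }
}
```

With this in place, terraform plan fails immediately with the error_message instead of producing confusing provider errors later.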
In route53.tf, we create A records for both mcelfresh.info and *.mcelfresh.info, so that later we can use any mcelfresh.info subdomain we like. acm.tf creates an SSL certificate covering both as well, so that mcelfresh.info and any of its subdomains we use later will be covered by our SSL cert.
These two files work together to validate our SSL cert by creating a CNAME in DNS with a unique name/value pair. Our code requests an SSL cert, then the aws_acm_certificate_validation block looks up that CNAME. The idea here is that before AWS Certificate Manager (ACM) will issue us an SSL cert, it needs to know we control our DNS. So AWS says “create a DNS CNAME with this unique name/value pair,” and then waits to see whether we’ve created it. Once AWS sees that CNAME, it issues our (free) SSL cert.
# route53.tf

# Set our Route53 zone for use in this template
data "aws_route53_zone" "mcelfresh_info" {
  name = "mcelfresh.info"
}

# DNS A record for mcelfresh.info
resource "aws_route53_record" "apex" {
  zone_id = data.aws_route53_zone.mcelfresh_info.zone_id
  name    = "mcelfresh.info"
  type    = "A"
  alias {
    name                   = aws_alb.charlie_blog.dns_name
    zone_id                = aws_alb.charlie_blog.zone_id
    evaluate_target_health = true
  }
  depends_on = [aws_alb.charlie_blog]
}

# DNS A record for *.mcelfresh.info
resource "aws_route53_record" "wildcard" {
  zone_id = data.aws_route53_zone.mcelfresh_info.zone_id
  name    = "*.mcelfresh.info"
  type    = "A"
  alias {
    name                   = aws_alb.charlie_blog.dns_name
    zone_id                = aws_alb.charlie_blog.zone_id
    evaluate_target_health = true
  }
  depends_on = [aws_alb.charlie_blog]
}

# DNS CNAME record for AWS Certificate Manager (ACM) validation
resource "aws_route53_record" "acm_validation" {
  for_each = {
    for dvo in aws_acm_certificate.mcelfresh_info.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = data.aws_route53_zone.mcelfresh_info.zone_id
}
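The for_each expression in acm_validation is the densest part of this file: it reshapes ACM’s list of domain_validation_options into a map keyed by domain name, so each validation record gets a stable address in state. The result looks roughly like this (the record names and values here are placeholders; ACM chooses the real ones):

```hcl
# Illustrative only: the shape of the map the for-expression produces.
locals {
  acm_validation_example = {
    "mcelfresh.info" = {
      name   = "_3b2e1f.mcelfresh.info."
      record = "_8a91cd.acm-validations.aws."
      type   = "CNAME"
    }
    "*.mcelfresh.info" = {
      name   = "_3b2e1f.mcelfresh.info."
      record = "_8a91cd.acm-validations.aws."
      type   = "CNAME"
    }
  }
}
```

Keying by domain name (rather than list index) means adding or removing a subject alternative name later won’t force Terraform to recreate the other validation records.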
# acm.tf

# Our SSL certificate. Note that although docs claim the wildcard encompasses
# mcelfresh.info, it doesn't -- so here we include both the wildcard
# *.mcelfresh.info and mcelfresh.info
resource "aws_acm_certificate" "mcelfresh_info" {
  lifecycle {
    create_before_destroy = true
  }
  domain_name       = "mcelfresh.info"
  validation_method = "DNS"
  subject_alternative_names = [
    "*.mcelfresh.info"
  ]
}

# See resource "aws_alb_listener" "charlie_blog_https" in alb.tf
resource "aws_acm_certificate_validation" "mcelfresh_info" {
  certificate_arn         = aws_acm_certificate.mcelfresh_info.arn
  validation_record_fqdns = [for record in aws_route53_record.acm_validation : record.fqdn]
}
We set up our network so that there is one Internet Gateway (IGW) in front of our Application Load Balancer (ALB). All port 80 traffic is redirected to 443, thus requiring SSL at the ALB. The ALB then communicates with our container via port 80.
Note that we set up communication between our ECS tasks and our private Elastic Container Registry (ECR) via AWS PrivateLink. The alternative is to set up a NAT Gateway and pull our Docker image over the Internet. We prefer to access our private ECR inside our private network, rather than over the ’net. Besides, AWS charges about $1/day for each NAT Gateway, and we don’t want to pay that.
We also need to set up access to S3, because AWS stores image layers there. S3 and ECR access must be over port 443. These PrivateLink endpoints are a bit tricky; we’ll go into more detail in vpcs.tf.
# network.tf

# Create redundant public and private subnets in two availability zones
resource "aws_subnet" "public_d" {
  vpc_id            = aws_vpc.app_vpc.id
  cidr_block        = "10.0.1.0/25"
  availability_zone = "${var.charlie_blog_aws_region}d"

  tags = {
    "Name" = "public | ${var.charlie_blog_aws_region}d"
  }
}

resource "aws_subnet" "public_e" {
  vpc_id            = aws_vpc.app_vpc.id
  cidr_block        = "10.0.1.128/25"
  availability_zone = "${var.charlie_blog_aws_region}c"

  tags = {
    "Name" = "public | ${var.charlie_blog_aws_region}c"
  }
}

resource "aws_subnet" "private_d" {
  vpc_id            = aws_vpc.app_vpc.id
  cidr_block        = "10.0.2.0/25"
  availability_zone = "${var.charlie_blog_aws_region}d"

  tags = {
    "Name" = "private | ${var.charlie_blog_aws_region}d"
  }
}

resource "aws_subnet" "private_e" {
  vpc_id            = aws_vpc.app_vpc.id
  cidr_block        = "10.0.2.128/25"
  availability_zone = "${var.charlie_blog_aws_region}c"

  tags = {
    "Name" = "private | ${var.charlie_blog_aws_region}c"
  }
}

# Public and private route tables
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.app_vpc.id
  tags = {
    "Name" = "public"
  }
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.app_vpc.id
  tags = {
    "Name" = "private"
  }
}

# Associate subnets with route tables
resource "aws_route_table_association" "public_d_subnet" {
  subnet_id      = aws_subnet.public_d.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private_d_subnet" {
  subnet_id      = aws_subnet.private_d.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "public_e_subnet" {
  subnet_id      = aws_subnet.public_e.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private_e_subnet" {
  subnet_id      = aws_subnet.private_e.id
  route_table_id = aws_route_table.private.id
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.app_vpc.id
}

# Public gateway route
resource "aws_route" "public_igw" {
  route_table_id         = aws_route_table.public.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw.id
}

# HTTP ingress for the ALB
resource "aws_security_group" "http" {
  name        = "http"
  description = "HTTP traffic"
  vpc_id      = aws_vpc.app_vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "http"
  }
}

# HTTPS ingress for the ALB
resource "aws_security_group" "https" {
  name        = "https"
  description = "HTTPS traffic"
  vpc_id      = aws_vpc.app_vpc.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "https"
  }
}

# Egress for the ALB
resource "aws_security_group" "egress_alb" {
  name        = "egress_alb"
  description = "Allow all outbound traffic from alb"
  vpc_id      = aws_vpc.app_vpc.id
  egress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "egress_alb"
  }
}

# Ingress to S3 and ECR
resource "aws_security_group" "vpce" {
  name   = "vpce"
  vpc_id = aws_vpc.app_vpc.id
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.app_vpc.cidr_block]
  }
  tags = {
    Name = "vpce"
  }
}

# Egress from the ECS task to S3 and ECR
resource "aws_security_group" "ecs_task" {
  name   = "ecs"
  vpc_id = aws_vpc.app_vpc.id
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.app_vpc.cidr_block]
  }
  egress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    prefix_list_ids = [aws_vpc_endpoint.s3.prefix_list_id]
  }
  tags = {
    Name = "ecs_task"
  }
}
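The subnet CIDRs above are hardcoded /25 halves of 10.0.1.0/24 (public) and 10.0.2.0/24 (private). If you prefer to derive them instead of hardcoding, Terraform’s built-in cidrsubnet() function computes the same blocks; a sketch (these local names are our own and aren’t referenced elsewhere):

```hcl
locals {
  # cidrsubnet(prefix, newbits, netnum): adding 1 bit to a /24 yields two /25s.
  public_d_cidr  = cidrsubnet("10.0.1.0/24", 1, 0) # "10.0.1.0/25"
  public_e_cidr  = cidrsubnet("10.0.1.0/24", 1, 1) # "10.0.1.128/25"
  private_d_cidr = cidrsubnet("10.0.2.0/24", 1, 0) # "10.0.2.0/25"
  private_e_cidr = cidrsubnet("10.0.2.0/24", 1, 1) # "10.0.2.128/25"
}
```

Deriving the blocks this way keeps the subnet math in one place if the VPC CIDR ever changes.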
This template implements AWS PrivateLink for all our internal resources. The tricky parts are noted in the comments below: the VPC needs DNS support enabled for the endpoints to resolve, and Gateway-type endpoints are wired up differently than Interface-type ones.
# vpcs.tf

# VPC Endpoints
# enable_dns_hostnames and enable_dns_support are required for the below VPC
# endpoints that use private_dns_enabled = true.
# Think about it: our app_vpc needs some way to find the VPC endpoints.
resource "aws_vpc" "app_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
}

# All the below VPC endpoints live in private subnets, so that they are not
# reachable from the public internet.

# Gateway-type endpoints must use route_table_ids and not subnet_ids
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.app_vpc.id
  service_name      = "com.amazonaws.${var.charlie_blog_aws_region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
  tags = {
    Name = "s3-endpoint"
  }
}

# Interface-type endpoints must use subnet_ids instead of route_table_ids
resource "aws_vpc_endpoint" "dkr" {
  vpc_id              = aws_vpc.app_vpc.id
  private_dns_enabled = true
  service_name        = "com.amazonaws.${var.charlie_blog_aws_region}.ecr.dkr"
  vpc_endpoint_type   = "Interface"
  security_group_ids = [
    aws_security_group.vpce.id,
  ]
  subnet_ids = [
    aws_subnet.private_d.id,
    aws_subnet.private_e.id,
  ]
  tags = {
    Name = "dkr-endpoint"
  }
}

# Interface-type endpoints must use subnet_ids instead of route_table_ids
resource "aws_vpc_endpoint" "dkr_api" {
  vpc_id              = aws_vpc.app_vpc.id
  private_dns_enabled = true
  service_name        = "com.amazonaws.${var.charlie_blog_aws_region}.ecr.api"
  vpc_endpoint_type   = "Interface"
  security_group_ids = [
    aws_security_group.vpce.id,
  ]
  subnet_ids = [
    aws_subnet.private_d.id,
    aws_subnet.private_e.id,
  ]
  tags = {
    Name = "dkr-api-endpoint"
  }
}

# Interface-type endpoints must use subnet_ids instead of route_table_ids
resource "aws_vpc_endpoint" "logs" {
  vpc_id              = aws_vpc.app_vpc.id
  private_dns_enabled = true
  service_name        = "com.amazonaws.${var.charlie_blog_aws_region}.logs"
  vpc_endpoint_type   = "Interface"
  security_group_ids = [
    aws_security_group.vpce.id,
  ]
  subnet_ids = [
    aws_subnet.private_d.id,
    aws_subnet.private_e.id,
  ]
  tags = {
    Name = "logs-endpoint"
  }
}
These files are pretty straightforward. fargate.tf sets up our Elastic Container Service (ECS) service on Fargate, configures its network, and tells it where to find our container. alb.tf sets up our load balancer with all its required components and redirects all port 80 traffic to port 443.
Of note, as mentioned in Part 3, our target group defines a health_check, as AWS requires.
# fargate.tf

# Fargate is a serverless container service that makes container management easier
# by removing the need to manage the underlying infrastructure.

# Domain name mcelfresh.info is hardcoded everywhere in an attempt to make it
# obvious what's going on with DNS records.

# Region and image name are passed in. See the variables.tf file.

# We need a cluster in which to put our service.
resource "aws_ecs_cluster" "app" {
  name = "app"
}

# Log groups hold logs from our app.
resource "aws_cloudwatch_log_group" "charlie_blog" {
  name = "/ecs/charlie-blog"
}

# The main service.
resource "aws_ecs_service" "charlie_blog" {
  name            = "charlie-blog"
  task_definition = aws_ecs_task_definition.charlie_blog.arn
  cluster         = aws_ecs_cluster.app.id
  launch_type     = "FARGATE"

  desired_count = 1

  load_balancer {
    target_group_arn = aws_lb_target_group.charlie_blog.arn
    container_name   = "charlie-blog"
    container_port   = "80"
  }

  network_configuration {
    assign_public_ip = false

    security_groups = [
      aws_security_group.ecs_task.id,
      aws_security_group.egress_alb.id,
      aws_security_group.http.id,
    ]

    subnets = [
      aws_subnet.private_d.id,
      aws_subnet.private_e.id,
    ]
  }
}

# The task definition for our app.
resource "aws_ecs_task_definition" "charlie_blog" {
  family = "charlie-blog"

  container_definitions = <<EOF
[
  {
    "name": "charlie-blog",
    "image": "${var.charlie_blog_image}",
    "portMappings": [
      {
        "containerPort": 80
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-region": "${var.charlie_blog_aws_region}",
        "awslogs-group": "/ecs/charlie-blog",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }
]
EOF

  execution_role_arn = aws_iam_role.charlie_blog_task_execution_role.arn

  # These are the minimum values for Fargate containers.
  cpu                      = 256
  memory                   = 512
  requires_compatibilities = ["FARGATE"]

  # This is required for Fargate containers (more on this later).
  network_mode = "awsvpc"
}
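One small addition worth considering: the log group above keeps logs forever, which is the CloudWatch default. A retention period caps storage costs; this is an optional tweak, not part of the templates above:

```hcl
# Optional: expire app logs after 30 days instead of retaining them forever.
resource "aws_cloudwatch_log_group" "charlie_blog" {
  name              = "/ecs/charlie-blog"
  retention_in_days = 30
}
```

For a low-traffic blog the savings are tiny, but it is a good habit before any higher-volume service reuses this pattern.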
# alb.tf

# Load balancer target group
resource "aws_lb_target_group" "charlie_blog" {
  name        = "charlie-blog"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = aws_vpc.app_vpc.id

  health_check {
    enabled = true
    path    = "/health"
  }

  depends_on = [aws_alb.charlie_blog]
}

# Load balancer
resource "aws_alb" "charlie_blog" {
  name               = "charlie-blog-lb"
  internal           = false
  load_balancer_type = "application"

  subnets = [
    aws_subnet.public_d.id,
    aws_subnet.public_e.id,
  ]

  security_groups = [
    aws_security_group.http.id,
    aws_security_group.https.id,
    aws_security_group.egress_alb.id,
  ]

  depends_on = [aws_internet_gateway.igw]
}

# Load balancer listener on port 80 redirects all traffic to 443
resource "aws_alb_listener" "charlie_blog_http" {
  load_balancer_arn = aws_alb.charlie_blog.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

# Load balancer port 443 listener
resource "aws_alb_listener" "charlie_blog_https" {
  load_balancer_arn = aws_alb.charlie_blog.arn
  port              = "443"
  protocol          = "HTTPS"
  certificate_arn   = aws_acm_certificate_validation.mcelfresh_info.certificate_arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.charlie_blog.arn
  }
}
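The target group’s health_check above sets only the path and relies on AWS defaults for everything else. If you want those knobs explicit, here is a sketch; the values shown are our understanding of the ALB defaults, so verify against the AWS provider docs before relying on them:

```hcl
# Sketch of an explicit health_check block for the target group above.
# Values are believed to match the ALB defaults, but treat them as examples.
health_check {
  enabled             = true
  path                = "/health"
  protocol            = "HTTP"
  interval            = 30    # seconds between checks
  timeout             = 5     # seconds before a single check fails
  healthy_threshold   = 5     # consecutive successes to mark a target healthy
  unhealthy_threshold = 2     # consecutive failures to mark a target unhealthy
  matcher             = "200" # HTTP codes that count as healthy
}
```

Making these explicit is mostly useful when you later want faster failover (a shorter interval) or a more lenient matcher such as "200-399".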
Most of this setup works as you would expect: we need DNS, an SSL cert, a load balancer, and then something to run our container(s). Fargate seems great so far; we’ll have to see how it performs and how much it costs.