In this second article, continuing from the first, I will show you how to deploy my blokaly.com website, built with Hugo, onto AWS. There are two ways to deploy: over HTTP or over HTTPS. HTTPS (Hypertext Transfer Protocol Secure) is a secure version of HTTP that uses the SSL/TLS protocol for encryption and authentication. It involves setting up an SSL/TLS certificate, so it is a bit trickier than HTTP. We will first deploy our website onto AWS using HTTP, following the principles of this guide: hosting a static website using Amazon S3, and in the next article we will add more configuration and turn our HTTP website into HTTPS.

Introduction

Terraform

HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.

AWS S3

Amazon Simple Storage Service (Amazon S3 ) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.

tfenv

tfenv is a Terraform version manager that helps you install and switch between specific versions of Terraform.
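For example, to install and pin the Terraform version used in this article (assuming tfenv is already on your PATH):

```shell
# Install and activate the Terraform version used in this article
tfenv install 1.3.6
tfenv use 1.3.6

# Confirm the active version
terraform version
```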

Build

Prerequisites

The tools you will need to complete the build:

  • AWS Console
  • AWS Command Line Interface
  • Terraform by HashiCorp
  • tfenv (optional)
  • A purchased domain (I got mine from Namecheap)

Instructions

  1. First, if you don’t have an AWS account, you can follow this instruction to set up an AWS account and create an administrator user. There are two options for creating the user: IAM or IAM Identity Center. I chose IAM Identity Center. Note that this is different from the root user created during your AWS account sign-up. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to an administrative user, and use the root user only for tasks that require root user access.
  2. Then, follow this guide to install AWS CLI. The AWS Command Line Interface (AWS CLI) is an open source tool that enables you to interact with AWS services using commands in your command-line shell.
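    Once the CLI is installed, configure a named profile for your administrator user. A sketch, where <awscli_profile> is a placeholder for whatever profile name you choose:

```shell
# For IAM access keys:
aws configure --profile <awscli_profile>

# Or, if you chose IAM Identity Center (SSO):
aws configure sso --profile <awscli_profile>

# Verify the credentials work:
aws sts get-caller-identity --profile <awscli_profile>
```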
  3. Next, either install Terraform directly following this guide or use tfenv to install it. My Terraform version is v1.3.6.
  4. After you have completed the above three steps, let’s set up an AWS S3 bucket as the Terraform backend. A backend defines where Terraform stores its state data files. By default, Terraform uses a backend called local, which stores state as a local file on disk. If you are OK with local state for the moment, you can skip this step. I followed this guide to set up my AWS S3 backend, with some amendments.
    Here are my S3 bucket properties:
    Property                 Value
    Bucket Versioning        Disabled
    Encryption key type      SSE-KMS
    Bucket Key               Disabled
    Server access logging    Disabled
    Static website hosting   Disabled
    Here is my bucket policy:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "<user_arn>"
            },
            "Action": "s3:ListBucket",
            "Resource": "<bucket_arn>"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "<user_arn>"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "<bucket_arn>/*"
        }
    ]
}
```
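    If you prefer the command line, the backend bucket with the properties above can also be created with the AWS CLI. A sketch, assuming the placeholder names and the ap-southeast-1 region used later in this article:

```shell
# Create the state bucket (bucket name and profile are placeholders)
aws s3api create-bucket \
  --bucket <s3_bucket_name> \
  --region ap-southeast-1 \
  --create-bucket-configuration LocationConstraint=ap-southeast-1 \
  --profile <awscli_profile>

# Enable SSE-KMS default encryption to match the properties above
aws s3api put-bucket-encryption \
  --bucket <s3_bucket_name> \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}' \
  --profile <awscli_profile>
```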
  5. Now, let’s create a folder named terraform under the Hugo root folder and create a file named main.tf inside it. Start by adding several definitions:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # v4+ is required for the aws_s3_bucket_* sub-resources used below
      version = "~> 4.0"
    }
  }

  # remove this block if backend not required
  backend "s3" {
    bucket  = "<s3_bucket_name>"
    key     = "hugo/terraform.tfstate"
    region  = "ap-southeast-1"
    encrypt = true
    profile = "<awscli_profile>"
  }
}

provider "aws" {
  region  = "ap-southeast-1"
  profile = "<awscli_profile>"
}

variable "domain_name" {
  type        = string
  description = "The domain name for the website."
}

variable "bucket_name" {
  type        = string
  description = "The name of the bucket without the www. prefix. Normally domain_name."
}
```

Replace <s3_bucket_name> with your S3 bucket name and <awscli_profile> with your AWS CLI profile name. I’m using ap-southeast-1 as my region; please change it to your hosting region.
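The two variables can be supplied on the command line at plan/apply time, or via a terraform.tfvars file next to main.tf. A sketch with a placeholder domain:

```hcl
# terraform/terraform.tfvars (values are placeholders)
domain_name = "example.com"
bucket_name = "example.com"
```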
The following code will create a new AWS S3 bucket to host the Hugo files, then create an AWS Route53 zone and 2 records to associate with the created S3 bucket:

```hcl
resource "aws_s3_bucket" "www_bucket" {
  bucket = "www.${var.bucket_name}"
}

resource "aws_s3_bucket_acl" "www_bucket_acl" {
  bucket = aws_s3_bucket.www_bucket.id
  acl    = "public-read"
}

resource "aws_s3_bucket_cors_configuration" "www_bucket_cors" {
  bucket = aws_s3_bucket.www_bucket.id
  cors_rule {
    allowed_headers = ["Authorization", "Content-Length"]
    allowed_methods = ["GET", "POST"]
    allowed_origins = ["https://www.${var.domain_name}"]
    max_age_seconds = 3000
  }
}

resource "aws_s3_bucket_policy" "bucket-www_bucket_policy" {
  bucket = aws_s3_bucket.www_bucket.id
  policy = data.aws_iam_policy_document.iam-policy-www.json
}

data "aws_iam_policy_document" "iam-policy-www" {
  statement {
    sid    = "AllowPublicRead"
    effect = "Allow"
    resources = [
      "arn:aws:s3:::www.${var.domain_name}",
      "arn:aws:s3:::www.${var.domain_name}/*",
    ]
    actions = ["s3:GetObject"]
    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }
}

resource "aws_s3_bucket_website_configuration" "www_bucket_website" {
  bucket = aws_s3_bucket.www_bucket.bucket

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "404.html"
  }
}

resource "aws_route53_zone" "main" {
  name = var.domain_name
}

resource "aws_route53_record" "root_a" {
  zone_id = aws_route53_zone.main.zone_id
  name    = var.domain_name
  type    = "A"
  alias {
    name                   = aws_s3_bucket_website_configuration.www_bucket_website.website_domain
    zone_id                = aws_s3_bucket.www_bucket.hosted_zone_id
    evaluate_target_health = false
  }
}

resource "aws_route53_record" "www_a" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www.${var.domain_name}"
  type    = "A"
  alias {
    name                   = aws_s3_bucket_website_configuration.www_bucket_website.website_domain
    zone_id                = aws_s3_bucket.www_bucket.hosted_zone_id
    evaluate_target_health = false
  }
}
```
  6. Save the main.tf file and execute the terraform init command. If the init command completes successfully, you can execute terraform plan to review your settings and terraform apply to build and deploy them onto AWS.
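    For example, from the terraform folder (the domain values are placeholders; omit the -var flags if you created a terraform.tfvars file):

```shell
cd terraform
terraform init
terraform plan  -var 'domain_name=example.com' -var 'bucket_name=example.com'
terraform apply -var 'domain_name=example.com' -var 'bucket_name=example.com'
```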
  7. Last step: go to your AWS Route53 hosted zone and find your newly created record, something like this:
    then go to the management page where you purchased your domain (in my case Namecheap) and change the DNS nameservers to the values found in your AWS Route53 record.
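    Once the nameserver change has propagated, you can verify the delegation with dig (the domain is a placeholder):

```shell
# Should list the awsdns nameservers from your Route53 hosted zone
dig +short NS example.com

# Should resolve to your S3 website endpoint
dig +short www.example.com
```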
  8. Congratulations if you have completed all the previous steps! Next, see my first article to write your first post.
  9. To build and deploy your Hugo posts to AWS S3, execute:
```shell
$ hugo
$ aws s3 sync public s3://www.<domain_name> --delete
```

Hugo uses the public folder as the default output directory; change it to your configured folder if yours differs.

In my next article, I will write about how to deploy the Hugo posts to an HTTPS website using AWS CloudFront.

References