[LINUX] Build AWS EC2 and RDS with Terraform: Terraform 3-minute cooking

Introduction

This article describes how to build AWS EC2 and RDS with Terraform.

Running terraform apply takes about 3 minutes, so the environment can be built in the time it takes a cup of ramen to cook.

The following environment (*) will be deployed. In addition, a minimal OS setup is performed on the EC2 instance.

(Figure: deployment architecture diagram)

If you want to start from the basics of Terraform, please refer to the previously written article, The true value of Terraform automation starting with Oracle Cloud.

(*) The tf file described in this article is available on GitHub.

Terraform construction

To build on AWS with Terraform, create a user with IAM and prepare the necessary credentials. Then install Terraform and prepare the tf files needed to run it.

The prerequisites for the AWS environment are listed below.

- [x] A user has been created with IAM
- [x] The required permissions have been granted
- [x] A key pair for SSH has been created
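If you have not yet created the SSH key pair, a minimal sketch with ssh-keygen looks like the following (the `ssh` directory and file names match the directory layout used later in this article; adjust them as needed):

```shell
# Generate a 4096-bit RSA key pair with no passphrase into an ssh directory.
mkdir -p ssh
ssh-keygen -t rsa -b 4096 -N "" -f ssh/id_rsa -q
# The directory now contains the private key (id_rsa) and public key (id_rsa.pub).
ls ssh
```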

Create tf file

The tf files are explained below.

First, change to any working directory. This article uses the following directory structure, with the common directory as the current directory. The ssh directory can be placed anywhere.

--Directory structure

.
|-- common
|   |-- userdata
|   |   `-- cloud-init.tpl
|   |-- ec2.tf
|   |-- env-vars
|   |-- network.tf
|   |-- provider.tf
|   `-- rds.tf
`-- ssh
    |-- id_rsa
    `-- id_rsa.pub

--Various file descriptions

| file name | role |
|:--|:--|
| cloud-init.tpl | Initial build script for EC2 |
| ec2.tf | tf file for EC2 |
| env-vars | File defining the environment variables used by the provider |
| network.tf | tf file for the network |
| provider.tf | tf file for the provider |
| rds.tf | tf file for RDS |
| id_rsa | SSH private key |
| id_rsa.pub | SSH public key |
The contents of env-vars are shown below.

### Authentication
export TF_VAR_aws_access_key="<paste the contents of access_key>"
export TF_VAR_aws_secret_key="<paste the contents of secret_key>"

(*) Paste the contents of access_key and secret_key between the quotation marks, respectively.

The contents of provider.tf are shown below.

# Variable
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "region" {
    default = "ap-northeast-1"
}

# Provider
provider "aws" {
    access_key = var.aws_access_key
    secret_key = var.aws_secret_key
    region = var.region
}
The contents of network.tf are shown below.

# vpc
resource "aws_vpc" "dev-env" {
    cidr_block = "10.0.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "dev-env"
    }
}

# subnet
## public
resource "aws_subnet" "public-web" {
    vpc_id = "${aws_vpc.dev-env.id}"
    cidr_block = "10.0.1.0/24"
    availability_zone = "ap-northeast-1a"
    tags = {
      Name = "public-web"
    }
}

## private
resource "aws_subnet" "private-db1" {
    vpc_id = "${aws_vpc.dev-env.id}"
    cidr_block = "10.0.2.0/24"
    availability_zone = "ap-northeast-1a"
    tags = {
      Name = "private-db1"
    }
}

resource "aws_subnet" "private-db2" {
    vpc_id = "${aws_vpc.dev-env.id}"
    cidr_block = "10.0.3.0/24"
    availability_zone = "ap-northeast-1c"
    tags = {
      Name = "private-db2"
    }
}

# route table
resource "aws_route_table" "public-route" {
    vpc_id = "${aws_vpc.dev-env.id}"
    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.dev-env-gw.id}"
    }
    tags = {
      Name = "public-route"
    }
}

resource "aws_route_table_association" "public-a" {
    subnet_id = "${aws_subnet.public-web.id}"
    route_table_id = "${aws_route_table.public-route.id}"
}

# internet gateway
resource "aws_internet_gateway" "dev-env-gw" {
    vpc_id = "${aws_vpc.dev-env.id}"
    depends_on = [aws_vpc.dev-env]
    tags = {
      Name = "dev-env-gw"
    }
}
# Security Group
resource "aws_security_group" "public-web-sg" {
    name = "public-web-sg"
    vpc_id = "${aws_vpc.dev-env.id}"
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
      Name = "public-web-sg"
    }
}

resource "aws_security_group" "praivate-db-sg" {
    name = "praivate-db-sg"
    vpc_id = "${aws_vpc.dev-env.id}"
    ingress {
        from_port = 5432
        to_port = 5432
        protocol = "tcp"
        cidr_blocks = ["10.0.1.0/24"]
    }

    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
      Name = "praivate-db-sg"
    }
}

The contents of ec2.tf are shown below.

# EC2 Key Pairs
resource "aws_key_pair" "common-ssh" {
  key_name   = "common-ssh"
  public_key = "<Paste the contents of the public key>"
}

# EC2
resource "aws_instance" "webserver" {
    ami = "ami-011facbea5ec0363b"
    instance_type = "t2.micro"
    key_name   = "common-ssh"
    vpc_security_group_ids = [
      "${aws_security_group.public-web-sg.id}"
    ]
    subnet_id = "${aws_subnet.public-web.id}"
    associate_public_ip_address = "true"
    ebs_block_device {
      device_name    = "/dev/xvda"
      volume_type = "gp2"
      volume_size = 30
      }
    user_data          = "${file("./userdata/cloud-init.tpl")}"
    tags  = {
        Name = "webserver"
    }
}

# Output
output "public_ip_of_webserver" {
  value = "${aws_instance.webserver.public_ip}"
}

(*) The cidr_blocks values in the security groups are examples. In actual use, pay close attention to security; in particular, restrict the source addresses allowed for SSH.
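For example, the SSH ingress rule in public-web-sg could be narrowed to a single administrative address. This is only a sketch; 203.0.113.10 is an address from the TEST-NET-3 documentation range, so replace it with your own source IP:

```hcl
    ingress {
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        # 203.0.113.10/32 is a documentation placeholder; use your own source IP
        cidr_blocks = ["203.0.113.10/32"]
    }
```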

The contents of rds.tf are shown below.

# RDS
resource "aws_db_subnet_group" "praivate-db" {
    name        = "praivate-db"
    subnet_ids  = ["${aws_subnet.private-db1.id}", "${aws_subnet.private-db2.id}"]
    tags = {
        Name = "praivate-db"
    }
}

resource "aws_db_instance" "test-db" {
  identifier           = "test-db"
  allocated_storage    = 20
  storage_type         = "gp2"
  engine               = "postgres"
  engine_version       = "11.5"
  instance_class       = "db.t3.micro"
  name                 = "testdb"
  username             = "test"
  password             = "test"
  vpc_security_group_ids  = ["${aws_security_group.praivate-db-sg.id}"]
  db_subnet_group_name = "${aws_db_subnet_group.praivate-db.name}"
  skip_final_snapshot = true
}

(*) The password value is only an example. Note that not every string is allowed as an RDS password.

The contents of cloud-init.tpl are shown below.

#cloud-config

runcmd:
#Change host name
  - hostnamectl set-hostname webserver

#Package installation
##Only security related updates installed
  - yum update --security -y

## PostgreSQL client programs
  - yum install -y postgresql.x86_64

#Time zone change
##Backup of configuration file
  - cp  -p /etc/localtime /etc/localtime.org

##Create symbolic links
  - ln -sf  /usr/share/zoneinfo/Asia/Tokyo /etc/localtime

Terraform execution

First, perform the following preparatory work.

--Enable the environment variables: $ source env-vars
--Check the environment variables: $ env

After the preparatory work is complete, it is finally time to build with Terraform. The build takes just three steps!

  1. Initialize with terraform init
  2. Confirm with terraform plan
  3. Apply with terraform apply

The explanation of the terraform command is omitted.

After running terraform apply, each resource has been created once the message **Apply complete!** is output. You can connect to RDS by running psql from the EC2 instance, or from a SQL client such as DBeaver through an SSH tunnel.
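A sketch of the psql connection from the EC2 instance (the user and database names follow rds.tf above; `<rds-endpoint>` is a placeholder for the endpoint of test-db, which you can find in the RDS console):

```shell
# On the EC2 instance; the psql client was installed by cloud-init.
# <rds-endpoint> is a placeholder, not a real host name.
psql -h <rds-endpoint> -p 5432 -U test -d testdb
```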

terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_db_instance.test-db will be created
  + resource "aws_db_instance" "test-db" {
      + address                               = (known after apply)
      + allocated_storage                     = 20
      + apply_immediately                     = (known after apply)
      + arn                                   = (known after apply)
      + auto_minor_version_upgrade            = true
      + availability_zone                     = (known after apply)
      + backup_retention_period               = (known after apply)
      + backup_window                         = (known after apply)
      + ca_cert_identifier                    = (known after apply)
      + character_set_name                    = (known after apply)
      + copy_tags_to_snapshot                 = false
      + db_subnet_group_name                  = "praivate-db"
      + endpoint                              = (known after apply)
      + engine                                = "postgres"
      + engine_version                        = "11.5"
      + hosted_zone_id                        = (known after apply)
      + id                                    = (known after apply)
      + identifier                            = "test-db"
      + identifier_prefix                     = (known after apply)
      + instance_class                        = "db.t3.micro"
      + kms_key_id                            = (known after apply)
      + license_model                         = (known after apply)
      + maintenance_window                    = (known after apply)
      + monitoring_interval                   = 0
      + monitoring_role_arn                   = (known after apply)
      + multi_az                              = (known after apply)
      + name                                  = "testdb"
      + option_group_name                     = (known after apply)
      + parameter_group_name                  = (known after apply)
      + password                              = (sensitive value)
      + performance_insights_enabled          = false
      + performance_insights_kms_key_id       = (known after apply)
      + performance_insights_retention_period = (known after apply)
      + port                                  = (known after apply)
      + publicly_accessible                   = false
      + replicas                              = (known after apply)
      + resource_id                           = (known after apply)
      + skip_final_snapshot                   = true
      + status                                = (known after apply)
      + storage_type                          = "gp2"
      + timezone                              = (known after apply)
      + username                              = "test"
      + vpc_security_group_ids                = (known after apply)
    }

/*Omission*/

aws_db_instance.test-db: Still creating... [3m0s elapsed]
aws_db_instance.test-db: Creation complete after 3m5s [id=test-db]

Apply complete! Resources: 13 added, 0 changed, 0 destroyed.

Outputs:

public_ip_of_webserver = <IP address (*)>

(*) The EC2 public IP is output.

Knowledge

The points to keep in mind when creating resources in the AWS environment are described below.

--Availability Zones
Creating an RDS instance requires specifying multiple Availability Zones; it cannot be created with just one. At the time of writing, in the Japan (Tokyo) region you need to specify two of the following Availability Zones: ap-northeast-1a, ap-northeast-1c, ap-northeast-1d.

--Deleting RDS with terraform destroy
By default, RDS creates a final snapshot when the resource is deleted, so to run terraform destroy you must set the skip_final_snapshot option to true in the tf file. The default is false. Be careful when doing this in a production environment.

--RDS PostgreSQL specifics
The default collation and ctype for PostgreSQL on RDS is **en_US.UTF-8**. If performance matters, it seems better to connect with psql, drop the database, and recreate it with the desired collation.
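A minimal sketch of that recreate step, run via psql as the master user (the database name follows rds.tf above; C collation is shown as one example choice):

```sql
-- template0 must be used as the template when changing lc_collate/lc_ctype
DROP DATABASE testdb;
CREATE DATABASE testdb
    LC_COLLATE = 'C'
    LC_CTYPE   = 'C'
    TEMPLATE   = template0;
```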

--When you create an SSH key pair with OpenSSH
If you created your SSH key pair with OpenSSH and connect through an SSH tunnel from a SQL client such as DBeaver, you may need to convert the private key format.
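A sketch of that conversion with ssh-keygen, shown on a throwaway demo key so nothing is overwritten; for the layout in this article you would run the second command against a copy of ssh/id_rsa:

```shell
# Generate a demo key in the default OpenSSH format (stands in for id_rsa).
ssh-keygen -t rsa -b 2048 -N "" -f demo_id_rsa -q
# Rewrite the private key in classic PEM (PKCS#1) format, which SQL clients accept.
ssh-keygen -p -m PEM -f demo_id_rsa -P "" -N "" -q
head -n 1 demo_id_rsa   # -----BEGIN RSA PRIVATE KEY-----
```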

Conclusion

That's it for Terraform 3-minute cooking :cooking:.
