[LINUX] The true value of Terraform automation starting with Oracle Cloud

Introduction

This is the Day 6 article of the terraform Advent Calendar 2019. 🎄

This article is about Terraform automation in an Oracle Cloud environment.

The cloud infrastructure used will be Oracle Cloud, but the basic way of writing tf files will not change, so the general idea can be applied to other clouds as well.

In this article, we will start with declarative configuration management, give an overview of Terraform and its basic usage, and then deploy the following environment (*). Please enjoy the power of Terraform!

terraform.png

- Build two Web servers, one DB server, and one operation management server as instances
- Attach a secondary VNIC to separate the business LAN from the operations LAN
- Import your own SSL certificate into the load balancer and configure round robin and the backend set
- Install Apache on the Web servers and PostgreSQL on the DB server

(*) The tf file described in this article is available on GitHub.

Declarative configuration management

Before learning Terraform, let's first understand **declarative configuration management** and the **purpose of configuration management tools**. This will help you get the maximum value out of Terraform!

Declarative configuration management

**Cloud native** (*) is an approach that abstracts conventional infrastructure resources and develops applications on a cloud environment.

Because cloud native is centered on cloud infrastructure, it has a high affinity with applications divided into small components, such as container virtualization technology and microservices. It can respond flexibly to short-cycle development methods such as agile, so faster releases can be expected.

For example, waterfall development in a traditional on-premises environment goes through the following phases for infrastructure resources:

- Determine server resources through performance estimation during requirement definition (non-functional requirements)
- Design the system configuration and parameters in basic design and detailed design
- Build and test to complete the infrastructure environment

Then, after release, when the system enters the operation phase and service management is performed according to ITIL, the processes of configuration management, change management, and release management each take place.

However, with the democratization of the cloud and the spread of container virtualization technology, the buzzwords DevOps and cloud native were born. As configuration management tools such as Puppet and Chef, and orchestration tools such as Kubernetes, drew attention as technologies supporting the configuration management life cycle from construction onward, the approach to systems evolved from imperative procedures to declarative ones.

Today, the concept of infrastructure in cloud native can be described as a cloud native architecture based on **declarative configuration management**.

By assuming declarative configuration management, operational design can be taken into account when designing the system architecture. Issues that previously only became apparent once operation began can now be addressed with automation at design time. It also makes scaling easier.
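To make the contrast concrete, here is a hypothetical sketch: an imperative approach lists the steps, while a declarative approach states only the desired end state and leaves the reconciliation to the tool. The resource and names below are illustrative.

```hcl
# Imperative style, as a procedure (pseudocode in comments):
#   1. create a VCN
#   2. if it already exists, skip creation
#   3. if the CIDR changed, delete and recreate it
#
# Declarative style: describe only the desired end state.
resource "oci_core_virtual_network" "sample" {
  display_name   = "sample-vcn"
  compartment_id = "${var.compartment_ocid}"
  cidr_block     = "10.0.0.0/16"
}
# Terraform compares this declaration with the actual environment and
# decides for itself whether to create, update, or leave the resource as-is.
```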

(*) Like "artificial intelligence", the word "cloud native" has no strictly defined meaning.

Purpose of the configuration management tool

Let's think about the purpose of a configuration management tool.

**From a DevOps perspective, the point is not deploying a configuration management tool itself, but how much you can optimize your development process by leveraging it.**

I previously wrote about this in "What is Ansible? The purpose of configuration management tools: understanding the fastest way to introduce Ansible", but the point is that by clarifying the purpose of introducing a configuration management tool, you can see what you actually want to do.

In other words, if you do not clarify the purpose of the configuration management tool and understand and anticipate its benefits and the post-introduction operational flow, you will not be able to maximize those benefits.

With Terraform, you can automate the creation of cloud infrastructure resources that was previously done manually. And by combining it with tools such as Ansible, you can also set up the OS and middleware efficiently in one go.

**As a result, by introducing configuration management tools such as Terraform, you can dramatically reduce time-consuming work and devote that time to other important issues.**

Also, from the perspective of idempotence, the result is the same no matter who runs it, which prevents operational mistakes and maintains quality. This works well not only in the development phase but also in the operations phase.

terraform自動化.png

In this article, we will use Terraform and cloud-init to create the cloud infrastructure resources and perform the initial setup of the OS and MW (middleware).

Terraform overview

Terraform is one of the configuration management tools for managing the resources of cloud infrastructure.

スクリーンショット 2019-12-05 10.46.43.png

It provides a process to automate provisioning, which is essential for modern application development such as ** DevOps ** and ** Infrastructure as Code **.

Using Terraform is very simple. You simply describe resource information such as instances and networks in definition files called tf files and run the terraform command; that alone builds the cloud infrastructure.
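For example, the whole cycle can be as small as one tf file and three commands. The following is a minimal, hypothetical sketch (the variable and resource names are illustrative):

```hcl
# demo.tf — a hypothetical minimal definition file
variable "compartment_ocid" {}

resource "oci_core_virtual_network" "demo" {
  display_name   = "demo-vcn"
  compartment_id = "${var.compartment_ocid}"
  cidr_block     = "10.0.0.0/16"
}

output "vcn_id" {
  value = "${oci_core_virtual_network.demo.id}"
}

# Save the file, then run:
#   terraform init    # download the provider plugin
#   terraform plan    # preview what will be created
#   terraform apply   # build the infrastructure
```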

As an image, it is very similar to the "slow cooker", a kitchen appliance popular in the United States: set the ingredients, press the switch, and the dish completes itself. The process is exactly the same.

tfファイル.png

But it's not a ** silver bullet **.

銀の弾丸.png

**After trying various things with Terraform, you will find things you want to do that cannot be done due to the cloud provider's specifications.**

When introducing Terraform, it's a good idea to do plenty of validation up front to clarify what it can and cannot do.

Terraform construction

To build with Terraform in an Oracle Cloud environment, create a user and prepare the necessary credential information. Then install Terraform and prepare the tf files needed to run it.

The prerequisites for an Oracle Cloud environment are listed below.

- [x] Create an Oracle Cloud user
- [x] Create and register an API public key (for the procedure, see Required Keys and OCIDs)
- [x] Create an SSH public key (for the procedure, see Managing Key Pairs on Linux Instances (https://docs.cloud.oracle.com/iaas/Content/Compute/Tasks/managingkeypairs.htm?Highlight=ssh))

Terraform installation

In this article, we will use Mac as an example.

  1. First, download the binary from Download Terraform.
  2. After downloading, extract it to any directory.
  3. Add the directory to your PATH in .bashrc or similar. The following is an example of adding a path under /Applications.
export PATH=$PATH:/Applications/

After setting the PATH, run the following command; if version information is output, the PATH is set correctly.

# terraform -v

Terraform v0.12.7

Your version of Terraform is out of date! The latest version
is 0.12.13. You can update by downloading from www.terraform.io/downloads.html

(*) The message above appears because the version of Terraform in use is out of date. It is not displayed when you use a newer version of Terraform.

Installing Terraform Provider for Oracle Cloud Infrastructure

It is downloaded automatically when you run terraform init (described later), so no separate download is required.
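If you want to control which plugin version terraform init fetches, Terraform 0.12 also accepts a version constraint inside the provider block. A hedged sketch (the constraint value is illustrative):

```hcl
provider "oci" {
  version          = ">= 3.50"  # illustrative constraint; terraform init downloads a matching plugin
  tenancy_ocid     = "${var.tenancy_ocid}"
  user_ocid        = "${var.user_ocid}"
  fingerprint      = "${var.fingerprint}"
  private_key_path = "${var.private_key_path}"
  region           = "${var.region}"
}
```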

Create tf file

Next, I will explain the tf files.

First, change to any working directory. In this article, the directory structure is as follows, and the current directory is the oci directory.

--Directory structure

.
|-- common
|   |-- userdata
|   |   |-- cloud-init1.tpl
|   |   `-- cloud-init2.tpl
|   |-- compute.tf
|   |-- env-vars
|   |-- lb.tf
|   |-- network.tf
|   |-- provider.tf
|   `-- securitylist.tf
|-- oci_api_key.pem
|-- oci_api_key_public.pem
`-- ssh
    |-- id_rsa
    `-- id_rsa.pub

--Various file descriptions

| file name | role |
| --- | --- |
| cloud-init1.tpl | Initial build script for the Web servers |
| cloud-init2.tpl | Initial build script for the DB server |
| compute.tf | Instance tf file |
| env-vars | Environment variable definitions used by the provider |
| lb.tf | Load balancer tf file |
| network.tf | Network tf file |
| provider.tf | Provider tf file |
| securitylist.tf | Security list tf file |
| oci_api_key.pem | API private key |
| oci_api_key_public.pem | API public key |
| id_rsa | SSH private key |
| id_rsa.pub | SSH public key |

**The points to keep in mind when creating tf files are described below.**

- Give tf files easy-to-understand names and split them per resource
- Do not write items in a tf file unless they are changed from the default
- The order of items in a tf file does not matter, but decide on a rule and write them in an easy-to-understand order
- Aligning the spacing around = is not required, so decide on a rule (for example, align the = signs) and write values in an easy-to-understand way
- Some symbols, such as underscores, cannot be used in certain resource names

This section describes the tf file used in this article. (*)

(*) Some values are credential information, so x is used as a placeholder.

First is env-vars, which exports the credential information as environment variables.

### Authentication details
export TF_VAR_tenancy_ocid=ocid1.tenancy.oc1..aaaaaaaaxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export TF_VAR_user_ocid=ocid1.user.oc1..aaaaaaaaxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export TF_VAR_fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef
export TF_VAR_private_key_path=/oci/oci_api_key.pem
export TF_VAR_private_key_password=xxxxxxxx

### Region
export TF_VAR_region=ap-tokyo-1

### Compartment
export TF_VAR_compartment_ocid=ocid1.compartment.oc1..aaaaaaaaxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

### Public/private keys used on the instance
export TF_VAR_ssh_public_key=$(cat /oci/ssh/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat /oci/ssh/id_rsa)

(*) private_key_password is not required if you have not set a passphrase on the API private key.

Next is provider.tf.

# Variable
variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "private_key_password" {}
variable "region" {}
variable "compartment_ocid" {}
variable "ssh_private_key" {}
variable "ssh_public_key" {}

# Configure the Oracle Cloud Infrastructure provider with an API Key
provider "oci" {
  tenancy_ocid = "${var.tenancy_ocid}"
  user_ocid = "${var.user_ocid}"
  fingerprint = "${var.fingerprint}"
  private_key_path = "${var.private_key_path}"
  private_key_password = "${var.private_key_password}"
  region = "${var.region}"
}
Next is network.tf.

# Virtual Cloud Network
## vcn1
resource "oci_core_virtual_network" "vcn1" {
   display_name = "vcn1"
   compartment_id = "${var.compartment_ocid}"
   cidr_block = "10.0.0.0/16"
   dns_label = "vcn1"
}

## vcn2
resource "oci_core_virtual_network" "vcn2" {
   display_name = "vcn2"
   compartment_id = "${var.compartment_ocid}"
   cidr_block = "192.168.0.0/16"
   dns_label = "vcn2"
}

# Subnet
## Subnet LB
resource "oci_core_subnet" "LB_Segment" {
  display_name        = "Development environment_LB segment"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"
  cidr_block          = "10.0.0.0/24"
  route_table_id      = "${oci_core_default_route_table.default-route-table1.id}"
  security_list_ids   = ["${oci_core_security_list.LB_securitylist.id}"]
}

## Subnet Web
resource "oci_core_subnet" "Web_Segment" {
  display_name        = "Development environment_WEB segment"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"
  cidr_block          = "10.0.1.0/24"
  route_table_id      = "${oci_core_default_route_table.default-route-table1.id}"
  security_list_ids   = ["${oci_core_security_list.Web_securitylist.id}"]
}

## Subnet DB
resource "oci_core_subnet" "DB_Segment" {
  display_name        = "Development environment_DB segment"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"
  cidr_block          = "10.0.2.0/24"
  route_table_id      = "${oci_core_route_table.nat-route-table.id}"
  prohibit_public_ip_on_vnic = "true"
  security_list_ids   = ["${oci_core_security_list.DB_securitylist.id}"]
}

## Subnet Operation
resource "oci_core_subnet" "Ope_Segment" {
  display_name        = "Development environment_Investment segment"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn2.id}"
  cidr_block          = "192.168.1.0/24"
  route_table_id      = "${oci_core_default_route_table.default-route-table2.id}"
  security_list_ids   = ["${oci_core_security_list.Ope_securitylist.id}"]
}

# Route Table
## default-route-table1
resource "oci_core_default_route_table" "default-route-table1" {
  manage_default_resource_id = "${oci_core_virtual_network.vcn1.default_route_table_id}"

  route_rules {
    destination = "0.0.0.0/0"
    destination_type = "CIDR_BLOCK"
    network_entity_id = "${oci_core_internet_gateway.internet-gateway1.id}"
  }
}

## nat-route-table
resource "oci_core_route_table" "nat-route-table" {
  display_name   = "nat-route-table"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"
  route_rules {
    destination        = "0.0.0.0/0"
    network_entity_id = "${oci_core_nat_gateway.nat-gateway.id}"
  }
}

## default-route-table2
resource "oci_core_default_route_table" "default-route-table2" {
  manage_default_resource_id = "${oci_core_virtual_network.vcn2.default_route_table_id}"

  route_rules {
    destination = "0.0.0.0/0"
    destination_type = "CIDR_BLOCK"
    network_entity_id = "${oci_core_internet_gateway.internet-gateway2.id}"
  }
}

# Internet Gateway
## internet-gateway1
resource "oci_core_internet_gateway" "internet-gateway1" {
  display_name   = "internet-gateway1"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"
}

## internet-gateway2
resource "oci_core_internet_gateway" "internet-gateway2" {
  display_name   = "internet-gateway2"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn2.id}"
}

# Nat-Gateway
resource "oci_core_nat_gateway" "nat-gateway" {
  display_name   = "nat-gateway"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"
}

(*) If you do not want to allow public IP addresses on a subnet, specify prohibit_public_ip_on_vnic = "true".

Next is lb.tf.

/* Load Balancer */

resource "oci_load_balancer" "load-balancer" {
  shape          = "100Mbps"
  compartment_id = "${var.compartment_ocid}"

  subnet_ids = [
    "${oci_core_subnet.LB_Segment.id}",
  ]

  display_name = "load-balancer"
}

resource "oci_load_balancer_backend_set" "lb-bes1" {
  name             = "lb-bes1"
  load_balancer_id = "${oci_load_balancer.load-balancer.id}"
  policy           = "ROUND_ROBIN"

  health_checker {
    port                = "80"
    protocol            = "HTTP"
    response_body_regex = ".*"
    url_path            = "/"
  }
}

resource "oci_load_balancer_certificate" "lb-cert1" {
  load_balancer_id   = "${oci_load_balancer.load-balancer.id}"
  ca_certificate     = "-----BEGIN CERTIFICATE-----\nMIIC9jCCAd4CCQD2rPUVJETHGzANBgkqhkiG9w0BAQsFADA9MQswCQYDVQQGEwJV\nUzELMAkGA1UECAwCV0ExEDAOBgNVBAcMB1NlYXR0bGUxDzANBgNVBAoMBk9yYWNs\nZTAeFw0xOTAxMTcyMjU4MDVaFw0yMTAxMTYyMjU4MDVaMD0xCzAJBgNVBAYTAlVT\nMQswCQYDVQQIDAJXQTEQMA4GA1UEBwwHU2VhdHRsZTEPMA0GA1UECgwGT3JhY2xl\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA30+wt7OlUB/YpmWbTRkx\nnLG0lKWiV+oupNKj8luXmC5jvOFTUejt1pQhpA47nCqywlOAfk2N8hJWTyJZUmKU\n+DWVV2So2B/obYxpiiyWF2tcF/cYi1kBYeAIu5JkVFwDe4ITK/oQUFEhIn3Qg/oC\nMQ2985/MTdCXONgnbmePU64GrJwfvOeJcQB3VIL1BBfISj4pPw5708qTRv5MJBOO\njLKRM68KXC5us4879IrSA77NQr1KwjGnQlykyCgGvvgwgrUTd5c/dH8EKrZVcFi6\nytM66P/1CTpk1YpbI4gqiG0HBbuXG4JRIjyzW4GT4JXeSjgvrkIYL8k/M4Az1WEc\n2wIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQAuI53m8Va6EafDi6GQdQrzNNQFCAVQ\nxIABAB0uaSYCs3H+pqTktHzOrOluSUEogXRl0UU5/OuvxAz4idA4cfBdId4i7AcY\nqZsBjA/xqH/rxR3pcgfaGyxQzrUsJFf0ZwnzqYJs7fUvuatHJYi/cRBxrKR2+4Oj\nlUbb9TSmezlzHK5CaD5XzN+lZqbsSvN3OQbOryJCbtjZVQFGZ1SmL6OLrwpbBKuP\nn2ob+gaP57YSzO3zk1NDXMlQPHRsdSOqocyKx8y+7J0g6MqPvBzIe+wI3QW85MQY\nj1/IHmj84LNGp7pHCyiYx/oI+00gRch04H2pJv0TP3sAQ37gplBwDrUo\n-----END CERTIFICATE-----"

  certificate_name   = "certificate1"

  private_key        = "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEA30+wt7OlUB/YpmWbTRkxnLG0lKWiV+oupNKj8luXmC5jvOFT\nUejt1pQhpA47nCqywlOAfk2N8hJWTyJZUmKU+DWVV2So2B/obYxpiiyWF2tcF/cY\n\ni1kBYeAIu5JkVFwDe4ITK/oQUFEhIn3Qg/oCMQ2985/MTdCXONgnbmePU64GrJwf\nvOeJcQB3VIL1BBfISj4pPw5708qTRv5MJBOOjLKRM68KXC5us4879IrSA77NQr1K\nwjGnQlykyCgGvvgwgrUTd5c/dH8EKrZVcFi6ytM66P/1CTpk1YpbI4gqiG0HBbuX\nG4JRIjyzW4GT4JXeSjgvrkIYL8k/M4Az1WEc2wIDAQABAoIBAGQznukfG/uS/qTT\njNcQifl0p8HXfLwUIa/lsJkMTj6D+k8DkF59tVMGjv3NQSQ26JVX4J1L8XiAj+fc\nUtYr1Ap4CLX5PeYUkzesvKK6lPKXQvCh+Ip2eq9PVrvL2WcdDpb5695cy7suXD7c\n05aUtS0LrINH3eXAxkpEe5UHtQFni5YLrCLEXd+SSA3OKdCB+23HRELc1iCTvqjK\n5AtR916uHTBhtREHRMvWIdu4InRUsedlJhaJOLJ8G8r64JUtfm3wLUK1U8HFOsd0\nLAx9ZURU6cXl4osTWiy1vigGaM8Xuish2HkOLNYZADDUiDBB3SshmW5IDAJ5XTn5\nqVrszRECgYEA79j1y+WLTyV7yz7XkWk3OqoQXG4b2JfKItJI1M95UwllzQ8U/krM\n+QZjP3NTtB9i1YoHyaEfic103hV9Fkgz8jvKS5ocLGJulpN4CgqbHN6v9EJ3dqTk\no6X8mpx2eP2E0ngRekFyC/OCp0Zhe2KR9PXhijMa5eB2LTeCMIS/tzkCgYEA7lmk\nIdVjcpfqY7UFJ2R8zqPJHOne2+llrl9vzo6N5kx4DzAg7MP6XO9MekOvfmD1X1Lm\nFckXWFEF+0TlN5YvCTR/+OmVufYM3xp4GBT8RZdLFbyI4+xpAAeSC4SeM0ZkC9Jt\nrKqCS24+Kqy/+qSqtkxiPLQrXSdCSfCUlmn0ALMCgYBB7SLy3q+CG82BOk7Km18g\n8un4XhOtX1uiYqa+SCETH/wpd0HP/AOHV6gkIrEZS59BDuXBGFaw7BZ5jPKLE2Gj\n7adXTI797Dh1jydpqyyjrNo0i6iGpiBqkw9x+Bvged7ucy5qql6MxmxdSk01Owzf\nhk5uTEnScfZJy34vk+2WkQKBgBXx5uy+iuN4HTqE5i6UT/FunwusdLpmqNf/LXol\nIed8TumHEuD5wklgNvhi1vuZzb2zEkAbPa0B+L0DwN73UulUDhxK1WBDyTeZZklB\nVWDK5zzfGPNzRs+b4tRwp2gtKPT1sOde45QyWELxmNNo6dbS/ZB9Pijbfnz0S5n1\ns2OFAoGBAJUohI1+d2hKlkSUzpCorQarDe8lFVEbGMu0kX0JNDI7QU+H8vDp9NOl\nGqLm3sCVBYypT8sWfchgZpcVaLRLQCQtWy4+CbMN6DT3j/uBWeDpayU5Gvqt0/no\nvwqbG6b0NEYLRPLEdsS/c8TV9mMlvb0EW+GXfmkpTrTNt3hyXniu\n-----END RSA PRIVATE KEY-----"

  public_certificate = "-----BEGIN CERTIFICATE-----\nMIIC9jCCAd4CCQD2rPUVJETHGzANBgkqhkiG9w0BAQsFADA9MQswCQYDVQQGEwJV\nUzELMAkGA1UECAwCV0ExEDAOBgNVBAcMB1NlYXR0bGUxDzANBgNVBAoMBk9yYWNs\nZTAeFw0xOTAxMTcyMjU4MDVaFw0yMTAxMTYyMjU4MDVaMD0xCzAJBgNVBAYTAlVT\nMQswCQYDVQQIDAJXQTEQMA4GA1UEBwwHU2VhdHRsZTEPMA0GA1UECgwGT3JhY2xl\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA30+wt7OlUB/YpmWbTRkx\nnLG0lKWiV+oupNKj8luXmC5jvOFTUejt1pQhpA47nCqywlOAfk2N8hJWTyJZUmKU\n+DWVV2So2B/obYxpiiyWF2tcF/cYi1kBYeAIu5JkVFwDe4ITK/oQUFEhIn3Qg/oC\nMQ2985/MTdCXONgnbmePU64GrJwfvOeJcQB3VIL1BBfISj4pPw5708qTRv5MJBOO\njLKRM68KXC5us4879IrSA77NQr1KwjGnQlykyCgGvvgwgrUTd5c/dH8EKrZVcFi6\nytM66P/1CTpk1YpbI4gqiG0HBbuXG4JRIjyzW4GT4JXeSjgvrkIYL8k/M4Az1WEc\n2wIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQAuI53m8Va6EafDi6GQdQrzNNQFCAVQ\nxIABAB0uaSYCs3H+pqTktHzOrOluSUEogXRl0UU5/OuvxAz4idA4cfBdId4i7AcY\nqZsBjA/xqH/rxR3pcgfaGyxQzrUsJFf0ZwnzqYJs7fUvuatHJYi/cRBxrKR2+4Oj\nlUbb9TSmezlzHK5CaD5XzN+lZqbsSvN3OQbOryJCbtjZVQFGZ1SmL6OLrwpbBKuP\nn2ob+gaP57YSzO3zk1NDXMlQPHRsdSOqocyKx8y+7J0g6MqPvBzIe+wI3QW85MQY\nj1/IHmj84LNGp7pHCyiYx/oI+00gRch04H2pJv0TP3sAQ37gplBwDrUo\n-----END CERTIFICATE-----"

  lifecycle {
    create_before_destroy = true
  }
}

resource "oci_load_balancer_path_route_set" "test_path_route_set" {
  #Required
  load_balancer_id = "${oci_load_balancer.load-balancer.id}"
  name             = "pr-set1"

  path_routes {
    #Required
    backend_set_name = "${oci_load_balancer_backend_set.lb-bes1.name}"
    path             = "/test"

    path_match_type {
      #Required
      match_type = "EXACT_MATCH"
    }
  }
}

resource "oci_load_balancer_hostname" "test_hostname1" {
  #Required
  hostname         = "app.example.com"
  load_balancer_id = "${oci_load_balancer.load-balancer.id}"
  name             = "hostname1"
}

resource "oci_load_balancer_listener" "lb-listener1" {
  load_balancer_id         = "${oci_load_balancer.load-balancer.id}"
  name                     = "http"
  default_backend_set_name = "${oci_load_balancer_backend_set.lb-bes1.name}"
  hostname_names           = ["${oci_load_balancer_hostname.test_hostname1.name}"]
  port                     = 80
  protocol                 = "HTTP"

  connection_configuration {
    idle_timeout_in_seconds = "2"
  }
}

resource "oci_load_balancer_listener" "lb-listener2" {
  load_balancer_id         = "${oci_load_balancer.load-balancer.id}"
  name                     = "https"
  default_backend_set_name = "${oci_load_balancer_backend_set.lb-bes1.name}"
  port                     = 443
  protocol                 = "HTTP"

  ssl_configuration {
    certificate_name        = "${oci_load_balancer_certificate.lb-cert1.certificate_name}"
    verify_peer_certificate = false
  }
}

resource "oci_load_balancer_backend" "lb-be1" {
  load_balancer_id = "${oci_load_balancer.load-balancer.id}"
  backendset_name  = "${oci_load_balancer_backend_set.lb-bes1.name}"
  ip_address = "${var.ip_address3}"
  port             = 80
  backup           = false
  drain            = false
  offline          = false
  weight           = 1
}

resource "oci_load_balancer_backend" "lb-be2" {
  load_balancer_id = "${oci_load_balancer.load-balancer.id}"
  backendset_name  = "${oci_load_balancer_backend_set.lb-bes1.name}"
  ip_address = "${var.ip_address4}"
  port             = 80
  backup           = false
  drain            = false
  offline          = false
  weight           = 1
}

output "lb_public_ip" {
  value = ["${oci_load_balancer.load-balancer.ip_address_details}"]
}

(*) If you define an output, the specified value is printed after the resources are created.

Next is securitylist.tf.

# Security list
## LB
resource "oci_core_security_list" "LB_securitylist" {
  display_name   = "Development environment_LB segment"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"

  ingress_security_rules {
    source = "0.0.0.0/0"
    protocol = "6"
    tcp_options {
      min = 443
      max = 443
    }
  }

  egress_security_rules {
    destination = "0.0.0.0/0"
    protocol = "ALL"
    }
  }

## Web
resource "oci_core_security_list" "Web_securitylist" {
  display_name   = "Development environment_Web segment"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"

  ingress_security_rules {
    source = "10.0.0.0/24"
    protocol = "6"
    tcp_options {
      min = 80
      max = 80
    }
  }

  egress_security_rules {
    destination = "0.0.0.0/0"
    protocol = "ALL"
    }
  }

## DB
resource "oci_core_security_list" "DB_securitylist" {
  display_name   = "Development environment_DB segment"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn1.id}"

  ingress_security_rules {
    source = "10.0.1.0/24"
    protocol = "6"
    tcp_options {
      min = 5432
      max = 5432
    }
  }

  egress_security_rules {
    destination = "0.0.0.0/0"
    protocol = "ALL"
    }
  }

## Security list Ope
resource "oci_core_security_list" "Ope_securitylist" {
  display_name   = "Development environment_Investment segment"
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.vcn2.id}"

  ingress_security_rules {
    source = "192.168.1.0/24"
    protocol = "1"
  }

  ingress_security_rules {
    source = "x.x.x.x/32"
    protocol = "6"
    tcp_options {
      min = 22
      max = 22
    }
  }

  ingress_security_rules {
    source = "192.168.1.0/24"
    protocol = "6"
    tcp_options {
      min = 22
      max = 22
    }
  }

  egress_security_rules {
    destination = "0.0.0.0/0"
    protocol = "ALL"
    }
  }

(*) source = "x.x.x.x/32" restricts SSH access to a specific public IP address.

Finally, compute.tf.

# Variable
variable "ImageOS" {
  default = "Oracle Linux"
}

variable "ImageOSVersion" {
  default = "7.7"
}

variable "instance_shape" {
  default = "VM.Standard.E2.1"
}

variable "fault_domain" {
  default = "FAULT-DOMAIN-1"
}

variable "ip_address1" {
  default = "192.168.1.2"
}

variable "ip_address2" {
  default = "192.168.1.3"
}

variable "ip_address3" {
  default = "10.0.1.2"
}

variable "ip_address4" {
  default = "10.0.1.3"
}

variable "ip_address5" {
  default = "192.168.1.4"
}

variable "ip_address6" {
  default = "10.0.2.2"
}

variable "ip_address7" {
  default = "192.168.1.5"
}

# Gets a list of Availability Domains
data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

# Gets a list of all Oracle Linux 7.7 images that support a given Instance shape
data "oci_core_images" "instance" {
  compartment_id           = "${var.tenancy_ocid}"
  operating_system         = "${var.ImageOS}"
  operating_system_version = "${var.ImageOSVersion}"
  shape                    = "${var.instance_shape}"
}

# Instance
## Compute Web-Server#1
resource "oci_core_instance" "instance1" {
  source_details {
    source_type = "image"
    source_id   = "${lookup(data.oci_core_images.instance.images[0], "id")}"
  }

  display_name        = "Web-Server#1"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0], "name")}"
  shape               = "${var.instance_shape}"
  compartment_id      = "${var.compartment_ocid}"

  create_vnic_details {
    subnet_id        = "${oci_core_subnet.Ope_Segment.id}"
    assign_public_ip = "true"
    private_ip       = "${var.ip_address1}"
  }

  metadata = {
        ssh_authorized_keys = "${var.ssh_public_key}"
        user_data           = "${base64encode(file("./userdata/cloud-init1.tpl"))}"
    }

  fault_domain        = "${var.fault_domain}"

  provisioner "remote-exec" {
    connection {
      host    = "${oci_core_instance.instance1.public_ip}"
      type    = "ssh"
      user    = "opc"
      agent   = "true"
      timeout = "3m"
    }

    inline = [
      "crontab -l | { cat; echo \"@reboot sudo /usr/local/bin/secondary_vnic_all_configure.sh -c\"; } | crontab -"
      ]
  }
}

### SecondaryVNIC Web-Server#1
resource "oci_core_vnic_attachment" "Web1_secondary_vnic_attachment" {
  create_vnic_details {
    display_name           = "SecondaryVNIC"
    subnet_id              = "${oci_core_subnet.Web_Segment.id}"
    assign_public_ip       = "true"
    private_ip             = "${var.ip_address3}"
    skip_source_dest_check = "false"
}

  instance_id = "${oci_core_instance.instance1.id}"

}

## Compute Web-Server#2
resource "oci_core_instance" "instance2" {
  source_details {
    source_type = "image"
    source_id   = "${lookup(data.oci_core_images.instance.images[0], "id")}"
  }

  display_name        = "Web-Server#2"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0], "name")}"
  shape               = "${var.instance_shape}"
  compartment_id      = "${var.compartment_ocid}"

  create_vnic_details {
    subnet_id        = "${oci_core_subnet.Ope_Segment.id}"
    assign_public_ip = "true"
    private_ip       = "${var.ip_address2}"
  }

  metadata = {
        ssh_authorized_keys = "${var.ssh_public_key}"
        user_data           = "${base64encode(file("./userdata/cloud-init1.tpl"))}"
    }

  fault_domain        = "${var.fault_domain}"

  provisioner "remote-exec" {
    connection {
      host    = "${oci_core_instance.instance2.public_ip}"
      type    = "ssh"
      user    = "opc"
      agent   = "true"
      timeout = "3m"
      }

    inline = [
      "crontab -l | { cat; echo \"@reboot sudo /usr/local/bin/secondary_vnic_all_configure.sh -c\"; } | crontab -"
      ]
  }
}

### SecondaryVNIC Web-Server#2
resource "oci_core_vnic_attachment" "Web2_secondary_vnic_attachment" {
  create_vnic_details {
    display_name           = "SecondaryVNIC"
    subnet_id              = "${oci_core_subnet.Web_Segment.id}"
    assign_public_ip       = "true"
    private_ip             = "${var.ip_address4}"
    skip_source_dest_check = "false"
}

  instance_id = "${oci_core_instance.instance2.id}"

}

## Compute DB-Server
resource "oci_core_instance" "instance3" {
  source_details {
    source_type = "image"
    source_id   = "${lookup(data.oci_core_images.instance.images[0], "id")}"
  }

  display_name        = "DB-Server"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0], "name")}"
  shape               = "${var.instance_shape}"
  compartment_id      = "${var.compartment_ocid}"

  create_vnic_details {
    subnet_id        = "${oci_core_subnet.Ope_Segment.id}"
    private_ip       = "${var.ip_address5}"
  }

  metadata = {
        ssh_authorized_keys = "${var.ssh_public_key}"
        user_data           = "${base64encode(file("./userdata/cloud-init2.tpl"))}"
    }

  fault_domain        = "${var.fault_domain}"

  provisioner "remote-exec" {
    connection {
      host    = "${oci_core_instance.instance3.public_ip}"
      type    = "ssh"
      user    = "opc"
      agent   = "true"
      timeout = "3m"
      }

    inline = [
      "crontab -l | { cat; echo \"@reboot sudo /usr/local/bin/secondary_vnic_all_configure.sh -c\"; } | crontab -"
      ]
  }
}

### SecondaryVNIC DB-Server
resource "oci_core_vnic_attachment" "DB_secondary_vnic_attachment" {
  create_vnic_details {
    display_name = "SecondaryVNIC"
    subnet_id  = "${oci_core_subnet.DB_Segment.id}"
    assign_public_ip = false
    private_ip = "${var.ip_address6}"
    skip_source_dest_check = "false"
}

  instance_id = "${oci_core_instance.instance3.id}"

}

## Compute Operation-Server
resource "oci_core_instance" "instance4" {
  source_details {
    source_type = "image"
    source_id   = "${lookup(data.oci_core_images.instance.images[0], "id")}"
  }

  display_name        = "Operation-Server"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0], "name")}"
  shape               = "${var.instance_shape}"
  compartment_id      = "${var.compartment_ocid}"

  create_vnic_details {
    subnet_id        = "${oci_core_subnet.Ope_Segment.id}"
    private_ip       = "${var.ip_address7}"
  }

  metadata = {
        ssh_authorized_keys = "${var.ssh_public_key}"
        user_data           = "${base64encode(file("./userdata/cloud-init1.tpl"))}"
    }

  fault_domain        = "${var.fault_domain}"

  provisioner "remote-exec" {
    connection {
      host    = "${oci_core_instance.instance4.public_ip}"
      type    = "ssh"
      user    = "opc"
      agent   = "true"
      timeout = "3m"
      }

    inline = [
      "crontab -l | { cat; echo \"@reboot sudo /usr/local/bin/secondary_vnic_all_configure.sh -c\"; } | crontab -"
      ]
  }
}
The cloud-init template for the Web servers (cloud-init1.tpl) is as follows.

#cloud-config

runcmd:
# download the secondary vnic script
- wget -O /usr/local/bin/secondary_vnic_all_configure.sh https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/secondary_vnic_all_configure.sh
- chmod +x /usr/local/bin/secondary_vnic_all_configure.sh
- sleep 60
- /usr/local/bin/secondary_vnic_all_configure.sh -c

- yum update -y
- echo "Hello World.  The time is now $(date -R)!" | tee /root/output.txt

- echo '################### webserver userdata begins #####################'
- touch ~opc/userdata.`date +%s`.start
# echo '########## yum update all ###############'
# yum update -y
- echo '########## basic webserver ##############'
- yum install -y httpd
- systemctl enable  httpd.service
- systemctl start  httpd.service
- echo '<html><head></head><body><pre><code>' > /var/www/html/index.html
- hostname >> /var/www/html/index.html
- echo '' >> /var/www/html/index.html
- cat /etc/os-release >> /var/www/html/index.html
- echo '</code></pre></body></html>' >> /var/www/html/index.html
- firewall-offline-cmd --add-service=http
- systemctl enable  firewalld
- systemctl restart  firewalld
- touch ~opc/userdata.`date +%s`.finish
- echo '################### webserver userdata ends #######################'
The userdata template for the DB server:

#cloud-config

runcmd:
# download the secondary vnic script
- wget -O /usr/local/bin/secondary_vnic_all_configure.sh https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/secondary_vnic_all_configure.sh
- chmod +x /usr/local/bin/secondary_vnic_all_configure.sh
- sleep 60
- /usr/local/bin/secondary_vnic_all_configure.sh -c
- echo 'PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin' > /var/spool/cron/root
- echo '@reboot /usr/local/bin/secondary_vnic_all_configure.sh -c' >> /var/spool/cron/root

# Open the ports used by PostgreSQL
- setenforce 0
- firewall-cmd --permanent --add-port=5432/tcp
- firewall-cmd --permanent --add-port=5432/udp
- firewall-cmd --reload

# Install required packages
- yum install -y gcc
- yum install -y readline-devel
- yum install -y zlib-devel

# Install PostgreSQL
- cd /usr/local/src/
- wget https://ftp.postgresql.org/pub/source/v11.3/postgresql-11.3.tar.gz
- tar xvfz postgresql-11.3.tar.gz
- cd postgresql-11.3/

# Compile
- ./configure
- make
- make install

# Create startup script
- cp /usr/local/src/postgresql-11.3/contrib/start-scripts/linux /etc/init.d/postgres
- chmod 755 /etc/init.d/postgres
- chkconfig --add postgres
- chkconfig --list | grep postgres

# Create the postgres user
- adduser postgres

Terraform construction

First, perform the following preparatory work.

--Enable environment variables: $ source env-vars
--Check environment variables: $ env
--Register the SSH private key (*): $ ssh-add /oci/ssh/id_rsa

(*) Terraform cannot use passphrase-protected SSH keys directly. Registering the key with the SSH agent works around this. This step is not required if your SSH private key has no passphrase.

After completing the preparatory work, it is finally time to build with Terraform. The Terraform build consists of just 3 steps!

  1. Initialize with terraform init
  2. Confirm with terraform plan
  3. Apply with terraform apply

Let's look at them in order.

terraform init

terraform init initializes a working directory containing Terraform configuration files. If no arguments are given, the configuration in the current working directory is initialized. During initialization, Terraform scans the configuration for directly and indirectly referenced providers and downloads the required plugins.

# terraform init

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.oci: version = "~> 3.40"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
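Following the suggestion in this output, the provider version can be pinned. A minimal sketch (assuming, as in this article, that the remaining provider settings are supplied through the environment variables loaded by env-vars):

```hcl
# Pin the OCI provider so that a future major release cannot silently
# introduce breaking changes into this configuration.
provider "oci" {
  version = "~> 3.40"
}
```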

terraform plan

terraform plan creates an execution plan. Running this command on its own changes nothing: it lets you check whether the configuration behaves as expected without touching real resources or state. You can also use the optional -out argument to save the generated plan to a file for later execution (for example, terraform plan -out=tfplan followed by terraform apply tfplan). Note that errors in the tf files are detected at this stage, but a successful terraform plan does not guarantee that terraform apply will succeed, so be careful.

# terraform plan

The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.oci_identity_availability_domains.ADs: Refreshing state...
data.oci_core_images.instance: Refreshing state...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # oci_core_default_route_table.default-route-table1 will be created
  + resource "oci_core_default_route_table" "default-route-table1" {
      + defined_tags               = (known after apply)
      + display_name               = (known after apply)
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + manage_default_resource_id = (known after apply)
      + state                      = (known after apply)
      + time_created               = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
        }
    }

/*Omission*/

Plan: 32 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

terraform apply

terraform apply applies the resource changes required by the execution plan. Running this command generates a terraform.tfstate file that records the managed state; afterwards you can inspect it with terraform state list or terraform output.

# terraform apply

data.oci_identity_availability_domains.ADs: Refreshing state...
data.oci_core_images.instance: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # oci_core_default_route_table.default-route-table1 will be created
  + resource "oci_core_default_route_table" "default-route-table1" {
      + defined_tags               = (known after apply)
      + display_name               = (known after apply)
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + manage_default_resource_id = (known after apply)
      + state                      = (known after apply)
      + time_created               = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
        }
    }

/*Omission*/

Plan: 32 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

oci_core_virtual_network.vcn2: Creating...
oci_core_virtual_network.vcn1: Creating...

/*Omission*/

Apply complete! Resources: 32 added, 0 changed, 0 destroyed.

Outputs:

lb_public_ip = [
  [
    {
      "ip_address" = "X.X.X.X"
      "is_public" = true
    },
  ],
]
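The lb_public_ip value above comes from an output block. A sketch of such an output (the `lb1` resource label here is hypothetical; the repository's actual label may differ):

```hcl
# Print the load balancer's IP address details (ip_address, is_public)
# after terraform apply completes.
output "lb_public_ip" {
  value = ["${oci_load_balancer.lb1.ip_address_details}"]
}
```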

** Notes ** ** If you keep separate tf files per directory and work with different compartments, do not forget to run source env-vars again when you change directories. ** **

As a near-miss example: suppose you have been working in environment A, then move to environment B's directory and run terraform apply. If environment variables from environment A are still set, resources may be created or deleted in an unintended compartment.

Resource confirmation

After executing terraform apply, access the console screen of Oracle Cloud and check the created resource.

--Virtual cloud network

スクリーンショット 2019-12-02 23.46.38.png
--Instance
スクリーンショット 2019-12-02 23.50.30.png

After cloud-init completes, accessing the load balancer IP address shown in Outputs displays the `index.html` of a Web server.

スクリーンショット 2019-12-04 21.15.54.png

terraform destroy

terraform destroy deletes the entire environment that was built.

# terraform destroy

data.oci_identity_availability_domains.ADs: Refreshing state...

/*Omission*/

Plan: 0 to add, 0 to change, 32 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

/*Omission*/

oci_core_virtual_network.vcn1: Destruction complete after 0s

Destroy complete! Resources: 32 destroyed.

Knowledge

"Operating systems" supported by Oracle Cloud

The images of Oracle Cloud that can be used with Terraform are as follows.

$ oci compute image list -c <Compartment OCID> --all | jq -r '.data[] | ."operating-system"' | sort | uniq


Canonical Ubuntu
CentOS
Custom
Oracle Linux
Windows

Therefore, custom images such as the ** "Oracle Database" ** image cannot be used. If you want to run a database service with Terraform, you can do so by using the managed database service (DBaaS).

--Documents about DBaaS service

Build multiple instances

When building multiple instances, writing a separate resource block for each instance inflates the amount of code. You can iterate by combining variable arrays with the count meta-parameter in the tf file. In this article the number of instances is small, so each instance is defined as its own resource.
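The iteration approach can be sketched with the count meta-parameter. This is a hypothetical sketch, not the configuration used in this article: the `web_count` variable, the `web` label, and the omitted arguments would need to be adapted to your environment:

```hcl
# Number of Web servers to create from a single resource block.
variable "web_count" {
  default = 2
}

resource "oci_core_instance" "web" {
  # count creates web_count copies; count.index (0-based) tells them apart.
  count               = "${var.web_count}"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0], "name")}"
  compartment_id      = "${var.compartment_ocid}"
  shape               = "${var.instance_shape}"
  display_name        = "web${count.index + 1}"

  # Remaining required arguments (source image, VNIC details, metadata)
  # omitted for brevity.
}
```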

cloud-init

You can specify a custom script as the value of user_data in the tf file. See User-Data Formats for how to make use of user data.

Note that Terraform does not validate the contents of the custom script, so the script can fail even when terraform apply succeeds.

As an example, a command continued across lines with backslashes, as shown below, fails as a runcmd entry; write the command on a single line (or use a YAML block scalar).

$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

Also, a successful terraform apply does not mean that the cloud-init process has completed. After logging in to the instance, you can check whether it worked as intended in the following log file.

# cat /var/log/cloud-init-output.log

Secondary VNIC

Some care is needed when configuring a secondary VNIC in Oracle Cloud. In this article, secondary_vnic_all_configure.sh is executed to configure the IP address.

For example, when terraform apply runs, the IP configuration can fail if secondary_vnic_all_configure.sh is called too early. For that reason, this article inserts a sleep in the cloud-init template specified in user_data so that the script runs reliably.

Also, to re-enable the secondary VNIC after an OS restart, a cron entry is registered in the `inline` block of `remote-exec` in the tf file. If the instance has no public IP address, `remote-exec` cannot be used; in that case, setting the cron entry from cloud-init is a good choice.

Finally, when separating the business LAN and the operational LAN as in this article, it is best practice to assign the operational LAN to the primary VNIC.

Coding of existing environment

For resources created before introducing Terraform, you can use terraform import to bring an existing environment under management as code.
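As a sketch of the import flow (the `existing` label is a placeholder, and the instance OCID must be taken from the console): first write a stub resource block for the resource that already exists, then import its state:

```hcl
# Stub for an instance that already exists in the Oracle Cloud console.
# After writing this block, run:
#   terraform import oci_core_instance.existing <instance OCID>
# then fill in the arguments by consulting:
#   terraform state show oci_core_instance.existing
resource "oci_core_instance" "existing" {
  # arguments filled in after the import, to match the real resource
}
```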

In conclusion

Terraform demonstrates its true value in every phase, from spinning up verification environments to building production environments and managing configuration during the operation phase.

As someone who, like many infrastructure engineers, grew up on-premises in legacy environments, I was deeply impressed the first time I ran terraform apply and watched the resources get created.

Nowadays application engineers also develop with Docker and similar tools, but I think Terraform belongs with SRE, the role strongest in the infrastructure area. I frankly felt there are design considerations that only people with an infrastructure background will notice.

I would like to continue using Terraform to create an environment where developers can concentrate on development only.

Reference

Cloud native architecture

--[Cloud Native Architecture, 5 Principles](https://cloud.google.com/blog/ja/products/gcp/5-principles-for-cloud-native-architecture-what-it-is-and-how-to-master-it)

Oracle Cloud/Terraform

--Automating OCI construction with Terraform
--Oracle Cloud Infrastructure Advanced
--Start Terraform Provider
--[Create Terraform Configuration](https://docs.oracle.com/cd/E97706_01/Content/API/SDKDocs/terraformconfig.htm)
