The Ultimate Terraform Tutorial for Beginners: Get Started Without the Pitfalls!

Posted by Sunzz on 2024-10-28

In the wave of cloud computing, infrastructure management keeps getting more complex. Efficiently provisioning and managing cloud resources has become a challenge every developer and operations engineer must face. Terraform, a powerful Infrastructure as Code (IaC) tool, offers a concise and effective solution.

In this post I will take a close look at Terraform's features and use cases to help you understand its role in cloud resource management, and I will walk through the installation steps so you can get started quickly.

This post is aimed squarely at beginners: the content is detailed and easy to follow, so even first-timers can keep up. Through hands-on examples, I will use Terraform to create a range of infrastructure resources on AWS, including a VPC, subnets, route tables, an internet gateway, security groups, EC2 instances, EBS volumes, and Elastic IPs (EIPs). I will also cover creating the IAM roles for EKS, writing the Terraform configuration for EKS, configuring the IAM role and node group for an EKS Node Group, and building an EKS cluster step by step.

Every Terraform file comes with a thorough explanation, so you understand what each line of code does. Whether you are new to cloud computing or a professional looking to level up, this post offers practical guidance and deeper insight to help you step into the world of Terraform with ease. Let's start this fun learning journey together!

I. Terraform Overview

1. Features and Use Cases

Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It lets users manage cloud infrastructure, physical devices, and other service resources programmatically through configuration files. Here are some of Terraform's key features and use cases:

Infrastructure as Code

Terraform uses a simple declarative configuration language (HCL, the HashiCorp Configuration Language) that lets users define and manage their infrastructure. This approach brings several advantages:

  • Version control: By storing configuration files in a version control system, users can track the history of infrastructure changes and easily roll back to a previous state.
  • Sharing and reuse: Configuration files can be shared as part of a codebase, promoting collaboration and the spread of best practices within a team.
  • Auditing and compliance: Explicit configuration files make infrastructure audits and compliance checks straightforward; comparing the configuration against the actual state quickly surfaces inconsistencies.

Multi-Cloud Support

Terraform supports many cloud providers, including AWS, Azure, Google Cloud, and Alibaba Cloud, so users can manage resources across different cloud environments within a single configuration. This multi-cloud capability brings the following benefits:

  • Flexibility: Organizations can choose the right cloud provider based on requirements and cost optimization without rewriting large amounts of configuration code.
  • Disaster recovery: Resources can be backed up and failed over across cloud environments, improving business continuity.
  • Integration: Services from different cloud providers can be combined seamlessly to build cross-cloud application architectures.

State Management

Terraform maintains a file describing the current state of the infrastructure (the state file) so it can track and manage resources across subsequent changes. The advantages of state management include:

  • Consistency: The state file ensures operations are based on the latest known state, preventing conflicts caused by concurrent modifications.
  • Change detection: Before applying a new configuration, Terraform compares the state file against the configuration and presents a clear change plan, so users know exactly what is about to happen.
  • Remote state: The state file can be stored in a remote backend (such as S3), which helps team collaboration and improves security.

Resource Dependency Management

Terraform automatically handles dependencies between resources, ensuring creation and modification happen in the correct order. This removes the complexity of tracking dependencies by hand and raises the level of automation (a short sketch follows the list below). Specifically:

  • Automatic ordering: Users do not have to specify creation order manually; Terraform derives it from the references between resources.
  • Parallel execution: By identifying independent resources, Terraform can create or delete them in parallel, shortening total run time.
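For example, referencing one resource's attribute from another creates an implicit dependency. In the minimal sketch below (AWS provider, placeholder names), Terraform sees that the subnet reads the VPC's ID, so it always creates the VPC first:

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "example" {
  # Referencing aws_vpc.example.id creates the implicit dependency
  vpc_id     = aws_vpc.example.id
  cidr_block = "10.0.1.0/24"
}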

Extensibility

Terraform offers a rich ecosystem of plugins and modules, and users can extend Terraform with custom modules to implement more complex infrastructure architectures. Extensibility highlights include:

  • Community contributions: Terraform has an active community, so ready-made modules are easy to find and integrate, reducing duplicated work.
  • Modular design: Users can package common configuration into modules, improving reuse and readability.

Cross-Team Collaboration

Terraform configuration files can be combined with version control systems such as Git to support teamwork. The benefits include:

  • Code review: Team members can review infrastructure changes, ensuring they are properly vetted and improving infrastructure stability.
  • Transparency: With version control, everyone on the team can see the infrastructure's change history and the decisions behind it, promoting knowledge sharing.

Use Cases

  • Creating and managing cloud infrastructure: With Terraform, users can easily create and manage cloud resources such as VPCs, EC2 instances, RDS databases, and EKS clusters, improving operational efficiency.
  • Automating infrastructure in continuous integration and continuous delivery (CI/CD): Bringing infrastructure configuration into the CI/CD pipeline keeps environments consistent and reduces the risk of manual steps.
  • Configuring and managing multiple environments: Terraform makes it easy to configure development, test, and production infrastructure while keeping the environments consistent.
  • Reusing infrastructure through modular design: Users can create and share modules to reuse the same infrastructure configuration across projects.

In short, Terraform is a powerful and flexible tool that helps development teams manage cloud infrastructure as code, improving operational efficiency and agility. With Terraform, users can automate and standardize across multi-cloud environments and adapt to fast-changing business needs.

2. How Terraform Works and Its Workflow

Terraform is an Infrastructure as Code (IaC) tool that manages infrastructure through the following steps:

  1. Configuration files (.tf files): Users first define the desired infrastructure in Terraform configuration files. These files are written in HCL (HashiCorp Configuration Language) and describe resource types, attributes, and settings.

  2. Initialization (terraform init): Before using Terraform, run terraform init. This initializes the working directory, downloads the required providers (AWS, Azure, and so on), and prepares for subsequent operations.

  3. Generating an execution plan (terraform plan): With terraform plan, Terraform reads the configuration and produces an execution plan showing the operations it would perform (create, update, or delete resources). This lets users preview upcoming changes and avoid surprises.

  4. Applying changes (terraform apply): After confirming the plan, run terraform apply. Terraform carries out the planned operations, creating, updating, or deleting cloud resources.

  5. State management: Terraform maintains a state file (terraform.tfstate) that records the current state of the infrastructure. The file is used to track the actual state of resources for comparison and management in later operations.

  6. Change management: To change the infrastructure, simply edit the configuration files and repeat the plan and apply cycle. Terraform automatically detects resource changes and updates accordingly.

  7. Destroying resources (terraform destroy): When resources are no longer needed, run terraform destroy. Terraform deletes everything defined in the configuration, leaving a clean slate.

Through these steps, Terraform manages and deploys infrastructure in a consistent, predictable way, keeping users in control of resources across the entire infrastructure lifecycle.
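Condensed into commands, the typical lifecycle from the steps above looks like this:

terraform init      # step 2: initialize the directory and download providers
terraform plan      # step 3: preview the changes
terraform apply     # step 4: apply the changes
terraform destroy   # step 7: tear everything down when no longer needed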

3. Basic Terraform Concepts

3.1 Provider

  • Definition: A provider is the plugin through which Terraform interacts with an external service (AWS, Azure, Google Cloud, and so on). Providers are responsible for managing the lifecycle of resources.
  • Usage: The required cloud providers are typically declared in a provider.tf file. For AWS, for example, you might write:
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.0"
        }
      }
    }
    
    provider "aws" {
      region     = "us-west-1"
    #  access_key = var.aws_access_key  # this article uses the keys defined in the AWS CLI section
    #  secret_key = var.aws_secret_key
    }
  • Here, required_providers declares the required AWS provider and its version.
  • The provider block sets the parameters for accessing the service, such as the region and credentials.

3.2 The Terraform State File

  • Definition: terraform.tfstate is the file Terraform uses to track the state of the resources it manages. It stores detailed information about the current infrastructure.
  • Purpose: This file lets Terraform know, during plan (terraform plan) and apply (terraform apply), which resources already exist and which need to be updated or deleted.
  • Caution: The state file is sensitive and important; guard it carefully and never edit it by hand. For better security, storing it in a remote backend (such as S3) is generally recommended, as sketched below.
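A minimal sketch of such a remote backend; the bucket name and key here are placeholders, not values from this article:

terraform {
  backend "s3" {
    bucket = "my-terraform-state"      # hypothetical bucket name
    key    = "prod/terraform.tfstate"  # path of the state file inside the bucket
    region = "us-west-1"
  }
}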

3.3 Terraform Configuration Files

  • Definition: Terraform configuration files use the .tf extension and contain the definitions and settings of all infrastructure resources.
  • Contents: Each configuration file can contain resource definitions, variables, outputs, and more. For example, main.tf might define a VPC, subnets, and EC2 instances.
  • Example:
    resource "aws_vpc" "my_vpc" {
      cidr_block = "10.0.0.0/16"
      enable_dns_support = true
      enable_dns_hostnames = true
    }

3.4 Variable Files

  • Definition: A variable file (usually named variables.tf) defines variables that can be used in multiple places, increasing flexibility.
  • Usage: You can reference these variables in your configuration to customize it for different environments (development, test, production). For example:
    variable "region" {
      description = "The AWS region to deploy resources"
      type        = string
      default     = "us-west-1"
    }
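Elsewhere in the configuration the variable is referenced as var.region, and the default can be overridden per run with the -var flag (a small sketch):

provider "aws" {
  region = var.region  # resolves to "us-west-1" unless overridden
}

terraform apply -var="region=us-west-2"   # override the default for one run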

3.5 Output Files

  • Definition: An output file (usually named outputs.tf) defines the information you want Terraform to print after a run, making it easy to retrieve key facts about your resources.
  • Purpose: Outputs can be values such as an EC2 instance's public IP address or a security group ID. For example:
    output "instance_ip" {
      value = aws_instance.my_instance.public_ip
    }
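After terraform apply finishes, the value can be read back at any time with the terraform output command:

terraform output instance_ip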

II. Environment Preparation

1. Install Terraform

Refer to https://developer.hashicorp.com/terraform/install for your operating system; this article only lists the installation steps for common operating systems.

macOS

brew tap hashicorp/tap
brew install hashicorp/tap/terraform

Windows

https://releases.hashicorp.com/terraform/1.9.8/terraform_1.9.8_windows_amd64.zip

Ubuntu/Debian

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

CentOS/RHEL

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo yum -y install terraform

2. Configure the AWS CLI

Make sure the access key you configure has sufficient permissions.

For details, see: https://www.cnblogs.com/Sunzz/p/18432935

2.1 Create the ~/.aws/config file

Contents:

[default]
region = us-west-1

Change region to match your own setup.

2.2 Create the ~/.aws/credentials file

Contents:

[default]
aws_access_key_id = AKIA2LXD....
aws_secret_access_key = ZvQllpYL.....


3. Initialize Terraform

3.1 Create the variables.tf file

variables.tf holds variables that are used in multiple places.

Contents:

variable "aws_region" {
  default = "us-west-1"
}

This defines the region to use. us-west-1 is used here; change it to match your own setup.

3.2 Create the provider.tf file

This configuration declares the providers Terraform will use.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

Explanation

  • terraform: Specifies Terraform's base settings, including the providers it depends on.

  • required_providers: Lists the providers this Terraform project depends on; providers are what Terraform uses to interact with a given platform, for example AWS, or to generate TLS keys.

    • aws provider

      • source: Where the provider comes from, here hashicorp/aws, i.e. the official AWS provider published by HashiCorp.
      • version: "~> 4.0" allows any 4.x release (4.0 and newer) but will not upgrade to 5.0 or later.

3.3 Initialize

Initialization downloads resources hosted abroad, and for well-known reasons those downloads can run into trouble from mainland China, so simply use your proxy tool if needed.

Adjust the IP and port to your actual setup.

export https_proxy=http://127.0.0.1:7890
export http_proxy=http://127.0.0.1:7890
terraform init

The output looks like this:

Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 4.0.0"...
- Installing hashicorp/aws v5.72.1...
- Installed hashicorp/aws v5.72.1 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

III. Creating AWS Network Resources

Throughout this article, before creating anything I first create a .tf file for that resource type, one resource type per file; for example, all EC2 instances live in a single ec2.tf file.

I then always run terraform plan -out=tf.plan to rehearse the result and guard against mistakes.

terraform plan -out=tf.plan is a dry-run tool. It does not actually create or change resources; instead it generates a detailed plan describing exactly what would change if executed. The plan can be saved to a file (here, tf.plan) so we can inspect it first and only execute it once we are satisfied. This not only reduces the chance of error but also tells us at any time which resources would be created, modified, or destroyed.
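A saved plan can also be re-inspected before applying: terraform show renders it in human-readable form, and terraform apply tf.plan then executes exactly that plan.

terraform show tf.plan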

1. Create the VPC

Write the vpc.tf file

resource "aws_vpc" "tf_vpc" {
  cidr_block = "10.10.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "tf-vpc"
  }
}

Explanation:

  • resource "aws_vpc" "tf_vpc": Defines an AWS VPC resource named tf_vpc. You can reference this name elsewhere in the Terraform configuration.

  • cidr_block = "10.10.0.0/16": The VPC's CIDR (Classless Inter-Domain Routing) block, which sets the IP address range. 10.10.0.0/16 means the VPC can use every address from 10.10.0.0 to 10.10.255.255.

  • enable_dns_hostnames = true: Enables DNS hostnames. When true, AWS assigns DNS hostnames to EC2 instances in the VPC, so you can reach them by DNS name instead of IP address.

  • enable_dns_support = true: Enables DNS support, meaning the VPC can resolve DNS names. This matters for using AWS services inside the VPC and for communication between instances.

  • tags = { Name = "tf-vpc" }: Adds tags to the VPC. In the AWS console, tags help you identify and manage resources; this one names the VPC "tf-vpc".

Dry run

terraform plan -out=tf.plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_vpc.tf_vpc will be created
  + resource "aws_vpc" "tf_vpc" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.10.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_dns_hostnames                 = true
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Name" = "tf-vpc"
        }
      + tags_all                             = {
          + "Name" = "tf-vpc"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the VPC

terraform apply tf.plan
aws_vpc.tf_vpc: Creating...
aws_vpc.tf_vpc: Still creating... [10s elapsed]
aws_vpc.tf_vpc: Creation complete after 13s [id=vpc-0f2e1cdca0cf5a306]


2. Create Subnets

Add new variables

Add the following to variables.tf:

variable "az_1" {
  description = "Availability Zone for the first subnet"
  type        = string
  default     = "us-west-1a"
}

variable "az_2" {
  description = "Availability Zone for the second subnet"
  type        = string
  default     = "us-west-1b"
}

Explanation:

variable "az_1" / "az_2":

  • description: A short note indicating which subnet's availability zone the variable represents.
  • type: The variable's data type, a string.
  • default: The default value, "us-west-1a" / "us-west-1b". If no other value is supplied in the Terraform configuration, this default is used.

Define the subnet configuration in subnet.tf

# First subnet, tf-subnet01 (10.10.1.0/24, availability zone from a variable)
resource "aws_subnet" "tf_subnet01" {
  vpc_id            = aws_vpc.tf_vpc.id
  cidr_block        = "10.10.1.0/24"
  availability_zone = var.az_1  # use a variable instead of a hard-coded AZ
  tags = {
    Name = "tf-subnet01"
  }
}

# Second subnet, tf-subnet02 (10.10.2.0/24, availability zone from a variable)
resource "aws_subnet" "tf_subnet02" {
  vpc_id            = aws_vpc.tf_vpc.id
  cidr_block        = "10.10.2.0/24"
  availability_zone = var.az_2
  tags = {
    Name = "tf-subnet02"
  }
}

Explanation:

  • resource "aws_subnet" "tf_subnet01": Declares a subnet resource named tf_subnet01.

  • vpc_id = aws_vpc.tf_vpc.id: Attaches this subnet to the VPC defined earlier; aws_vpc.tf_vpc.id references the ID of the created VPC.

  • cidr_block = "10.10.1.0/24": The subnet's CIDR block, i.e. the range 10.10.1.0 to 10.10.1.255, 256 addresses in total (including the network and broadcast addresses); see the aside after this list.

  • availability_zone = var.az_1: The availability zone for this subnet, taken from the variable az_1 defined earlier instead of being hard-coded. This keeps the configuration flexible and easy to maintain.
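As an aside, the per-subnet CIDR blocks could also be derived from the VPC's block with Terraform's built-in cidrsubnet() function rather than hard-coded; a sketch, not what this article's files actually do:

cidr_block = cidrsubnet("10.10.0.0/16", 8, 1)  # 8 extra bits, network number 1, yields 10.10.1.0/24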

Dry run

terraform plan -out=tf.plan
aws_vpc.tf_vpc: Refreshing state... [id=vpc-0f2e1cdca0cf5a306]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_subnet.tf_subnet01 will be created
  + resource "aws_subnet" "tf_subnet01" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.1.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "tf-subnet01"
        }
      + tags_all                                       = {
          + "Name" = "tf-subnet01"
        }
      + vpc_id                                         = "vpc-0f2e1cdca0cf5a306"
    }

  # aws_subnet.tf_subnet02 will be created
  + resource "aws_subnet" "tf_subnet02" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.2.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "tf-subnet02"
        }
      + tags_all                                       = {
          + "Name" = "tf-subnet02"
        }
      + vpc_id                                         = "vpc-0f2e1cdca0cf5a306"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the subnets

terraform apply "tf.plan"
aws_subnet.tf_subnet01: Creating...
aws_subnet.tf_subnet02: Creating...
aws_subnet.tf_subnet01: Creation complete after 2s [id=subnet-08f8e4b2c62e27989]
aws_subnet.tf_subnet02: Creation complete after 2s [id=subnet-019490723ad3e940a]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

3. Create the Internet Gateway

Create internet_gateway.tf to define the internet gateway

resource "aws_internet_gateway" "tf_igw" {
  vpc_id = aws_vpc.tf_vpc.id
  tags = {
    Name = "tf-igw"
  }
}

Explanation

  • resource "aws_internet_gateway" "tf_igw": Declares an internet gateway resource named tf_igw.

  • vpc_id = aws_vpc.tf_vpc.id: Associates the internet gateway with the VPC created earlier. Referencing aws_vpc.tf_vpc.id ensures the gateway is usable with that VPC.

  • tags: Adds the tag Name = "tf-igw", which helps identify and manage the resource in the AWS console.

Dry run

terraform plan -out=tf.plan
 Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_internet_gateway.tf_igw will be created
  + resource "aws_internet_gateway" "tf_igw" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + tags     = {
          + "Name" = "tf-igw"
        }
      + tags_all = {
          + "Name" = "tf-igw"
        }
      + vpc_id   = "vpc-0f2e1cdca0cf5a306"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the internet gateway

terraform apply "tf.plan"

Output:

aws_internet_gateway.tf_igw: Creating...
aws_internet_gateway.tf_igw: Creation complete after 2s [id=igw-08ec2f3357e8725df]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

4. Create the Route Table

Define route_table.tf

resource "aws_route_table" "tf_route_table" {
  vpc_id = aws_vpc.tf_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.tf_igw.id
  }
  tags = {
    Name = "tf-route-table"
  }
}

Explanation

  • vpc_id = aws_vpc.tf_vpc.id: Associates the route table with the VPC created earlier, ensuring it applies to that VPC.

  • route { ... }: This block defines one route in the table.

    • cidr_block = "0.0.0.0/0":
      • The destination CIDR 0.0.0.0/0 matches all traffic (i.e. internet traffic).
    • gateway_id = aws_internet_gateway.tf_igw.id:
      • Points that traffic at the internet gateway created earlier, so anything bound for the internet passes through it.

Dry run

terraform plan -out=tf.plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_route_table.tf_route_table will be created
  + resource "aws_route_table" "tf_route_table" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + cidr_block                 = "0.0.0.0/0"
              + gateway_id                 = "igw-08ec2f3357e8725df"
                # (12 unchanged attributes hidden)
            },
        ]
      + tags             = {
          + "Name" = "tf-route-table"
        }
      + tags_all         = {
          + "Name" = "tf-route-table"
        }
      + vpc_id           = "vpc-0f2e1cdca0cf5a306"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the route table

terraform apply "tf.plan"

Output:

aws_route_table.tf_route_table: Creating...
aws_route_table.tf_route_table: Creation complete after 3s [id=rtb-0ae4b29ae8d6881ed]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

5. Associate the Route Table with the Subnets

Create route_table_association.tf

# Associate the subnets with the route table
resource "aws_route_table_association" "tf_route_table_association_01" {
  subnet_id      = aws_subnet.tf_subnet01.id
  route_table_id = aws_route_table.tf_route_table.id
}

resource "aws_route_table_association" "tf_route_table_association_02" {
  subnet_id      = aws_subnet.tf_subnet02.id
  route_table_id = aws_route_table.tf_route_table.id
}

Explanation

  • resource "aws_route_table_association" "tf_route_table_association_01": Declares a route table association resource named tf_route_table_association_01. This resource connects a subnet to a route table.

  • subnet_id = aws_subnet.tf_subnet01.id: The subnet to associate, referencing the ID of the previously created subnet tf_subnet01. The route table will apply to every instance in that subnet.

  • route_table_id = aws_route_table.tf_route_table.id: The route table to associate, referencing the ID of the previously created table tf_route_table. This ensures the table is bound to the intended subnet.

Dry run

terraform plan -out=tf.plan
 Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_route_table_association.tf_route_table_association_01 will be created
  + resource "aws_route_table_association" "tf_route_table_association_01" {
      + id             = (known after apply)
      + route_table_id = "rtb-0ae4b29ae8d6881ed"
      + subnet_id      = "subnet-08f8e4b2c62e27989"
    }

  # aws_route_table_association.tf_route_table_association_02 will be created
  + resource "aws_route_table_association" "tf_route_table_association_02" {
      + id             = (known after apply)
      + route_table_id = "rtb-0ae4b29ae8d6881ed"
      + subnet_id      = "subnet-019490723ad3e940a"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Apply the associations

terraform apply "tf.plan"

Output:

aws_route_table_association.tf_route_table_association_01: Creating...
aws_route_table_association.tf_route_table_association_02: Creating...
aws_route_table_association.tf_route_table_association_01: Creation complete after 1s [id=rtbassoc-0999e44cc1cfb7f09]
aws_route_table_association.tf_route_table_association_02: Creation complete after 1s [id=rtbassoc-0190cb61bd5850d86]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

IV. Creating EC2 Instances

1. Create a Key Pair

Generate the key pair

ssh-keygen -t rsa -b 4096 -f ~/.ssh/tf-keypair

Create the key_pair.tf file

resource "aws_key_pair" "tf-keypair" {
  key_name   = "tf-keypair"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC42p8Ly5xXtaQPbBoKiVVSuU0HKhK38I5DtPhijhZrVZmhRpW5yD6pbCXmFLnIFTFNb....."
}

Explanation:

  • resource "aws_key_pair" "tf-keypair": Declares a key pair resource named tf-keypair. This is an AWS EC2 key pair used for SSH access to EC2 instances.

  • key_name = "tf-keypair": Names the key pair tf-keypair; it will appear under this name in the AWS console.

  • public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC42p8Ly5xXtaQPbBoKiVVSuU0HKhK38I5DtPhijhZrVZmhRpW5yD6pbCXmFLnIFTFNb.....":

    • The public key content, in SSH public key format. The public key is stored in AWS, while you keep the corresponding private key to connect to the instances over SSH.
    • Note: the public key must be a valid SSH public key, typically starting with ssh-rsa followed by the key data and an optional comment.
    • The public_key value is simply the contents of ~/.ssh/tf-keypair.pub.
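Instead of pasting the key inline, the public key file could also be read with Terraform's built-in file() and pathexpand() functions; a sketch, with the path assumed from the ssh-keygen step above:

resource "aws_key_pair" "tf-keypair" {
  key_name   = "tf-keypair"
  # read the public key generated by ssh-keygen earlier
  public_key = file(pathexpand("~/.ssh/tf-keypair.pub"))
}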

Dry run

terraform plan -out=tf.plan
aws_vpc.tf_vpc: Refreshing state... [id=vpc-0f2e1cdca0cf5a306]
aws_subnet.tf_subnet01: Refreshing state... [id=subnet-08f8e4b2c62e27989]
aws_subnet.tf_subnet02: Refreshing state... [id=subnet-019490723ad3e940a]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_key_pair.tf-keypair will be created
  + resource "aws_key_pair" "tf-keypair" {
      + arn             = (known after apply)
      + fingerprint     = (known after apply)
      + id              = (known after apply)
      + key_name        = "tf-keypair"
      + key_name_prefix = (known after apply)
      + key_pair_id     = (known after apply)
      + key_type        = (known after apply)
      + public_key      = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC42p8Ly5xXtaQPbBoKiVVSuU0HKhK38ua0arfBYQF++/QFRJZ7+/fmeES7P0+//+vKjWnwdf67BIu0RyoA+MFpztYn58hDKdAmSeEXCpp4cOojgFmgnf1+p3MdaOvnT379YT....."
      + tags_all        = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the key pair

terraform apply "tf.plan"

Result:

aws_key_pair.tf-keypair: Creating...
aws_key_pair.tf-keypair: Creation complete after 1s [id=tf-keypair]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

2. Create the Security Group

Create the security_group.tf file

resource "aws_security_group" "tf_security_group" {
  name        = "tf-security-group"
  description = "Security group for allowing specific inbound traffic"
  vpc_id      = aws_vpc.tf_vpc.id

  # ICMP (ping) inbound rule
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow ICMP (ping) traffic"
  }

  # SSH (22) inbound rule
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow SSH traffic"
  }

  # HTTP (80) inbound rule
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow HTTP traffic"
  }

  # HTTPS (443) inbound rule
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow HTTPS traffic"
  }

  # Default egress rule: allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound traffic"
  }

  tags = {
    Name = "tf-security-group"
  }
}

Explanation

  • ingress rules
    • The icmp rule allows all ICMP traffic, so the instances can be pinged.
    • The tcp rules open ports 22 (SSH), 80 (HTTP), and 443 (HTTPS).
  • egress rule
    • All outbound traffic is allowed, on every protocol and port.

Dry run

terraform plan -out=tf.plan
aws_key_pair.tf-keypair: Refreshing state... [id=tf-keypair]
aws_vpc.tf_vpc: Refreshing state... [id=vpc-0f2e1cdca0cf5a306]
aws_subnet.tf_subnet01: Refreshing state... [id=subnet-08f8e4b2c62e27989]
aws_subnet.tf_subnet02: Refreshing state... [id=subnet-019490723ad3e940a]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_security_group.tf_security_group will be created
  + resource "aws_security_group" "tf_security_group" {
      + arn                    = (known after apply)
      + description            = "Security group for allowing specific inbound traffic"
      + egress                 = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = "Allow all outbound traffic"
              + from_port        = 0
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "-1"
              + security_groups  = []
              + self             = false
              + to_port          = 0
            },
        ]
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = "Allow HTTP traffic"
              + from_port        = 80
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 80
            },
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = "Allow HTTPS traffic"
              + from_port        = 443
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 443
            },
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = "Allow ICMP (ping) traffic"
              + from_port        = -1
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "icmp"
              + security_groups  = []
              + self             = false
              + to_port          = -1
            },
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = "Allow SSH traffic"
              + from_port        = 22
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 22
            },
        ]
      + name                   = "tf-security-group"
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Name" = "tf-security-group"
        }
      + tags_all               = {
          + "Name" = "tf-security-group"
        }
      + vpc_id                 = "vpc-0f2e1cdca0cf5a306"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the security group

terraform apply "tf.plan"

Output:
aws_security_group.tf_security_group: Creating...
aws_security_group.tf_security_group: Creation complete after 5s [id=sg-0907b4ae2d4bd9592]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

3. Create the EC2 Instances

First define the AMI IDs as variables so they can be referenced later; edit variables.tf and add the following. Amazon Linux and Ubuntu 24.04 images are used here.

Edit variables.tf

variable "amazon_linux_ami" {
  description = "AMI ID for Amazon Linux"
  type        = string
  default     = "ami-0cf4e1fcfd8494d5b"  # replace with your Amazon Linux AMI ID
}

variable "ubuntu_ami" {
  description = "AMI ID for Ubuntu"
  type        = string
  default     = "ami-0da424eb883458071"  # replace with your Ubuntu 24.04 AMI ID
}

Create the ec2.tf file

# First EC2 instance
resource "aws_instance" "tf-ec2-01" {
  ami           = var.amazon_linux_ami
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.tf_subnet01.id
  key_name      = aws_key_pair.tf-keypair.key_name
  vpc_security_group_ids = [aws_security_group.tf_security_group.id]

  root_block_device {
    volume_size = 10
  }

  tags = {
    Name = "tf-ec2-01"
  }
}

# Second EC2 instance
resource "aws_instance" "tf-ec2-02" {
  ami           = var.ubuntu_ami
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.tf_subnet02.id 
  key_name      = aws_key_pair.tf-keypair.key_name
  vpc_security_group_ids = [aws_security_group.tf_security_group.id]

  root_block_device {
    volume_size = 10
  }

  tags = {
    Name = "tf-ec2-02"
  }
}

Configuration notes

  • AMI ID parameterized: so the AMI can be chosen flexibly per environment
  • instance_type: the instance size
  • subnet_id: the subnet, using the ones created earlier
  • Security group and key pair: key_name and vpc_security_group_ids are set to the previously created tf-keypair and tf-security-group
  • root_block_device: volume_size is set to 10 GB, the size of each instance's root disk.
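To have Terraform print the instances' public IPs after apply, output blocks in the style of section 3.5 could be added; a sketch with hypothetical output names, not part of this article's files:

output "tf_ec2_01_public_ip" {
  value = aws_instance.tf-ec2-01.public_ip
}

output "tf_ec2_02_public_ip" {
  value = aws_instance.tf-ec2-02.public_ip
}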


Dry run

terraform plan -out=tf.plan
aws_key_pair.tf-keypair: Refreshing state... [id=tf-keypair]
aws_vpc.tf_vpc: Refreshing state... [id=vpc-0f2e1cdca0cf5a306]
aws_subnet.tf_subnet02: Refreshing state... [id=subnet-019490723ad3e940a]
aws_subnet.tf_subnet01: Refreshing state... [id=subnet-08f8e4b2c62e27989]
aws_security_group.tf_security_group: Refreshing state... [id=sg-0907b4ae2d4bd9592]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.tf-ec2-01 will be created
  + resource "aws_instance" "tf-ec2-01" {
      + ami                                  = "ami-0cf4e1fcfd8494d5b"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = "tf-keypair"
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = (known after apply)
      + source_dest_check                    = true
      + subnet_id                            = "subnet-08f8e4b2c62e27989"
      + tags                                 = {
          + "Name" = "tf-ec2-01"
        }
      + tags_all                             = {
          + "Name" = "tf-ec2-01"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = [
          + "sg-0907b4ae2d4bd9592",
        ]

      + capacity_reservation_specification (known after apply)

      + cpu_options (known after apply)

      + ebs_block_device (known after apply)

      + enclave_options (known after apply)

      + ephemeral_block_device (known after apply)

      + maintenance_options (known after apply)

      + metadata_options (known after apply)

      + network_interface (known after apply)

      + private_dns_name_options (known after apply)

      + root_block_device {
          + delete_on_termination = true
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 10
          + volume_type           = (known after apply)
        }
    }

  # aws_instance.tf-ec2-02 will be created
  + resource "aws_instance" "tf-ec2-02" {
      + ami                                  = "ami-0da424eb883458071"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = "tf-keypair"
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = (known after apply)
      + source_dest_check                    = true
      + subnet_id                            = "subnet-019490723ad3e940a"
      + tags                                 = {
          + "Name" = "tf-ec2-02"
        }
      + tags_all                             = {
          + "Name" = "tf-ec2-02"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = [
          + "sg-0907b4ae2d4bd9592",
        ]

      + capacity_reservation_specification (known after apply)

      + cpu_options (known after apply)

      + ebs_block_device (known after apply)

      + enclave_options (known after apply)

      + ephemeral_block_device (known after apply)

      + maintenance_options (known after apply)

      + metadata_options (known after apply)

      + network_interface (known after apply)

      + private_dns_name_options (known after apply)

      + root_block_device {
          + delete_on_termination = true
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 10
          + volume_type           = (known after apply)
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the EC2 instances

terraform apply "tf.plan"

Output:

aws_instance.tf-ec2-01: Creating...
aws_instance.tf-ec2-02: Creating...
aws_instance.tf-ec2-02: Still creating... [10s elapsed]
aws_instance.tf-ec2-01: Still creating... [10s elapsed]
aws_instance.tf-ec2-01: Creation complete after 16s [id=i-0f8d63e600d93f6b0]
aws_instance.tf-ec2-02: Creation complete after 16s [id=i-0888d477cdf36aea0]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

4. Create EBS Volumes

Add the ebs.tf file

resource "aws_ebs_volume" "ebs_ec2_01" {
  availability_zone = var.az_1  # use a variable instead of a hard-coded AZ
  size              = 20
  type              = "gp3"
  tags = {
    Name = "ebs-ec2-01"
  }
}

resource "aws_ebs_volume" "ebs_ec2_02" {
  availability_zone = var.az_2
  size              = 20
  type              = "gp3"
  tags = {
    Name = "ebs-ec2-02"
  }
}

Explanation

  • resource "aws_ebs_volume" "ebs_ec2_01": Declares an EBS volume resource named ebs_ec2_01.

  • availability_zone = var.az_1: The volume's availability zone, taken from the variable az_1 defined earlier, so the AZ can be chosen flexibly without hard-coding.

  • size = 20: Sets the volume size to 20 GB; this parameter determines its storage capacity.

  • type = "gp3": Sets the volume type to gp3, AWS's general-purpose SSD type, suitable for most workloads.

  • tags = { Name = "ebs-ec2-01" }: Tags the volume ebs-ec2-01 for easy identification and management in the AWS console.

Dry run

terraform plan -out=tf.plan
aws_key_pair.tf-keypair: Refreshing state... [id=tf-keypair]
aws_vpc.tf_vpc: Refreshing state... [id=vpc-0f2e1cdca0cf5a306]
aws_subnet.tf_subnet02: Refreshing state... [id=subnet-019490723ad3e940a]
aws_subnet.tf_subnet01: Refreshing state... [id=subnet-08f8e4b2c62e27989]
aws_security_group.tf_security_group: Refreshing state... [id=sg-0907b4ae2d4bd9592]
aws_instance.tf-ec2-02: Refreshing state... [id=i-0888d477cdf36aea0]
aws_instance.tf-ec2-01: Refreshing state... [id=i-0f8d63e600d93f6b0]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_ebs_volume.ebs_ec2_01 will be created
  + resource "aws_ebs_volume" "ebs_ec2_01" {
      + arn               = (known after apply)
      + availability_zone = "us-west-1a"
      + encrypted         = (known after apply)
      + final_snapshot    = false
      + id                = (known after apply)
      + iops              = (known after apply)
      + kms_key_id        = (known after apply)
      + size              = 20
      + snapshot_id       = (known after apply)
      + tags              = {
          + "Name" = "ebs-ec2-01"
        }
      + tags_all          = {
          + "Name" = "ebs-ec2-01"
        }
      + throughput        = (known after apply)
      + type              = "gp3"
    }

  # aws_ebs_volume.ebs_ec2_02 will be created
  + resource "aws_ebs_volume" "ebs_ec2_02" {
      + arn               = (known after apply)
      + availability_zone = "us-west-1b"
      + encrypted         = (known after apply)
      + final_snapshot    = false
      + id                = (known after apply)
      + iops              = (known after apply)
      + kms_key_id        = (known after apply)
      + size              = 20
      + snapshot_id       = (known after apply)
      + tags              = {
          + "Name" = "ebs-ec2-02"
        }
      + tags_all          = {
          + "Name" = "ebs-ec2-02"
        }
      + throughput        = (known after apply)
      + type              = "gp3"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the EBS volumes

terraform apply "tf.plan"

Output:
aws_ebs_volume.ebs_ec2_02: Creating...
aws_ebs_volume.ebs_ec2_01: Creating...
aws_ebs_volume.ebs_ec2_02: Still creating... [10s elapsed]
aws_ebs_volume.ebs_ec2_01: Still creating... [10s elapsed]
aws_ebs_volume.ebs_ec2_01: Creation complete after 12s [id=vol-0aac9f1302376328a]
aws_ebs_volume.ebs_ec2_02: Creation complete after 12s [id=vol-06bd472f44eadaf02]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

5. Attach the EBS Volumes to the EC2 Instances

Add the ebs_attachment.tf file

resource "aws_volume_attachment" "attach_ebs_to_ec2_01" {
  device_name = "/dev/xvdh"                # 裝置名稱,可根據需求更改
  volume_id   = aws_ebs_volume.ebs_ec2_01.id
  instance_id = aws_instance.tf-ec2-01.id
}

resource "aws_volume_attachment" "attach_ebs_to_ec2_02" {
  device_name = "/dev/xvdh"
  volume_id   = aws_ebs_volume.ebs_ec2_02.id
  instance_id = aws_instance.tf-ec2-02.id
}

Explanation

  • resource "aws_volume_attachment" "attach_ebs_to_ec2_01": Declares an EBS volume attachment resource named attach_ebs_to_ec2_01, of type aws_volume_attachment. This resource attaches an EBS volume to an EC2 instance.

  • device_name = "/dev/xvdh": The device name the volume gets on the EC2 instance, here /dev/xvdh (change as needed). The instance uses this name to refer to the volume.

  • volume_id = aws_ebs_volume.ebs_ec2_01.id: References the ID of the previously created volume ebs_ec2_01 to specify which volume to attach. Referencing the resource's ID ensures the correct resource is used.

  • instance_id = aws_instance.tf-ec2-01.id: References the ID of the EC2 instance tf-ec2-01 that the volume should attach to, making the target instance explicit.
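Attaching the volume does not make it usable by itself; it still has to be formatted and mounted from inside the instance. A sketch, assuming the disk shows up as /dev/xvdh (on Nitro-based instance types it may instead appear as an NVMe device such as /dev/nvme1n1):

lsblk                         # confirm the new disk is visible
sudo mkfs -t xfs /dev/xvdh    # create a filesystem (erases any existing data)
sudo mkdir -p /data
sudo mount /dev/xvdh /data    # mount the volume at /data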

Dry run

terraform plan -out=tf.plan
aws_ebs_volume.ebs_ec2_02: Refreshing state... [id=vol-06bd472f44eadaf02]
aws_vpc.tf_vpc: Refreshing state... [id=vpc-0f2e1cdca0cf5a306]
aws_ebs_volume.ebs_ec2_01: Refreshing state... [id=vol-0aac9f1302376328a]
aws_key_pair.tf-keypair: Refreshing state... [id=tf-keypair]
aws_subnet.tf_subnet02: Refreshing state... [id=subnet-019490723ad3e940a]
aws_subnet.tf_subnet01: Refreshing state... [id=subnet-08f8e4b2c62e27989]
aws_security_group.tf_security_group: Refreshing state... [id=sg-0907b4ae2d4bd9592]
aws_instance.tf-ec2-01: Refreshing state... [id=i-0f8d63e600d93f6b0]
aws_instance.tf-ec2-02: Refreshing state... [id=i-0888d477cdf36aea0]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_volume_attachment.attach_ebs_to_ec2_01 will be created
  + resource "aws_volume_attachment" "attach_ebs_to_ec2_01" {
      + device_name = "/dev/xvdh"
      + id          = (known after apply)
      + instance_id = "i-0f8d63e600d93f6b0"
      + volume_id   = "vol-0aac9f1302376328a"
    }

  # aws_volume_attachment.attach_ebs_to_ec2_02 will be created
  + resource "aws_volume_attachment" "attach_ebs_to_ec2_02" {
      + device_name = "/dev/xvdh"
      + id          = (known after apply)
      + instance_id = "i-0888d477cdf36aea0"
      + volume_id   = "vol-06bd472f44eadaf02"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Attach the volumes

terraform apply "tf.plan"

Output:

aws_volume_attachment.attach_ebs_to_ec2_01: Creating...
aws_volume_attachment.attach_ebs_to_ec2_02: Creating...
aws_volume_attachment.attach_ebs_to_ec2_02: Still creating... [10s elapsed]
aws_volume_attachment.attach_ebs_to_ec2_01: Still creating... [10s elapsed]
aws_volume_attachment.attach_ebs_to_ec2_01: Still creating... [20s elapsed]
aws_volume_attachment.attach_ebs_to_ec2_02: Still creating... [20s elapsed]
aws_volume_attachment.attach_ebs_to_ec2_02: Still creating... [30s elapsed]
aws_volume_attachment.attach_ebs_to_ec2_01: Still creating... [30s elapsed]
aws_volume_attachment.attach_ebs_to_ec2_02: Creation complete after 33s [id=vai-439503465]
aws_volume_attachment.attach_ebs_to_ec2_01: Creation complete after 33s [id=vai-1312740159]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

6. Create EIPs and Associate Them with the EC2 Instances

Add the eip.tf file

# Create an EIP for tf-ec2-01
resource "aws_eip" "tf_eip_01" {
  vpc = true
  tags = {
    Name = "tf-eip-01"
  }
}

# Create an EIP for tf-ec2-02
resource "aws_eip" "tf_eip_02" {
  vpc = true
  tags = {
    Name = "tf-eip-02"
  }
}

Explanation

  • resource "aws_eip" "tf_eip_01": Declares an Elastic IP resource named tf_eip_01, of type aws_eip. An Elastic IP is a static IPv4 address provided by AWS that can be moved between EC2 instances.

  • vpc = true: Marks this Elastic IP as one for use in a VPC (virtual private cloud). When true, the EIP is associated with the VPC; EIPs used in a VPC differ from the Elastic IPs of classic EC2 instances.

  • tags = { Name = "tf-eip-01" }: Tags the EIP tf-eip-01, making the resource easier to find and manage in the AWS console or other tooling.

Add the eip_association.tf file

# Associate the EIP with the tf-ec2-01 instance
resource "aws_eip_association" "tf_eip_association_01" {
  instance_id   = aws_instance.tf-ec2-01.id
  allocation_id = aws_eip.tf_eip_01.id
}

# Associate the EIP with the tf-ec2-02 instance
resource "aws_eip_association" "tf_eip_association_02" {
  instance_id   = aws_instance.tf-ec2-02.id
  allocation_id = aws_eip.tf_eip_02.id
}

Explanation

  • resource "aws_eip_association" "tf_eip_association_01": Declares a resource named tf_eip_association_01, of type aws_eip_association. This resource creates the association between an Elastic IP and an EC2 instance.

  • instance_id = aws_instance.tf-ec2-01.id: The ID of the EC2 instance to associate with the Elastic IP, referencing the previously defined instance tf-ec2-01.

  • allocation_id = aws_eip.tf_eip_01.id: The allocation ID of the Elastic IP to associate, referencing the previously created EIP tf_eip_01.

Dry run

terraform plan -out=tf.plan
aws_key_pair.tf-keypair: Refreshing state... [id=tf-keypair]
aws_ebs_volume.ebs_ec2_01: Refreshing state... [id=vol-0aac9f1302376328a]
aws_ebs_volume.ebs_ec2_02: Refreshing state... [id=vol-06bd472f44eadaf02]
aws_vpc.tf_vpc: Refreshing state... [id=vpc-0f2e1cdca0cf5a306]
aws_internet_gateway.tf_igw: Refreshing state... [id=igw-08ec2f3357e8725df]
aws_subnet.tf_subnet02: Refreshing state... [id=subnet-019490723ad3e940a]
aws_subnet.tf_subnet01: Refreshing state... [id=subnet-08f8e4b2c62e27989]
aws_security_group.tf_security_group: Refreshing state... [id=sg-0907b4ae2d4bd9592]
aws_route_table.tf_route_table: Refreshing state... [id=rtb-0ae4b29ae8d6881ed]
aws_instance.tf-ec2-01: Refreshing state... [id=i-0f8d63e600d93f6b0]
aws_instance.tf-ec2-02: Refreshing state... [id=i-0888d477cdf36aea0]
aws_route_table_association.tf_route_table_association_02: Refreshing state... [id=rtbassoc-0190cb61bd5850d86]
aws_route_table_association.tf_route_table_association_01: Refreshing state... [id=rtbassoc-0999e44cc1cfb7f09]
aws_volume_attachment.attach_ebs_to_ec2_01: Refreshing state... [id=vai-1312740159]
aws_volume_attachment.attach_ebs_to_ec2_02: Refreshing state... [id=vai-439503465]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_eip.tf_eip_01 will be created
  + resource "aws_eip" "tf_eip_01" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + carrier_ip           = (known after apply)
      + customer_owned_ip    = (known after apply)
      + domain               = (known after apply)
      + id                   = (known after apply)
      + instance             = (known after apply)
      + network_border_group = (known after apply)
      + network_interface    = (known after apply)
      + private_dns          = (known after apply)
      + private_ip           = (known after apply)
      + public_dns           = (known after apply)
      + public_ip            = (known after apply)
      + public_ipv4_pool     = (known after apply)
      + tags                 = {
          + "Name" = "tf-eip-01"
        }
      + tags_all             = {
          + "Name" = "tf-eip-01"
        }
      + vpc                  = true
    }

  # aws_eip.tf_eip_02 will be created
  + resource "aws_eip" "tf_eip_02" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + carrier_ip           = (known after apply)
      + customer_owned_ip    = (known after apply)
      + domain               = (known after apply)
      + id                   = (known after apply)
      + instance             = (known after apply)
      + network_border_group = (known after apply)
      + network_interface    = (known after apply)
      + private_dns          = (known after apply)
      + private_ip           = (known after apply)
      + public_dns           = (known after apply)
      + public_ip            = (known after apply)
      + public_ipv4_pool     = (known after apply)
      + tags                 = {
          + "Name" = "tf-eip-02"
        }
      + tags_all             = {
          + "Name" = "tf-eip-02"
        }
      + vpc                  = true
    }

  # aws_eip_association.tf_eip_association_01 will be created
  + resource "aws_eip_association" "tf_eip_association_01" {
      + allocation_id        = (known after apply)
      + id                   = (known after apply)
      + instance_id          = "i-0f8d63e600d93f6b0"
      + network_interface_id = (known after apply)
      + private_ip_address   = (known after apply)
      + public_ip            = (known after apply)
    }

  # aws_eip_association.tf_eip_association_02 will be created
  + resource "aws_eip_association" "tf_eip_association_02" {
      + allocation_id        = (known after apply)
      + id                   = (known after apply)
      + instance_id          = "i-0888d477cdf36aea0"
      + network_interface_id = (known after apply)
      + private_ip_address   = (known after apply)
      + public_ip            = (known after apply)
    }

Plan: 4 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the EIPs and associate them with the instances

terraform apply "tf.plan"

Result:

aws_eip.tf_eip_02: Creating...
aws_eip.tf_eip_01: Creating...
aws_eip.tf_eip_01: Creation complete after 2s [id=eipalloc-0a9cdbc84013614f5]
aws_eip.tf_eip_02: Creation complete after 2s [id=eipalloc-0ed1c932d9a7a305a]
aws_eip_association.tf_eip_association_01: Creating...
aws_eip_association.tf_eip_association_02: Creating...
aws_eip_association.tf_eip_association_02: Creation complete after 1s [id=eipassoc-0b517a49d76639054]
aws_eip_association.tf_eip_association_01: Creation complete after 1s [id=eipassoc-0e0359ad952266802]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.


7. Check the Results in the AWS Console

In the console you can see that the instance names, types, availability zones, public IPs, security groups, key pairs, disks, and so on all match what we defined in the .tf files.


Then log in to a server directly to make sure the network and security group actually work:

ssh ec2-user@52.9.19.52 -i ~/.ssh/tf-keypair


V. Creating an EKS Cluster

1. Create the Network Resources EKS Needs

Create the subnets EKS will use and associate them with the route table.

eks_subnets.tf contents:

resource "aws_subnet" "tf_eks_subnet1" {
  vpc_id            = aws_vpc.tf_vpc.id
  cidr_block        = "10.10.81.0/24"
  availability_zone = var.az_1
  map_public_ip_on_launch = true

  tags = {
    Name = "tf_eks_subnet1"
  }
}

resource "aws_subnet" "tf_eks_subnet2" {
  vpc_id            = aws_vpc.tf_vpc.id
  cidr_block        = "10.10.82.0/24"
  availability_zone = var.az_2
  map_public_ip_on_launch = true

  tags = {
    Name = "tf_eks_subnet2"
  }
}


# Associate the route table with subnet tf_eks_subnet1
resource "aws_route_table_association" "tf_eks_subnet1_association" {
  subnet_id      = aws_subnet.tf_eks_subnet1.id 
  route_table_id = aws_route_table.tf_route_table.id
}

# 將路由表關聯到子網tf_eks_subnet2
resource "aws_route_table_association" "tf_eks_subnet2_association" {
  subnet_id      = aws_subnet.tf_eks_subnet2.id  
  route_table_id = aws_route_table.tf_route_table.id
}

Explanation

  • resource "aws_subnet" "tf_eks_subnet1": Declares a resource named tf_eks_subnet1, of type aws_subnet, creating a new subnet.

  • vpc_id = aws_vpc.tf_vpc.id: The ID of the VPC the subnet belongs to, referencing the previously defined VPC tf_vpc so the subnet is attached to it.

  • cidr_block = "10.10.81.0/24": The subnet's CIDR (Classless Inter-Domain Routing) block, i.e. the range 10.10.81.0 to 10.10.81.255, which holds 256 IP addresses.

  • availability_zone = var.az_1: The availability zone for the subnet, using the variable az_1 so the AZ can be configured flexibly.

  • map_public_ip_on_launch = true: Whether instances launched in this subnet automatically get a public IP address. Setting it to true means new instances receive a public IP and can reach the internet directly.

  • resource "aws_route_table_association" "tf_eks_subnet1_association": Declares a resource named tf_eks_subnet1_association, of type aws_route_table_association, which associates a route table with a subnet.

  • subnet_id = aws_subnet.tf_eks_subnet1.id: The ID of the subnet to associate, referencing the previously defined subnet tf_eks_subnet1.

  • route_table_id = aws_route_table.tf_route_table.id: The ID of the route table to associate, referencing the previously defined route table tf_route_table.

Dry run

terraform plan -out=tf.plan
 Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # aws_route_table_association.tf_eks_subnet1_association will be created
  + resource "aws_route_table_association" "tf_eks_subnet1_association" {
      + id             = (known after apply)
      + route_table_id = "rtb-0ae4b29ae8d6881ed"
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.tf_eks_subnet2_association will be created
  + resource "aws_route_table_association" "tf_eks_subnet2_association" {
      + id             = (known after apply)
      + route_table_id = "rtb-0ae4b29ae8d6881ed"
      + subnet_id      = (known after apply)
    }

  # aws_subnet.tf_eks_subnet1 will be created
  + resource "aws_subnet" "tf_eks_subnet1" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.81.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "tf_eks_subnet1"
        }
      + tags_all                                       = {
          + "Name" = "tf_eks_subnet1"
        }
      + vpc_id                                         = "vpc-0f2e1cdca0cf5a306"
    }

  # aws_subnet.tf_eks_subnet2 will be created
  + resource "aws_subnet" "tf_eks_subnet2" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.82.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "tf_eks_subnet2"
        }
      + tags_all                                       = {
          + "Name" = "tf_eks_subnet2"
        }
      + vpc_id                                         = "vpc-0f2e1cdca0cf5a306"
    }

Plan: 4 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the subnets

terraform apply "tf.plan"

Output:

aws_subnet.tf_eks_subnet2: Creating...
aws_subnet.tf_eks_subnet1: Creating...
aws_subnet.tf_eks_subnet1: Still creating... [10s elapsed]
aws_subnet.tf_eks_subnet2: Still creating... [10s elapsed]
aws_subnet.tf_eks_subnet2: Creation complete after 13s [id=subnet-0a30534a829758774]
aws_route_table_association.tf_eks_subnet2_association: Creating...
aws_subnet.tf_eks_subnet1: Creation complete after 13s [id=subnet-01b5d98060f0063ef]
aws_route_table_association.tf_eks_subnet1_association: Creating...
aws_route_table_association.tf_eks_subnet1_association: Creation complete after 1s [id=rtbassoc-08fef5fee4d037035]
aws_route_table_association.tf_eks_subnet2_association: Creation complete after 1s [id=rtbassoc-0ec12dc9868d6316a]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

2. Create the EKS security group

eks_security_group.tf

These rules open everything up purely for demonstration; do not use them in production.

resource "aws_security_group" "eks_allow_all" {
  name        = "eks_allow_all"
  description = "Security group that allows all inbound and outbound traffic"
  vpc_id      = aws_vpc.tf_vpc.id


  // Allow all inbound traffic
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"  // -1 means all protocols
    cidr_blocks = ["0.0.0.0/0"]  // allow traffic from any IP
  }

  // Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"  // -1 means all protocols
    cidr_blocks = ["0.0.0.0/0"]  // allow traffic to any IP
  }
}

Explanation

  1. resource "aws_security_group" "eks_allow_all": declares a resource named eks_allow_all of type aws_security_group, which creates the security group.

  2. name = "eks_allow_all": sets the security group's name to eks_allow_all.

  3. description = "Security group that allows all inbound and outbound traffic": describes what the group does: allow all inbound and outbound traffic.

  4. vpc_id = aws_vpc.tf_vpc.id: the VPC the security group belongs to, referencing the ID of the VPC defined earlier.

Inbound rules (ingress)

  1. ingress { ... }:
    • Defines the inbound rule that lets traffic into the security group.

    • from_port = 0 and to_port = 0: allows traffic on all ports (0 to 0 means every port).

    • protocol = "-1": -1 means every protocol, including TCP, UDP, and ICMP.

    • cidr_blocks = ["0.0.0.0/0"]: allows traffic from any IP address (0.0.0.0/0 matches everything).

Outbound rules (egress)

  1. egress { ... }:
    • Defines the outbound rule that lets traffic leave the security group.

    • from_port = 0 and to_port = 0: again, all ports.

    • protocol = "-1": every protocol.

    • cidr_blocks = ["0.0.0.0/0"]: allows traffic to any IP address.
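
If allow-all is too loose for your environment, a common middle ground is to keep egress open but restrict ingress to the VPC's own address range. A minimal sketch of that variant (the eks_restricted name is just an example, not one of this tutorial's files):

resource "aws_security_group" "eks_restricted" {
  name        = "eks_restricted"
  description = "Allow all traffic within the VPC only"
  vpc_id      = aws_vpc.tf_vpc.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [aws_vpc.tf_vpc.cidr_block]  // inbound only from inside the VPC
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]                // outbound unrestricted
  }
}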

Preview the plan

terraform plan -out=tf.plan
 Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # aws_security_group.eks_allow_all will be created
  + resource "aws_security_group" "eks_allow_all" {
      + arn                    = (known after apply)
      + description            = "Security group that allows all inbound and outbound traffic"
      + egress                 = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + from_port        = 0
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "-1"
              + security_groups  = []
              + self             = false
              + to_port          = 0
                # (1 unchanged attribute hidden)
            },
        ]
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + from_port        = 0
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "-1"
              + security_groups  = []
              + self             = false
              + to_port          = 0
                # (1 unchanged attribute hidden)
            },
        ]
      + name                   = "eks_allow_all"
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags_all               = (known after apply)
      + vpc_id                 = "vpc-0f2e1cdca0cf5a306"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the security group

terraform apply "tf.plan"

Output:

aws_security_group.eks_allow_all: Creating...
aws_security_group.eks_allow_all: Creation complete after 7s [id=sg-0db88cd4ca4b95099]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

3. Create the EKS cluster IAM role

Create the eks_iam_roles.tf file:

data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "eks-cluster" {
  name               = "eks-cluster"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks-cluster.name
}

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSVPCResourceController" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.eks-cluster.name
}

Explanation

This code defines an IAM role and its permission policies for creating and managing an Amazon EKS cluster. In detail:

Data source section

  1. data "aws_iam_policy_document" "assume_role":
    • An IAM policy document data source that defines the role's trust policy.

    • statement { ... }:

      • effect = "Allow": the statement permits the action.
      • principals { ... }: defines which principals may use this role.
        • type = "Service": the principal type is a service.
        • identifiers = ["eks.amazonaws.com"]: allows the EKS service to assume this role.
      • actions = ["sts:AssumeRole"]: the action the principal may perform, i.e. assuming the role.

IAM role section

  1. resource "aws_iam_role" "eks-cluster":
    • Creates an IAM role named eks-cluster.

    • assume_role_policy = data.aws_iam_policy_document.assume_role.json: applies the trust policy defined above to the role (an equivalent inline form is sketched below).

IAM role policy attachments

  1. resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSClusterPolicy":

    • Attaches the Amazon EKS Cluster Policy to the IAM role.
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy": the ARN of the policy to attach.
    • role = aws_iam_role.eks-cluster.name: attaches the policy to the eks-cluster role created above.
  2. resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSVPCResourceController":

    • Attaches the Amazon EKS VPC Resource Controller policy to the IAM role.
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController": the ARN of the policy to attach.
    • role = aws_iam_role.eks-cluster.name: attaches the policy to the eks-cluster role created above.
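
For comparison, the same trust policy could be written inline with jsonencode instead of a data source; this is the style the node group role uses later in this post. A sketch (the eks-cluster-inline name is illustrative only, not one of this tutorial's files):

resource "aws_iam_role" "eks-cluster-inline" {
  name = "eks-cluster-inline"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        Service = "eks.amazonaws.com"  # let the EKS control plane assume this role
      }
    }]
  })
}

Both forms render the same JSON; the data source is easier to extend with extra statements, while the inline form keeps everything in one resource.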

Preview the plan

terraform plan -out=tf.plan
 Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # aws_iam_role.eks-cluster will be created
  + resource "aws_iam_role" "eks-cluster" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "eks.amazonaws.com"
                        }
                      + Sid       = ""
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-cluster"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + role_last_used        = (known after apply)
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy will be created
  + resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSClusterPolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      + role       = "eks-cluster"
    }

  # aws_iam_role_policy_attachment.eks-cluster-AmazonEKSVPCResourceController will be created
  + resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSVPCResourceController" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
      + role       = "eks-cluster"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the EKS IAM role

terraform apply "tf.plan"

Output:

aws_iam_role.eks-cluster: Creating...
aws_iam_role.eks-cluster: Creation complete after 2s [id=eks-cluster]
aws_iam_role_policy_attachment.eks-cluster-AmazonEKSVPCResourceController: Creating...
aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy: Creating...
aws_iam_role_policy_attachment.eks-cluster-AmazonEKSVPCResourceController: Creation complete after 1s [id=eks-cluster-20241027124651622300000001]
aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy: Creation complete after 1s [id=eks-cluster-20241027124651968900000002]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

4. Create the EKS cluster

Write the eks_cluster.tf file:

resource "aws_eks_cluster" "tf-eks" {
  name     = "tf-eks"
  version  = var.eks_version  # the EKS version to deploy
  role_arn = aws_iam_role.eks-cluster.arn

  vpc_config {
    subnet_ids = [
      aws_subnet.tf_eks_subnet1.id,
      aws_subnet.tf_eks_subnet2.id
    ]
    security_group_ids      = [aws_security_group.eks_allow_all.id]  # the security group created earlier
    endpoint_public_access  = true            # allow public access to the API endpoint
    endpoint_private_access = true            # allow private access from within the VPC
    public_access_cidrs     = ["0.0.0.0/0"]   # allow access from anywhere
  }

#  # Enable control-plane logging
#  enabled_cluster_log_types = [
#           "api",
#           "audit",
#           "authenticator",
#           "controllerManager",
#           "scheduler",
#  ]

  depends_on = [
    aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.eks-cluster-AmazonEKSVPCResourceController,
  ]
}

Parameter explanation

  • name: names the cluster tf-eks.
  • version: the EKS version, read from the variable var.eks_version so it is easy to adjust per environment (see the variable sketch after this list).
  • role_arn: the ARN of the IAM role the EKS cluster runs under; this role must carry the policies attached earlier.
  • subnet_ids: the subnets the cluster spans; multiple subnet IDs allow a highly available deployment.
  • security_group_ids: references the security group created earlier, which controls the cluster's network traffic.
  • endpoint_public_access: set to true so the EKS API endpoint is reachable over the public internet.
  • endpoint_private_access: set to true so the EKS API endpoint is also reachable from inside the VPC.
  • public_access_cidrs: the CIDR ranges allowed to reach the cluster. ["0.0.0.0/0"] means any IP address, which can be a security risk.
  • The logging section is commented out; if enabled, it selects which control-plane log types to record: API, audit, authenticator, controller manager, and scheduler.
  • depends_on: ensures the required IAM policy attachments exist before the cluster is created, so resources come up in the right order.
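
The configuration references var.eks_version, which has not appeared in the files shown so far. If your variables.tf does not define it yet, a definition along these lines would work; the default of "1.31" is an assumption inferred from the plan output below:

variable "eks_version" {
  description = "Kubernetes version for the EKS cluster"
  type        = string
  default     = "1.31"  # assumed default; matches the version shown in the plan output below
}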

Preview the plan

terraform plan -out=tf.plan
 Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # aws_eks_cluster.tf-eks will be created
  + resource "aws_eks_cluster" "tf-eks" {
      + arn                   = (known after apply)
      + certificate_authority = (known after apply)
      + cluster_id            = (known after apply)
      + created_at            = (known after apply)
      + endpoint              = (known after apply)
      + id                    = (known after apply)
      + identity              = (known after apply)
      + name                  = "tf-eks"
      + platform_version      = (known after apply)
      + role_arn              = "arn:aws:iam::xxxxxxxx:role/eks-cluster"
      + status                = (known after apply)
      + tags_all              = (known after apply)
      + version               = "1.31"

      + kubernetes_network_config (known after apply)

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = true
          + endpoint_public_access    = true
          + public_access_cidrs       = [
              + "0.0.0.0/0",
            ]
          + security_group_ids        = [
              + "sg-0db88cd4ca4b95099",
            ]
          + subnet_ids                = [
              + "subnet-01b5d98060f0063ef",
              + "subnet-0a30534a829758774",
            ]
          + vpc_id                    = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Apply the plan

terraform apply "tf.plan"

Output:

aws_eks_cluster.tf-eks: Creating...
aws_eks_cluster.tf-eks: Still creating... [10s elapsed]
aws_eks_cluster.tf-eks: Still creating... [20s elapsed]
aws_eks_cluster.tf-eks: Still creating... [30s elapsed]
......
.......
aws_eks_cluster.tf-eks: Still creating... [7m21s elapsed]
aws_eks_cluster.tf-eks: Still creating... [7m31s elapsed]
aws_eks_cluster.tf-eks: Creation complete after 7m35s [id=tf-eks]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

5. Create the node group IAM role

Create the tf file:

eks_node_group_iam.tf

resource "aws_iam_role" "eks-nodegroup-role" {
  name = "eks-nodegroup-role"
  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks-nodegroup-role.name
}

resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks-nodegroup-role.name
}

resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks-nodegroup-role.name
}

Explanation

This code defines an IAM role for the Amazon EKS node groups and attaches the necessary permission policies. In detail:

IAM role definition

  1. resource "aws_iam_role" "eks-nodegroup-role":
    • Creates an IAM role named eks-nodegroup-role for the EKS nodes to use.

    • assume_role_policy = jsonencode({ ... }): defines the role's trust policy, allowing a specific service to assume the role.

      • Statement = [{ ... }]: the policy's statements.
        • Action = "sts:AssumeRole": the allowed action, i.e. assuming the role.
        • Effect = "Allow": the statement's effect is to allow.
        • Principal = { Service = "ec2.amazonaws.com" }: lets the ec2.amazonaws.com service (i.e. EC2 instances) assume the role, which is how the EKS nodes obtain their permissions.
    • Version = "2012-10-17": the version of the policy language.

IAM role policy attachments

  1. resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEKSWorkerNodePolicy":

    • Attaches the Amazon EKS Worker Node Policy to the node group role.
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy": the ARN of the policy to attach.
    • role = aws_iam_role.eks-nodegroup-role.name: attaches the policy to the eks-nodegroup-role role created above.
  2. resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEKS_CNI_Policy":

    • Attaches the Amazon EKS CNI Policy to the node group role.
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy": the ARN of the policy to attach.
    • role = aws_iam_role.eks-nodegroup-role.name: attaches the policy to the eks-nodegroup-role role created above.
  3. resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly":

    • Attaches the Amazon EC2 Container Registry Read Only policy to the node group role.
    • policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly": the ARN of the policy to attach.
    • role = aws_iam_role.eks-nodegroup-role.name: attaches the policy to the eks-nodegroup-role role created above.

Preview the plan

terraform plan -out=tf.plan
 Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # aws_iam_role.eks-nodegroup-role will be created
  + resource "aws_iam_role" "eks-nodegroup-role" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-nodegroup-role"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + role_last_used        = (known after apply)
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly will be created
  + resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      + role       = "eks-nodegroup-role"
    }

  # aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKSWorkerNodePolicy will be created
  + resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEKSWorkerNodePolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      + role       = "eks-nodegroup-role"
    }

  # aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKS_CNI_Policy will be created
  + resource "aws_iam_role_policy_attachment" "eks-nodegroup-role-AmazonEKS_CNI_Policy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      + role       = "eks-nodegroup-role"
    }

Plan: 4 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the node group IAM role

 terraform apply "tf.plan"

Output:

aws_iam_role.eks-nodegroup-role: Creating...
aws_iam_role.eks-nodegroup-role: Creation complete after 2s [id=eks-nodegroup-role]
aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKS_CNI_Policy: Creating...
aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly: Creating...
aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKSWorkerNodePolicy: Creating...
aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly: Creation complete after 1s [id=eks-nodegroup-role-20241027130604526800000001]
aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKS_CNI_Policy: Creation complete after 1s [id=eks-nodegroup-role-20241027130604963000000002]
aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKSWorkerNodePolicy: Creation complete after 2s [id=eks-nodegroup-role-20241027130605372700000003]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.


6. Create the node groups

Define the eks_node_group.tf file:

resource "aws_eks_node_group" "node_group1" {
  cluster_name    = aws_eks_cluster.tf-eks.name
  node_group_name = "node_group1"
  ami_type        = "AL2_x86_64"
  capacity_type   = "ON_DEMAND"
  disk_size       = 20
  instance_types  = ["t3.medium"]
  node_role_arn   = aws_iam_role.eks-nodegroup-role.arn
  subnet_ids = [
      aws_subnet.tf_eks_subnet1.id,
      aws_subnet.tf_eks_subnet2.id
    ]

  scaling_config {
    desired_size = 1
    max_size     = 2
    min_size     = 1
  }

  update_config {
    max_unavailable = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly,
  ]

#  remote_access {
#    ec2_ssh_key = aws_key_pair.tf-keypair.key_name
#    source_security_group_ids = [
#	   aws_security_group.tf_security_group.id
#    ]
#  } 
}

resource "aws_eks_node_group" "node_group2" {
  cluster_name    = aws_eks_cluster.tf-eks.name
  node_group_name = "node_group2"
  ami_type        = "AL2_x86_64"
  capacity_type   = "ON_DEMAND"
  disk_size       = 20
  instance_types  = ["t3.medium"]
  node_role_arn   = aws_iam_role.eks-nodegroup-role.arn
  subnet_ids = [
      aws_subnet.tf_eks_subnet1.id,
      aws_subnet.tf_eks_subnet2.id
    ]

  scaling_config {
    desired_size = 1
    max_size     = 2
    min_size     = 1
  }

  update_config {
    max_unavailable = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly,
  ]

#  remote_access {
#    ec2_ssh_key = aws_key_pair.tf-keypair.key_name
#    source_security_group_ids = [
#	   aws_security_group.tf_security_group.id
#    ]
#  } 
}

Explanation

EKS node group definition

  1. resource "aws_eks_node_group" "node_group1": creates an EKS node group named node_group1.

  2. cluster_name = aws_eks_cluster.tf-eks.name: the EKS cluster this node group belongs to, referencing the name of the tf-eks cluster created earlier.

  3. node_group_name = "node_group1": sets the node group's name to node_group1.

  4. ami_type = "AL2_x86_64": the Amazon Machine Image (AMI) type the node group uses, here Amazon Linux 2 (AL2) on the x86_64 architecture. Valid options include AL2_x86_64, AL2_x86_64_GPU, AL2_ARM_64, CUSTOM, BOTTLEROCKET_ARM_64, BOTTLEROCKET_x86_64, BOTTLEROCKET_ARM_64_NVIDIA, BOTTLEROCKET_x86_64_NVIDIA, WINDOWS_CORE_2019_x86_64, WINDOWS_FULL_2019_x86_64, WINDOWS_CORE_2022_x86_64, and WINDOWS_FULL_2022_x86_64.

  5. capacity_type = "ON_DEMAND": the instances use On-Demand capacity, i.e. pay-as-you-go rather than reserved.

  6. disk_size = 20: the root volume size for each node, here 20 GB.

  7. instance_types = ["t3.medium"]: the instance type for the nodes, here t3.medium.

  8. node_role_arn = aws_iam_role.eks-nodegroup-role.arn: the ARN of the node group's IAM role, which lets the nodes access the AWS services they need.

  9. subnet_ids = [ ... ]: the subnets the node group runs in, referencing the IDs of tf_eks_subnet1 and tf_eks_subnet2. These subnets are the network environment for the EKS nodes.

Scaling and update configuration

  1. scaling_config { ... }:

    • The node group's scaling settings.
    • desired_size = 1: start with one node.
    • max_size = 2: the group can grow to at most two nodes.
    • min_size = 1: the group keeps at least one node.
  2. update_config { ... }:

    • The node group's update strategy.
    • max_unavailable = 1: at most one node may be unavailable during an update.

Dependencies

  1. depends_on = [ ... ]: declares the resource's dependencies, ensuring the IAM policy attachments are complete before the node group is created.

Remote access configuration (commented out)

  1. remote_access { ... } (commented out):
    • Configures remote access, allowing SSH into the node group.
    • ec2_ssh_key = aws_key_pair.tf-keypair.key_name: the EC2 key pair used for SSH access.
    • source_security_group_ids = [ ... ]: the security groups allowed to SSH in.

Note that node_group2 is identical apart from its name, so the pair is a natural candidate for for_each, as sketched below.
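
A minimal sketch of that for_each variant. It is not a drop-in replacement once the two groups above exist in state, since the resource addresses change:

resource "aws_eks_node_group" "node_group" {
  for_each        = toset(["node_group1", "node_group2"])  # one node group per name
  cluster_name    = aws_eks_cluster.tf-eks.name
  node_group_name = each.key
  ami_type        = "AL2_x86_64"
  capacity_type   = "ON_DEMAND"
  disk_size       = 20
  instance_types  = ["t3.medium"]
  node_role_arn   = aws_iam_role.eks-nodegroup-role.arn
  subnet_ids      = [aws_subnet.tf_eks_subnet1.id, aws_subnet.tf_eks_subnet2.id]

  scaling_config {
    desired_size = 1
    max_size     = 2
    min_size     = 1
  }

  update_config {
    max_unavailable = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks-nodegroup-role-AmazonEC2ContainerRegistryReadOnly,
  ]
}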

Preview the plan

 terraform plan -out=tf.plan
 Terraform will perform the following actions:

  # aws_eks_node_group.node_group1 will be created
  + resource "aws_eks_node_group" "node_group1" {
      + ami_type               = "AL2_x86_64"
      + arn                    = (known after apply)
      + capacity_type          = "ON_DEMAND"
      + cluster_name           = "tf-eks"
      + disk_size              = 20
      + id                     = (known after apply)
      + instance_types         = [
          + "t3.medium",
        ]
      + node_group_name        = "node_group1"
      + node_group_name_prefix = (known after apply)
      + node_role_arn          = "arn:aws:iam::xxxxxx:role/eks-nodegroup-role"
      + release_version        = (known after apply)
      + resources              = (known after apply)
      + status                 = (known after apply)
      + subnet_ids             = [
          + "subnet-01b5d98060f0063ef",
          + "subnet-0a30534a829758774",
        ]
      + tags_all               = (known after apply)
      + version                = (known after apply)

      + scaling_config {
          + desired_size = 1
          + max_size     = 2
          + min_size     = 1
        }

      + update_config {
          + max_unavailable = 1
        }
    }

  # aws_eks_node_group.node_group2 will be created
  + resource "aws_eks_node_group" "node_group2" {
      + ami_type               = "AL2_x86_64"
      + arn                    = (known after apply)
      + capacity_type          = "ON_DEMAND"
      + cluster_name           = "tf-eks"
      + disk_size              = 20
      + id                     = (known after apply)
      + instance_types         = [
          + "t3.medium",
        ]
      + node_group_name        = "node_group2"
      + node_group_name_prefix = (known after apply)
      + node_role_arn          = "arn:aws:iam::xxxxx:role/eks-nodegroup-role"
      + release_version        = (known after apply)
      + resources              = (known after apply)
      + status                 = (known after apply)
      + subnet_ids             = [
          + "subnet-01b5d98060f0063ef",
          + "subnet-0a30534a829758774",
        ]
      + tags_all               = (known after apply)
      + version                = (known after apply)

      + scaling_config {
          + desired_size = 1
          + max_size     = 2
          + min_size     = 1
        }

      + update_config {
          + max_unavailable = 1
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf.plan"

Create the node groups

terraform apply "tf.plan"

Output:

aws_eks_node_group.node_group2: Creating...
aws_eks_node_group.node_group1: Creating...
aws_eks_node_group.node_group1: Still creating... [10s elapsed]
......
aws_eks_node_group.node_group1: Creation complete after 1m41s [id=tf-eks:node_group1]
aws_eks_node_group.node_group2: Still creating... [1m50s elapsed]
aws_eks_node_group.node_group2: Creation complete after 1m52s [id=tf-eks:node_group2]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

7. Retrieve EKS cluster information

Add eks_output.tf:

# Output the EKS cluster's name
output "eks_cluster_name" {
  value = aws_eks_cluster.tf-eks.name
  description = "The name of the EKS cluster"
}

# Output the EKS cluster's ARN (Amazon Resource Name)
output "eks_cluster_arn" {
  value = aws_eks_cluster.tf-eks.arn
  description = "The ARN of the EKS cluster"
}

# Output the EKS cluster's API server endpoint
output "eks_cluster_endpoint" {
  value = aws_eks_cluster.tf-eks.endpoint
  description = "The endpoint of the EKS cluster"
}

# Output the EKS cluster's current status
output "eks_cluster_status" {
  value = aws_eks_cluster.tf-eks.status
  description = "The status of the EKS cluster"
}

# Output the VPC ID associated with the EKS cluster
output "eks_cluster_vpc_id" {
  value = aws_eks_cluster.tf-eks.vpc_config[0].vpc_id
  description = "The VPC ID associated with the EKS cluster"
}

# Output the cluster security group ID associated with the EKS cluster
output "eks_cluster_security_group_ids" {
  value = aws_eks_cluster.tf-eks.vpc_config[0].cluster_security_group_id
  description = "The security group IDs associated with the EKS cluster"
}

# Output a kubeconfig for accessing the EKS cluster
output "kubeconfig" {
  value = <<EOT
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.tf-eks.endpoint}
    certificate-authority-data: ${aws_eks_cluster.tf-eks.certificate_authority[0].data}
  name: ${aws_eks_cluster.tf-eks.name}
contexts:
- context:
    cluster: ${aws_eks_cluster.tf-eks.name}
    user: ${aws_eks_cluster.tf-eks.name}
  name: ${aws_eks_cluster.tf-eks.name}
current-context: ${aws_eks_cluster.tf-eks.name}
kind: Config
preferences: {}
users:
- name: ${aws_eks_cluster.tf-eks.name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - ${aws_eks_cluster.tf-eks.name}
EOT
  description = "Kubeconfig for accessing the EKS cluster"
}

Since output.tf only reads information from resources that already exist and does not modify anything, it can be applied directly:

terraform apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

eks_cluster_arn = "arn:aws:eks:us-west-1:xxxxx:cluster/tf-eks"
eks_cluster_endpoint = "https://D59BB0103962C6BEABC8271AC16B34EC.gr7.us-west-1.eks.amazonaws.com"
eks_cluster_name = "tf-eks"
eks_cluster_security_group_ids = "sg-0159f56ebd2d93a38"
eks_cluster_status = "ACTIVE"
eks_cluster_vpc_id = "vpc-0361291552eab4047"
kubeconfig = <<EOT
apiVersion: v1
clusters:
- cluster:
    server: https://D59BB0103962C6BEABC8271AC16B34EC.gr7.us-west-1.eks.amazonaws.com
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSG83cjJJV.....
  name: tf-eks
contexts:
- context:
    cluster: tf-eks
    user: tf-eks
  name: tf-eks
current-context: tf-eks
kind: Config
preferences: {}
users:
- name: tf-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1 
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - tf-eks

EOT

8. Configure kubeconfig

Option 1

Copy the kubeconfig content generated above into ~/.kube/config. Alternatively, Terraform itself can write the file, as sketched below.
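
A sketch using the hashicorp/local provider, assuming you move the heredoc from the kubeconfig output into a local named kubeconfig; the filename here is only an example, so point KUBECONFIG at it or merge it into ~/.kube/config yourself:

resource "local_file" "kubeconfig" {
  content         = local.kubeconfig  # the same heredoc used by the kubeconfig output
  filename        = "${path.module}/kubeconfig-tf-eks.yaml"
  file_permission = "0600"            # the file references credential helpers; keep it private
}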

Option 2

Generate the kubeconfig file by running:

aws eks update-kubeconfig --region us-west-1 --name tf-eks

Check the cluster nodes:

kubectl get no
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-10-81-13.us-west-1.compute.internal    Ready    <none>   3m48s   v1.31.0-eks-a737599
ip-10-10-82-102.us-west-1.compute.internal   Ready    <none>   4m1s    v1.31.0-eks-a737599

Check the pods running in the cluster:

kubectl get po -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-8zjcn             2/2     Running   0          4m55s
kube-system   aws-node-n4ns8             2/2     Running   0          4m42s
kube-system   coredns-6486b6fd59-hkcnb   1/1     Running   0          20m
kube-system   coredns-6486b6fd59-hz75m   1/1     Running   0          20m
kube-system   kube-proxy-fbdv9           1/1     Running   0          4m42s
kube-system   kube-proxy-nnb2r           1/1     Running   0          4m55s

9. Deploy an nginx application

Write the nginx-deployment.yaml file:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3  # number of replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.26  # pin the Nginx image version
          ports:
            - containerPort: 80  # expose the container port

Create the nginx deployment:

kubectl apply -f nginx-deployment.yaml

(screenshot: nginx-deployment pods running in the cluster)
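
Incidentally, the same Deployment can be managed by Terraform itself through the hashicorp/kubernetes provider, keeping cluster workloads in the same plan/apply workflow. A sketch, assuming the kubeconfig from step 8 is at ~/.kube/config:

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"  # the kubeconfig written in step 8
}

resource "kubernetes_deployment" "nginx" {
  metadata {
    name   = "nginx-deployment"
    labels = {
      app = "nginx"
    }
  }

  spec {
    replicas = 3  # number of replicas, as in the YAML above

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:1.26"  # same pinned image version

          port {
            container_port = 80
          }
        }
      }
    }
  }
}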

10. Review the tf files we created

By now we have built up quite a collection of tf files. You could also write all the tf files first and run terraform plan and terraform apply just once at the end.

Here is the full list:

(screenshot: the complete set of .tf files)

9.銷燬資源

terraform destroy
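
Since terraform destroy deletes everything the configuration manages, Terraform's lifecycle meta-argument is worth knowing as a safety net. A sketch of guarding the cluster role from step 3; with prevent_destroy set, any plan that would delete the resource, including terraform destroy, fails until the setting is removed:

resource "aws_iam_role" "eks-cluster" {
  name               = "eks-cluster"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json

  lifecycle {
    prevent_destroy = true  # plans that would destroy this role now error out
  }
}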

Summary

After a long stretch of research and writing, this post has turned out to be the longest and most time-consuming piece I have written so far. From studying Terraform's features to actually creating each resource on AWS, every step called for careful thought and repeated verification. The process not only made me far more familiar with Terraform's capabilities, but also taught me a great deal through sharing it.

Throughout the post I have aimed for thorough content and clear explanations so that readers, beginners in particular, can get started smoothly. I hope the effort gives you practical guidance and helps you take a solid first step on your cloud computing journey. The road was long, but every word carries my enthusiasm for Terraform and for sharing what I have learned. May you find inspiration in it!
