In the previous articles ("VMware Cloud Foundation Part 03: Preparing the Excel Parameter Workbook" and "VMware Cloud Foundation Part 04: Preparing the ESXi Hosts"), we covered the two prerequisites for deploying VMware Cloud Foundation: the deployment parameter configuration file and the ESXi hosts for the management domain. This preparation does take considerable time and effort, but once everything is ready, the actual deployment can often be completed within a few hours. That is the appeal of the automated, standardized SDDC solution that VMware Cloud Foundation delivers.
Without further ado, let's get into it.
1. Cloud Builder Usage Tips
A few small tips can make working with the Cloud Builder tool easier. As the saying goes, to do a good job you must first sharpen your tools.
1) View the log files
While Cloud Builder is deploying the VCF management domain, you may run into errors or failed tasks. In that case, check the Cloud Builder log files to find the exact cause. SSH to Cloud Builder as the admin user, switch to the root user, and run the following commands.
tail -f /var/log/vmware/vcf/bringup/vcf-bringup.log
tail -f /var/log/vmware/vcf/bringup/vcf-bringup-debug.log
2) Enable command history
By default, command history is disabled on the Cloud Builder virtual machine, so trying to scroll back through previously used commands fails. To enable history, move aside the profile script that disables it. SSH to Cloud Builder as the admin user, switch to the root user, and run the following commands.
mv /etc/profile.d/disable.history.sh .
history
3) Reset the Postgres database
After Cloud Builder finishes deploying the VCF management domain, it ends on the screen shown in the figure below. If you then want to reuse Cloud Builder to redeploy the VCF management domain, or to deploy another VCF instance, the UI will stay stuck on that screen whenever you visit Cloud Builder.
To use Cloud Builder again, reset the Postgres database. SSH to Cloud Builder as the admin user, switch to the root user, and run the following commands.
/usr/pgsql/13/bin/psql -U postgres -d bringup -h localhost
delete from execution;
delete from "Resource";
\q
2. Custom vSAN ESA HCL File
If, like me, you use nested ESXi virtual machines to deploy VMware Cloud Foundation, choosing the vSAN OSA architecture for the VCF management domain avoids HCL compatibility issues entirely, because the HCL JSON file is never checked. If you deploy the vSAN ESA architecture with the official HCL JSON file (https://partnerweb.vmware.com/service/vsan/all.json), however, you are guaranteed to hit compatibility problems: the "ESXi host vSAN compatibility validation" check fails (Failed to verify HCL status on ESXi Host vcf-mgmt01-esxi01.mulab.local), as shown in the figure below.
1) Generate a custom HCL JSON file for the nested ESXi hosts
To get around this, you can generate a custom HCL JSON file with a PowerCLI script written by VMware engineer William Lam; the full script is below. Interestingly, this raises the question of whether the same method could also work around the hardware compatibility issues seen when deploying vSAN ESA clusters in nested environments and when using image-based vLCM lifecycle management. Note that PowerCLI must be installed before you can run the following steps.
# Author: William Lam
# Description: Dynamically generate custom vSAN ESA HCL JSON file connected to standalone ESXi host
$vmhost = Get-VMHost
$supportedESXiReleases = @("ESXi 8.0 U2")
Write-Host -ForegroundColor Green "`nCollecting SSD information from ESXi host ${vmhost} ... "
$imageManager = Get-View ($Vmhost.ExtensionData.ConfigManager.ImageConfigManager)
$vibs = $imageManager.fetchSoftwarePackages()
$storageDevices = $vmhost.ExtensionData.Config.StorageDevice.scsiTopology.Adapter
$storageAdapters = $vmhost.ExtensionData.Config.StorageDevice.hostBusAdapter
$devices = $vmhost.ExtensionData.Config.StorageDevice.scsiLun
$pciDevices = $vmhost.ExtensionData.Hardware.PciDevice
$ctrResults = @()
$ssdResults = @()
$seen = @{}
foreach ($storageDevice in $storageDevices) {
    $targets = $storageDevice.target
    if($targets -ne $null) {
        foreach ($target in $targets) {
            foreach ($ScsiLun in $target.Lun.ScsiLun) {
                $device = $devices | where {$_.Key -eq $ScsiLun}
                $storageAdapter = $storageAdapters | where {$_.Key -eq $storageDevice.Adapter}
                $pciDevice = $pciDevices | where {$_.Id -eq $storageAdapter.Pci}
                # Convert from Dec to Hex
                $vid = ('{0:x}' -f $pciDevice.VendorId).ToLower()
                $did = ('{0:x}' -f $pciDevice.DeviceId).ToLower()
                $svid = ('{0:x}' -f $pciDevice.SubVendorId).ToLower()
                $ssid = ('{0:x}' -f $pciDevice.SubDeviceId).ToLower()
                $combined = "${vid}:${did}:${svid}:${ssid}"
                if($storageAdapter.Driver -eq "nvme_pcie" -or $storageAdapter.Driver -eq "pvscsi") {
                    switch ($storageAdapter.Driver) {
                        "nvme_pcie" {
                            $controllerType = $storageAdapter.Driver
                            $controllerDriver = ($vibs | where {$_.name -eq "nvme-pcie"}).Version
                        }
                        "pvscsi" {
                            $controllerType = $storageAdapter.Driver
                            $controllerDriver = ($vibs | where {$_.name -eq "pvscsi"}).Version
                        }
                    }
                    $ssdReleases=@{}
                    foreach ($supportedESXiRelease in $supportedESXiReleases) {
                        $tmpObj = [ordered] @{
                            vsanSupport = @( "All Flash:","vSANESA-SingleTier")
                            $controllerType = [ordered] @{
                                $controllerDriver = [ordered] @{
                                    firmwares = @(
                                        [ordered] @{
                                            firmware = $device.Revision
                                            vsanSupport = [ordered] @{
                                                tier = @("AF-Cache", "vSANESA-Singletier")
                                                mode = @("vSAN", "vSAN ESA")
                                            }
                                        }
                                    )
                                    type = "inbox"
                                }
                            }
                        }
                        if(!$ssdReleases[$supportedESXiRelease]) {
                            $ssdReleases.Add($supportedESXiRelease,$tmpObj)
                        }
                    }
                    if($device.DeviceType -eq "disk" -and !$seen[$combined]) {
                        $ssdTmp = [ordered] @{
                            id = [int]$(Get-Random -Minimum 1000 -Maximum 50000).toString()
                            did = $did
                            vid = $vid
                            ssid = $ssid
                            svid = $svid
                            vendor = $device.Vendor
                            model = ($device.Model).trim()
                            devicetype = $device.ApplicationProtocol
                            partnername = $device.Vendor
                            productid = ($device.Model).trim()
                            partnumber = $device.SerialNumber
                            capacity = [Int]((($device.Capacity.BlockSize * $device.Capacity.Block) / 1048576))
                            vcglink = "https://williamlam.com/homelab"
                            releases = $ssdReleases
                            vsanSupport = [ordered] @{
                                mode = @("vSAN", "vSAN ESA")
                                tier = @("vSANESA-Singletier", "AF-Cache")
                            }
                        }
                        $controllerReleases=@{}
                        foreach ($supportedESXiRelease in $supportedESXiReleases) {
                            $tmpObj = [ordered] @{
                                $controllerType = [ordered] @{
                                    $controllerDriver = [ordered] @{
                                        type = "inbox"
                                        queueDepth = $device.QueueDepth
                                        firmwares = @(
                                            [ordered] @{
                                                firmware = $device.Revision
                                                vsanSupport = @( "Hybrid:Pass-Through","All Flash:Pass-Through","vSAN ESA")
                                            }
                                        )
                                    }
                                }
                                vsanSupport = @( "Hybrid:Pass-Through","All Flash:Pass-Through")
                            }
                            if(!$controllerReleases[$supportedESXiRelease]) {
                                $controllerReleases.Add($supportedESXiRelease,$tmpObj)
                            }
                        }
                        $controllerTmp = [ordered] @{
                            id = [int]$(Get-Random -Minimum 1000 -Maximum 50000).toString()
                            releases = $controllerReleases
                        }
                        $ctrResults += $controllerTmp
                        $ssdResults += $ssdTmp
                        $seen[$combined] = "yes"
                    }
                }
            }
        }
    }
}
# Retrieve the latest vSAN HCL jsonUpdatedTime
$results = Invoke-WebRequest -Uri 'https://vsanhealth.vmware.com/products/v1/bundles/lastupdatedtime' -Headers @{'x-vmw-esp-clientid'='vsan-hcl-vcf-2023'}
# Parse out content between '{...}'
$pattern = '\{(.+?)\}'
$matched = ([regex]::Matches($results, $pattern)).Value
if($matched -ne $null) {
    $vsanHclTime = $matched|ConvertFrom-Json
} else {
    Write-Error "Unable to retrieve vSAN HCL jsonUpdatedTime, ensure you have internet connectivity when running this script"
}
$hclObject = [ordered] @{
    timestamp = $vsanHclTime.timestamp
    jsonUpdatedTime = $vsanHclTime.jsonUpdatedTime
    totalCount = $($ssdResults.count + $ctrResults.count)
    supportedReleases = $supportedESXiReleases
    eula = @{}
    data = [ordered] @{
        controller = @($ctrResults)
        ssd = @($ssdResults)
        hdd = @()
    }
}
$dateTimeGenerated = Get-Date -Uformat "%m_%d_%Y_%H_%M_%S"
$outputFileName = "custom_vsan_esa_hcl_${dateTimeGenerated}.json"
Write-Host -ForegroundColor Green "Saving Custom vSAN ESA HCL to ${outputFileName}`n"
$hclObject | ConvertTo-Json -Depth 12 | Out-File -FilePath $outputFileName
Start PowerShell, connect to the nested ESXi host with the PowerCLI cmdlet Connect-VIServer, and run the custom HCL JSON generation script.
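For example, a minimal sketch (the script filename generate-vsan-esa-hcl.ps1 is an assumption; save the listing above under any name):
# Connect to a single nested ESXi host (the script's Get-VMHost call picks up every connected host,
# so connect to one standalone host at a time)
Connect-VIServer -Server vcf-mgmt01-esxi01.mulab.local -User root -Password 'Vcf5@password'
# Run the script saved from the listing above
.\generate-vsan-esa-hcl.ps1
# Disconnect when done
Disconnect-VIServer -Server * -Confirm:$false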
The generated custom HCL JSON file looks like the listing below. Note that the machine running the script needs internet access; if it cannot reach the internet, manually download the official HCL JSON file (https://partnerweb.vmware.com/service/vsan/all.json) and change the timestamp and jsonUpdatedTime fields of the generated file to the latest values from the official file (a short sketch showing one way to do that follows the JSON listing).
{
"timestamp": 1721122728,
"jsonUpdatedTime": "July 16, 2024, 2:38 AM PDT",
"totalCount": 2,
"supportedReleases": [
"ESXi 8.0 U2"
],
"eula": {
},
"data": {
"controller": [
{
"id": 33729,
"releases": {
"ESXi 8.0 U2": {
"nvme_pcie": {
"1.2.4.11-1vmw.802.0.0.22380479": {
"type": "inbox",
"queueDepth": 510,
"firmwares": [
{
"firmware": "1.3",
"vsanSupport": [
"Hybrid:Pass-Through",
"All Flash:Pass-Through",
"vSAN ESA"
]
}
]
}
},
"vsanSupport": [
"Hybrid:Pass-Through",
"All Flash:Pass-Through"
]
}
}
}
],
"ssd": [
{
"id": 25674,
"did": "7f0",
"vid": "15ad",
"ssid": "7f0",
"svid": "15ad",
"vendor": "NVMe",
"model": "VMware Virtual NVMe Disk",
"devicetype": "NVMe",
"partnername": "NVMe",
"productid": "VMware Virtual NVMe Disk",
"partnumber": "f72c2cf6551ae47e000c2968afc4b0ec",
"capacity": 61440,
"vcglink": "https://williamlam.com/homelab",
"releases": {
"ESXi 8.0 U2": {
"vsanSupport": [
"All Flash:",
"vSANESA-SingleTier"
],
"nvme_pcie": {
"1.2.4.11-1vmw.802.0.0.22380479": {
"firmwares": [
{
"firmware": "1.3",
"vsanSupport": {
"tier": [
"AF-Cache",
"vSANESA-Singletier"
],
"mode": [
"vSAN",
"vSAN ESA"
]
}
}
],
"type": "inbox"
}
}
}
},
"vsanSupport": {
"mode": [
"vSAN",
"vSAN ESA"
],
"tier": [
"vSANESA-Singletier",
"AF-Cache"
]
}
}
],
"hdd": [
]
}
}
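If you generated the file offline, the following PowerShell sketch shows one way to patch the two fields from the downloaded official all.json into the custom file (the filenames here are examples):
# Read both files, copy the official timestamp fields into the custom HCL file, and write it back as UTF-8
$official = Get-Content .\all.json -Raw | ConvertFrom-Json
$custom = Get-Content .\custom_vsan_esa_hcl.json -Raw | ConvertFrom-Json
$custom.timestamp = $official.timestamp
$custom.jsonUpdatedTime = $official.jsonUpdatedTime
$custom | ConvertTo-Json -Depth 12 | Out-File .\custom_vsan_esa_hcl.json -Encoding utf8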
2) Re-save the HCL JSON file
Strangely, the auto-generated HCL JSON file did not work for me when used directly. I opened the generated file in Notepad, copied everything into a second Notepad window, saved that as a new JSON file (e.g. all.json), and imported that into Cloud Builder; only then did validation succeed. A plausible explanation is file encoding: in Windows PowerShell 5.1, Out-File writes UTF-16 by default, whereas Cloud Builder presumably expects UTF-8, and re-saving from Notepad changes the encoding. If you run into the same problem, try this workaround.
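If the encoding theory above is right (an assumption; I have not verified it), you could also avoid the manual copy by writing UTF-8 directly in the script's final line:
$hclObject | ConvertTo-Json -Depth 12 | Out-File -FilePath $outputFileName -Encoding utf8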
3) Upload the HCL JSON file to Cloud Builder
After generating the custom HCL JSON file for the nested ESXi hosts, upload it to Cloud Builder over SFTP, and set the path to the HCL JSON file in the Excel parameter workbook; it is used later when deploying the management domain.
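For example, with a standard OpenSSH sftp client (the Cloud Builder FQDN below is an assumption; substitute your own):
sftp admin@vcf-mgmt01-cb01.mulab.local
sftp> put all.json
sftp> exit
Then, as the root user on Cloud Builder, move the file into place and fix its permissions and ownership: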
mv /home/admin/all.json /opt/vmware/bringup/tmp/
chmod 644 /opt/vmware/bringup/tmp/all.json
chown vcf_bringup:vcf /opt/vmware/bringup/tmp/all.json
3. NSX Manager Deployment Tips
1) Increase the NSX Manager deployment wait time
During the VCF management domain deployment, the automatic deployment and configuration of the NSX components takes the longest. On underpowered hardware it can run for a very long time and may ultimately fail. You can increase how long Cloud Builder waits for the NSX components, so the deployment can complete before the timeout. SSH to Cloud Builder as the admin user, switch to the root user, and run the following command.
vim /opt/vmware/bringup/webapps/bringup-app/conf/application.properties
Add the following parameter:
nsxt.manager.wait.minutes=100 (or longer)
Then restart the Cloud Builder service.
systemctl restart vcf-bringup
2) Reduce the number of NSX Manager nodes
By default, the NSX deployment rolls out three NSX Manager nodes and configures a full NSX cluster. For testing and learning, when the physical host running the VCF environment is short on resources, you can deploy just a single NSX Manager node, which greatly reduces resource usage.
Convert the Excel parameter workbook into a JSON configuration file, then locate the NSX-related section in the JSON file, shown below.
"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
},
{
"hostname": "vcf-mgmt01-nsx01b",
"ip": "192.168.32.68"
},
{
"hostname": "vcf-mgmt01-nsx01c",
"ip": "192.168.32.69"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01",
Delete the other two NSX Manager nodes from the JSON file, as shown below. This way only one node is deployed.
"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01",
3) Adjust the NSX Manager default storage policy
For the same reason, when hardware performance is limited, you can change the default vSAN storage policy to FTT=0, i.e. no replicas at all, which speeds up the NSX Manager deployment. Once the VCF management domain has been deployed successfully, switch the NSX Manager nodes back to the default vSAN ESA storage policy (RAID 5). Note that this adjustment must be made in the vSphere Client before Cloud Builder deploys the NSX Manager components; a sketch follows below.
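For reference, a hedged PowerCLI SPBM sketch of the FTT=0 change (the policy name varies by environment, and the exact Set-SpbmStoragePolicy parameters should be verified; treat this as a sketch, not a tested procedure):
# Connect to the management domain vCenter Server once it is reachable
Connect-VIServer -Server vcf-mgmt01-vcsa01.mulab.local -User administrator@vsphere.local
# Look up the default vSAN storage policy (check the actual name in your environment)
$policy = Get-SpbmStoragePolicy -Name "vSAN Default Storage Policy"
# Build a rule set with hostFailuresToTolerate = 0, i.e. no replicas
$rule = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 0
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
# Replace the policy's rule set with the FTT=0 rule set
Set-SpbmStoragePolicy -StoragePolicy $policy -AnyOfRuleSets $ruleSet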
4) Reduce the NSX Manager memory reservation
Likewise, when hardware resources are tight, you can set the memory reservation of the NSX Manager node virtual machines to "0", so they do not hold on to their full allocated memory. This can be done as needed in the vSphere Client after the VCF management domain has been deployed successfully, or scripted as shown below.
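A small PowerCLI sketch of the same change, using the node name from this environment:
# Set the memory reservation of the NSX Manager node VM to 0
Get-VM -Name vcf-mgmt01-nsx01a |
    Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -MemReservationGB 0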
4. Preparing the JSON Configuration File
1) Excel parameter workbook
Below is the Excel parameter workbook prepared for this environment, to give you an intuitive picture. License keys have been redacted.
- Credentials worksheet
- Hosts and Networks worksheet
- Deploy Parameters worksheet
2) JSON configuration file
The deployment below imports the configuration as a JSON file; only one NSX Manager node has been kept. License keys have been redacted.
{
"subscriptionLicensing": false,
"skipEsxThumbprintValidation": false,
"managementPoolName": "vcf-mgmt01-np01",
"sddcManagerSpec": {
"secondUserCredentials": {
"username": "vcf",
"password": "Vcf5@password"
},
"ipAddress": "192.168.32.70",
"hostname": "vcf-mgmt01-sddc01",
"rootUserCredentials": {
"username": "root",
"password": "Vcf5@password"
},
"localUserPassword": "Vcf5@password"
},
"sddcId": "vcf-mgmt01",
"esxLicense": "00000-00000-00000-00000-00000",
"taskName": "workflowconfig/workflowspec-ems.json",
"ceipEnabled": false,
"fipsEnabled": false,
"ntpServers": ["192.168.32.3"],
"dnsSpec": {
"subdomain": "mulab.local",
"domain": "mulab.local",
"nameserver": "192.168.32.3"
},
"networkSpecs": [
{
"networkType": "MANAGEMENT",
"subnet": "192.168.32.0/24",
"gateway": "192.168.32.254",
"vlanId": "0",
"mtu": "1500",
"portGroupKey": "vcf-mgmt01-vds01-pg-mgmt",
"standbyUplinks":[],
"activeUplinks":[
"uplink1",
"uplink2"
]
},
{
"networkType": "VMOTION",
"subnet": "192.168.40.0/24",
"gateway": "192.168.40.254",
"vlanId": "40",
"mtu": "9000",
"portGroupKey": "vcf-mgmt01-vds01-pg-vmotion",
"includeIpAddressRanges": [{"endIpAddress": "192.168.40.4", "startIpAddress": "192.168.40.1"}],
"standbyUplinks":[],
"activeUplinks":[
"uplink1",
"uplink2"
]
},
{
"networkType": "VSAN",
"subnet": "192.168.41.0/24",
"gateway": "192.168.41.254",
"vlanId": "41",
"mtu": "9000",
"portGroupKey": "vcf-mgmt01-vds02-pg-vsan",
"includeIpAddressRanges": [{"endIpAddress": "192.168.41.4", "startIpAddress": "192.168.41.1"}],
"standbyUplinks":[],
"activeUplinks":[
"uplink1",
"uplink2"
]
},
{
"networkType": "VM_MANAGEMENT",
"subnet": "192.168.32.0/24",
"gateway": "192.168.32.254",
"vlanId": "0",
"mtu": "9000",
"portGroupKey": "vcf-mgmt01-vds01-pg-vm-mgmt",
"standbyUplinks":[],
"activeUplinks":[
"uplink1",
"uplink2"
]
}
],
"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01",
"nsxtLicense": "33333-33333-33333-33333-33333",
"transportVlanId": 42,
"ipAddressPoolSpec": {
"name": "vcf-mgmt01-tep01",
"description": "ESXi Host Overlay TEP IP Pool",
"subnets":[
{
"ipAddressPoolRanges":[
{
"start": "192.168.42.1",
"end": "192.168.42.8"
}
],
"cidr": "192.168.42.0/24",
"gateway": "192.168.42.254"
}
]
}
},
"vsanSpec": {
"licenseFile": "11111-11111-11111-11111-11111",
"vsanDedup": "false",
"esaConfig": {
"enabled": true
},
"hclFile": "/opt/vmware/bringup/tmp/all.json",
"datastoreName": "vcf-mgmt01-vsan-esa-datastore01"
},
"dvsSpecs": [
{
"dvsName": "vcf-mgmt01-vds01",
"vmnics": [
"vmnic0",
"vmnic1"
],
"mtu": 9000,
"networks":[
"MANAGEMENT",
"VMOTION",
"VM_MANAGEMENT"
],
"niocSpecs":[
{
"trafficType":"VSAN",
"value":"HIGH"
},
{
"trafficType":"VMOTION",
"value":"LOW"
},
{
"trafficType":"VDP",
"value":"LOW"
},
{
"trafficType":"VIRTUALMACHINE",
"value":"HIGH"
},
{
"trafficType":"MANAGEMENT",
"value":"NORMAL"
},
{
"trafficType":"NFS",
"value":"LOW"
},
{
"trafficType":"HBR",
"value":"LOW"
},
{
"trafficType":"FAULTTOLERANCE",
"value":"LOW"
},
{
"trafficType":"ISCSI",
"value":"LOW"
}
],
"nsxtSwitchConfig": {
"transportZones": [
{
"name": "vcf-mgmt01-tz-vlan01",
"transportType": "VLAN"
}
]
}
},
{
"dvsName": "vcf-mgmt01-vds02",
"vmnics": [
"vmnic2",
"vmnic3"
],
"mtu": 9000,
"networks":[
"VSAN"
],
"nsxtSwitchConfig": {
"transportZones": [ {
"name": "vcf-mgmt01-tz-overlay01",
"transportType": "OVERLAY"
},
{
"name": "vcf-mgmt01-tz-vlan02",
"transportType": "VLAN"
}
]
}
}
],
"clusterSpec":
{
"clusterName": "vcf-mgmt01-cluster01",
"clusterEvcMode": "intel-broadwell",
"clusterImageEnabled": true,
"vmFolders": {
"MANAGEMENT": "vcf-mgmt01-fd-mgmt",
"NETWORKING": "vcf-mgmt01-fd-nsx",
"EDGENODES": "vcf-mgmt01-fd-edge"
}
},
"pscSpecs": [
{
"adminUserSsoPassword": "Vcf5@password",
"pscSsoSpec": {
"ssoDomain": "vsphere.local"
}
}
],
"vcenterSpec": {
"vcenterIp": "192.168.32.65",
"vcenterHostname": "vcf-mgmt01-vcsa01",
"licenseFile": "22222-22222-22222-22222-22222",
"vmSize": "small",
"storageSize": "",
"rootVcenterPassword": "Vcf5@password"
},
"hostSpecs": [
{
"association": "vcf-mgmt01-datacenter01",
"ipAddressPrivate": {
"ipAddress": "192.168.32.61"
},
"hostname": "vcf-mgmt01-esxi01",
"credentials": {
"username": "root",
"password": "Vcf5@password"
},
"sshThumbprint": "SHA256:PYxgi8oEfK3j263pHx3InwL1xjIY1rAYN6pR607NWjc",
"sslThumbprint": "FF:A2:88:5B:C3:9A:A0:14:CE:ED:6D:F7:CE:5C:55:B6:2B:6D:35:E8:60:AE:79:79:FD:A3:A7:6C:D7:C1:5C:FA",
"vSwitch": "vSwitch0"
},
{
"association": "vcf-mgmt01-datacenter01",
"ipAddressPrivate": {
"ipAddress": "192.168.32.62"
},
"hostname": "vcf-mgmt01-esxi02",
"credentials": {
"username": "root",
"password": "Vcf5@password"
},
"sshThumbprint": "SHA256:h6HfTvQi/HJxFq48Q4SQH1TevWqNvgEQ1kWARQwpjKw",
"sslThumbprint": "70:1A:62:4F:B6:A9:A2:E2:AC:6E:4D:28:DE:E5:A8:FE:B1:F3:B0:A0:3F:26:93:86:F1:66:B3:A6:44:50:1F:AE",
"vSwitch": "vSwitch0"
},
{
"association": "vcf-mgmt01-datacenter01",
"ipAddressPrivate": {
"ipAddress": "192.168.32.63"
},
"hostname": "vcf-mgmt01-esxi03",
"credentials": {
"username": "root",
"password": "Vcf5@password"
},
"sshThumbprint": "SHA256:rniXpvC4JmiXVq7nd+FkjMrX+oTKCM+CgkvglKATgEE",
"sslThumbprint": "76:84:9E:03:BB:C5:10:FE:72:FC:D3:24:84:71:F5:85:7B:A7:0B:55:7C:7B:0F:BB:83:EA:D7:4F:66:3E:B1:8D",
"vSwitch": "vSwitch0"
},
{
"association": "vcf-mgmt01-datacenter01",
"ipAddressPrivate": {
"ipAddress": "192.168.32.64"
},
"hostname": "vcf-mgmt01-esxi04",
"credentials": {
"username": "root",
"password": "Vcf5@password"
},
"sshThumbprint": "SHA256:b5tRZdaKBbMUGmXPAph5s6XdMKQ5Mh0pjzgM0A16J/g",
"sslThumbprint": "97:83:39:DE:C0:D3:99:06:49:FF:1C:E8:BA:76:60:C6:C1:45:19:BD:C9:10:B0:C2:58:AC:71:12:C8:21:A9:BF",
"vSwitch": "vSwitch0"
}
]
}
5. Deploying the SDDC Management Domain
With all of the above prepared, it is time to deploy the SDDC management domain. Access Cloud Builder from the jump host and log in.
Select the VMware Cloud Foundation platform.
Accept the agreement and click NEXT.
The parameter configuration file is ready; click NEXT.
Upload the JSON configuration file and click NEXT.
The configuration file validation completes; click NEXT.
Confirm to start the SDDC deployment.
The SDDC bring-up process begins.
Go grab a meal and a coffee, and the deployment will be done when you get back.
All tasks of the deployment process (screenshot taken earlier).
DOWNLOAD the deployment report; the deployment took 2 hours.
Click FINISH to open SDDC Manager.
Redirected to vCenter Server; enter the password to log in.
Check the VMware Cloud Foundation version.
6. SDDC Management Domain Details
1) SDDC Manager
- SDDC Manager dashboard
- All workload domains in the SDDC Manager inventory
- vcf-mgmt01 management workload domain summary
- Hosts in the vcf-mgmt01 management workload domain
- Clusters in the vcf-mgmt01 management workload domain
- Component certificates of the vcf-mgmt01 management workload domain
- All hosts in the SDDC Manager inventory
- Release versions available in SDDC Manager
- Network pools created in SDDC Manager
- SDDC Manager configuration backups
- Component password management in SDDC Manager
2) NSX Manager
- NSX system configuration overview
- NSX node appliances
- NSX transport nodes
- NSX profiles
- NSX transport zones
- NSX configuration backups
3) vCenter Server
- Hosts and clusters of the VCF management domain
- vSAN ESA storage architecture of the VCF management domain
- Component virtual machines of the VCF management domain
- vSAN datastore used by the VCF management domain
- Distributed switch configuration of the VCF management domain
- Network configuration of the VCF management domain ESXi hosts