28-replicationcontroller

Posted by cucytoman on 2019-10-22

Source: concepts/workloads/controllers/replicationcontroller/

Note: A Deployment that configures a ReplicaSet is now the recommended way to set up replication.

A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.

How a ReplicationController Works

If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated. For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, you should use a ReplicationController even if your application requires only a single pod. A ReplicationController is similar to a process supervisor, but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods across multiple nodes.

ReplicationController is often abbreviated to “rc” or “rcs” in discussion, and as a shortcut in kubectl commands.
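
For example, these two commands list the same objects:

kubectl get replicationcontrollers
kubectl get rc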

A simple case is to create one ReplicationController object to reliably run one instance of a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated service, such as web servers.

Running an example ReplicationController

This example ReplicationController config runs three copies of the nginx web server.

controllers/replication.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Run the example by downloading the example file and then running this command:

kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
replicationcontroller/nginx created

Check on the status of the ReplicationController using this command:

kubectl describe replicationcontrollers/nginx

Name:        nginx
Namespace:   default
Selector:    app=nginx
Labels:      app=nginx
Annotations:    <none>
Replicas:    3 current / 3 desired
Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=nginx
  Containers:
   nginx:
    Image:              nginx
    Port:               80/TCP
    Environment:        <none>
    Mounts:             <none>
  Volumes:              <none>
Events:
  FirstSeen       LastSeen     Count    From                        SubobjectPath    Type      Reason              Message
  ---------       --------     -----    ----                        -------------    ----      ------              -------
  20s             20s          1        {replication-controller }                    Normal    SuccessfulCreate    Created pod: nginx-qrm3m
  20s             20s          1        {replication-controller }                    Normal    SuccessfulCreate    Created pod: nginx-3ntk0
  20s             20s          1        {replication-controller }                    Normal    SuccessfulCreate    Created pod: nginx-4ok8v

Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:

Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed

To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:

pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
nginx-3ntk0 nginx-4ok8v nginx-qrm3m

Here, the selector is the same as the selector for the ReplicationController (seen in the kubectl describe output), and in a different form in replication.yaml. The --output=jsonpath option specifies an expression that just gets the name from each pod in the returned list.

Writing a ReplicationController Spec

As with all other Kubernetes config, a ReplicationController needs apiVersion, kind, and metadata fields. For general information about working with config files, see object management.

A ReplicationController also needs a .spec section.

Pod Template

The .spec.template is the only required field of the .spec.

The .spec.template is a pod template. It has exactly the same schema as a pod, except it is nested and does not have an apiVersion or kind.

In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See pod selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.
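
As an illustration, here is the relevant fragment of a manifest with the policy set explicitly (a minimal sketch; omitting restartPolicy has the same effect, since Always is the default):

spec:
  template:
    spec:
      restartPolicy: Always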

For local container restarts, ReplicationControllers delegate to an agent on the node, for example the Kubelet or Docker.

Labels on the ReplicationController

The ReplicationController can itself have labels (.metadata.labels). Typically, you would set these the same as the .spec.template.metadata.labels; if .metadata.labels is not specified then it defaults to .spec.template.metadata.labels. However, they are allowed to be different, and the .metadata.labels do not affect the behavior of the ReplicationController.

Pod Selector

The .spec.selector field is a label selector. A ReplicationController manages all the pods with labels that match the selector. It does not distinguish between pods that it created or deleted and pods that another person or process created or deleted. This allows the ReplicationController to be replaced without affecting the running pods.

If specified, the .spec.template.metadata.labels must be equal to the .spec.selector, or it will be rejected by the API. If .spec.selector is unspecified, it will be defaulted to .spec.template.metadata.labels.
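
A sketch of how these label-related fields line up, based on the nginx example above:

spec:
  selector:
    app: nginx          # must equal the template labels below
  template:
    metadata:
      labels:
        app: nginx      # used as the default selector if .spec.selector is omitted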

Also you should not normally create any pods whose labels match this selector, either directly, with another ReplicationController, or with another controller such as Job. If you do so, the ReplicationController thinks that it created the other pods. Kubernetes does not stop you from doing this.

If you do end up with multiple controllers that have overlapping selectors, you will have to manage the deletion yourself (see below).

Multiple Replicas

You can specify how many pods should run concurrently by setting .spec.replicas to the number of pods you would like to have running concurrently. The number running at any time may be higher or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully shut down and a replacement starts early.

If you do not specify .spec.replicas, then it defaults to 1.

Working with ReplicationControllers

Deleting a ReplicationController and its Pods

To delete a ReplicationController and all its pods, use kubectl delete. Kubectl will scale the ReplicationController to zero and wait for it to delete each pod before deleting the ReplicationController itself. If this kubectl command is interrupted, it can be restarted.
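
For example, deleting the nginx ReplicationController from the example above:

kubectl delete replicationcontrollers/nginx
replicationcontroller "nginx" deleted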

When using the REST API or go client library, you need to do the steps explicitly (scale replicas to 0, wait for pod deletions, then delete the ReplicationController).

Deleting just a ReplicationController

You can delete a ReplicationController without affecting any of its pods.

Using kubectl, specify the --cascade=false option to kubectl delete.
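
For example, the following deletes the controller but orphans the pods it manages (newer kubectl releases spell this option --cascade=orphan):

kubectl delete rc nginx --cascade=false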

When using the REST API or go client library, simply delete the ReplicationController object.

Once the original is deleted, you can create a new ReplicationController to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old pods. However, it will not make any effort to make existing pods match a new, different pod template. To update pods to a new spec in a controlled way, use a rolling update.

Isolating pods from a ReplicationController

Pods may be removed from a ReplicationController’s target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
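
For example, relabeling one of the nginx pods from the earlier output takes it out of the controller’s target set, and the controller starts a replacement (the pod name and new label value are illustrative):

kubectl label pod nginx-3ntk0 app=nginx-debug --overwrite
pod/nginx-3ntk0 labeled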

Common usage patterns

Rescheduling

As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent).

Scaling

The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the replicas field.
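
For example, scaling the nginx controller from the example above:

kubectl scale rc nginx --replicas=5
replicationcontroller/nginx scaled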

Rolling updates

The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.

As explained in #1353, the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.

Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.

The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.

Rolling update is implemented in the client tool kubectl rolling-update. Visit the kubectl rolling-update task for more concrete examples.
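
A minimal sketch of such an update, assuming a newer image tag such as nginx:1.9.1 (note that kubectl rolling-update has since been deprecated in favor of Deployments):

kubectl rolling-update nginx --image=nginx:1.9.1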

Multiple release tracks

In addition to running multiple releases of an application while a rolling update is in progress, it’s common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.

For instance, a service might target all pods with tier in (frontend), environment in (prod). Now say you have 10 replicated pods that make up this tier. But you want to be able to ‘canary’ a new version of this component. You could set up a ReplicationController with replicas set to 9 for the bulk of the replicas, with labels tier=frontend, environment=prod, track=stable, and another ReplicationController with replicas set to 1 for the canary, with labels tier=frontend, environment=prod, track=canary. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc.
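
A sketch of the two controllers for this canary setup (the names and images are illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: example/frontend:stable
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: example/frontend:canary

A service selecting only tier=frontend, environment=prod then spans both tracks.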

Using ReplicationControllers with Services

Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic goes to the old version, and some goes to the new version.

A ReplicationController will never terminate on its own, but it isn’t expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.

Writing programs for Replication

Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the RabbitMQ work queues, as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.

Responsibilities of the ReplicationController

The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, readiness and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in #492), which would change its replicas field. We will not add scheduling policies (for example, spreading) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation (#170).

The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The “macro” operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like Asgard managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.

API Object

ReplicationController is a top-level resource in the Kubernetes REST API. More details about the API object can be found at: ReplicationController API object.

Alternatives to ReplicationController

ReplicaSet

ReplicaSet is the next-generation ReplicationController that supports the new set-based label selector. It’s mainly used by Deployment as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.
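
For comparison, a minimal ReplicaSet sketch using a set-based matchExpressions selector, which a ReplicationController cannot express:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx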

Deployment (Recommended)

Deployment is a higher-level API object that updates its underlying Replica Sets and their Pods in a similar fashion as kubectl rolling-update. Deployments are recommended if you want this rolling update functionality, because unlike kubectl rolling-update, they are declarative, server-side, and have additional features.
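
A minimal Deployment equivalent of the ReplicationController example at the top of this page:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80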

Bare Pods

Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker).

Job

Use a Job instead of a ReplicationController for pods that are expected to terminate on their own (that is, batch jobs).

DaemonSet

Use a DaemonSet instead of a ReplicationController for pods that provide a machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied to a machine lifetime: the pod needs to be running on the machine before other pods start, and are safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

For more information

Read Run Stateless Application Replication Controller.

This work is licensed under the CC License; reproduction must credit the author and include a link to this article.