[Source code analysis] The parallel distributed framework Celery: worker startup (2)

Posted by 羅西的思考 on 2021-04-01


0x00 Abstract

Celery is a simple, flexible, and reliable distributed system for processing large volumes of messages. It is an asynchronous task queue focused on real-time processing, and it also supports task scheduling. Celery delegates the actual task processing to its Worker component.

The previous article covered the first half of the Celery startup process; this article continues with the second half.

Other related articles:

[Source code analysis] Message queue Kombu: mailbox

[Source code analysis] Message queue Kombu: Hub

[Source code analysis] Message queue Kombu: Consumer

[Source code analysis] Message queue Kombu: Producer

[Source code analysis] Message queue Kombu: the startup process

[Source code analysis] Message queue Kombu: basic architecture

as well as:


[Source code analysis] The parallel distributed framework Celery: architecture (2)

[Source code analysis] The parallel distributed framework Celery: worker startup (1)

0x01 Recap

As mentioned in the previous article, after a series of steps we have formally arrived at the Worker logic. In this article we continue with the startup process of the "worker as a program".

                                     +----------------------+
      +----------+                   |  @cached_property    |
      |   User   |                   |      Worker          |
      +----+-----+            +--->  |                      |
           |                  |      |                      |
           |  worker_main     |      |  Worker application  |
           |                  |      |  celery/app/base.py  |
           v                  |      +----------------------+
 +---------+------------+     |
 |        Celery        |     |
 |                      |     |
 |  Celery application  |     |
 |  celery/app/base.py  |     |
 |                      |     |
 +---------+------------+     |
           |                  |
           |  celery.main     |
           |                  |
           v                  |
 +---------+------------+     |
 |  @click.pass_context |     |
 |       celery         |     |
 |                      |     |
 |                      |     |
 |    CeleryCommand     |     |
 | celery/bin/celery.py |     |
 |                      |     |
 +---------+------------+     |
           |                  |
           |                  |
           |                  |
           v                  |
+----------+------------+     |
|   @click.pass_context |     |
|        worker         |     |
|                       |     |
|                       |     |
|     WorkerCommand     |     |
| celery/bin/worker.py  |     |
+-----------+-----------+     |
            |                 |
            +-----------------+


0x02 Worker as a program

The worker here is the main body of the business logic and deserves a detailed discussion.

The code is now in celery/apps/worker.py.

class Worker(WorkController):
    """Worker as a program."""

Instantiation goes through the `__init__` of the base class WorkController.

The initialization essentially does the following:

  • the loader loads the various configurations;
  • setup_defaults applies the default settings;
  • setup_instance performs the actual construction, including configuring the queues that hold messages;
  • the Worker's internal sub-modules are created via a Blueprint.

The code is located in celery/worker/worker.py.

class WorkController:
    """Unmanaged worker instance."""

    app = None
    pidlock = None
    blueprint = None
    pool = None
    semaphore = None

    #: contains the exit code if a :exc:`SystemExit` event is handled.
    exitcode = None

    class Blueprint(bootsteps.Blueprint):
        """Worker bootstep blueprint."""

        name = 'Worker'
        default_steps = {
            'celery.worker.components:Hub',
            'celery.worker.components:Pool',
            'celery.worker.components:Beat',
            'celery.worker.components:Timer',
            'celery.worker.components:StateDB',
            'celery.worker.components:Consumer',
            'celery.worker.autoscale:WorkerComponent',
        }

    def __init__(self, app=None, hostname=None, **kwargs):
        self.app = app or self.app                      # set the app attribute
        self.hostname = default_nodename(hostname)      # generate the node hostname
        self.startup_time = datetime.utcnow()
        self.app.loader.init_worker()                   # call app.loader's init_worker()
        self.on_before_init(**kwargs)                   # pre-init hook
        self.setup_defaults(**kwargs)                   # apply the default settings
        self.on_after_init(**kwargs)
        self.setup_instance(**self.prepare_args(**kwargs))  # build the instance

At this point the init_worker method of app.loader is called.

2.1 loader

Here app.loader is the loader attribute set during Celery initialization; its default value is celery.loaders.app:AppLoader. Its role is to import the various configurations.

The loader property is defined in celery/app/base.py as follows:

@cached_property
def loader(self):
    """Current loader instance."""
    return get_loader_cls(self.loader_cls)(app=self)

get_loader_cls is:

def get_loader_cls(loader):
    """Get loader class by name/alias."""
    return symbol_by_name(loader, LOADER_ALIASES, imp=import_from_cwd)

The loader instance here is AppLoader; the init_worker method of that class is then called:

def init_worker(self):
    if not self.worker_initialized:             # skip if this worker was already initialized
        self.worker_initialized = True          # mark as initialized
        self.import_default_modules()           # import the default modules
        self.on_worker_init()

import_default_modules is shown below; it mainly imports the modules that the app configuration asks to import:

def import_default_modules(self):
    responses = signals.import_modules.send(sender=self.app)
    # Prior to this point loggers are not yet set up properly, need to
    #   check responses manually and re-raise exceptions if any, otherwise
    #   they'll be silenced, making it incredibly difficult to debug.
    for _, response in responses:   # import the modules the project requires
        if isinstance(response, Exception):
            raise response
    return [self.import_task_module(m) for m in self.default_modules]
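The check-and-reraise pattern above can be sketched with a hypothetical helper (not Celery's actual code): before logging is configured, a swallowed receiver exception would be invisible, so each (receiver, result) pair is checked explicitly.

```python
def check_responses(responses):
    """Re-raise any exception returned by a signal receiver.

    Each item is a (receiver, result) pair; an Exception result means
    the receiver failed and the failure must not be silenced.
    """
    for _, response in responses:
        if isinstance(response, Exception):
            raise response
    return [result for _, result in responses]
```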

2.2 setup_defaults in worker

Continuing with the Worker initialization, self.setup_defaults assigns values to the parameters needed at runtime.

After this step, self.pool_cls is: <class 'celery.concurrency.prefork.TaskPool'>

The code is as follows:

def setup_defaults(self, concurrency=None, loglevel='WARN', logfile=None,
                   task_events=None, pool=None, consumer_cls=None,
                   timer_cls=None, timer_precision=None,
                   autoscaler_cls=None,
                   pool_putlocks=None,
                   pool_restarts=None,
                   optimization=None, O=None,  # O maps to -O=fair
                   statedb=None,
                   time_limit=None,
                   soft_time_limit=None,
                   scheduler=None,
                   pool_cls=None,              # XXX use pool
                   state_db=None,              # XXX use statedb
                   task_time_limit=None,       # XXX use time_limit
                   task_soft_time_limit=None,  # XXX use soft_time_limit
                   scheduler_cls=None,         # XXX use scheduler
                   schedule_filename=None,
                   max_tasks_per_child=None,
                   prefetch_multiplier=None, disable_rate_limits=None,
                   worker_lost_wait=None,
                   max_memory_per_child=None, **_kw):
    either = self.app.either                # read from the config; fall back to the given default
    self.loglevel = loglevel                # log level
    self.logfile = logfile                  # log file

    self.concurrency = either('worker_concurrency', concurrency)        # number of worker processes
    self.task_events = either('worker_send_task_events', task_events)   # whether to send task events
    self.pool_cls = either('worker_pool', pool, pool_cls)               # worker pool class
    self.consumer_cls = either('worker_consumer', consumer_cls)         # consumer class
    self.timer_cls = either('worker_timer', timer_cls)                  # timer class
    self.timer_precision = either(
        'worker_timer_precision', timer_precision,
    )
    self.optimization = optimization or O                               # optimization setting (-O)
    self.autoscaler_cls = either('worker_autoscaler', autoscaler_cls)
    self.pool_putlocks = either('worker_pool_putlocks', pool_putlocks)
    self.pool_restarts = either('worker_pool_restarts', pool_restarts)
    self.statedb = either('worker_state_db', statedb, state_db)         # persistent worker state storage
    self.schedule_filename = either(
        'beat_schedule_filename', schedule_filename,
    )                                                                   # beat schedule file
    self.scheduler = either('beat_scheduler', scheduler, scheduler_cls) # scheduler class
    self.time_limit = either(
        'task_time_limit', time_limit, task_time_limit)                 # hard task time limit
    self.soft_time_limit = either(
        'task_soft_time_limit', soft_time_limit, task_soft_time_limit,
    )
    self.max_tasks_per_child = either(
        'worker_max_tasks_per_child', max_tasks_per_child,
    )                                                                   # max tasks handled per child process
    self.max_memory_per_child = either(
        'worker_max_memory_per_child', max_memory_per_child,
    )                                                                   # max memory per child process
    self.prefetch_multiplier = int(either(
        'worker_prefetch_multiplier', prefetch_multiplier,
    ))
    self.disable_rate_limits = either(
        'worker_disable_rate_limits', disable_rate_limits,
    )
    self.worker_lost_wait = either('worker_lost_wait', worker_lost_wait)
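As a rough illustration of the `either` fallback described above (a simplified stand-in, not Celery's actual implementation): the configured value is preferred, otherwise the first non-None default is used.

```python
def either(conf, key, *fallbacks):
    # Prefer the configured value; otherwise take the first non-None fallback.
    value = conf.get(key)
    if value is not None:
        return value
    for fallback in fallbacks:
        if fallback is not None:
            return fallback
    return None

conf = {'worker_concurrency': 4}
```

With this sketch, `either(conf, 'worker_concurrency', 8)` returns the configured 4, while `either(conf, 'worker_pool', None, 'prefork')` falls back to 'prefork'.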

2.3 setup_instance in worker

Once that completes, self.setup_instance is executed to build the instance:

def setup_instance(self, queues=None, ready_callback=None, pidfile=None,
                   include=None, use_eventloop=None, exclude_queues=None,
                   **kwargs):
    self.pidfile = pidfile                              # pidfile
    self.setup_queues(queues, exclude_queues)           # select which queues to consume from, and which not to
    self.setup_includes(str_to_list(include))           # collect all the task modules

    # Set default concurrency
    if not self.concurrency:                            # if no value was set
        try:
            self.concurrency = cpu_count()              # default to the number of CPUs
        except NotImplementedError:
            self.concurrency = 2                        # fall back to 2 if it cannot be determined

    # Options
    self.loglevel = mlevel(self.loglevel)               # set the log level
    self.ready_callback = ready_callback or self.on_consumer_ready  # set the ready callback

    # this connection won't establish, only used for params
    self._conninfo = self.app.connection_for_read()
    self.use_eventloop = (
        self.should_use_eventloop() if use_eventloop is None
        else use_eventloop
    )                                                   # decide whether to use the event loop
    self.options = kwargs

    signals.worker_init.send(sender=self)               # send the worker_init signal

    # Initialize bootsteps
    self.pool_cls = _concurrency.get_implementation(self.pool_cls)  # resolve the pool implementation class
    self.steps = []                                     # the steps to run
    self.on_init_blueprint()
    self.blueprint = self.Blueprint(
        steps=self.app.steps['worker'],
        on_start=self.on_start,
        on_close=self.on_close,
        on_stopped=self.on_stopped,
    )                                                  # initialize the blueprint
    self.blueprint.apply(self, **kwargs)               # call apply on the initialized Blueprint
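The concurrency default above can be sketched like this, using os.cpu_count as a stand-in for the cpu_count helper used in the real code:

```python
import os

def default_concurrency(requested=None):
    # Use the requested value if given; otherwise fall back to the CPU
    # count, and to 2 when the CPU count cannot be determined.
    if requested:
        return requested
    try:
        return os.cpu_count() or 2
    except NotImplementedError:
        return 2
```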

The work done by setup_queues and setup_includes is as follows:

def setup_queues(self, include, exclude=None):
    include = str_to_list(include)
    exclude = str_to_list(exclude)
    try:
        self.app.amqp.queues.select(include)        # consume from these queues
    except KeyError as exc:
        raise ImproperlyConfigured(
            SELECT_UNKNOWN_QUEUE.strip().format(include, exc))
    try:
        self.app.amqp.queues.deselect(exclude)      # do not consume tasks from these queues
    except KeyError as exc:
        raise ImproperlyConfigured(
            DESELECT_UNKNOWN_QUEUE.strip().format(exclude, exc))
    if self.app.conf.worker_direct:
        self.app.amqp.queues.select_add(worker_direct(self.hostname))  # add the worker's direct queue

def setup_includes(self, includes):
    # Update celery_include to have all known task modules, so that we
    # ensure all task modules are imported in case an execv happens.
    prev = tuple(self.app.conf.include)                             # task modules from the configuration
    if includes:
        prev += tuple(includes)
        [self.app.loader.import_task_module(m) for m in includes]   # import the given task modules via the loader
    self.include = includes
    task_modules = {task.__class__.__module__
                    for task in values(self.app.tasks)}             # modules of the already registered tasks
    self.app.conf.include = tuple(set(prev) | task_modules)         # deduplicate and reset include
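The deduplication at the end of setup_includes can be illustrated with a hypothetical helper (module names below are made up for the example):

```python
def merge_includes(configured, extra, registered_task_modules):
    # Combine the configured includes, the extra (CLI) includes, and the
    # modules of the already registered tasks; a set removes duplicates.
    prev = tuple(configured)
    if extra:
        prev += tuple(extra)
    return tuple(set(prev) | set(registered_task_modules))

merged = merge_includes(('proj.tasks',), ('proj.extra',),
                        {'proj.tasks', 'proj.periodic'})
```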

2.3.1 setup_queues

self.app.amqp.queues.select(include) sets up the queues.

The call stack is:

__init__, amqp.py:59
Queues, amqp.py:259
queues, amqp.py:572
__get__, objects.py:43
setup_queues, worker.py:172
setup_instance, worker.py:106
__init__, worker.py:99
worker, worker.py:326
caller, base.py:132
new_func, decorators.py:21
invoke, core.py:610
invoke, core.py:1066
invoke, core.py:1259
main, core.py:782
start, base.py:358
worker_main, base.py:374

The code is located in celery/app/amqp.py:

class Queues(dict):
    """Queue name⇒ declaration mapping.
    """

    #: If set, this is a subset of queues to consume from.
    #: The rest of the queues are then used for routing only.
    _consume_from = None

    def __init__(self, queues=None, default_exchange=None,
                 create_missing=True, autoexchange=None,
                 max_priority=None, default_routing_key=None):
        dict.__init__(self)
        self.aliases = WeakValueDictionary()
        self.default_exchange = default_exchange
        self.default_routing_key = default_routing_key
        self.create_missing = create_missing
        self.autoexchange = Exchange if autoexchange is None else autoexchange
        self.max_priority = max_priority
        if queues is not None and not isinstance(queues, Mapping):
            queues = {q.name: q for q in queues}
        queues = queues or {}
        for name, q in queues.items():
            self.add(q) if isinstance(q, Queue) else self.add_compat(name, **q)
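The select/deselect semantics used by setup_queues can be sketched with a minimal stand-in (this is not Celery's full Queues class, only the consume-from-subset idea):

```python
class MiniQueues(dict):
    """Minimal sketch: a name -> queue mapping plus a consume-from subset."""

    def __init__(self, names):
        super().__init__({name: name for name in names})
        self._consume_from = None

    def select(self, include):
        if include:
            # Raises KeyError for unknown queue names, as in setup_queues.
            self._consume_from = {name: self[name] for name in include}

    def deselect(self, exclude):
        if exclude:
            if self._consume_from is None:
                self._consume_from = dict(self)
            for name in exclude:
                self._consume_from.pop(name, None)

    @property
    def consume_from(self):
        # With no selection, the worker consumes from every known queue.
        return self._consume_from if self._consume_from is not None else dict(self)
```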

So the flow so far is:

                                     +----------------------+
      +----------+                   |  @cached_property    |
      |   User   |                   |      Worker          |
      +----+-----+            +--->  |                      |
           |                  |      |                      |
           |  worker_main     |      |  Worker application  |
           |                  |      |  celery/app/base.py  |
           v                  |      +----------+-----------+
 +---------+------------+     |                 |
 |        Celery        |     |                 |
 |                      |     |                 |
 |  Celery application  |     |                 v
 |  celery/app/base.py  |     |  +--------------+--------------+    +---> app.loader.init_worker
 |                      |     |  | class Worker(WorkController)|    |
 +---------+------------+     |  |                             |    |
           |                  |  |                             +--------> setup_defaults
           |  celery.main     |  |    Worker as a program      |    |
           |                  |  |   celery/apps/worker.py     |    |
           v                  |  +-----------------------------+    +---> setup_instance +-----> setup_queues  +------>  app.amqp.queues
 +---------+------------+     |
 |  @click.pass_context |     |
 |       celery         |     |
 |                      |     |
 |                      |     |
 |    CeleryCommand     |     |
 | celery/bin/celery.py |     |
 |                      |     |
 +---------+------------+     |
           |                  |
           |                  |
           |                  |
           v                  |
+----------+------------+     |
|   @click.pass_context |     |
|        worker         |     |
|                       |     |
|                       |     |
|     WorkerCommand     |     |
| celery/bin/worker.py  |     |
+-----------+-----------+     |
            |                 |
            +-----------------+


2.4 Blueprint

Next, the Worker's internal sub-modules are created.

During worker initialization, the execution order of the internal sub-modules is defined by a Blueprint class and sorted according to the dependencies between the modules (the dependencies are in fact organized into a DAG). The operation performed here is loading the default_steps of the Blueprint class.

The logic is:

  • self.claim_steps collects the defined steps;
  • _finalize_steps gathers the step dependencies, performs a topological sort, and returns the steps ordered by dependency;
  • the dependency-sorted steps are returned;
  • once the steps are generated, they are invoked to create the components;
  • at the end of apply, the worker is returned. Once all the classes have been initialized, the worker is fully initialized.

The code is in celery/worker/worker.py; this Blueprint class is defined inside WorkController.

class Blueprint(bootsteps.Blueprint):
    """Worker bootstep blueprint."""

    name = 'Worker'
    default_steps = {
        'celery.worker.components:Hub',
        'celery.worker.components:Pool',
        'celery.worker.components:Beat',
        'celery.worker.components:Timer',
        'celery.worker.components:StateDB',
        'celery.worker.components:Consumer',
        'celery.worker.autoscale:WorkerComponent',
    }

Each module in the Celery worker is defined as a step, as follows:

  • Timer: the Timer used to run timed tasks; not the same as the timer in Consumer;

  • Hub: the wrapper object around the event loop;

  • Pool: constructs the execution pool (processes/threads/coroutines);

  • Autoscaler: automatically grows or shrinks the number of work units in the pool;

  • StateDB: persists worker data across restarts (restarts only);

  • Autoreloader: automatically reloads modified code;

  • Beat: creates the Beat process, running it as a child process (unlike running with the beat argument on the command line);

Dependencies in Celery are declared mainly through a few class attributes: requires, label, condition, and last. For example, Hub depends on Timer, and Consumer runs last.

class Hub(bootsteps.StartStopStep):
    """Worker starts the event loop."""
    requires = (Timer,)

class Consumer(bootsteps.StartStopStep):
    """Bootstep starting the Consumer blueprint."""
    last = True

2.5 The Blueprint base class

apply is inherited from the base class, located in celery/bootsteps.py.

class Blueprint:
    """Blueprint containing bootsteps that can be applied to objects.
    """

    GraphFormatter = StepFormatter
    name = None
    state = None
    started = 0
    default_steps = set()
    state_to_name = {
        0: 'initializing',
        RUN: 'running',
        CLOSE: 'closing',
        TERMINATE: 'terminating',
    }

    def __init__(self, steps=None, name=None,
                 on_start=None, on_close=None, on_stopped=None):
        self.name = name or self.name or qualname(type(self))
        self.types = set(steps or []) | set(self.default_steps)
        self.on_start = on_start
        self.on_close = on_close
        self.on_stopped = on_stopped
        self.shutdown_complete = Event()
        self.steps = {}

The apply code is shown below; it is executed during WorkController initialization.

def apply(self, parent, **kwargs):
    """Apply the steps in this blueprint to an object.

    This will apply the ``__init__`` and ``include`` methods
    of each step, with the object as argument::

        step = Step(obj)
        ...
        step.include(obj)

    For :class:`StartStopStep` the services created
    will also be added to the objects ``steps`` attribute.
    """
    self._debug('Preparing bootsteps.')
    order = self.order = []                         # holds the step classes to run, in order
    steps = self.steps = self.claim_steps()         # collect the defined steps as a {step.name: step} mapping

    self._debug('Building graph...')
    for S in self._finalize_steps(steps):                   # steps returned in dependency order
        step = S(parent, **kwargs)                          # instantiate the step
        steps[step.name] = step                             # key is step.name, value is the step instance
        order.append(step)                                  # append to the order list
    self._debug('New boot order: {%s}',
                ', '.join(s.alias for s in self.order))
    for step in order:                             # walk the order list
        step.include(parent)                       # each step runs include(), which runs create()
    return self

2.5.1 Collecting the defined steps

self.claim_steps collects the defined steps:

def claim_steps(self):
    return dict(self.load_step(step) for step in self.types)  # import the classes in types; return a {name: class} dict

def load_step(self, step):
    step = symbol_by_name(step)
    return step.name, step
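symbol_by_name resolves a 'module:attribute' string to the object it names. A simplified sketch of the idea (the real helper also handles aliases and bare module names):

```python
import importlib

def resolve_symbol(name):
    # Split 'package.module:Attr', import the module, fetch the attribute.
    module_name, _, attr = name.partition(':')
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# e.g. resolve_symbol('celery.worker.components:Hub') would return the Hub step class.
```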

self.types can be passed in at initialization:

def __init__(self, steps=None, name=None,
             on_start=None, on_close=None, on_stopped=None):
    self.name = name or self.name or qualname(type(self))
    self.types = set(steps or []) | set(self.default_steps)
    self.on_start = on_start
    self.on_close = on_close
    self.on_stopped = on_stopped
    self.shutdown_complete = Event()
    self.steps = {}

Since no steps were passed in when the Blueprint was initialized, types is just the default_steps attribute, i.e. the default_steps of the Blueprint class inside WorkController:

default_steps = {
    'celery.worker.components:Hub',
    'celery.worker.components:Pool',
    'celery.worker.components:Beat',
    'celery.worker.components:Timer',
    'celery.worker.components:StateDB',
    'celery.worker.components:Consumer',
    'celery.worker.autoscale:WorkerComponent',
}

2.5.2 _finalize_steps

_finalize_steps gathers the step dependencies, performs a topological sort, and returns the steps ordered by dependency.

Let's focus on self._finalize_steps(steps), where the sorting is done:

def _finalize_steps(self, steps):
    last = self._find_last()                        # find the step with last=True; as noted above, this is Consumer
    self._firstpass(steps)                          # add each step and its required dependencies to steps
    it = ((C, C.requires) for C in values(steps))   # shaped like ((a, [b, c]), (b, [e]))
    G = self.graph = DependencyGraph(               # build the dependency graph: vertices plus edges
        it, formatter=self.GraphFormatter(root=last),
    )
    if last:                                        # 'last' must run after every other module,
        for obj in G:                               # so add an edge from it to each of them
            if obj != last:
                G.add_edge(last, obj)
    try:
        return G.topsort()                          # topological sort yields the ordered list of steps
    except KeyError as exc:
        raise KeyError('unknown bootstep: %s' % exc)

First a DependencyGraph is defined; vertices and edges are added according to the dependencies:

  • each vertex is a step;
  • the edges are the lists built from the step dependencies;
  • the structure is {step1: [step2, step3]}.

Let's look at the initialization of the DependencyGraph class:

@python_2_unicode_compatible
class DependencyGraph(object):

    def __init__(self, it=None, formatter=None):
        self.formatter = formatter or GraphFormatter()
        self.adjacent = {}                                              # stores the graph structure
        if it is not None:
            self.update(it)

    def update(self, it):
        tups = list(it)
        for obj, _ in tups:
            self.add_arc(obj)
        for obj, deps in tups:
            for dep in deps:
                self.add_edge(obj, dep)

    def add_arc(self, obj):                                               # add a vertex
        self.adjacent.setdefault(obj, [])

    def add_edge(self, A, B):                                             # add an edge
        self[A].append(B)
The graph structure now captures the dependency relationships between the modules. Celery implements its own small DAG here and relies on topological sorting to derive the execution order: the sort resolves the dependencies and produces the order in which the modules run, and the modules are then executed in that order.
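The idea of the dependency sort can be sketched with a small depth-first topological sort over a {step: [dependencies]} mapping (the step names below are illustrative):

```python
def topsort(depends_on):
    """Return the nodes with every dependency placed before its dependents."""
    order, done = [], set()

    def visit(node, chain=()):
        if node in done:
            return
        if node in chain:
            raise ValueError('dependency cycle at %r' % (node,))
        for dep in depends_on.get(node, ()):
            visit(dep, chain + (node,))
        done.add(node)
        order.append(node)

    for node in depends_on:
        visit(node)
    return order

graph = {'Timer': [], 'Hub': ['Timer'], 'Pool': ['Hub'],
         'Consumer': ['Pool', 'Hub', 'Timer']}
order = topsort(graph)
```

Running this yields Timer before Hub, Hub before Pool, and Consumer last, mirroring the boot order seen later in this article.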

2.5.3 Steps ordered by dependency

This part returns the steps sorted according to their dependencies.

These classes depend on one another; for example, the Hub class:

class Hub(bootsteps.StartStopStep):
    """Worker starts the event loop."""

    requires = (Timer,)

The Hub class depends on the Timer class, so the job of _finalize_steps is to bring in the depended-upon classes first.

Now back to the order list: it holds all the classes once the dependency ordering has been resolved, and all of these step classes inherit, directly or indirectly, from bootsteps.Step.

@with_metaclass(StepType)
class Step(object):
    ...

This class uses a metaclass; let's continue into StepType:

class StepType(type):
    """Meta-class for steps."""

    name = None
    requires = None

    def __new__(cls, name, bases, attrs):
        module = attrs.get('__module__')                                # fetch the __module__ attribute
        qname = '{0}.{1}'.format(module, name) if module else name      # qname is 'module.name' if __module__ exists, else just name
        attrs.update(
            __qualname__=qname,
            name=attrs.get('name') or qname,
        )                                                               # update name and __qualname__ on the class being created
        return super(StepType, cls).__new__(cls, name, bases, attrs)

    def __str__(self):
        return bytes_if_py2(self.name)

    def __repr__(self):
        return bytes_if_py2('step:{0.name}{{{0.requires!r}}}'.format(self))
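The effect of StepType can be reproduced with a small modern-Python sketch: the metaclass fills in a qualified default name for every step class it creates, without the subclasses declaring it themselves.

```python
class StepType(type):
    def __new__(cls, name, bases, attrs):
        # Derive a 'module.ClassName' default name at class-creation time.
        module = attrs.get('__module__')
        qname = '{0}.{1}'.format(module, name) if module else name
        attrs.update(__qualname__=qname, name=attrs.get('name') or qname)
        return super().__new__(cls, name, bases, attrs)

class Step(metaclass=StepType):
    pass

class Timer(Step):
    pass

# Timer.name is now '<module>.Timer' even though Timer never set it.
```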

This is Python metaclass programming: by controlling the attribute values while the class is being created, the class's attributes are shaped as desired. After this, the Step's include method is called:

def _should_include(self, parent):
    if self.include_if(parent):
        return True, self.create(parent)
    return False, None

def include(self, parent):
    return self._should_include(parent)[0]

If the step inherits from StartStopStep, the include method called is instead:

def include(self, parent):
    inc, ret = self._should_include(parent)
    if inc:
        self.obj = ret
        parent.steps.append(self)
    return inc
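The include()/create() protocol of a StartStopStep can be sketched as follows (the dummy parent and the 'service' return value are for illustration only):

```python
class MiniStartStopStep:
    """include() decides whether the step applies, builds its service via
    create(), and registers itself on the parent's steps list."""

    def include_if(self, parent):
        return True

    def create(self, parent):
        return 'service'

    def _should_include(self, parent):
        if self.include_if(parent):
            return True, self.create(parent)
        return False, None

    def include(self, parent):
        inc, ret = self._should_include(parent)
        if inc:
            self.obj = ret
            parent.steps.append(self)
        return inc

class Parent:
    def __init__(self):
        self.steps = []
```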

After sorting, the variables look like this, for example:

order = {list: 7} 
 0 = {Timer} <step: Timer>
 1 = {Hub}  
 2 = {Pool} <step: Pool>
 3 = {WorkerComponent} <step: Autoscaler>
 4 = {StateDB} <step: StateDB>
 5 = {Beat} <step: Beat>
 6 = {Consumer} <step: Consumer>
 __len__ = {int} 7
 
 
steps = {dict: 7} 
 'celery.worker.components.Timer' = {Timer} <step: Timer>
 'celery.worker.components.Pool' = {Pool} <step: Pool>
 'celery.worker.components.Consumer' = {Consumer} <step: Consumer>
 'celery.worker.autoscale.WorkerComponent' = {WorkerComponent} <step: Autoscaler>
 'celery.worker.components.StateDB' = {StateDB} <step: StateDB>
 'celery.worker.components.Hub' = {Hub} <step: Hub>
 'celery.worker.components.Beat' = {Beat} <step: Beat>
 __len__ = {int} 7

2.5.4 Creating the components

Once the steps are generated, they are invoked to create the components:

for step in order:
    step.include(parent)

For each component, the call continues: each step runs its include function in turn, which runs its create function.

def include(self, parent):
    return self._should_include(parent)[0]

If the step should be included, the component is created. Here, for example, self is the timer step and parent is the worker instance:

<class 'celery.worker.worker.WorkController'>

The code is as follows:

def _should_include(self, parent):
    if self.include_if(parent):
        return True, self.create(parent)
    return False, None
    
def include_if(self, w):
    return w.use_eventloop

Taking Timer as an example:

class Timer(bootsteps.Step):
    """Timer bootstep."""

    def create(self, w):
        if w.use_eventloop:
            # does not use dedicated timer thread.
            w.timer = _Timer(max_interval=10.0)

The call stack is as follows:

create, components.py:36
_should_include, bootsteps.py:335
include, bootsteps.py:339
apply, bootsteps.py:211
setup_instance, worker.py:139
__init__, worker.py:99
worker, worker.py:326
caller, base.py:132
new_func, decorators.py:21
invoke, core.py:610
invoke, core.py:1066
invoke, core.py:1259
main, core.py:782
start, base.py:358
worker_main, base.py:374
<module>, myTest.py:26

2.5.5 Returning the worker

At the end of apply, the worker is returned. Once all the classes have been initialized, the worker is fully initialized:

def apply(self, parent, **kwargs):
    """Apply the steps in this blueprint to an object.

    This will apply the ``__init__`` and ``include`` methods
    of each step, with the object as argument::

        step = Step(obj)
        ...
        step.include(obj)

    For :class:`StartStopStep` the services created
    will also be added to the objects ``steps`` attribute.
    """
    self._debug('Preparing bootsteps.')
    order = self.order = []
    steps = self.steps = self.claim_steps()

    self._debug('Building graph...')
    for S in self._finalize_steps(steps):
        step = S(parent, **kwargs)
        steps[step.name] = step
        order.append(step)
    self._debug('New boot order: {%s}',
                ', '.join(s.alias for s in self.order))
    for step in order:
        step.include(parent)
    return self

self is as follows:

self = {Blueprint} <celery.worker.worker.WorkController.Blueprint object at 0x7fcd3ad33d30>
 GraphFormatter = {type} <class 'celery.bootsteps.StepFormatter'>
 alias = {str} 'Worker'
 default_steps = {set: 7} {'celery.worker.components:Hub', 'celery.worker.components:Consumer', 'celery.worker.components:Beat', 'celery.worker.autoscale:WorkerComponent', 'celery.worker.components:StateDB', 'celery.worker.components:Timer', 'celery.worker.components:Pool'}
 graph = {DependencyGraph: 7} celery.worker.autoscale.WorkerComponent(3)\n     celery.worker.components.Pool(2)\n          celery.worker.components.Hub(1)\n               celery.worker.components.Timer(0)\ncelery.worker.components.StateDB(0)\ncelery.worker.components.Hub(1)\n     celery.worker.components.Timer(0)\ncelery.worker.components.Consumer(12)\n     celery.worker.autoscale.WorkerComponent(3)\n          celery.worker.components.Pool(2)\n               celery.worker.components.Hub(1)\n                    celery.worker.components.Timer(0)\n     celery.worker.components.StateDB(0)\n     celery.worker.components.Hub(1)\n          celery.worker.components.Timer(0)\n     celery.worker.components.Beat(0)\n     celery.worker.components.Timer(0)\n     celery.worker.components.Pool(2)\n          celery.worker.components.Hub(1)\n               celery.worker.components.Timer(0)\ncelery.worker.components.Beat(0)\ncelery.worker.components.Timer(0)\ncelery.worker.components.Pool(2)\n     celery.worker.components.Hub(1)\n          celery.worker.co...
 name = {str} 'Worker'
 order = {list: 7} [<step: Timer>, <step: Hub>, <step: Pool>, <step: Autoscaler>, <step: StateDB>, <step: Beat>, <step: Consumer>]
 shutdown_complete = {Event} <threading.Event object at 0x7fcd3ad33b38>
 started = {int} 0
 state = {NoneType} None
 state_to_name = {dict: 4} {0: 'initializing', 1: 'running', 2: 'closing', 3: 'terminating'}
 steps = {dict: 7} {'celery.worker.autoscale.WorkerComponent': <step: Autoscaler>, 'celery.worker.components.StateDB': <step: StateDB>, 'celery.worker.components.Hub': <step: Hub>, 'celery.worker.components.Consumer': <step: Consumer>, 'celery.worker.components.Beat': <step: Beat>, 'celery.worker.components.Timer': <step: Timer>, 'celery.worker.components.Pool': <step: Pool>}
 types = {set: 7} {'celery.worker.autoscale:WorkerComponent', 'celery.worker.components:StateDB', 'celery.worker.components:Hub', 'celery.worker.components:Consumer', 'celery.worker.components:Beat', 'celery.worker.components:Timer', 'celery.worker.components:Pool'}

The flow at this point:

                                     +----------------------+
      +----------+                   |  @cached_property    |
      |   User   |                   |      Worker          |
      +----+-----+            +--->  |                      |
           |                  |      |                      |
           |  worker_main     |      |  Worker application  |
           |                  |      |  celery/app/base.py  |
           v                  |      +----------+-----------+
 +---------+------------+     |                 |
 |        Celery        |     |                 |
 |                      |     |                 |
 |  Celery application  |     |                 v
 |  celery/app/base.py  |     |  +--------------+--------------+    +---> app.loader.init_worker
 |                      |     |  | class Worker(WorkController)|    |
 +---------+------------+     |  |                             |    |
           |                  |  |                             +--------> setup_defaults
           |  celery.main     |  |    Worker as a program      |    |
           |                  |  |   celery/apps/worker.py     |    |
           v                  |  +-----------------------------+    +---> setup_instance +-----> setup_queues  +------>  app.amqp.queues
 +---------+------------+     |                                                +
 |  @click.pass_context |     |                                                |
 |       celery         |     |                 +------------------------------+
 |                      |     |                 |       apply
 |                      |     |                 |
 |    CeleryCommand     |     |                 v
 | celery/bin/celery.py |     |
 |                      |     |  +-------------------------------------+        +---------------------+     +--->  claim_steps
 +---------+------------+     |  | class Blueprint(bootsteps.Blueprint)|        |  class Blueprint    |     |
           |                  |  |                                     +------>-+                     | +------->  _finalize_steps
           |                  |  |                                     |        |                     |     |
           |                  |  |     celery/apps/worker.py           |        | celery/bootsteps.py |     |                   +--> Timer
           v                  |  +-------------------------------------+        +---------------------+     +--->  include +--->+
+----------+------------+     |                                                                                                 +--> Hub
|   @click.pass_context |     |                                                                                                 |
|        worker         |     |                                                                                                 +--> Pool
|                       |     |                                                                                                 |
|                       |     |                                                                                                 +--> ......
|     WorkerCommand     |     |                                                                                                 |
| celery/bin/worker.py  |     |                                                                                                 +--> Consumer
+-----------+-----------+     |
            | 1  app.Worker   |
            +-----------------+


0x03 start in worker command

After the initialization above completes, execution reaches celery/bin/worker.py and starts the worker.

worker.start()

Recall the relevant code from earlier:

def worker(ctx, hostname=None, pool_cls=None, app=None, uid=None, gid=None,
           loglevel=None, logfile=None, pidfile=None, statedb=None,
           **kwargs):
    """Start worker instance.
    """
    app = ctx.obj.app

    if kwargs.get('detach', False):
        return detach(...)

    worker = app.Worker(...)
    
    worker.start()         # we pick up here
    return worker.exitcode

3.1 start in Worker as a program

The method called here lives in celery/worker/worker.py. It simply invokes the blueprint's start, which starts each component registered in the blueprint:

def start(self):
    try:
        self.blueprint.start(self)           # call the blueprint's start method
    except WorkerTerminate:
        self.terminate()
    except Exception as exc:
        self.stop(exitcode=EX_FAILURE)
    except SystemExit as exc:
        self.stop(exitcode=exc.code)
    except KeyboardInterrupt:
        self.stop(exitcode=EX_FAILURE)

3.2 start in blueprint

The code is in celery/bootsteps.py.

At this point parent.steps holds the steps added via step.include; its current value is [Hub, Pool, Consumer]. The worker's on_start method is then called. In this example:

parent.steps = {list: 3} 
 0 = {Hub} <step: Hub>
 1 = {Pool} <step: Pool>
 2 = {Consumer} <step: Consumer>

The start code is as follows:

class Blueprint:
    """Blueprint containing bootsteps that can be applied to objects.
    """

    def start(self, parent):
        self.state = RUN
        if self.on_start:
            self.on_start()
        for i, step in enumerate(s for s in parent.steps if s is not None):
            self.started = i + 1
            step.start(parent)

3.2.1 Callback: on_start in Worker

The blueprint first calls back into the Worker's on_start.

The code is in celery/apps/worker.py.

It does the following:

  • set the app;
  • call the parent class's on_start;
  • print the startup banner;
  • register the relevant signal handlers;
  • perform related configuration such as stdout redirection.

class Worker(WorkController):
    """Worker as a program."""

    def on_start(self):
        app = self.app                                                  # set the app
        WorkController.on_start(self)                                   # call the parent class's on_start

        # this signal can be used to, for example, change queues after
        # the -Q option has been applied.
        signals.celeryd_after_setup.send(
            sender=self.hostname, instance=self, conf=app.conf,
        )

        if self.purge:
            self.purge_messages()

        if not self.quiet:
            self.emit_banner()                                     # print the startup banner

        self.set_process_status('-active-')
        self.install_platform_tweaks(self)                         # register the relevant signal handlers
        if not self._custom_logging and self.redirect_stdouts:
            app.log.redirect_stdouts(self.redirect_stdouts_level)

        # TODO: Remove the following code in Celery 6.0
        # This qualifies as a hack for issue #6366.
        warn_deprecated = True
        config_source = app._config_source
        if isinstance(config_source, str):
            # Don't raise the warning when the settings originate from
            # django.conf:settings
            warn_deprecated = config_source.lower() not in [
                'django.conf:settings',
            ]

3.2.2 The base class on_start

The code is in celery/apps/worker.py.

def on_start(self):
    app = self.app
    WorkController.on_start(self)

The WorkController code is in celery/worker/worker.py.

The parent class's on_start simply creates the pid file:

def on_start(self):
    if self.pidfile:
        self.pidlock = create_pidlock(self.pidfile)
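Conceptually, a pidlock just writes our pid into a file and refuses to start if the file is already present. A minimal sketch of that idea (an assumption-laden simplification, not Celery's create_pidlock, which also handles stale locks):

```python
import os
import tempfile

def acquire_pidlock(path):
    # refuse to start if a pidfile is already present (hypothetical helper,
    # much simpler than celery.platforms.create_pidlock)
    if os.path.exists(path):
        raise RuntimeError('pidfile %s already exists' % path)
    with open(path, 'w') as f:
        f.write(str(os.getpid()))
    return path

pidfile = os.path.join(tempfile.mkdtemp(), 'worker.pid')
acquire_pidlock(pidfile)
with open(pidfile) as f:
    locked_pid = int(f.read())
```

A second call to acquire_pidlock on the same path would raise, which is exactly the "only one worker per pidfile" behaviour the real lock provides.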

3.2.3 Signal handlers

The function that registers the relevant signal handlers is as follows:

def install_platform_tweaks(self, worker):
    """Install platform specific tweaks and workarounds."""
    if self.app.IS_macOS:
        self.macOS_proxy_detection_workaround()

    # Install signal handler so SIGHUP restarts the worker.
    if not self._isatty:
        # only install HUP handler if detached from terminal,
        # so closing the terminal window doesn't restart the worker
        # into the background.
        if self.app.IS_macOS:
            # macOS can't exec from a process using threads.
            # See https://github.com/celery/celery/issues#issue/152
            install_HUP_not_supported_handler(worker)
        else:
            install_worker_restart_handler(worker)                      # register the SIGHUP restart handler
    install_worker_term_handler(worker)             
    install_worker_term_hard_handler(worker)
    install_worker_int_handler(worker)                                  
    install_cry_handler()                                               # SIGUSR1 handler
    install_rdb_handler()                                               # SIGUSR2 handler

Let's look at the restart path on its own. First, the function that re-executes the process:

def _reload_current_worker():
    platforms.close_open_fds([
        sys.__stdin__, sys.__stdout__, sys.__stderr__,
    ])  # close the open file descriptors
    os.execv(sys.executable, [sys.executable] + sys.argv)  # re-exec this process

And the signal handler itself:

def restart_worker_sig_handler(*args):
    """Signal handler restarting the current python program."""
    set_in_sighandler(True)
    safe_say('Restarting celery worker ({0})'.format(' '.join(sys.argv)))
    import atexit
    atexit.register(_reload_current_worker)                 # run at process exit
    from celery.worker import state
    state.should_stop = EX_OK                               # set the stop flag
platforms.signals[sig] = restart_worker_sig_handler

The platforms.signals object defines __setitem__:

def __setitem__(self, name, handler):
    """Install signal handler.

    Does nothing if the current platform has no support for signals,
    or the specified signal in particular.
    """
    try:
        _signal.signal(self.signum(name), handler)
    except (AttributeError, ValueError):
        pass

This installs the corresponding handler into the running process; _signal is just the imported signal module.
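The mapping-style registration can be reproduced with a tiny sketch, modelled loosely on celery.platforms.Signals (the class and signum conversion below are simplified assumptions, not the real implementation):

```python
import signal

class Signals:
    # mapping-style wrapper over signal.signal, sketched after platforms.Signals
    def signum(self, name):
        # accept either a name like 'TERM' or a raw signal number
        if isinstance(name, str):
            return getattr(signal, 'SIG' + name.upper())
        return name

    def __setitem__(self, name, handler):
        try:
            signal.signal(self.signum(name), handler)
        except (AttributeError, ValueError):
            pass  # unsupported on this platform: silently ignore

signals = Signals()

def on_term(signum, frame):
    print('caught TERM, shutting down')

signals['TERM'] = on_term
```

After the assignment, signal.getsignal(signal.SIGTERM) returns on_term, i.e. the handler really is installed in the running process, which is what `platforms.signals[sig] = restart_worker_sig_handler` does for SIGHUP.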

3.2.4 Calling each step in turn

Next, the blueprint calls each step's start in turn.


Execution then continues into the Blueprint's start method,

which iterates over parent.steps (Hub, Pool, Consumer) and calls each step's start method.

def start(self, parent):
    self.state = RUN                                        # set the current running state
    if self.on_start:                                       # run on_start if one was provided at initialization
        self.on_start()
    for i, step in enumerate(s for s in parent.steps if s is not None):     # iterate over the steps and call each start
        self._debug('Starting %s', step.alias)
        self.started = i + 1
        step.start(parent)
        logger.debug('^-- substep ok')

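Stripped to its essentials, the pattern is just an ordered list of steps driven by a start loop. A minimal sketch, with illustrative names rather than Celery's actual classes:

```python
class Step:
    # a bootstep with just an alias and a start hook
    def __init__(self, name):
        self.alias = name
    def start(self, parent):
        parent.log.append('start %s' % self.alias)

class MiniBlueprint:
    RUN = 'RUN'
    def __init__(self, on_start=None):
        self.state = None
        self.started = 0
        self.on_start = on_start
    def start(self, parent):
        self.state = self.RUN
        if self.on_start:                 # optional callback, like Worker.on_start
            self.on_start()
        for i, step in enumerate(s for s in parent.steps if s is not None):
            self.started = i + 1
            step.start(parent)

class Parent:
    def __init__(self, steps):
        self.steps = steps
        self.log = []

parent = Parent([Step('Hub'), Step('Pool'), Step('Consumer')])
bp = MiniBlueprint(on_start=lambda: parent.log.append('on_start'))
bp.start(parent)
# parent.log == ['on_start', 'start Hub', 'start Pool', 'start Consumer']
```

Note the ordering: the on_start callback always fires before any step starts, which is why the Worker's banner, signal handlers and pid file are all in place before Hub, Pool and Consumer come up.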
3.2.4.1 Hub

Hub overrides the start method with a no-op:

def start(self, w):
    pass

3.2.4.2 Pool

Next comes Pool. This goes through StartStopStep, whose obj attribute is the object returned from its create method, here the pool instance:

def start(self, parent):
    if self.obj:
        return self.obj.start()

We will cover this in detail later.

3.2.4.3 Consumer

Then Consumer's start method is called:

def start(self):
    blueprint = self.blueprint
    while blueprint.state != CLOSE:                           # loop until the state becomes CLOSE
        maybe_shutdown()                                      # honour the shutdown flags
        if self.restart_count:                                # if this is a restart
            try:
                self._restart_state.step()                    # record the restart; raises if too frequent
            except RestartFreqExceeded as exc:
                crit('Frequent restarts detected: %r', exc, exc_info=1)
                sleep(1)
        self.restart_count += 1                               # increment the restart count
        try:
            blueprint.start(self)                             # start the consumer blueprint
        except self.connection_errors as exc:
            # If we're not retrying connections, no need to catch
            # connection errors
            if not self.app.conf.broker_connection_retry:
                raise
            if isinstance(exc, OSError) and exc.errno == errno.EMFILE:
                raise  # Too many open files
            maybe_shutdown()
            if blueprint.state != CLOSE:                              # if the state is not CLOSE
                if self.connection:
                    self.on_connection_error_after_connected(exc)
                else:
                    self.on_connection_error_before_connected(exc)
                self.on_close()
                blueprint.restart(self)                               # restart the blueprint

Consumer has a blueprint of its own, with these steps:

  • Connection: manages the Connection to the broker
  • Events: sends monitoring events
  • Agent: cell actor
  • Mingle: synchronizes state between workers
  • Tasks: starts the message Consumer
  • Gossip: consumes events from other workers
  • Heart: sends heartbeat events (the consumer's heartbeat)
  • Control: the remote command management service

Execution now enters this blueprint's start method; its steps were passed in when the Consumer was initialized.

The steps passed in are instances of the Agent, Connection, Evloop, Control, Events, Gossip, Heart, Mingle and Tasks classes. After dependency resolution, the instances actually appended to parent.steps are [Connection, Events, Heart, Mingle, Tasks, Control, Gossip, Evloop], and their start methods are called in that order.
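The reordering from declaration order to [Connection, Events, Heart, Mingle, Tasks, Control, Gossip, Evloop] comes from each bootstep declaring what it requires; conceptually it is a topological sort over those edges. A sketch using the stdlib's graphlib (Python 3.9+); the requires edges below are illustrative assumptions, not Celery's exact dependency graph:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# illustrative requires-edges: each key lists the steps it depends on
requires = {
    'Connection': [],
    'Events': ['Connection'],
    'Heart': ['Connection'],
    'Mingle': ['Events'],
    'Tasks': ['Mingle'],
    'Control': ['Tasks'],
    'Gossip': ['Tasks'],
    'Evloop': ['Control', 'Gossip'],
}
order = list(TopologicalSorter(requires).static_order())
```

Whatever the exact edges, the sort guarantees a step never starts before the steps it requires, which is why Connection always comes first and Evloop, which effectively depends on everything, comes last.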

First, Connection's start method:

def start(self, c):
    c.connection = c.connect()
    info('Connected to %s', c.connection.as_uri())

which simply calls the consumer's connect() function:

def connect(self):
    """Establish the broker connection.

    Retries establishing the connection if the
    :setting:`broker_connection_retry` setting is enabled
    """
    conn = self.app.connection_for_read(heartbeat=self.amqheartbeat)        # heartbeat

    # Callback called for each retry while the connection
    # can't be established.
    def _error_handler(exc, interval, next_step=CONNECTION_RETRY_STEP):
        if getattr(conn, 'alt', None) and interval == 0:
            next_step = CONNECTION_FAILOVER
        error(CONNECTION_ERROR, conn.as_uri(), exc,
              next_step.format(when=humanize_seconds(interval, 'in', ' ')))

    # remember that the connection is lazy, it won't establish
    # until needed.
    if not self.app.conf.broker_connection_retry:                           # if retry is disabled
        # retry disabled, just call connect directly.
        conn.connect()                                                      # connect directly
        return conn                                                         # return the connection

    conn = conn.ensure_connection(
        _error_handler, self.app.conf.broker_connection_max_retries,
        callback=maybe_shutdown,
    )                                                                       # make sure we are connected
    if self.hub:
        conn.transport.register_with_event_loop(conn.connection, self.hub)  # use async I/O via the hub
    return conn                                                             # return the connection

The connection is now established.
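The interesting part is ensure_connection: keep retrying the connect, invoking an error callback between attempts. A simplified sketch of that retry pattern (my own illustration, not kombu's actual implementation, which also supports backoff intervals and a shutdown callback):

```python
import time

def ensure_connection(connect, errback, max_retries=3, interval=0.0):
    # keep calling connect() until it succeeds; invoke errback(exc, interval)
    # between failed attempts and give up after max_retries
    retries = 0
    while True:
        try:
            return connect()
        except ConnectionError as exc:
            retries += 1
            if retries > max_retries:
                raise
            errback(exc, interval)
            time.sleep(interval)

attempts = []
def flaky_connect():
    # fails twice, then succeeds, like a broker coming back up
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError('broker unreachable')
    return 'connection'

errors = []
result = ensure_connection(flaky_connect, lambda exc, i: errors.append(str(exc)))
# result == 'connection' after two failed attempts
```

In Celery the errback is _error_handler above, which is what prints the familiar "Trying again in N seconds" messages while the broker is down.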

Next, the Tasks step's start method:

def start(self, c):
    """Start task consumer."""
    c.update_strategies()                                           # refresh strategies for the known tasks

    # - RabbitMQ 3.3 completely redefines how basic_qos works..
    # This will detect if the new qos semantics is in effect,
    # and if so make sure the 'apply_global' flag is set on qos updates.
    qos_global = not c.connection.qos_semantics_matches_spec

    # set initial prefetch count
    c.connection.default_channel.basic_qos(
        0, c.initial_prefetch_count, qos_global,
    )                                                               # set the initial prefetch count

    c.task_consumer = c.app.amqp.TaskConsumer(
        c.connection, on_decode_error=c.on_decode_error,
    )                                                               # create the task consumer

    def set_prefetch_count(prefetch_count):
        return c.task_consumer.qos(
            prefetch_count=prefetch_count,
            apply_global=qos_global,
        )
    c.qos = QoS(set_prefetch_count, c.initial_prefetch_count)       # wrap prefetch bookkeeping in QoS
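The QoS object batches prefetch changes: callers mutate the desired value, and the event loop calls update() only when prev != value, so a burst of changes results in a single basic_qos call. A hedged sketch of that bookkeeping (simplified from the real kombu/celery QoS class):

```python
class QoS:
    # track a prefetch count and push changes through a callback only
    # when the value actually changed (sketch, not the full class)
    def __init__(self, callback, initial_value):
        self.callback = callback
        self.value = self.prev = initial_value

    def increment_eventually(self, n=1):
        self.value += n

    def decrement_eventually(self, n=1):
        self.value = max(self.value - n, 1)

    def update(self):
        if self.prev != self.value:
            self.callback(prefetch_count=self.value)  # e.g. set_prefetch_count
            self.prev = self.value
        return self.value

calls = []
qos = QoS(lambda prefetch_count: calls.append(prefetch_count), 4)
qos.update()                 # no change yet, callback not invoked
qos.increment_eventually(2)
qos.update()                 # pushes the new value, 6
```

This is why asynloop below checks `if qos.prev != qos.value` before calling update_qos(): the comparison is cheap, and the broker round-trip only happens when something changed.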

Task consumption is now enabled. With consumption running, let's look at how the loop is started:

def start(self, c):
    self.patch_all(c)
    c.loop(*c.loop_args())

This calls the consumer's loop function, which is the asynloop function in celery/worker/loops.py.

The call stack is as follows:

asynloop, loops.py:43
start, consumer.py:592
start, bootsteps.py:116
start, consumer.py:311
start, bootsteps.py:365
start, bootsteps.py:116
start, worker.py:203
worker, worker.py:327
caller, base.py:132
new_func, decorators.py:21
invoke, core.py:610
invoke, core.py:1066
invoke, core.py:1259
main, core.py:782
start, base.py:358
worker_main, base.py:374

The asynloop function:

def asynloop(obj, connection, consumer, blueprint, hub, qos,
         heartbeat, clock, hbrate=2.0):
    """Non-blocking event loop."""                                  # obj is the Consumer instance
    RUN = bootsteps.RUN                                             # the RUN state constant
    update_qos = qos.update
    errors = connection.connection_errors

    on_task_received = obj.create_task_handler()                    # create the task handler

    _enable_amqheartbeats(hub.timer, connection, rate=hbrate)       # send heartbeats periodically

    consumer.on_message = on_task_received                          # use on_task_received as the consumer's on_message
    consumer.consume()                                              # start consuming
    obj.on_ready()                                                  # invoke the ready callback
    obj.controller.register_with_event_loop(hub)                    # register the hub with every blueprint step instance
    obj.register_with_event_loop(hub)

    # did_start_ok will verify that pool processes were able to start,
    # but this will only work the first time we start, as
    # maxtasksperchild will mess up metrics.
    if not obj.restart_count and not obj.pool.did_start_ok():
        raise WorkerLostError('Could not start worker processes')

    # consumer.consume() may have prefetched up to our
    # limit - drain an event so we're in a clean state
    # prior to starting our event loop.
    if connection.transport.driver_type == 'amqp':
        hub.call_soon(_quick_drain, connection)

    # FIXME: Use loop.run_forever
    # Tried and works, but no time to test properly before release.
    hub.propagate_errors = errors
    loop = hub.create_loop()                                        # create the loop; essentially a generator

    try:
        while blueprint.state == RUN and obj.connection:            # check we are still running and still connected
            # shutdown if signal handlers told us to.
            should_stop, should_terminate = (
                state.should_stop, state.should_terminate,
            )
            # False == EX_OK, so must use is not False
            if should_stop is not None and should_stop is not False:
                raise WorkerShutdown(should_stop)
            elif should_terminate is not None and should_terminate is not False:
                raise WorkerTerminate(should_terminate)

            # We only update QoS when there's no more messages to read.
            # This groups together qos calls, and makes sure that remote
            # control commands will be prioritized over task messages.
            if qos.prev != qos.value:
                update_qos()

            try:
                next(loop)                                          # advance the loop by one step
            except StopIteration:
                loop = hub.create_loop()
    finally:
        try:
            hub.reset()
        except Exception as exc:  # pylint: disable=broad-except
            logger.exception(
                'Error cleaning up after event loop: %r', exc)

At this point the asynchronous loop is up, the worker starts waiting for and handling events, and the worker startup flow is complete.
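The next(loop) / StopIteration / create_loop() dance in asynloop can be illustrated with a tiny generator-based loop; this is purely illustrative and not the Hub's real implementation:

```python
def create_loop(events):
    # one "tick" of the event loop per next() call, like hub.create_loop()
    for event in events:
        yield 'handled %s' % event

batches = [['msg1', 'msg2'], ['msg3']]
handled = []
loop = create_loop(batches.pop(0))
running = True
while running:
    try:
        handled.append(next(loop))              # advance the loop, as asynloop does
    except StopIteration:
        if batches:
            loop = create_loop(batches.pop(0))  # recreate the generator when exhausted
        else:
            running = False
```

The key design point survives the simplification: the driver owns the while condition (blueprint.state == RUN in the real code), so shutdown checks and QoS updates can be interleaved between ticks without the loop body knowing about them.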

0x04 Summary

This article covered the worker startup flow. By default Celery runs tasks in a pool of processes, while consumer-side events are handled with asynchronous I/O. This concludes the overview of how a worker starts.

                                     +----------------------+
      +----------+                   |  @cached_property    |
      |   User   |                   |      Worker          |
      +----+-----+            +--->  |                      |
           |                  |      |                      |
           |  worker_main     |      |  Worker application  |
           |                  |      |  celery/app/base.py  |
           v                  |      +----------+-----------+
 +---------+------------+     |                 |
 |        Celery        |     |                 |
 |                      |     |                 |
 |  Celery application  |     |                 v
 |  celery/app/base.py  |     |  +--------------+--------------+    +---> app.loader.init_worker
 |                      |     |  | class Worker(WorkController)|    |
 +---------+------------+     |  |                             |    |
           |                  |  |                             +--------> setup_defaults
           |  celery.main     |  |    Worker as a program      |    |
           |                  |  |   celery/apps/worker.py     |    |
           v                  |  +-----------------------------+    +---> setup_instance +-----> setup_queues  +------>  app.amqp.queues
 +---------+------------+     |                                                +
 |  @click.pass_context |     |                                                |
 |       celery         |     |                 +------------------------------+
 |                      |     |                 |       apply
 |                      |     |                 |
 |    CeleryCommand     |     |                 v
 | celery/bin/celery.py |     |
 |                      |     |  +-------------------------------------+        +---------------------+     +--->  claim_steps
 +---------+------------+     |  | class Blueprint(bootsteps.Blueprint)| apply  |  class Blueprint    |     |
           |                  |  |                                     +------>-+                     | +------->  _finalize_steps
           |                  |  |                                     |        |                     |     |
           |                  |  |     celery/apps/worker.py           |        | celery/bootsteps.py |     |                   +--> Timer
           v                  |  +-----+---------------+---------------+        +---------------------+     +--->  include +--->+
+----------+------------+     |        ^               |                                                                        +--> Hub
|   @click.pass_context |     |        |               | start                                                                  |
|        worker         |     |        |               |                                                                        +--> Pool
|                       |     |        |               |        +---->  WorkController.on_start                                 |
|                       |     |        |               |        |                                                               +--> ......
|     WorkerCommand     |     |        |               +------------->   Hub. start / Pool.start /  Task.start                  |
| celery/bin/worker.py  |     |        |                        |                                                               +--> Consumer
+-----------+-----------+     |        |                        +---->   Evloop.start +----->  asynloop
            | 1  app.Worker   |        |
            +-----------------+        |
            |                          |
            | 2 blueprint.start        |
            +------------------------->+


0xFF References

Celery 原始碼學習(二)多程式模型

celery原理初探

celery原始碼分析-wroker初始化分析(上)

celery原始碼分析-worker初始化分析(下) https://blog.csdn.net/qq_33339479/article/details/80958059

celery worker初始化--DAG實現

python celery多worker、多佇列、定時任務

celery 詳細教程-- Worker篇

使用Celery

Celery 原始碼解析一:Worker 啟動流程概述

Celery 原始碼解析二:Worker 的執行引擎

Celery 原始碼解析三:Task 物件的實現

Celery 原始碼解析四:定時任務的實現

Celery 原始碼解析五:遠端控制管理

Celery 原始碼解析六:Events 的實現

Celery 原始碼解析七:Worker 之間的互動

Celery 原始碼解析八:State 和 Result
