[Source code analysis] Parallel distributed framework Celery: worker startup (1)

Posted by 羅西的思考 on 2021-03-29


0x00 Abstract

Celery is a simple, flexible, and reliable distributed system for processing large volumes of messages. It is an asynchronous task queue focused on real-time processing, and it also supports task scheduling. Celery delegates the actual task processing to its Worker component, which can be started from the command line, for example:

$ celery --app=proj worker -l INFO
$ celery -A proj worker -l INFO -Q hipri,lopri
$ celery -A proj worker --concurrency=4
$ celery -A proj worker --concurrency=1000 -P eventlet
$ celery worker --autoscale=10,0

This article walks through the worker startup process.

0x01 Celery architecture

In earlier articles we analyzed Kombu, laying the groundwork for analyzing Celery:

[Source code analysis] Message queue Kombu: mailbox

[Source code analysis] Message queue Kombu: Hub

[Source code analysis] Message queue Kombu: Consumer

[Source code analysis] Message queue Kombu: Producer

[Source code analysis] Message queue Kombu: the startup process

[Source code analysis] Message queue Kombu: basic architecture

as well as:

[Source code analysis] Parallel distributed framework Celery: architecture (2)

Let's first review Celery's structure. The architecture diagram is as follows:

 +-----------+            +--------------+
 | Producer  |            |  Celery Beat |
 +-------+---+            +----+---------+
         |                     |
         |                     |
         v                     v

       +-------------------------+
       |          Broker         |
       +------------+------------+
                    |
                    |
                    |
     +-------------------------------+
     |              |                |
     v              v                v
+----+-----+   +----+------+   +-----+----+
| Exchange |   |  Exchange |   | Exchange |
+----+-----+   +----+------+   +----+-----+
     |              |               |
     v              v               v

  +-----+       +-------+       +-------+
  |queue|       | queue |       | queue |
  +--+--+       +---+---+       +---+---+
     |              |               |
     |              |               |
     v              v               v

+---------+     +--------+     +----------+
| worker  |     | Worker |     |  Worker  |
+-----+---+     +---+----+     +----+-----+
      |             |               |
      |             |               |
      +-----------------------------+
                    |
                    |
                    v
                +---+-----+
                | backend |
                +---------+

0x02 Sample code

It is hard to find advice online on how to debug a Celery worker. Looking at Celery's own source, we find the following commented-out test:

# def test_worker_main(self):
#     from celery.bin import worker as worker_bin
#
#     class worker(worker_bin.worker):
#
#         def execute_from_commandline(self, argv):
#             return argv
#
#     prev, worker_bin.worker = worker_bin.worker, worker
#     try:
#         ret = self.app.worker_main(argv=['--version'])
#         assert ret == ['--version']
#     finally:
#         worker_bin.worker = prev

We can mimic this approach and start a worker as follows for debugging:

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379')

@app.task()
def add(x, y):
    return x + y

if __name__ == '__main__':
    app.worker_main(argv=['worker'])

0x03 Logic overview

When a worker starts, it establishes a connection (a long-lived TCP connection) with the broker. When there is data to transfer, the corresponding channels are created; a single connection can carry multiple channels. The worker then fetches tasks from the broker's queues and consumes them. This is the classic producer-consumer pattern.
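The producer-consumer pattern described above can be sketched without any Celery machinery; the in-memory `queue.Queue` below is only a stand-in assumption for the real broker (Redis, RabbitMQ, etc.):

```python
import queue
import threading

broker = queue.Queue()  # stand-in for the real broker

def producer():
    # The client pushes task messages onto the broker.
    for x, y in [(1, 2), (3, 4)]:
        broker.put({"task": "add", "args": (x, y)})
    broker.put(None)  # sentinel: no more work

results = []

def worker():
    # The worker pulls messages off the broker and executes them.
    while True:
        message = broker.get()
        if message is None:
            break
        x, y = message["args"]
        results.append(x + y)

t = threading.Thread(target=worker)
t.start()
producer()
t.join()
print(results)  # [3, 7]
```

The real worker differs in transport (AMQP/Redis over TCP rather than an in-process queue), but the consume-execute loop has the same shape.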

The worker consists of four main parts: task_pool, consumer, scheduler, and mediator. task_pool holds the worker processes: when a worker is started with a concurrency argument, the resulting pool processes are kept here.

Celery's default concurrency mode is prefork, i.e. multiple processes. This is just a lightweight adaptation of Python's multiprocessing pool, given the new name prefork. The difference from an ordinary multiprocessing pool is that this task_pool only holds the running worker processes.

The consumer receives messages from the broker and converts each message into an instance of celery.worker.request.Request.

At the appropriate time, Celery wraps this request into a Task. Task is the class generated from a function decorated with app.task(), so a custom task function can use the request to obtain key information about the current execution. At this point we have covered task_pool and consumer.

Next, the worker maintains two data structures that operate in parallel: the ETA schedule and the ready queue.

Ready queue: tasks that need to execute immediately. When such tasks reach the worker, they are placed in the ready queue, waiting for the consumer to execute them.
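As a toy model (not Celery's actual implementation), the two structures can be sketched with a list and a min-heap; tasks with an ETA sit in the heap until due, while the rest go straight to the ready queue:

```python
import heapq
import time

ready_queue = []    # tasks to run immediately
eta_schedule = []   # min-heap of (eta, task) pairs, like the worker's timer

def submit(task, eta=None):
    if eta is None:
        ready_queue.append(task)                   # run as soon as possible
    else:
        heapq.heappush(eta_schedule, (eta, task))  # hold until the ETA passes

now = time.time()
submit("task-a")
submit("task-b", eta=now + 10)
submit("task-c", eta=now + 5)

# The worker's timer pops due tasks off the heap and moves them
# into the ready queue; soonest ETA comes first.
print(ready_queue)                            # ['task-a']
print([t for _, t in sorted(eta_schedule)])   # ['task-c', 'task-b']
```

On the client side, `add.delay(...)` produces a ready-queue task, while `add.apply_async(..., countdown=10)` or `eta=...` produces an ETA task.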

Let's now look at how Celery starts up.

0x04 The Celery application

Execution first reaches the Celery class, which is the Celery application.

Its main contents are: various class names, thread-local storage (TLS), and the signals set up after initialization.

It is located in celery/app/base.py and defined as follows:

class Celery:
    """Celery application."""

    amqp_cls = 'celery.app.amqp:AMQP'
    backend_cls = None
    events_cls = 'celery.app.events:Events'
    loader_cls = None
    log_cls = 'celery.app.log:Logging'
    control_cls = 'celery.app.control:Control'
    task_cls = 'celery.app.task:Task'
    registry_cls = 'celery.app.registry:TaskRegistry'

    #: Thread local storage.
    _local = None
    _fixups = None
    _pool = None
    _conf = None
    _after_fork_registered = False

    #: Signal sent when app is loading configuration.
    on_configure = None

    #: Signal sent after app has prepared the configuration.
    on_after_configure = None

    #: Signal sent after app has been finalized.
    on_after_finalize = None

    #: Signal sent by every new process after fork.
    on_after_fork = None

For our sample code, the entry point is:

def worker_main(self, argv=None):
    if argv is None:
        argv = sys.argv

    if 'worker' not in argv:
        raise ValueError(
            "The worker sub-command must be specified in argv.\n"
            "Use app.start() to programmatically start other commands."
        )

    self.start(argv=argv)

4.1 Adding subcommands

celery/bin/celery.py adds the subcommands, as we can see below.

These Commands can be used directly as subcommands on the command line:

celery.add_command(purge)
celery.add_command(call)
celery.add_command(beat)
celery.add_command(list_)
celery.add_command(result)
celery.add_command(migrate)
celery.add_command(status)
celery.add_command(worker)
celery.add_command(events)
celery.add_command(inspect)
celery.add_command(control)
celery.add_command(graph)
celery.add_command(upgrade)
celery.add_command(logtool)
celery.add_command(amqp)
celery.add_command(shell)
celery.add_command(multi)

Each of these is a command. Taking worker as an example:

worker = {CeleryDaemonCommand} <CeleryDaemonCommand worker>
 add_help_option = {bool} True
 allow_extra_args = {bool} False
 allow_interspersed_args = {bool} True
 context_settings = {dict: 1} {'allow_extra_args': True}
 epilog = {NoneType} None
 name = {str} 'worker'
 options_metavar = {str} '[OPTIONS]'
 params = {list: 32} [<CeleryOption hostname>, ...... , <CeleryOption executable>]

4.2 Entry point

Then the celery command entry point is imported and invoked:

def start(self, argv=None):
    from celery.bin.celery import celery

    celery.params[0].default = self

    try:
        celery.main(args=argv, standalone_mode=False)
    except Exit as e:
        return e.exit_code
    finally:
        celery.params[0].default = None

4.3 Cached attributes: cached_property

In Celery, many member variables are decorated with cached_property.

A function decorated with cached_property becomes an attribute of the object. The first time the attribute is referenced, the function is called; on subsequent references, the value is read directly from the instance dictionary. In other words, it caches the return value of the getter on first call.

Many well-known Python projects have implemented their own cached_property, for example Werkzeug and Django.

Because it is so useful, Python 3.8 added a cached_property class to the functools module, providing an official implementation.
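The caching behaviour can be demonstrated in isolation with the stdlib functools.cached_property (Python 3.8+); the App class here is a toy example, not Celery code:

```python
from functools import cached_property

class App:
    calls = 0  # counts how many times the getter actually runs

    @cached_property
    def worker(self):
        # Runs only on first access; afterwards the value is read
        # straight from the instance __dict__.
        App.calls += 1
        return object()

app = App()
first = app.worker
second = app.worker
print(first is second, App.calls)  # True 1
```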

Examples from Celery's code:

    @cached_property
    def Worker(self):
        """Worker application.
        """
        return self.subclass_with_self('celery.apps.worker:Worker')

    @cached_property
    def Task(self):
        """Base task class for this app."""
        return self.create_task_cls()

    @property
    def pool(self):
        """Broker connection pool: :class:`~@pool`.
        """
        if self._pool is None:
            self._ensure_after_fork()
            limit = self.conf.broker_pool_limit
            pools.set_limit(limit)
            self._pool = pools.connections[self.connection_for_write()]
        return self._pool

So, in the end, the Celery application's contents look like this:

app = {Celery} <Celery tasks at 0x7fb8e1538400>
 AsyncResult = {type} <class 'celery.result.AsyncResult'>
 Beat = {type} <class 'celery.apps.beat.Beat'>
 GroupResult = {type} <class 'celery.result.GroupResult'>
 Pickler = {type} <class 'celery.app.utils.AppPickler'>
 ResultSet = {type} <class 'celery.result.ResultSet'>
 Task = {type} <class 'celery.app.task.Task'>
 WorkController = {type} <class 'celery.worker.worker.WorkController'>
 Worker = {type} <class 'celery.apps.worker.Worker'>
 amqp = {AMQP} <celery.app.amqp.AMQP object at 0x7fb8e2444860>
 annotations = {tuple: 0} ()
 autofinalize = {bool} True
 backend = {DisabledBackend} <celery.backends.base.DisabledBackend object at 0x7fb8e25fd668>
 builtin_fixups = {set: 1} {'celery.fixups.django:fixup'}
 clock = {LamportClock} 1
 conf = {Settings: 163} Settings({'broker_url': 'redis://localhost:6379', 'deprecated_settings': set(), 'cache_...
 configured = {bool} True
 control = {Control} <celery.app.control.Control object at 0x7fb8e2585f98>
 current_task = {NoneType} None
 current_worker_task = {NoneType} None
 events = {Events} <celery.app.events.Events object at 0x7fb8e25ecb70>
 loader = {AppLoader} <celery.loaders.app.AppLoader object at 0x7fb8e237a4a8>
 main = {str} 'tasks'
 on_after_configure = {Signal} <Signal: app.on_after_configure providing_args={'source'}>
 on_after_finalize = {Signal} <Signal: app.on_after_finalize providing_args=set()>
 on_after_fork = {Signal} <Signal: app.on_after_fork providing_args=set()>
 on_configure = {Signal} <Signal: app.on_configure providing_args=set()>
 pool = {ConnectionPool} <kombu.connection.ConnectionPool object at 0x7fb8e26e9e80>
 producer_pool = {ProducerPool} <kombu.pools.ProducerPool object at 0x7fb8e26f02b0>
 registry_cls = {type} <class 'celery.app.registry.TaskRegistry'>
 set_as_current = {bool} True
 steps = {defaultdict: 2} defaultdict(<class 'set'>, {'worker': set(), 'consumer': set()})
 tasks = {TaskRegistry: 10} {'celery.chain': <@task: celery.chain of tasks at 0x7fb8e1538400>, 'celery.starmap': <@task: celery.starmap of tasks at 0x7fb8e1538400>, 'celery.chord': <@task: celery.chord of tasks at 0x7fb8e1538400>, 'celery.backend_cleanup': <@task: celery.backend_clea
 user_options = {defaultdict: 0} defaultdict(<class 'set'>, {})

Some of the member variables are illustrated below:

+---------------------------------------+
|  Celery                               |
|                                       |
|                              Beat+-----------> celery.apps.beat.Beat
|                                       |
|                              Task+-----------> celery.app.task.Task
|                                       |
|                     WorkController+----------> celery.worker.worker.WorkController
|                                       |
|                            Worker+-----------> celery.apps.worker.Worker
|                                       |
|                              amqp +----------> celery.app.amqp.AMQP
|                                       |
|                           control +----------> celery.app.control.Control
|                                       |
|                            events  +---------> celery.app.events.Events
|                                       |
|                            loader +----------> celery.loaders.app.AppLoader
|                                       |
|                              pool +----------> kombu.connection.ConnectionPool
|                                       |
|                     producer_pool +----------> kombu.pools.ProducerPool
|                                       |
|                             tasks +----------> TaskRegistry
|                                       |
|                                       |
+---------------------------------------+

0x05 The celery command

The overall entry point for Celery's commands is the celery function, located in celery/bin/celery.py.

An abridged version of the code is as follows:

@click.pass_context
def celery(ctx, app, broker, result_backend, loader, config, workdir,
           no_color, quiet, version):
    """Celery command entrypoint."""

    if loader:
        # Default app takes loader from this env (Issue #1066).
        os.environ['CELERY_LOADER'] = loader
    if broker:
        os.environ['CELERY_BROKER_URL'] = broker
    if result_backend:
        os.environ['CELERY_RESULT_BACKEND'] = result_backend
    if config:
        os.environ['CELERY_CONFIG_MODULE'] = config
    ctx.obj = CLIContext(app=app, no_color=no_color, workdir=workdir,
                         quiet=quiet)

    # User options
    worker.params.extend(ctx.obj.app.user_options.get('worker', []))
    beat.params.extend(ctx.obj.app.user_options.get('beat', []))
    events.params.extend(ctx.obj.app.user_options.get('events', []))

    for command in celery.commands.values():
        command.params.extend(ctx.obj.app.user_options.get('preload', []))

This function iterates over celery.commands and extends each command's params, as shown below. These commands are exactly the subcommands mentioned above:

celery.commands = 
 'report' = {CeleryCommand} <CeleryCommand report>
 'purge' = {CeleryCommand} <CeleryCommand purge>
 'call' = {CeleryCommand} <CeleryCommand call>
 'beat' = {CeleryDaemonCommand} <CeleryDaemonCommand beat>
 'list' = {Group} <Group list>
 'result' = {CeleryCommand} <CeleryCommand result>
 'migrate' = {CeleryCommand} <CeleryCommand migrate>
 'status' = {CeleryCommand} <CeleryCommand status>
 'worker' = {CeleryDaemonCommand} <CeleryDaemonCommand worker>
 'events' = {CeleryDaemonCommand} <CeleryDaemonCommand events>
 'inspect' = {CeleryCommand} <CeleryCommand inspect>
 'control' = {CeleryCommand} <CeleryCommand control>
 'graph' = {Group} <Group graph>
 'upgrade' = {Group} <Group upgrade>
 'logtool' = {Group} <Group logtool>
 'amqp' = {Group} <Group amqp>
 'shell' = {CeleryCommand} <CeleryCommand shell>
 'multi' = {CeleryCommand} <CeleryCommand multi>
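The registration mechanism itself is plain click. A minimal sketch of a group with one worker subcommand (toy names and options, not Celery's actual ones) shows both add_command and the standalone_mode=False invocation style used by app.start():

```python
import click

@click.group()
def celery():
    """Toy command entrypoint, mirroring celery/bin/celery.py."""

@click.command()
@click.option('--concurrency', default=1, type=int)
def worker(concurrency):
    click.echo(f'starting worker, concurrency={concurrency}')
    return concurrency

celery.add_command(worker)  # now `celery worker --concurrency=4` resolves here

# standalone_mode=False makes click return the command's result instead of
# calling sys.exit(), which is how app.start() invokes celery.main().
ret = celery.main(args=['worker', '--concurrency=4'], standalone_mode=False)
print(ret)  # 4
```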

0x06 The worker subcommand

The worker subcommand is one of the registered subcommands; it is what gets invoked when we pass the worker argument on the command line:

$ celery -A proj worker -l INFO -Q hipri,lopri

The worker subcommand inherits from click.BaseCommand and is defined in celery/bin/worker.py.

So the following call indirectly reaches the worker command:

celery.main(args=argv, standalone_mode=False)

Its definition is as follows:

def worker(ctx, hostname=None, pool_cls=None, app=None, uid=None, gid=None,
           loglevel=None, logfile=None, pidfile=None, statedb=None,
           **kwargs):
    """Start worker instance.

    Examples
    --------
    $ celery --app=proj worker -l INFO
    $ celery -A proj worker -l INFO -Q hipri,lopri
    $ celery -A proj worker --concurrency=4
    $ celery -A proj worker --concurrency=1000 -P eventlet
    $ celery worker --autoscale=10,0

    """
    app = ctx.obj.app
    maybe_drop_privileges(uid=uid, gid=gid)
    worker = app.Worker(
        hostname=hostname, pool_cls=pool_cls, loglevel=loglevel,
        logfile=logfile,  # node format handled by celery.app.log.setup
        pidfile=node_format(pidfile, hostname),
        statedb=node_format(statedb, hostname),
        no_color=ctx.obj.no_color,
        **kwargs)
    worker.start()
    return worker.exitcode

The flow so far is shown below: from the Celery application we reach the concrete worker command:

      +----------+
      |   User   |
      +----+-----+
           |
           |  worker_main
           |
           v
 +---------+------------+
 |        Celery        |
 |                      |
 |  Celery application  |
 |  celery/app/base.py  |
 |                      |
 +---------+------------+
           |
           |  celery.main
           |
           v
 +---------+------------+
 |  @click.pass_context |
 |       celery         |
 |                      |
 |                      |
 |    CeleryCommand     |
 | celery/bin/celery.py |
 |                      |
 +---------+------------+
           |
           |
           |
           v
+----------+------------+
|   @click.pass_context |
|        worker         |
|                       |
|                       |
|     WorkerCommand     |
| celery/bin/worker.py  |
+-----------------------+

0x07 Worker application

At this point, the function instantiates the app's Worker. The Worker application is the worker instance, and the app here is the instance of the Celery class defined earlier.

It is defined in celery/app/base.py:

@cached_property
def Worker(self):
    """Worker application.

    See Also:
        :class:`~@Worker`.
    """
    return self.subclass_with_self('celery.apps.worker:Worker')

Here, subclass_with_self uses Python's three-argument type() to dynamically generate a class with the desired attributes.

def subclass_with_self(self, Class, name=None, attribute='app',
                       reverse=None, keep_reduce=False, **kw):
    """Subclass an app-compatible class.
    """
    Class = symbol_by_name(Class)                    # import the class
    reverse = reverse if reverse else Class.__name__ # fall back to the class name if no value was passed

    def __reduce__(self):                            # called during pickling
        return _unpickle_appattr, (reverse, self.__reduce_args__())

    attrs = dict(
        {attribute: self},                           # default the `app` attribute to self
        __module__=Class.__module__,
        __doc__=Class.__doc__,
        **kw)                                        # fill in the attributes
    if not keep_reduce:
        attrs['__reduce__'] = __reduce__             # by default, install __reduce__ on the generated class

    return type(bytes_if_py2(name or Class.__name__), (Class,), attrs)  # build the class with type()
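The core trick, building a subclass at runtime with three-argument type(), can be shown in isolation. This is a simplified stand-in for subclass_with_self, with a toy Worker class, not Celery code:

```python
class Worker:
    """Stand-in for celery.apps.worker:Worker."""
    app = None

class CeleryApp:
    def subclass_with_self(self, Class, attribute='app'):
        # Dynamically create a subclass whose `app` attribute is this
        # instance: the same three-argument type() call Celery uses.
        return type(Class.__name__, (Class,),
                    {attribute: self,
                     '__module__': Class.__module__,
                     '__doc__': Class.__doc__})

app = CeleryApp()
BoundWorker = app.subclass_with_self(Worker)
print(BoundWorker.__name__,
      BoundWorker.app is app,
      issubclass(BoundWorker, Worker))  # Worker True True
```

This is why `app.Worker(...)` later yields a Worker instance that already knows its app without it being passed explicitly.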

At this point, the worker command has obtained an instance of celery.apps.worker:Worker, and it then calls that instance's start method. Let us first analyze how the Worker class is instantiated.

First, a recap:

Our execution started from the worker_main entry point, reached the Celery application, then entered the celery Command, and then the worker subcommand, as shown below:

                                     +----------------------+
      +----------+                   |  @cached_property    |
      |   User   |                   |      Worker          |
      +----+-----+            +--->  |                      |
           |                  |      |                      |
           |  worker_main     |      |  Worker application  |
           |                  |      |  celery/app/base.py  |
           v                  |      +----------------------+
 +---------+------------+     |
 |        Celery        |     |
 |                      |     |
 |  Celery application  |     |
 |  celery/app/base.py  |     |
 |                      |     |
 +---------+------------+     |
           |                  |
           |  celery.main     |
           |                  |
           v                  |
 +---------+------------+     |
 |  @click.pass_context |     |
 |       celery         |     |
 |                      |     |
 |                      |     |
 |    CeleryCommand     |     |
 | celery/bin/celery.py |     |
 |                      |     |
 +---------+------------+     |
           |                  |
           |                  |
           |                  |
           v                  |
+----------+------------+     |
|   @click.pass_context |     |
|        worker         |     |
|                       |     |
|                       |     |
|     WorkerCommand     |     |
| celery/bin/worker.py  |     |
+-----------+-----------+     |
            |                 |
            +-----------------+

Next we formally enter the worker; Celery refers to the worker's formal logic as "work as a program".

In the next article we will continue with the startup process of "work as a program".

0xFF References

Celery source code study (2): the multi-process model

A first look at how Celery works

Celery source code analysis: worker initialization (part 1)

Celery source code analysis: worker initialization (part 2)

Celery worker initialization: the DAG implementation

Python Celery: multiple workers, multiple queues, and scheduled tasks

A detailed Celery tutorial: the Worker

Using Celery

Celery source code analysis 1: overview of worker startup

Celery source code analysis 2: the worker's execution engine

Celery source code analysis 3: the implementation of the Task object

Celery source code analysis 4: the implementation of scheduled tasks

Celery source code analysis 5: remote control management

Celery source code analysis 6: the implementation of Events

Celery source code analysis 7: interaction between workers

Celery source code analysis 8: State and Result
