Supervisor Source Code Analysis
1. Overview of how it runs
Supervisor starts a main process and turns it into a daemon. The main process spawns the worker processes listed in the configuration file one by one, monitors their state, and communicates with the supervisorctl client, which is how the main process controls its children.
2. This analysis only covers the minimal core of supervisor:
1) supervisord starts the main process, daemonizes it, and spawns the child processes from the configuration
2) supervisord and supervisorctl communicate over RPC
3) child processes communicate with the main process through pipes
3. Source code analysis
This analysis is based on supervisor-2.0b1.
Starting from the command line:
[wuzi:] supervisord
At this point supervisord is running, so we begin the analysis from this command.
In the supervisord script:
#!/usr/bin/python
from supervisor.supervisord import main
main()
Execution starts in main(), whose core is:
while 1:
    # if we hup, restart by making a new Supervisor()
    # the test argument just makes it possible to unit test this code
    options = ServerOptions()
    d = Supervisor(options)
    d.main(None, test, first)
1) The ServerOptions class parses and normalizes the configuration file, initializes the server, daemonizes the main process, and creates and manages the child processes.
2) The Supervisor class implements the async-IO server loop and handles communication with child processes and RPC clients.
3) d.main(None, test, first) performs runtime initialization (arguments, logging, etc.), spawns the child processes, and starts the HTTP server.
The code of d.main(None, test, first) is:
def main(self, args=None, test=False, first=False):
    ....... (initialization code omitted)
    self.run(test)
def run(self, test=False):
    self.processes = {}
    for program in self.options.programs:
        name = program.name
        # create a subprocess instance for each program in the parsed config
        self.processes[name] = self.options.make_process(program)
    try:
        # write the pid file
        self.options.write_pidfile()
        # start the HTTP server
        self.options.openhttpserver(self)
        # install the registered signal handlers
        self.options.setsignals()
        # daemonize the main process unless nodaemon is set
        if not self.options.nodaemon:
            self.options.daemonize()
        # run the asynchronous IO server loop
        self.runforever(test)
    finally:
        self.options.cleanup()
def runforever(self, test=False):
    timeout = 1
    # get the registered socket descriptors
    socket_map = self.options.get_socket_map()
    while 1:
        # mood > 0 means the main process is running
        if self.mood > 0:
            self.start_necessary()
        r, w, x = [], [], []
        process_map = {}
        # process output fds
        # register the child process pipe descriptors
        for proc in self.processes.values():
            proc.log_output()
            drains = proc.get_pipe_drains()
            for fd, drain in drains:
                r.append(fd)
                process_map[fd] = drain
        # medusa i/o fds
        # register the network socket descriptors
        for fd, dispatcher in socket_map.items():
            if dispatcher.readable():
                r.append(fd)
            if dispatcher.writable():
                w.append(fd)
        # mood < 1 means the main process is stopping
        if self.mood < 1:
            if not self.stopping:
                self.stop_all()
                self.stopping = True
            # if there are no delayed processes (we're done killing
            # everything), it's OK to stop or reload
            delayprocs = self.get_delay_processes()
            if delayprocs:
                names = [ p.config.name for p in delayprocs]
                namestr = ', '.join(names)
                self.options.logger.info('waiting for %s to die' % namestr)
            else:
                break
        try:
            # wait on all registered file descriptors at once
            r, w, x = select.select(r, w, x, timeout)
        except select.error, err:
            if err[0] == errno.EINTR:
                self.options.logger.log(self.options.TRACE,
                    'EINTR encountered in select')
            else:
                raise
            r = w = x = []
        for fd in r:
            # a readable child process pipe
            if process_map.has_key(fd):
                drain = process_map[fd]
                # drain the file descriptor
                drain(fd)
            # a readable RPC client socket
            if socket_map.has_key(fd):
                try:
                    socket_map[fd].handle_read_event()
                except asyncore.ExitNow:
                    raise
                except:
                    socket_map[fd].handle_error()
        for fd in w:
            # a writable RPC client socket
            if socket_map.has_key(fd):
                try:
                    socket_map[fd].handle_write_event()
                except asyncore.ExitNow:
                    raise
                except:
                    socket_map[fd].handle_error()
        # give up on child processes that have repeatedly failed to start
        # (children are started via the instance's spawn() method, and some
        # programs may be configured with a start delay)
        self.give_up()
        # kill processes that should have been killed but are still alive
        self.kill_undead()
        # reap exited child processes
        self.reap()
        # handle pending signals
        self.handle_signal()
        if test:
            break
The important points in the loop above are:
1) understanding how a child process sends data to the main process
2) understanding how RPC requests from clients are handled
Supervisor handles the HTTP descriptors and the pipe descriptors in the same select() call.
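That single-select trick can be shown in a few lines. Below is a simplified sketch, not supervisor's code: a pipe stands in for a child's stdout, a socketpair stands in for an RPC client connection, and both descriptors go into one select() call.

```python
import os
import select
import socket

# One pipe (stands in for a child's stdout) and one socket pair
# (stands in for an HTTP/RPC client connection).
pipe_r, pipe_w = os.pipe()
sock_a, sock_b = socket.socketpair()

os.write(pipe_w, b"child output")
sock_b.sendall(b"rpc request")

results = {}
fd_map = {pipe_r: "pipe", sock_a.fileno(): "socket"}

# Both kinds of fd go into the same select() call, exactly the trick
# described above: pipes and sockets are both just file descriptors.
readable, _, _ = select.select(list(fd_map), [], [], 1.0)
for fd in readable:
    if fd_map[fd] == "pipe":
        results["pipe"] = os.read(fd, 4096)
    else:
        results["socket"] = sock_a.recv(4096)

os.close(pipe_r); os.close(pipe_w)
sock_a.close(); sock_b.close()
```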
I. Sending data through the pipes
A pipe on Linux carries data in one direction only; after a fork, each process closes the pipe ends it does not use, so a process that only reads a pipe closes its copy of the write end (otherwise the reader would never see EOF).
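A minimal sketch of this convention using plain os.pipe() and os.fork() (not supervisor's code): each side closes the end it does not use, and the parent sees EOF once the child's write end is gone.

```python
import os

# Create a pipe: r_fd is the read end, w_fd the write end.
r_fd, w_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: close the unused read end, write, then exit.
    os.close(r_fd)
    os.write(w_fd, b"hello from child\n")
    os.close(w_fd)
    os._exit(0)
else:
    # Parent: close the write end so read() sees EOF when the child exits.
    os.close(w_fd)
    data = b""
    while True:
        chunk = os.read(r_fd, 4096)
        if not chunk:          # EOF: every write end is closed
            break
        data += chunk
    os.close(r_fd)
    os.waitpid(pid, 0)
    print(data.decode(), end="")   # → hello from child
```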
First, go back to the run() function, which contains:
for program in self.options.programs:
    name = program.name
    # create a subprocess instance for each program in the parsed config
    self.processes[name] = self.options.make_process(program)
make_process() is in options.py, at line 1029:
def make_process(self, config):
    from supervisord import Subprocess
    return Subprocess(self, config)
The main method of the Subprocess class is spawn():
def spawn(self):
    """Start the subprocess.  It must not be running already.
    Return the process id.  If the fork() call fails, return 0.
    """
    pname = self.config.name
    # if the instance already has a pid, it is already running
    if self.pid:
        msg = 'process %r already running' % pname
        self.options.logger.critical(msg)
        return
    # reset the relevant state
    self.killing = 0
    self.spawnerr = None
    self.exitstatus = None
    self.system_stop = 0
    self.administrative_stop = 0
    # time of the last start attempt
    self.laststart = time.time()
    # get the configured command for this subprocess
    filename, argv, st = self.get_execv_args()
    # check that the command can actually be executed
    fail_msg = self.options.check_execv_args(filename, argv, st)
    if fail_msg is not None:
        self.record_spawnerr(fail_msg)
        return
    try:
        # create the pipes used to talk to the main process
        self.pipes = self.options.make_pipes()
    except OSError, why:
        code = why[0]
        if code == errno.EMFILE:
            # too many file descriptors open
            msg = 'too many open files to spawn %r' % pname
        else:
            msg = 'unknown error: %s' % errno.errorcode.get(code, code)
        self.record_spawnerr(msg)
        return
    try:
        # fork the child process
        pid = self.options.fork()
    except OSError, why:
        code = why[0]
        if code == errno.EAGAIN:
            # process table full
            msg = 'Too many processes in process table to spawn %r' % pname
        else:
            msg = 'unknown error: %s' % errno.errorcode.get(code, code)
        self.record_spawnerr(msg)
        self.options.close_pipes(self.pipes)
        return
    if pid != 0:
        # Parent
        self.pid = pid
        # in the parent, close the child's ends of the pipes
        for fdname in ('child_stdin', 'child_stdout', 'child_stderr'):
            self.options.close_fd(self.pipes[fdname])
        self.options.logger.info('spawned: %r with pid %s' % (pname, pid))
        self.spawnerr = None
        self.delay = time.time() + self.config.startsecs
        self.options.pidhistory[pid] = self
        return pid
    else:
        # Child
        try:
            # prevent child from receiving signals sent to the
            # parent by calling os.setpgrp to create a new process
            # group for the child; this prevents, for instance,
            # the case of child processes being sent a SIGINT when
            # running supervisor in foreground mode and Ctrl-C in
            # the terminal window running supervisord is pressed.
            # Presumably it also prevents HUP, etc received by
            # supervisord from being sent to children.
            self.options.setpgrp()
            # 0: redirect the child's stdin to the pipe
            self.options.dup2(self.pipes['child_stdin'], 0)
            # 1: redirect the child's stdout to the pipe
            self.options.dup2(self.pipes['child_stdout'], 1)
            # 2: redirect the child's stderr to the pipe
            self.options.dup2(self.pipes['child_stderr'], 2)
            # close every other descriptor the child inherited
            for i in range(3, self.options.minfds):
                self.options.close_fd(i)
            # sending to fd 1 will put this output in the log(s)
            msg = self.set_uid()
            if msg:
                self.options.write(
                    1, "%s: error trying to setuid to %s!\n" %
                    (pname, self.config.uid)
                    )
                self.options.write(1, "%s: %s\n" % (pname, msg))
            try:
                # replace the child's image with the configured command
                self.options.execv(filename, argv)
            except OSError, why:
                code = why[0]
                self.options.write(1, "couldn't exec %s: %s\n" % (
                    argv[0], errno.errorcode.get(code, code)))
            except:
                (file, fun, line), t,v,tbinfo = asyncore.compact_traceback()
                error = '%s, %s: file: %s line: %s' % (t, v, file, line)
                self.options.write(1, "couldn't exec %s: %s\n" % (filename,
                                                                  error))
            finally:
                # only reached if exec failed: exit the child
                self.options._exit(127)
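The essence of spawn() is the classic pipe + fork + dup2 + execv sequence. Here is a stripped-down sketch of that pattern; the helper name spawn_and_capture is made up for illustration, and unlike supervisor's real code it waits for the child to exit instead of reading incrementally from an event loop.

```python
import os

def spawn_and_capture(argv):
    """Minimal sketch of the spawn() pattern: pipe + fork + dup2 + execv.
    Returns the child's stdout as bytes. Not supervisor's real API."""
    r_fd, w_fd = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: route stdout into the pipe, drop the raw pipe ends.
        os.dup2(w_fd, 1)
        os.close(r_fd)
        os.close(w_fd)
        try:
            os.execv(argv[0], argv)
        finally:
            os._exit(127)   # reached only if execv itself failed
    # Parent: close the child's end, then drain the read end until EOF.
    os.close(w_fd)
    chunks = []
    while True:
        chunk = os.read(r_fd, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    os.close(r_fd)
    os.waitpid(pid, 0)
    return b"".join(chunks)

print(spawn_and_capture(["/bin/echo", "hello"]))  # → b'hello\n'
```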
The make_pipes() method is in options.py at line 1033:
def make_pipes(self):
    """ Create pipes for parent to child stdin/stdout/stderr
    communications.  Open fd in nonblocking mode so we can read them
    in the mainloop without blocking """
    pipes = {}
    try:
        # read and write ends of the pipe for the child's stdin
        pipes['child_stdin'], pipes['stdin'] = os.pipe()
        # read and write ends of the pipe for the child's stdout
        pipes['stdout'], pipes['child_stdout'] = os.pipe()
        # read and write ends of the pipe for the child's stderr
        pipes['stderr'], pipes['child_stderr'] = os.pipe()
        # make the parent's ends nonblocking, so that reads in the
        # async loop never stall the whole event loop
        for fd in (pipes['stdout'], pipes['stderr'], pipes['stdin']):
            fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | os.O_NDELAY)
        return pipes
    except OSError:
        self.close_pipes(pipes)
        raise
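The effect of the O_NDELAY (nowadays O_NONBLOCK) flag can be demonstrated directly. A standalone sketch, not supervisor's code: reading an empty nonblocking pipe fails immediately instead of blocking the loop.

```python
import fcntl
import os

r_fd, w_fd = os.pipe()

# Same trick as make_pipes(): OR the nonblocking flag into the
# descriptor's existing flags. (O_NDELAY is the older name for O_NONBLOCK.)
flags = fcntl.fcntl(r_fd, fcntl.F_GETFL)
fcntl.fcntl(r_fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    os.read(r_fd, 4096)       # empty pipe: would block, so it raises
    outcome = "blocked-read-returned"
except BlockingIOError:
    outcome = "would-block"

os.write(w_fd, b"data")
ready = os.read(r_fd, 4096)   # data available: returns immediately

os.close(r_fd); os.close(w_fd)
print(outcome, ready)         # → would-block b'data'
```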
Back in the runforever() loop shown above:
for proc in self.processes.values():
    proc.log_output()
    drains = proc.get_pipe_drains()
    for fd, drain in drains:
        r.append(fd)
        process_map[fd] = drain
proc is a process instance; get_pipe_drains() returns the drains for the child's stdout and stderr pipes:
def get_pipe_drains(self):
    if not self.pipes:
        return []
    drains = ( [ self.pipes['stdout'], self.drain_stdout],
               [ self.pipes['stderr'], self.drain_stderr] )
    return drains
where self.drain_stdout is:
def drain_stdout(self, *ignored):
    # read whatever is in the pipe and buffer it for logging
    output = self.options.readfd(self.pipes['stdout'])
    if self.config.log_stdout:
        self.logbuffer += output
for fd in r:
    if process_map.has_key(fd):
        drain = process_map[fd]
        # drain the file descriptor
        # drain is self.drain_stdout or self.drain_stderr
        drain(fd)
This completes the path of pipe data: child output is drained into the main process and buffered for logging.
II. Handling RPC events
The RPC handling is built on top of Python's asyncore and asynchat modules, so here is a brief look at both; the RPC code later relies on them.
Consider first a classic blocking server:
ser = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ser.bind(('127.0.0.1', 8000))
ser.listen(5)
while 1:
    client, addr = ser.accept()
    print 'accept %s connect' % (addr,)
    data = client.recv(1024)
    print data
    client.send('get')
    client.close()
The server initializes one socket, ser, whose only job is to accept new connections (client, addr = ser.accept()); each accepted connection client then receives data and sends the reply back over that same connection.
This suggests two roles: a dedicated ser that only accepts new connections, and a dedicated object that handles each accepted connection. They map onto the dispatcher class in asyncore.py, which accepts new connections, and the async_chat class in asynchat.py, which handles an accepted connection.
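asyncore has since been removed from the standard library (Python 3.12), but the same two-role split can be sketched with the stdlib selectors module. Listener and Handler below are hypothetical stand-ins for dispatcher and async_chat, not their real implementations:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

class Listener:
    """Stand-in for asyncore's dispatcher: its only job is to accept."""
    def __init__(self, sock):
        self.sock = sock
        sel.register(sock, selectors.EVENT_READ, self)
    def handle(self):
        conn, _addr = self.sock.accept()
        conn.setblocking(False)
        Handler(conn)          # hand the connection to its own object

class Handler:
    """Stand-in for async_chat: owns one accepted connection and echoes."""
    def __init__(self, conn):
        self.conn = conn
        sel.register(conn, selectors.EVENT_READ, self)
    def handle(self):
        data = self.conn.recv(4096)
        if data:
            self.conn.sendall(b"echo: " + data)
        else:                  # peer closed: unregister and close
            sel.unregister(self.conn)
            self.conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(5)
server.setblocking(False)
Listener(server)

# Drive a few loop iterations with one client to show both roles firing.
client = socket.create_connection(server.getsockname())
client.setblocking(False)
client.sendall(b"hi")

reply = b""
for _ in range(50):
    for key, _mask in sel.select(timeout=0.1):
        key.data.handle()      # dispatch to Listener or Handler
    try:
        reply = client.recv(4096)
        break
    except BlockingIOError:
        continue

print(reply)                   # → b'echo: hi'
sel.close(); client.close(); server.close()
```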
A brief look at asyncore.py
socket_map: the module-global dictionary of registered descriptors
poll(): the select-based loop body
poll2(): the poll/epoll-based loop body
loop(): picks the IO multiplexing mechanism for the current platform
The dispatcher class:
__init__(self, sock=None, map=None): with sock=None the instance acts as the listening ser; with a socket it handles that connection; the socket is set nonblocking and added to socket_map
add_channel(self, map=None): add this instance to socket_map
del_channel(self, map=None): remove this instance from socket_map
create_socket(self, family=socket.AF_INET, type=socket.SOCK_STREAM): create the listening socket, set it nonblocking, and add it to socket_map
set_socket(self, sock, map=None): add sock to socket_map
set_reuse_addr(self): allow the listening address to be rebound immediately after it is released
readable(self): whether this socket wants read events
writable(self): whether this socket wants write events
listen(self, num): set the listen backlog
bind(self, addr): bind the listening address
connect(self, address): connect to an address
accept(self): accept a new connection
send(self, data): send data
recv(self, buffer_size): receive data
close(self): close the connection
handle_read_event(self): handle a read event; accept if this is the listener, otherwise read from the connection
handle_connect_event(self): handle a newly established connection
handle_write_event(self): handle a write event on the connection
These are the main methods of dispatcher.
In asynchat.py, async_chat inherits from dispatcher. A closer look at that class:
class async_chat(asyncore.dispatcher):
    """This is an abstract class.  You must derive from this class, and add
    the two methods collect_incoming_data() and found_terminator()"""

    # these are overridable defaults
    # receive buffer size
    ac_in_buffer_size = 65536
    # send buffer size
    ac_out_buffer_size = 65536

    # we don't want to enable the use of encoding by default, because that is a
    # sign of an application bug that we don't want to pass silently
    use_encoding = 0
    encoding = 'latin-1'

    def __init__(self, sock=None, map=None):
        # for string terminator matching
        # initialize the receive buffer
        self.ac_in_buffer = b''

        # we use a list here rather than io.BytesIO for a few reasons...
        # del lst[:] is faster than bio.truncate(0)
        # lst = [] is faster than bio.truncate(0)
        # list of received data fragments
        self.incoming = []

        # we toss the use of the "simple producer" and replace it with
        # a pure deque, which the original fifo was a wrapping of
        # queue used when sending data
        self.producer_fifo = deque()
        # dispatcher's constructor sets sock nonblocking and adds it to socket_map
        asyncore.dispatcher.__init__(self, sock, map)

    # called with each chunk of received data
    def collect_incoming_data(self, data):
        raise NotImplementedError("must be implemented in subclass")

    def _collect_incoming_data(self, data):
        self.incoming.append(data)

    # join and return everything received so far
    def _get_data(self):
        d = b''.join(self.incoming)
        del self.incoming[:]
        return d

    # called whenever the terminator is found in the received data
    def found_terminator(self):
        raise NotImplementedError("must be implemented in subclass")

    # set the terminator to look for in the received data
    def set_terminator(self, term):
        """Set the input delimiter.
        Can be a fixed string of any length, an integer, or None.
        """
        if isinstance(term, str) and self.use_encoding:
            term = bytes(term, self.encoding)
        elif isinstance(term, int) and term < 0:
            raise ValueError('the number of received bytes must be positive')
        self.terminator = term

    # get the currently configured terminator
    def get_terminator(self):
        return self.terminator

    # grab some more data from the socket,
    # throw it to the collector method,
    # check for the terminator,
    # if found, transition to the next state.
    # handle a read event
    def handle_read(self):
        try:
            data = self.recv(self.ac_in_buffer_size)
        except BlockingIOError:
            return
        except OSError as why:
            self.handle_error()
            return

        if isinstance(data, str) and self.use_encoding:
            data = bytes(str, self.encoding)
        # append what was read to the receive buffer
        self.ac_in_buffer = self.ac_in_buffer + data

        # Continue to search for self.terminator in self.ac_in_buffer,
        # while calling self.collect_incoming_data.  The while loop
        # is necessary because we might read several data+terminator
        # combos with a single recv(4096).

        while self.ac_in_buffer:
            # length of the data received so far
            lb = len(self.ac_in_buffer)
            # the configured terminator
            terminator = self.get_terminator()
            if not terminator:
                # no terminator, collect it all
                # hand over everything received
                self.collect_incoming_data(self.ac_in_buffer)
                # and empty the receive buffer
                self.ac_in_buffer = b''
            # a numeric terminator means "read exactly this many bytes"
            elif isinstance(terminator, int):
                # numeric terminator
                n = terminator
                if lb < n:
                    self.collect_incoming_data(self.ac_in_buffer)
                    self.ac_in_buffer = b''
                    # subtract what was received from the remaining count
                    self.terminator = self.terminator - lb
                else:
                    # consume exactly n bytes
                    self.collect_incoming_data(self.ac_in_buffer[:n])
                    # keep anything beyond that
                    self.ac_in_buffer = self.ac_in_buffer[n:]
                    # reset
                    self.terminator = 0
                    self.found_terminator()
            else:
                # 3 cases:
                # 1) end of buffer matches terminator exactly:
                #    collect data, transition
                # 2) end of buffer matches some prefix:
                #    collect data to the prefix
                # 3) end of buffer does not match any prefix:
                #    collect data
                terminator_len = len(terminator)
                # look for the terminator in the receive buffer
                index = self.ac_in_buffer.find(terminator)
                if index != -1:
                    # we found the terminator
                    if index > 0:
                        # don't bother reporting the empty string
                        # (source of subtle bugs)
                        self.collect_incoming_data(self.ac_in_buffer[:index])
                    # keep the remaining data in the receive buffer
                    self.ac_in_buffer = self.ac_in_buffer[index+terminator_len:]
                    # This does the Right Thing if the terminator
                    # is changed here.
                    self.found_terminator()
                else:
                    # check for a prefix of the terminator
                    # does the receive buffer end with a prefix of the terminator?
                    index = find_prefix_at_end(self.ac_in_buffer, terminator)
                    if index:
                        if index != lb:
                            # we found a prefix, collect up to the prefix
                            # keep the partial terminator and stop for now
                            self.collect_incoming_data(self.ac_in_buffer[:-index])
                            self.ac_in_buffer = self.ac_in_buffer[-index:]
                        break
                    else:
                        # no prefix, collect it all
                        # hand over everything and reset the buffer
                        self.collect_incoming_data(self.ac_in_buffer)
                        self.ac_in_buffer = b''

    def handle_write(self):
        # flush pending outgoing data
        self.initiate_send()

    def handle_close(self):
        # close the connection
        self.close()

    def push(self, data):
        # queue outgoing data for sending
        if not isinstance(data, (bytes, bytearray, memoryview)):
            raise TypeError('data argument must be byte-ish (%r)',
                            type(data))
        sabs = self.ac_out_buffer_size
        # if the data is larger than the send buffer, split it into chunks
        if len(data) > sabs:
            for i in range(0, len(data), sabs):
                self.producer_fifo.append(data[i:i+sabs])
        else:
            self.producer_fifo.append(data)
        self.initiate_send()

    def push_with_producer(self, producer):
        self.producer_fifo.append(producer)
        self.initiate_send()

    def readable(self):
        "predicate for inclusion in the readable for select()"
        # cannot use the old predicate, it violates the claim of the
        # set_terminator method.
        # return (len(self.ac_in_buffer) <= self.ac_in_buffer_size)
        return 1

    def writable(self):
        "predicate for inclusion in the writable for select()"
        return self.producer_fifo or (not self.connected)

    def close_when_done(self):
        "automatically close this channel once the outgoing queue is empty"
        self.producer_fifo.append(None)

    def initiate_send(self):
        # keep sending while there are producers and the connection is open
        while self.producer_fifo and self.connected:
            first = self.producer_fifo[0]
            # handle empty string/buffer or None entry
            if not first:
                # an exhausted producer is dropped; a None entry means
                # everything has been sent, so close the connection
                del self.producer_fifo[0]
                if first is None:
                    self.handle_close()
                    return

            # handle classic producer behavior
            obs = self.ac_out_buffer_size
            try:
                data = first[:obs]
            except TypeError:
                data = first.more()
                if data:
                    self.producer_fifo.appendleft(data)
                else:
                    del self.producer_fifo[0]
                continue

            if isinstance(data, str) and self.use_encoding:
                # convert outgoing data to bytes
                data = bytes(data, self.encoding)

            # send the data
            try:
                num_sent = self.send(data)
            except OSError:
                self.handle_error()
                return

            if num_sent:
                if num_sent < len(data) or obs < len(first):
                    # partial send: requeue the unsent remainder
                    self.producer_fifo[0] = first[num_sent:]
                else:
                    del self.producer_fifo[0]
            return

    def discard_buffers(self):
        # Emergencies only!
        self.ac_in_buffer = b''
        del self.incoming[:]
        self.producer_fifo.clear()
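The core of handle_read()'s terminator scanning can be reduced to a small pure function. This is a simplified sketch that covers only the string-terminator case; it ignores numeric terminators and the collect_incoming_data/found_terminator callback interface:

```python
def split_on_terminator(buffer, terminator):
    """Sketch of handle_read()'s string-terminator scanning.

    Returns (messages, leftover): `messages` are the complete chunks found
    before each terminator; `leftover` is what remains, including any
    partial terminator prefix at the end of the buffer."""
    messages = []
    while True:
        index = buffer.find(terminator)
        if index == -1:        # no full terminator left in the buffer
            break
        messages.append(buffer[:index])
        buffer = buffer[index + len(terminator):]
    return messages, buffer

# Several "data + terminator" combos can arrive in a single recv() call:
msgs, rest = split_on_terminator(b"GET /RPC2\r\nHost: x\r\npartial", b"\r\n")
print(msgs, rest)   # → [b'GET /RPC2', b'Host: x'] b'partial'
```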
With a basic understanding of these two modules, we return to the call in run():
self.options.openhttpserver(self)
which calls make_http_server from http.py:
def openhttpserver(self, supervisord):
    from http import make_http_server
    try:
        self.httpserver = make_http_server(self, supervisord)
    except socket.error, why:
        if why[0] == errno.EADDRINUSE:
            port = str(self.http_port.address)
            self.usage('Another program is already listening on '
                       'the port that our HTTP server is '
                       'configured to use (%s).  Shut this program '
                       'down first before starting supervisord. ' %
                       port)
    except ValueError, why:
        self.usage(why[0])
In http.py:
def make_http_server(options, supervisord):
    if not options.http_port:
        return
    # the configured username and password
    username = options.http_username
    password = options.http_password

    class LogWrapper:
        def log(self, msg):
            if msg.endswith('\n'):
                msg = msg[:-1]
            options.logger.info(msg)
    wrapper = LogWrapper()

    family = options.http_port.family
    # listening on a TCP socket
    if family == socket.AF_INET:
        # this analysis focuses on the TCP case
        host, port = options.http_port.address
        # create the http server
        hs = supervisor_af_inet_http_server(host, port, logger_object=wrapper)
    # listening on a Unix domain socket
    elif family == socket.AF_UNIX:
        socketname = options.http_port.address
        sockchmod = options.sockchmod
        sockchown = options.sockchown
        hs = supervisor_af_unix_http_server(socketname, sockchmod, sockchown,
                                            logger_object=wrapper)
    else:
        raise ValueError('Cannot determine socket type %r' % family)

    from xmlrpc import supervisor_xmlrpc_handler
    from web import supervisor_ui_handler

    # the rpc handler this analysis is about
    xmlrpchandler = supervisor_xmlrpc_handler(supervisord)
    tailhandler = logtail_handler(supervisord)
    here = os.path.abspath(os.path.dirname(__file__))
    templatedir = os.path.join(here, 'ui')
    filesystem = filesys.os_filesystem(templatedir)
    uihandler = supervisor_ui_handler(filesystem, supervisord)

    if username:
        # wrap the xmlrpc handler and tailhandler in an authentication handler
        users = {username:password}
        from medusa.auth_handler import auth_handler
        xmlrpchandler = auth_handler(users, xmlrpchandler)
        tailhandler = auth_handler(users, tailhandler)
        uihandler = auth_handler(users, uihandler)
    else:
        options.logger.critical('Running without any HTTP authentication '
                                'checking')
    # register the handlers with the server
    hs.install_handler(uihandler)
    hs.install_handler(tailhandler)
    hs.install_handler(xmlrpchandler)
    return hs
Let's look at supervisor_af_inet_http_server. It inherits from supervisor_http_server, which inherits from http_server.http_server, which inherits from asyncore.dispatcher, so hs plays the accepting role from the earlier example: its job is to handle each new incoming connection.
class http_server (asyncore.dispatcher):

    SERVER_IDENT = 'HTTP Server (V%s)' % VERSION_STRING

    channel_class = http_channel

    def handle_accept (self):
        self.total_clients.increment()
        try:
            # accept the new connection
            conn, addr = self.accept()
        except socket.error:
            # linux: on rare occasions we get a bogus socket back from
            # accept.  socketmodule.c:makesockaddr complains that the
            # address family is unknown.  We don't want the whole server
            # to shut down because of this.
            self.log_info ('warning: server accept() threw an exception', 'warning')
            return
        except TypeError:
            # unpack non-sequence.  this can happen when a read event
            # fires on a listening socket, but when we call accept()
            # we get EWOULDBLOCK, so dispatcher.accept() returns None.
            # Seen on FreeBSD3.
            self.log_info ('warning: server accept() threw EWOULDBLOCK', 'warning')
            return
        # wrap the new connection in a channel instance
        self.channel_class (self, conn, addr)
supervisor_http_server defines:
channel_class = deferring_http_channel
so each new connection is handled by deferring_http_channel.
deferring_http_channel inherits from http_server.http_channel, and http_server.http_channel inherits from asynchat.async_chat.
When the connection has readable data, handle_read() fires; once the data has been appended to the receive buffer and a terminator is found, found_terminator() is called.
Let's look at found_terminator() in deferring_http_channel:
def found_terminator (self):
    """ We only override this to use 'deferring_http_request' class
    instead of the normal http_request class; it sucks to need to override
    this """
    # if a request is already in progress, keep feeding it
    if self.current_request:
        self.current_request.found_terminator()
    # otherwise this is the start of a new request
    else:
        # the data received so far is the header block
        header = self.in_buffer
        # reset the receive buffer
        self.in_buffer = ''
        # split the header block into lines
        lines = string.split (header, '\r\n')

        # --------------------------------------------------
        # crack the request header
        # --------------------------------------------------

        while lines and not lines[0]:
            # as per the suggestion of http-1.1 section 4.1, (and
            # Eric Parker <eparker@zyvex.com>), ignore a leading
            # blank lines (buggy browsers tack it onto the end of
            # POST requests)
            lines = lines[1:]

        if not lines:
            self.close_when_done()
            return

        # the request line
        request = lines[0]

        # method, uri and version from the request line
        command, uri, version = http_server.crack_request (request)
        # the remaining header lines
        header = http_server.join_headers (lines[1:])

        # unquote path if necessary (thanks to Skip Montanaro for pointing
        # out that we must unquote in piecemeal fashion).
        rpath, rquery = http_server.splitquery(uri)
        if '%' in rpath:
            if rquery:
                uri = http_server.unquote (rpath) + '?' + rquery
            else:
                uri = http_server.unquote (rpath)

        # build an http_request instance
        r = deferring_http_request (self, request, command, uri, version,
                                    header)

        self.request_counter.increment()
        self.server.total_requests.increment()

        if command is None:
            self.log_info ('Bad HTTP request: %s' % repr(request), 'error')
            r.error (400)
            return

        # --------------------------------------------------
        # handler selection and dispatch
        # --------------------------------------------------
        # match the request against the installed handlers
        for h in self.server.handlers:
            # each handler's match() method decides; the rpc handler
            # matches rpc requests
            if h.match (r):
                try:
                    # remember the request in progress
                    self.current_request = r
                    # This isn't used anywhere.
                    # r.handler = h # CYCLE
                    # let the handler process the request
                    h.handle_request (r)
                except:
                    self.server.exceptions.increment()
                    (file, fun, line), t, v, tbinfo = \
                        asyncore.compact_traceback()
                    self.log_info(
                        'Server Error: %s, %s: file: %s line: %s' %
                        (t,v,file,line),
                        'error')
                    try:
                        r.error (500)
                    except:
                        pass
                return

        # no handlers, so complain
        r.error (404)
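The handler-selection loop boils down to "first match() wins, otherwise 404". A toy sketch of that contract; Request, XMLRPCHandler and UIHandler below are made-up stand-ins for the medusa classes:

```python
class Request:
    """Hypothetical stand-in for medusa's http_request: a URI plus a status."""
    def __init__(self, uri):
        self.uri = uri
        self.status = None
        self.handled_by = None
    def error(self, code):
        self.status = code

class XMLRPCHandler:
    """Matches the way xmlrpc_handler.match() does: by URI prefix."""
    def match(self, request):
        return request.uri[:5] == '/RPC2'
    def handle_request(self, request):
        request.status = 200
        request.handled_by = 'xmlrpc'

class UIHandler:
    def match(self, request):
        return request.uri.startswith('/index')
    def handle_request(self, request):
        request.status = 200
        request.handled_by = 'ui'

def dispatch(handlers, request):
    # same shape as the loop in found_terminator(): first match wins
    for h in handlers:
        if h.match(request):
            h.handle_request(request)
            return
    request.error(404)   # no handlers, so complain

handlers = [XMLRPCHandler(), UIHandler()]
r1 = Request('/RPC2')
r2 = Request('/index.html')
r3 = Request('/nope')
for r in (r1, r2, r3):
    dispatch(handlers, r)
print(r1.handled_by, r2.handled_by, r3.status)   # → xmlrpc ui 404
```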
Since this analysis only covers the rpc handler: supervisor_xmlrpc_handler inherits from xmlrpc_handler, which matches a request by its URI:
def match (self, request):
    # Note: /RPC2 is not required by the spec, so you may override this method.
    if request.uri[:5] == '/RPC2':
        return 1
    else:
        return 0
class supervisor_xmlrpc_handler(xmlrpc_handler):
    def __init__(self, supervisord):
        # the rpc interface object; rpc clients control the main
        # process through its methods
        self.rpcinterface = RPCInterface(supervisord)
        # the supervisord instance
        self.supervisord = supervisord

    def continue_request (self, data, request):
        logger = self.supervisord.options.logger
        try:
            # parse the posted body into a method name and parameters
            params, method = xmlrpclib.loads(data)
            try:
                logger.debug('XML-RPC method called: %s()' % method)
                # call the method
                value = self.call(method, params)
                logger.debug('XML-RPC method %s() returned successfully' %
                             method)
            except RPCError, err:
                # turn RPCError reported by method into a Fault instance
                value = xmlrpclib.Fault(err.code, err.text)
                logger.warn('XML-RPC method %s() returned fault: [%d] %s' % (
                    method,
                    err.code, err.text))

            if isinstance(value, types.FunctionType):
                # returning a function from an RPC method implies that
                # this needs to be a deferred response (it needs to block).
                pushproducer = request.channel.push_with_producer
                pushproducer(DeferredXMLRPCResponse(request, value))
            else:
                # if we get anything but a function, it implies that this
                # response doesn't need to be deferred, we can service it
                # right away.
                # marshal the result and send it back
                body = xmlrpc_marshal(value)
                request['Content-Type'] = 'text/xml'
                request['Content-Length'] = len(body)
                # push the body onto the channel's outgoing queue
                # (see request.push in the source for details)
                request.push(body)
                # finish the response; the data is then sent out
                # (see request.done in the source for details)
                request.done()
        except:
            io = StringIO.StringIO()
            traceback.print_exc(file=io)
            val = io.getvalue()
            logger.critical(val)
            # internal error, report as HTTP server error
            request.error(500)

    def call(self, method, params):
        # dispatch to a method of rpcinterface
        return traverse(self.rpcinterface, method, params)
traverse() simply resolves a dotted method name on an object and calls the result:
def traverse(ob, method, params):
    path = method.split('.')
    for name in path:
        if name.startswith('_'):
            # security (don't allow things that start with an underscore to
            # be called remotely)
            raise RPCError(Faults.UNKNOWN_METHOD)
        ob = getattr(ob, name, None)
        if ob is None:
            raise RPCError(Faults.UNKNOWN_METHOD)
    try:
        return ob(*params)
    except TypeError:
        raise RPCError(Faults.INCORRECT_PARAMETERS)
This completes the handling of an rpc request.
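traverse() is plain dotted-attribute dispatch plus an underscore guard. A self-contained sketch of the same logic; FakeRPCError, FakeFaults and the interface classes are stand-ins for supervisor's RPCError, Faults and RPCInterface:

```python
class FakeFaults:
    """Stand-in for supervisor's Faults constants."""
    UNKNOWN_METHOD = 1
    INCORRECT_PARAMETERS = 2

class FakeRPCError(Exception):
    """Stand-in for supervisor's RPCError."""
    def __init__(self, code):
        self.code = code

def traverse(ob, method, params):
    # walk the dotted name, refusing private attributes, then call the result
    for name in method.split('.'):
        if name.startswith('_'):
            # security: never expose names starting with an underscore
            raise FakeRPCError(FakeFaults.UNKNOWN_METHOD)
        ob = getattr(ob, name, None)
        if ob is None:
            raise FakeRPCError(FakeFaults.UNKNOWN_METHOD)
    try:
        return ob(*params)
    except TypeError:
        raise FakeRPCError(FakeFaults.INCORRECT_PARAMETERS)

class SupervisorNamespace:
    """Hypothetical interface object exposing one RPC method."""
    def getProcessInfo(self, name):
        return {'name': name, 'state': 'RUNNING'}

class Interface:
    supervisor = SupervisorNamespace()

print(traverse(Interface(), 'supervisor.getProcessInfo', ['worker']))
# → {'name': 'worker', 'state': 'RUNNING'}
```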
Summary
This article only sketched the basic principles of how supervisor runs; many other features are beyond its scope. To help with understanding, here is a simple supervisor-like demo implemented in Python: [supervisor demo](https://github.com/xiaowuzidaxia/sp_demo)