Phxrpc Coroutine Library Implementation
An analysis of the coroutine implementation in Phxrpc.
Phxrpc is not a large codebase; a day or two is enough to work through it, and it uses epoll, timers and coroutines together quite fluently. The framework is a single-process, multi-threaded model: the main thread is pinned to a CPU and runs the LoopAccept logic, accepting a new connection whenever hsha_server_->hsha_server_qos_.CanAccept() allows it; each accepted fd is then handed off round-robin via hsha_server_->server_unit_list_[idx_++]->AddAcceptedFd(accepted_fd) to one of the HshaServerUnit objects, after which the main thread goes back to accepting sockets. (The connection diagram from the original post is omitted here.)
An HshaServerUnit consists of a worker thread pool plus a scheduler thread; the scheduler thread also runs the IO logic, and a single thread may host multiple coroutines. When an HshaServerUnit is constructed it creates the thread pool; whether a worker thread runs in coroutine mode is decided by uthread_count_:
void Worker::Func() {
    if (uthread_count_ == 0) {
        ThreadMode();
    } else {
        UThreadMode();
    }
}

void Worker::UThreadMode() {
    worker_scheduler_ = new UThreadEpollScheduler(utherad_stack_size_, uthread_count_, true);
    assert(worker_scheduler_ != nullptr);
    worker_scheduler_->SetHandlerNewRequestFunc(bind(&Worker::HandlerNewRequestFunc, this));
    worker_scheduler_->RunForever();
}
The scheduler thread's main logic lives in Run; part of the source follows:
void HshaServerIO::RunForever() {
    scheduler_->SetHandlerAcceptedFdFunc(bind(&HshaServerIO::HandlerAcceptedFd, this));
    scheduler_->SetActiveSocketFunc(bind(&HshaServerIO::ActiveSocketFunc, this));
    scheduler_->RunForever();
}
bool UThreadEpollScheduler::Run() {
    ConsumeTodoList();

    struct epoll_event * events = (struct epoll_event *) calloc(max_task_, sizeof(struct epoll_event));

    int next_timeout = timer_.GetNextTimeout();

    for (; (run_forever_) || (!runtime_.IsAllDone());) {
        int nfds = epoll_wait(epoll_fd_, events, max_task_, 4);
        if (nfds != -1) {
            for (int i = 0; i < nfds; i++) {
                UThreadSocket_t * socket = (UThreadSocket_t *) events[i].data.ptr;
                socket->waited_events = events[i].events;

                runtime_.Resume(socket->uthread_id);
            }

            //for server mode
            if (active_socket_func_ != nullptr) {
                UThreadSocket_t * socket = nullptr;
                while ((socket = active_socket_func_()) != nullptr) {
                    runtime_.Resume(socket->uthread_id);
                }
            }

            //for server uthread worker
            if (handler_new_request_func_ != nullptr) {
                handler_new_request_func_();
            }

            if (handler_accepted_fd_func_ != nullptr) {
                handler_accepted_fd_func_();
            }

            if (closed_) {
                ResumeAll(UThreadEpollREvent_Close);
                break;
            }

            ConsumeTodoList();
            DealwithTimeout(next_timeout);
        } else if (errno != EINTR) {
            ResumeAll(UThreadEpollREvent_Error);
            break;
        }

        StatEpollwaitEvents(nfds);
    }

    free(events);

    return true;
}
The main logic above handles IO events and timeouts. epoll_wait is capped at 4 ms per iteration; when events fire, the owning coroutines are resumed. If there are responses ready to send, active_socket_func_ pulls them out of data_flow, attaches each one to its coroutine and resumes it; in worker coroutine mode, handler_new_request_func_ pulls new requests out of data_flow and hands them to coroutines; and handler_accepted_fd_func_ resumes coroutines for newly accepted connections, binding each one to the IOFunc handler:
void HshaServerIO::HandlerAcceptedFd() {
    lock_guard<mutex> lock(queue_mutex_);
    while (!accepted_fd_list_.empty()) {
        int accepted_fd = accepted_fd_list_.front();
        accepted_fd_list_.pop();
        scheduler_->AddTask(bind(&HshaServerIO::IOFunc, this, accepted_fd), nullptr);
    }
}
The rough flow: create a UThreadSocket_t for the accepted_fd, read the request from it (this wraps the socket buffer handling), push the request into data_flow and notify a worker, then yield out of the coroutine by waiting; once the worker has finished, the coroutine is switched back in and sends the response it got back. A few of the checks are omitted; part of the implementation follows:
void HshaServerIO::IOFunc(int accepted_fd) {
    UThreadSocket_t *socket{scheduler_->CreateSocket(accepted_fd)};
    UThreadTcpStream stream;
    stream.Attach(socket);
    UThreadSetSocketTimeout(*socket, config_->GetSocketTimeoutMS());

    while (true) {
        // ... (some stat/qos bookkeeping omitted)
        unique_ptr<BaseProtocolFactory> factory(
                BaseProtocolFactory::CreateFactory(stream));
        unique_ptr<BaseProtocol> protocol(factory->GenProtocol());
        BaseRequest *req{nullptr};
        ReturnCode ret{protocol->ServerRecv(stream, req)};
        if (ReturnCode::OK != ret) {
            delete req;
            break;
        }
        if (!data_flow_->CanPushRequest(config_->GetMaxQueueLength())) {
            delete req;
            break;
        }

        if (!hsha_server_qos_->CanEnqueue()) {
            delete req;
            break;
        }
        data_flow_->PushRequest((void *)socket, req);
        worker_pool_->Notify();
        UThreadSetArgs(*socket, nullptr);
        UThreadWait(*socket, config_->GetSocketTimeoutMS());
        if (UThreadGetArgs(*socket) == nullptr) {
            // no response arrived before the wait finished: detach and lazily destroy the socket
            socket = stream.DetachSocket();
            UThreadLazyDestory(*socket);
            break;
        }
        {
            BaseResponse *resp((BaseResponse *)UThreadGetArgs(*socket));
            if (!resp->fake()) {
                // is_keep_alive and version are set in code omitted above
                ret = resp->ModifyResp(is_keep_alive, version);
                ret = resp->Send(stream);
            }
            delete resp;
        }
        if (ReturnCode::OK != ret) {
            // ... (error accounting omitted)
        }
        if (!is_keep_alive || (ReturnCode::OK != ret)) {
            break;
        }
    }
}
Next comes timeout handling. The timer is implemented as a min-heap: GetNextTimeout returns the time until the next expiry; if nothing has expired yet the loop breaks, otherwise PopTimeout pops the expired node and its coroutine is resumed (a standalone sketch of such a timer follows the function below):
void UThreadEpollScheduler::DealwithTimeout(int & next_timeout) {
    while (true) {
        next_timeout = timer_.GetNextTimeout();
        if (next_timeout != 0) {
            break;
        }

        UThreadSocket_t * socket = timer_.PopTimeout();
        socket->waited_events = UThreadEpollREvent_Timeout;
        runtime_.Resume(socket->uthread_id);
    }
}
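The Timer class itself is not shown in this post. To make the min-heap behaviour concrete, here is a minimal self-contained sketch; MinHeapTimer, Add and the int-based uthread_id field are illustrative stand-ins, not phxrpc's actual Timer API, which stores UThreadSocket_t pointers and cooperates with the scheduler:

#include <chrono>
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;
using Ms = std::chrono::milliseconds;

struct TimerItem {
    Clock::time_point deadline;
    int uthread_id;                            // what to Resume on expiry
    bool operator>(const TimerItem &o) const { return deadline > o.deadline; }
};

class MinHeapTimer {
public:
    void Add(int uthread_id, int timeout_ms) {
        heap_.push({Clock::now() + Ms(timeout_ms), uthread_id});
    }
    // Milliseconds until the earliest deadline: 0 means "already expired",
    // -1 means "nothing pending" (the real scheduler then just uses its 4 ms wait).
    int GetNextTimeout() const {
        if (heap_.empty()) return -1;
        auto diff = std::chrono::duration_cast<Ms>(heap_.top().deadline - Clock::now()).count();
        return diff > 0 ? static_cast<int>(diff) : 0;
    }
    int PopTimeout() {                          // pop the expired item
        int id = heap_.top().uthread_id;
        heap_.pop();
        return id;
    }
private:
    std::priority_queue<TimerItem, std::vector<TimerItem>, std::greater<TimerItem>> heap_;
};

int main() {
    MinHeapTimer timer;
    timer.Add(7, 0);
    while (timer.GetNextTimeout() == 0) {       // same shape as DealwithTimeout
        std::printf("resume uthread %d\n", timer.PopTimeout());
    }
}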
That covers the processing flow of Run, which executes in the scheduler thread; a few details are left out. The cross-thread wake-up is built on a pipe whose read end is serviced by a coroutine:
void EpollNotifier :: Run() {
    assert(pipe(pipe_fds_) == 0);
    fcntl(pipe_fds_[1], F_SETFL, O_NONBLOCK);
    scheduler_->AddTask(std::bind(&EpollNotifier::Func, this), nullptr);
}

void EpollNotifier :: Func() {
    UThreadSocket_t * socket = scheduler_->CreateSocket(pipe_fds_[0], -1, -1, false);
    char tmp[2] = {0};
    while (true) {
        if (UThreadRead(*socket, tmp, 1, 0) < 0) {
            break;
        }
    }
    free(socket);
}

void EpollNotifier :: Notify() {
    ssize_t write_len = write(pipe_fds_[1], (void *)"a", 1);
    if (write_len < 0) {
        // ... (error logging omitted)
    }
}
void WorkerPool::Notify() {
    lock_guard<mutex> lock(mutex_);
    if (last_notify_idx_ == worker_list_.size()) {
        last_notify_idx_ = 0;
    }

    worker_list_[last_notify_idx_++]->Notify();
}

void Worker::Notify() {
    if (uthread_count_ == 0) {
        return;
    }

    worker_scheduler_->NotifyEpoll();
}
The lines above implement coroutine waiting and wake-up: each worker thread, when in coroutine mode, also has its own scheduler, independent of the plain thread mode. Whenever a request or response is queued, a single character is written to the pipe; the read end becomes readable, epoll_wait returns, and the follow-up processing runs. The whole mechanism is quite simple.
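As a standalone illustration of that wake-up path, the sketch below wires the same pieces together with raw epoll instead of the coroutine scheduler: a non-blocking pipe, a Notify that writes one byte, and a wait loop that wakes up and drains it. This is only a sketch of the pattern, not phxrpc code.

#include <sys/epoll.h>
#include <unistd.h>
#include <fcntl.h>
#include <cassert>
#include <cstdio>

int main() {
    int fds[2];
    assert(pipe(fds) == 0);
    fcntl(fds[1], F_SETFL, O_NONBLOCK);         // the writer must never block

    int efd = epoll_create1(0);
    struct epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = fds[0];
    epoll_ctl(efd, EPOLL_CTL_ADD, fds[0], &ev);

    // "Notify()": e.g. called after pushing a response into the queue.
    ssize_t n = write(fds[1], "a", 1);
    (void)n;

    // Scheduler side: epoll_wait returns because the read end became readable.
    struct epoll_event out{};
    int nfds = epoll_wait(efd, &out, 1, 1000);
    if (nfds == 1 && out.data.fd == fds[0]) {
        char tmp[1];
        ssize_t r = read(fds[0], tmp, 1);       // drain the byte, then handle the queues
        (void)r;
        std::printf("woken up, would now drain data_flow\n");
    }
    close(fds[0]); close(fds[1]); close(efd);
}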
Below are the accept/read/send implementations. The fd is set to non-blocking; each call registers the event it is interested in plus a timeout, then switches the coroutine out. When the event fires or the timeout hits, the resumed coroutine removes the registration, and if the interesting event did occur it carries on with accept/read/send. A standalone sketch of the same retry pattern follows these three wrappers:
int UThreadPoll(UThreadSocket_t & socket, int events, int * revents, int timeout_ms) {
    int ret = -1;
    socket.uthread_id = socket.scheduler->GetCurrUThread();
    socket.event.events = events;

    socket.scheduler->AddTimer(&socket, timeout_ms);
    epoll_ctl(socket.epoll_fd, EPOLL_CTL_ADD, socket.socket, &socket.event);

    socket.scheduler->YieldTask();

    epoll_ctl(socket.epoll_fd, EPOLL_CTL_DEL, socket.socket, &socket.event);
    socket.scheduler->RemoveTimer(socket.timer_id);

    *revents = socket.waited_events;

    if ((*revents) > 0) {
        if ((*revents) & events) {
            ret = 1;
        } else {
            errno = EINVAL;
            ret = 0;
        }
    } else if ((*revents) == UThreadEpollREvent_Timeout) {
        //timeout
        errno = ETIMEDOUT;
        ret = 0;
    } else if ((*revents) == UThreadEpollREvent_Error) {
        //error
        errno = ECONNREFUSED;
        ret = -1;
    } else {
        //active close
        errno = 0;
        ret = -1;
    }

    return ret;
}
int UThreadAccept(UThreadSocket_t & socket, struct sockaddr *addr, socklen_t *addrlen) {
    int ret = accept(socket.socket, addr, addrlen);
    if (ret < 0) {
        if (EAGAIN != errno && EWOULDBLOCK != errno) {
            return -1;
        }

        int revents = 0;
        if (UThreadPoll(socket, EPOLLIN, &revents, -1) > 0) {
            ret = accept(socket.socket, addr, addrlen);
        } else {
            ret = -1;
        }
    }

    return ret;
}

ssize_t UThreadRead(UThreadSocket_t & socket, void * buf, size_t len, int flags) {
    //int ret = read(socket.socket, buf, len);
    int ret = -1;

    //if (ret < 0 && EAGAIN == errno) {
    int revents = 0;
    if (UThreadPoll(socket, EPOLLIN, &revents, socket.socket_timeout_ms) > 0) {
        ret = read(socket.socket, buf, len);
    } else {
        ret = -1;
    }
    //}

    return ret;
}
ssize_t UThreadSend(UThreadSocket_t & socket, const void *buf, size_t len, int flags) {
    int ret = send(socket.socket, buf, len, flags);

    if (ret < 0 && EAGAIN == errno) {
        int revents = 0;
        if (UThreadPoll(socket, EPOLLOUT, &revents, socket.socket_timeout_ms) > 0) {
            ret = send(socket.socket, buf, len, flags);
        } else {
            ret = -1;
        }
    }

    return ret;
}
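All three wrappers follow the same shape: try the call, and if it would block, register interest and wait, then retry. The sketch below shows that shape with plain poll() in place of UThreadPoll's yield-to-scheduler step; set_nonblock and recv_with_wait are illustrative names, not part of phxrpc, and the snippet compiles as a standalone translation unit.

#include <sys/socket.h>
#include <poll.h>
#include <fcntl.h>
#include <cerrno>
#include <unistd.h>

// Put an fd into non-blocking mode, as the coroutine IO layer requires.
static int set_nonblock(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

// Try recv(); on EAGAIN/EWOULDBLOCK wait for readability with a timeout, then retry once.
static ssize_t recv_with_wait(int fd, void *buf, size_t len, int timeout_ms) {
    ssize_t n = recv(fd, buf, len, 0);
    if (n >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK)) {
        return n;                          // got data, peer closed, or a hard error
    }
    struct pollfd pfd{fd, POLLIN, 0};
    int ready = poll(&pfd, 1, timeout_ms); // a coroutine would yield to the scheduler here instead
    if (ready <= 0) {
        if (ready == 0) errno = ETIMEDOUT;
        return -1;
    }
    return recv(fd, buf, len, 0);          // readable now, retry
}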
WorkerLogic below is what actually processes a request, whether the worker runs as a plain thread or as a coroutine: it takes the message from data_flow, runs dispatch_ on it, pushes the response back into data_flow, and calls NotifyEpoll:
void Worker::WorkerLogic(void *args, BaseRequest *req, int queue_wait_time_ms) {
    BaseResponse *resp{req->GenResponse()};
    if (queue_wait_time_ms < MAX_QUEUE_WAIT_TIME_COST) {
        HshaServerStat::TimeCost time_cost;
        DispatcherArgs_t dispatcher_args(pool_->hsha_server_stat_->hsha_server_monitor_,
                                         worker_scheduler_, pool_->args_);
        pool_->dispatch_(req, resp, &dispatcher_args);
    } else {
        pool_->hsha_server_stat_->worker_drop_requests_++;
    }
    pool_->data_flow_->PushResponse(args, resp);
    pool_->scheduler_->NotifyEpoll();
    delete req;
}
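The data_flow_ object itself is not covered in this post; conceptually it is a pair of thread-safe queues, requests flowing one way and responses flowing back. The sketch below models one direction with a mutex and condition variable; FlowQueue, Push and Pop are illustrative names only, and in coroutine mode phxrpc replaces the blocking wait on the consumer side with the pipe-based NotifyEpoll wake-up shown earlier.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

template <typename T>
class FlowQueue {
public:
    // Producer side: push (socket, item) and wake one consumer,
    // playing the role of PushRequest/PushResponse plus Notify.
    void Push(void *socket, T item) {
        {
            std::lock_guard<std::mutex> lock(mu_);
            q_.emplace(socket, std::move(item));
        }
        cv_.notify_one();
    }
    // Consumer side: block until something is queued, then take it.
    std::pair<void *, T> Pop() {
        std::unique_lock<std::mutex> lock(mu_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        auto front = std::move(q_.front());
        q_.pop();
        return front;
    }
private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<std::pair<void *, T>> q_;
};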
Next is the coroutine implementation itself. Scheduling is driven by UThreadEpollScheduler, which was partly covered above, and coroutine lifetimes are managed by UThreadRuntime.
The implementation below is the per-coroutine private stack, allocated with memory mapping (mmap) and protected with guard pages:
class UThreadStackMemory {
public:
    UThreadStackMemory(const size_t stack_size, const bool need_protect = true);
    ~UThreadStackMemory();

    void * top();
    size_t size();

private:
    void * raw_stack_;
    void * stack_;
    size_t stack_size_;
    int need_protect_;
};
UThreadStackMemory :: UThreadStackMemory(const size_t stack_size, const bool need_protect) :
        raw_stack_(nullptr), stack_(nullptr), need_protect_(need_protect) {
    int page_size = getpagesize();
    if ((stack_size % page_size) != 0) {
        stack_size_ = (stack_size / page_size + 1) * page_size;
    } else {
        stack_size_ = stack_size;
    }

    if (need_protect) {
        raw_stack_ = mmap(NULL, stack_size_ + page_size * 2,
                PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        assert(raw_stack_ != nullptr);
        assert(mprotect(raw_stack_, page_size, PROT_NONE) == 0);
        assert(mprotect((void *)((char *)raw_stack_ + stack_size_ + page_size), page_size, PROT_NONE) == 0);
        stack_ = (void *)((char *)raw_stack_ + page_size);
    } else {
        raw_stack_ = mmap(NULL, stack_size_, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        assert(raw_stack_ != nullptr);
        stack_ = raw_stack_;
    }
}

UThreadStackMemory :: ~UThreadStackMemory() {
    int page_size = getpagesize();
    if (need_protect_) {
        assert(mprotect(raw_stack_, page_size, PROT_READ | PROT_WRITE) == 0);
        assert(mprotect((void *)((char *)raw_stack_ + stack_size_ + page_size), page_size, PROT_READ | PROT_WRITE) == 0);
        assert(munmap(raw_stack_, stack_size_ + page_size * 2) == 0);
    } else {
        assert(munmap(raw_stack_, stack_size_) == 0);
    }
}
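To see what the guard pages buy you, here is a standalone sketch of the same layout: a usable region of whole pages with one PROT_NONE page on each side, so running off either end of the coroutine stack faults immediately instead of silently corrupting a neighbouring stack. It mirrors the constructor above but is not phxrpc code.

#include <sys/mman.h>
#include <unistd.h>
#include <cassert>
#include <cstdio>

int main() {
    size_t page = static_cast<size_t>(getpagesize());
    size_t usable = 8 * page;                                  // already page-aligned, like the rounded stack_size_
    void *raw = mmap(nullptr, usable + 2 * page, PROT_READ | PROT_WRITE,
                     MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    assert(raw != MAP_FAILED);
    assert(mprotect(raw, page, PROT_NONE) == 0);                                       // low guard page
    assert(mprotect(static_cast<char *>(raw) + page + usable, page, PROT_NONE) == 0);  // high guard page
    void *stack_top = static_cast<char *>(raw) + page;         // what top() would return
    std::printf("usable stack: [%p, %p)\n", stack_top,
                static_cast<void *>(static_cast<char *>(stack_top) + usable));
    // Writing into either guard page here would raise SIGSEGV.
    munmap(raw, usable + 2 * page);
}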
Below are the coroutine context base class and its concrete derived class:
class UThreadContext {
public:
    UThreadContext() { }
    virtual ~UThreadContext() { }

    static UThreadContext * Create(size_t stack_size,
            UThreadFunc_t func, void * args,
            UThreadDoneCallback_t callback, const bool need_stack_protect);
    static void SetContextCreateFunc(ContextCreateFunc_t context_create_func);
    static ContextCreateFunc_t GetContextCreateFunc();

    virtual void Make(UThreadFunc_t func, void * args) = 0;
    virtual bool Resume() = 0;
    virtual bool Yield() = 0;

private:
    static ContextCreateFunc_t context_create_func_;
};

UThreadContext * UThreadContext :: Create(size_t stack_size,
        UThreadFunc_t func, void * args,
        UThreadDoneCallback_t callback, const bool need_stack_protect) {
    if (context_create_func_ != nullptr) {
        return context_create_func_(stack_size, func, args, callback, need_stack_protect);
    }
    return nullptr;
}
class UThreadContextSystem : public UThreadContext {
public:
    UThreadContextSystem(size_t stack_size, UThreadFunc_t func, void * args,
            UThreadDoneCallback_t callback, const bool need_stack_protect);
    ~UThreadContextSystem();

    static UThreadContext * DoCreate(size_t stack_size,
            UThreadFunc_t func, void * args, UThreadDoneCallback_t callback,
            const bool need_stack_protect);

    void Make(UThreadFunc_t func, void * args) override;
    bool Resume() override;
    bool Yield() override;

    ucontext_t * GetMainContext();

private:
    static void UThreadFuncWrapper(uint32_t low32, uint32_t high32);

    ucontext_t context_;
    UThreadFunc_t func_;
    void * args_;
    UThreadStackMemory stack_;
    UThreadDoneCallback_t callback_;
};

UThreadContextSystem :: UThreadContextSystem(size_t stack_size, UThreadFunc_t func, void * args,
        UThreadDoneCallback_t callback, const bool need_stack_protect)
        : func_(func), args_(args), stack_(stack_size, need_stack_protect), callback_(callback) {
    Make(func, args);
}

UThreadContext * UThreadContextSystem :: DoCreate(size_t stack_size,
        UThreadFunc_t func, void * args, UThreadDoneCallback_t callback,
        const bool need_stack_protect) {
    return new UThreadContextSystem(stack_size, func, args, callback, need_stack_protect);
}

ucontext_t * UThreadContextSystem :: GetMainContext() {
    static __thread ucontext_t main_context;
    return &main_context;
}
The implementation above is built on ucontext_t; a quick note on the functions it uses:
int getcontext(ucontext_t *ucp) saves the current execution context into a ucontext_t structure;
void makecontext(ucontext_t *ucp, void (*func)(void), int argc, ...) initializes a context: func is the entry function of that context, and any extra arguments passed to it must be of type int. Before calling it you normally call getcontext and then fill in the stack information (stack pointer and stack size) and the successor context;
int swapcontext(ucontext_t *oucp, ucontext_t *ucp) saves the current context into oucp and switches to ucp;
For the underlying details, see the reference material at the end.
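A minimal standalone example of these three calls, independent of phxrpc, showing one coroutine being entered, yielding back, being resumed, and finally returning through uc_link:

#include <ucontext.h>
#include <cstdio>

static ucontext_t main_ctx, co_ctx;
static char stack_mem[64 * 1024];              // private stack for the coroutine

static void CoFunc() {
    std::printf("in coroutine, first run\n");
    swapcontext(&co_ctx, &main_ctx);           // yield back to main
    std::printf("in coroutine, resumed\n");
    // returning here switches to uc_link, i.e. main_ctx
}

int main() {
    getcontext(&co_ctx);                       // start from the current state
    co_ctx.uc_stack.ss_sp = stack_mem;
    co_ctx.uc_stack.ss_size = sizeof(stack_mem);
    co_ctx.uc_link = &main_ctx;                // where to go when CoFunc returns
    makecontext(&co_ctx, CoFunc, 0);

    std::printf("main: resume\n");
    swapcontext(&main_ctx, &co_ctx);           // run CoFunc until it yields
    std::printf("main: coroutine yielded, resume again\n");
    swapcontext(&main_ctx, &co_ctx);           // let CoFunc finish
    std::printf("main: done\n");
    return 0;
}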
class UThreadRuntime {
public:
    UThreadRuntime(size_t stack_size, const bool need_stack_protect);
    ~UThreadRuntime();

    int Create(UThreadFunc_t func, void * args);
    int GetCurrUThread();
    bool Yield();
    bool Resume(size_t index);
    bool IsAllDone();
    int GetUnfinishedItemCount() const;

    void UThreadDoneCallback();

private:
    struct ContextSlot {
        ContextSlot() {
            context = nullptr;
            next_done_item = -1;
        }
        UThreadContext * context;
        int next_done_item;
        int status;
    };

    size_t stack_size_;
    std::vector<ContextSlot> context_list_;
    int first_done_item_;
    int current_uthread_;
    int unfinished_item_count_;
    bool need_stack_protect_;
};

UThreadRuntime :: UThreadRuntime(size_t stack_size, const bool need_stack_protect)
        :stack_size_(stack_size), first_done_item_(-1),
        current_uthread_(-1), unfinished_item_count_(0),
        need_stack_protect_(need_stack_protect) {
    if (UThreadContext::GetContextCreateFunc() == nullptr) {
        UThreadContext::SetContextCreateFunc(UThreadContextSystem::DoCreate);
    }
}
The above is essentially the pool that manages coroutines. How is it used? For example, in this snippet:
void UThreadEpollScheduler::ConsumeTodoList() {
    while (!todo_list_.empty()) {
        auto & it = todo_list_.front();
        int id = runtime_.Create(it.first, it.second);
        runtime_.Resume(id);

        todo_list_.pop();
    }
}
For each task a coroutine is created. If first_done_item_ points at a reusable slot, that coroutine is reused and first_done_item_ is advanced (next_done_item stores the index of the next free slot); Make is then called to set up and initialize the coroutine's context, including uc_link, i.e. the main context (main_context) that control returns to when the coroutine function finishes. UThreadFuncWrapper is the coroutine entry function: the this pointer is split into two 32-bit values and reassembled back into a pointer inside UThreadFuncWrapper before the real work runs. If a done callback was registered, UThreadDoneCallback is invoked; its job is roughly to return the coroutine slot to the vector and update the bookkeeping:
void UThreadContextSystem :: UThreadFuncWrapper(uint32_t low32, uint32_t high32) {
    uintptr_t ptr = (uintptr_t)low32 | ((uintptr_t) high32 << 32);
    UThreadContextSystem * uc = (UThreadContextSystem *)ptr;
    uc->func_(uc->args_);
    if (uc->callback_ != nullptr) {
        uc->callback_();
    }
}

void UThreadRuntime :: UThreadDoneCallback() {
    if (current_uthread_ != -1) {
        ContextSlot & context_slot = context_list_[current_uthread_];
        context_slot.next_done_item = first_done_item_;
        context_slot.status = UTHREAD_DONE;
        first_done_item_ = current_uthread_;
        unfinished_item_count_--;
        current_uthread_ = -1;
    }
}
If first_done_item_ is -1 there is no reusable coroutine, so a new coroutine object is created, initialized, and appended to the vector for management; a standalone sketch of this free-list pattern follows Create below:
int UThreadRuntime :: Create(UThreadFunc_t func, void * args) {
    if (func == nullptr) {
        return -2;
    }
    int index = -1;
    if (first_done_item_ >= 0) {
        index = first_done_item_;
        first_done_item_ = context_list_[index].next_done_item;
        context_list_[index].context->Make(func, args);
    } else {
        index = context_list_.size();
        auto new_context = UThreadContext::Create(stack_size_, func, args,
                std::bind(&UThreadRuntime::UThreadDoneCallback, this),
                need_stack_protect_);
        assert(new_context != nullptr);
        ContextSlot context_slot;
        context_slot.context = new_context;
        context_list_.push_back(context_slot);
    }

    context_list_[index].next_done_item = -1;
    context_list_[index].status = UTHREAD_SUSPEND;
    unfinished_item_count_++;
    return index;
}
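The recycling of finished slots through first_done_item_/next_done_item is just an index-based free list inside the vector. Here is a minimal standalone sketch of that pattern; SlotPool, Acquire and Release are illustrative names, with Release playing the role of UThreadDoneCallback:

#include <vector>
#include <cstdio>

struct Slot {
    int next_free = -1;     // index of the next reusable slot, -1 if none (next_done_item)
    bool in_use = false;
};

struct SlotPool {
    std::vector<Slot> slots;
    int first_free = -1;    // head of the free list (first_done_item_)

    int Acquire() {
        int index;
        if (first_free >= 0) {                      // reuse a finished slot
            index = first_free;
            first_free = slots[index].next_free;
        } else {                                    // otherwise grow the pool
            index = static_cast<int>(slots.size());
            slots.push_back(Slot{});
        }
        slots[index].next_free = -1;
        slots[index].in_use = true;
        return index;
    }

    void Release(int index) {                       // push the slot back onto the free list
        slots[index].in_use = false;
        slots[index].next_free = first_free;
        first_free = index;
    }
};

int main() {
    SlotPool pool;
    int a = pool.Acquire();   // 0
    int b = pool.Acquire();   // 1
    pool.Release(a);
    int c = pool.Acquire();   // reuses slot 0
    std::printf("%d %d %d\n", a, b, c);
}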
void UThreadContextSystem :: Make(UThreadFunc_t func, void * args) {
    func_ = func;
    args_ = args;
    getcontext(&context_);
    context_.uc_stack.ss_sp = stack_.top();
    context_.uc_stack.ss_size = stack_.size();
    context_.uc_stack.ss_flags = 0;
    context_.uc_link = GetMainContext();
    uintptr_t ptr = (uintptr_t)this;
    makecontext(&context_, (void (*)(void))UThreadContextSystem::UThreadFuncWrapper,
            2, (uint32_t)ptr, (uint32_t)(ptr >> 32));
}
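Make passes this through makecontext as two 32-bit halves because makecontext only forwards int-sized arguments; UThreadFuncWrapper reassembles them on the other side. A tiny standalone check of that split-and-rejoin:

#include <cstdint>
#include <cassert>
#include <cstdio>

int main() {
    int local = 42;
    void *p = &local;                                     // any 64-bit pointer
    uintptr_t v = reinterpret_cast<uintptr_t>(p);
    uint32_t low32  = static_cast<uint32_t>(v);           // passed as makecontext argument 1
    uint32_t high32 = static_cast<uint32_t>(v >> 32);     // passed as makecontext argument 2
    uintptr_t back = static_cast<uintptr_t>(low32) | (static_cast<uintptr_t>(high32) << 32);
    assert(reinterpret_cast<void *>(back) == p);          // rejoined inside UThreadFuncWrapper
    std::printf("pointer round-trips intact\n");
}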
Once a coroutine has been created, Resume switches into it: the status is set to running, the main context is switched out, and the target coroutine's context is switched in:
bool UThreadRuntime :: Resume(size_t index) {
    if (index >= context_list_.size()) {
        return false;
    }

    auto context_slot = context_list_[index];
    if (context_slot.status == UTHREAD_SUSPEND) {
        current_uthread_ = index;
        context_slot.status = UTHREAD_RUNNING;
        context_slot.context->Resume();
        return true;
    }
    return false;
}

bool UThreadContextSystem :: Resume() {
    swapcontext(GetMainContext(), &context_);
    return true;
}
When a coroutine has to wait for some resource it must give up the CPU to another coroutine, i.e. Yield: using the current coroutine index it fetches the slot, marks its status as suspended, and then yields:
bool UThreadRuntime :: Yield() {
    if (current_uthread_ != -1) {
        auto context_slot = context_list_[current_uthread_];
        current_uthread_ = -1;
        context_slot.status = UTHREAD_SUSPEND;
        context_slot.context->Yield();
    }

    return true;
}
bool UThreadContextSystem :: Yield() {
    swapcontext(&context_, GetMainContext());
    return true;
}
That is more or less the whole coroutine pool. I also plan to look at the implementation in libco later, which takes a different approach, and at the coroutines in pebble.
That concludes the rough analysis. Other pieces such as data_flow, the buffers and the timer may get their own write-up if time permits; the links below already cover some of them. In general, the way to understand a framework like this is to start from its overall model (multi-process versus single-process multi-threaded), then follow a message from its source through processing to the response, and only afterwards layer on the rest: statistics, load handling, how messages are processed, how requests are queued, and so on.
References:
https://blog.csdn.net/shanshanpt/article/details/55213287
https://blog.csdn.net/shanshanpt/article/details/55253379
https://blog.csdn.net/lmfqyj/article/details/79437157
https://blog.csdn.net/lmfqyj/article/details/79406952
http://man7.org/linux/man-pages/man2/mmap.2.html
https://segmentfault.com/p/1210000009166339/read
http://man7.org/linux/man-pages/man2/getcontext.2.html
https://segmentfault.com/a/1190000013177055