OpenRTMFP/Cumulus Primer (6): CumulusServer Startup Flow Analysis (Continued 2)
- Author: Poechant (Zhong Chao)
- Blog: Blog.CSDN.net/Poechant
- Email: zhongchao.ustc#gmail.com (# -> @)
- Date: April 14th, 2012
1 The server Object in main() of main.cpp
What main.cpp actually starts is server, which inherits from Cumulus::RTMFPServer. Cumulus::RTMFPServer in turn inherits from Cumulus::Startable, Cumulus::Gateway, and Cumulus::Handler. Since Cumulus::Startable inherits from Poco::Runnable, it is a runnable thread; in OpenRTMFP/CumulusServer, this is the main thread.
Server server(config().getString("application.dir", "./"), *this, config());
server.start(params);
This is defined in CumulusServer/Server.h; the constructor's prototype is:
Server(const std::string& root,
ApplicationKiller& applicationKiller,
const Poco::Util::AbstractConfiguration& configurations);
The three parameters are:
- root: the path root for the server application
- applicationKiller: used for terminating the server application
- configurations: the server configuration
For example, in my workspace:
- root is /Users/michael/Development/workspace/eclipse/OpenRTMFP-Cumulus/Debug/
The constructor's initializer list is rather long:
Server::Server(const std::string& root,
ApplicationKiller& applicationKiller,
const Util::AbstractConfiguration& configurations)
: _blacklist(root + "blacklist", *this),
_applicationKiller(applicationKiller),
_hasOnRealTime(true),
_pService(NULL),
luaMail(_pState=Script::CreateState(),
configurations.getString("smtp.host","localhost"),
configurations.getInt("smtp.port",SMTPSession::SMTP_PORT),
configurations.getInt("smtp.timeout",60)) {
Next, Poco::File is used to create a directory:
File((string&)WWWPath = root + "www").createDirectory();
Since root is the /Users/michael/Development/workspace/eclipse/OpenRTMFP-Cumulus/Debug/ directory, WWWPath is the /Users/michael/Development/workspace/eclipse/OpenRTMFP-Cumulus/Debug/www directory. Then GlobalTable is initialized. GlobalTable is Lua-related; for now it is enough to know it belongs to the Lua side, and we will not go into detail here.
Service::InitGlobalTable(_pState);
The next part involves Lua script:
SCRIPT_BEGIN(_pState)
SCRIPT_CREATE_PERSISTENT_OBJECT(Invoker,LUAInvoker,*this)
readNextConfig(_pState,configurations,"");
lua_setglobal(_pState,"cumulus.configs");
SCRIPT_END
}
Here SCRIPT_BEGIN, SCRIPT_CREATE_PERSISTENT_OBJECT and SCRIPT_END are all macros, defined in the Script.h file as follows:
#define SCRIPT_BEGIN(STATE) \
if (lua_State* __pState = STATE) { \
const char* __error=NULL;
#define SCRIPT_CREATE_PERSISTENT_OBJECT(TYPE,LUATYPE,OBJ) \
Script::WritePersistentObject<TYPE,LUATYPE>(__pState,OBJ); \
lua_pop(__pState,1);
#define SCRIPT_END }
SCRIPT_BEGIN and SCRIPT_END are used frequently: whenever Lua-related operations appear, they open and close the block.
2 server.start() Called from main() in main.cpp
void RTMFPServer::start(RTMFPServerParams& params) {
If OpenRTMFP/CumulusServer is already running, return and abort the start:
if(running()) {
ERROR("RTMFPServer server is yet running, call stop method before");
return;
}
Set the port number; if it is 0, return and abort the start:
_port = params.port;
if (_port == 0) {
ERROR("RTMFPServer port must have a positive value");
return;
}
Set the OpenRTMFP/CumulusEdge port number; if it equals the OpenRTMFP/CumulusServer port, return and abort the start:
_edgesPort = params.edgesPort;
if(_port == _edgesPort) {
ERROR("RTMFPServer port must different than RTMFPServer edges.port");
return;
}
Cirrus:
_freqManage = 2000000; // 2 sec by default
if(params.pCirrus) {
_pCirrus = new Target(*params.pCirrus);
_freqManage = 0; // no waiting, direct process in the middle case!
NOTE("RTMFPServer started in man-in-the-middle mode with server %s \
(unstable debug mode)", _pCirrus->address.toString().c_str());
}
middle:
_middle = params.middle;
if(_middle)
NOTE("RTMFPServer started in man-in-the-middle mode between peers \
(unstable debug mode)");
UDP Buffer:
(UInt32&)udpBufferSize =
params.udpBufferSize==0 ?
_socket.getReceiveBufferSize() : params.udpBufferSize;
_socket.setReceiveBufferSize(udpBufferSize);
_socket.setSendBufferSize(udpBufferSize);
_edgesSocket.setReceiveBufferSize(udpBufferSize);
_edgesSocket.setSendBufferSize(udpBufferSize);
DEBUG("Socket buffer receving/sending size = %u/%u",
udpBufferSize,
udpBufferSize);
(UInt32&)keepAliveServer =
params.keepAliveServer < 5 ? 5000 : params.keepAliveServer * 1000;
(UInt32&)keepAlivePeer =
params.keepAlivePeer < 5 ? 5000 : params.keepAlivePeer * 1000;
(UInt8&)edgesAttemptsBeforeFallback = params.edgesAttemptsBeforeFallback;
setPriority(params.threadPriority);
Start the thread and enter its run loop:
Startable::start();
}
Startable::start() is implemented as follows:
void Startable::start() {
if (running())
return;
If it is already running, return and abort the start. Then take a scoped lock:
ScopedLock<FastMutex> lock(_mutex);
If a previous run still has to be join()ed into the main thread, join() it first:
if(_haveToJoin) {
_thread.join();
_haveToJoin=false;
}
Then start the thread:
_terminate = false;
_thread.start(*this);
_haveToJoin = true;
}
- Please credit Poechant's CSDN blog when reposting: Blog.CSDN.net/Poechant