In the previous nginx article we walked through the whole execution flow of the HTTP serving side (what I loosely called the forward proxy): how a request enters through the event mechanism, how the headers and body are parsed, how the various phase checkers are walked, and how that serving path is implemented in detail. That already gives us a fairly deep picture of nginx. The core matters, of course, but it is the extension modules that make nginx so attractive, and those extensions are practically endless. That is both a blessing and a curse: a blessing because there is so much functionality, a curse because we cannot possibly dig into every module.
In my view nginx has at least two must-have capabilities: the HTTP server (the forward side covered earlier) and the HTTP reverse proxy (forwarding requests to other services). Having sorted out the former, it is time to climb the other mountain.
0. The reverse proxy in plain words
A so-called reverse proxy does not serve content itself; it only plays the role of a middleman. When someone sends it a request, it forwards that request to a target server according to known rules, and once the work is done it relays the result back to the client. As far as the client can tell, nginx is the target server. What do we gain from this? Quite a lot; to name a few things: it hides the differences between many heterogeneous backend servers, so callers do not have to worry about switching between them and can keep their focus on the business; it hides the various internal firewall restrictions, since callers only need good network connectivity to nginx itself; and it gives a single, convenient place to manage backends and switch them.
The reverse proxy sounds great, and at first glance it is just a forwarding function, hardly a difficult job. But is that really so?
To judge how hard it is, we should think about the business requirements such a proxy server has to meet:
1. It must be able to forward to any server;
2. It must be able to handle any HTTP request (not just GET/POST);
3. It must preserve information about the request source (such as the client IP);
4. It must be high-performance and support high concurrency.
The first few items look basic and not particularly hard, and the last one sounds vague but reasonable. Yet if you alone had to build a high-performance, highly concurrent system, it probably would not be simple. And here high performance and high concurrency are hard requirements: this component serves as the unified entry point, and if it cannot keep up, no amount of capability downstream will help.
Also, a general-purpose proxy server will be changed all the time, so supporting dynamic configuration updates is another problem. We would normally build configuration on top of a database, but introducing a database brings plenty of unknowns, and building on other configuration sources may not be as convenient.
In short, there are not many reverse proxy servers that are genuinely good to use, and that is not without reason.
1. nginx reverse proxy configuration
To set up a reverse proxy, all you need is a proxy_pass directive inside an http server block. (You can of course configure many different proxies and server blocks keyed by URI prefix.)
http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       8085;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location /tohello {
            # forward the request to another server
            proxy_pass http://localhost:8081/hello;
            # preserve information about the requesting client
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location / {
            root   html;
            index  index.html index.htm;
        }
    }

    # any number of additional server blocks can be added here
}
Simple, right? The core really is just a few directives: the listening port (listen), the host name (server_name), and the forwarding rule (proxy_pass). That simplicity is clearly one of the reasons nginx succeeded.
The scenario for this article: when I request http://localhost:8085/tohello/getUsers?pageNum=1&pageSize=2, what I actually want to reach is the service behind nginx. How does nginx pull that off?
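Before digging into the source, here is a tiny standalone C sketch of the URI mapping that a proxy_pass with a URI part implies. This is my own toy illustration, not nginx code (nginx works on ngx_str_t buffers and does not allocate per-request strings like this); it only shows that the matched location prefix /tohello is swapped for the proxy_pass URI /hello, so the backend should see /hello/getUsers?pageNum=1&pageSize=2.

/* Toy illustration only: how a location prefix is swapped for the
 * proxy_pass URI. The mapping is the idea; the mechanics in nginx differ. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Replace the matched location prefix with the proxy_pass URI part. */
static char *map_uri(const char *request_uri,
                     const char *location_prefix,
                     const char *proxy_uri)
{
    size_t plen = strlen(location_prefix);

    if (strncmp(request_uri, location_prefix, plen) != 0) {
        return NULL;                        /* location does not match */
    }

    const char *rest = request_uri + plen;  /* "/getUsers?pageNum=1&pageSize=2" */
    char *out = malloc(strlen(proxy_uri) + strlen(rest) + 1);
    if (out == NULL) {
        return NULL;
    }

    strcpy(out, proxy_uri);
    strcat(out, rest);
    return out;
}

int main(void)
{
    char *mapped = map_uri("/tohello/getUsers?pageNum=1&pageSize=2",
                           "/tohello", "/hello");
    printf("%s\n", mapped);   /* prints /hello/getUsers?pageNum=1&pageSize=2 */
    free(mapped);
    return 0;
}

Keeping this mapping in mind makes the request-building code we meet later (ngx_http_proxy_create_request) easier to read.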
2. Registering the proxy module
The whole reverse proxy feature is essentially gathered in proxy_module, or more precisely, all of its entry points live in proxy_module. Compared with the static file module, however, proxy is considerably more complex, and so is its registration. It looks like this:
// http/modules/ngx_http_proxy_module.c
ngx_module_t  ngx_http_proxy_module;

// definitions of all supported directives; the full list is in the next code block
static ngx_command_t  ngx_http_proxy_commands[] = {

    { ngx_string("proxy_pass"),
      NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_HTTP_LMT_CONF|NGX_CONF_TAKE1,
      ngx_http_proxy_pass,
      NGX_HTTP_LOC_CONF_OFFSET,
      0,
      NULL },

    { ngx_string("proxy_set_header"),
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2,
      ngx_conf_set_keyval_slot,
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_proxy_loc_conf_t, headers_source),
      NULL },

    ...

      ngx_null_command
};

static ngx_http_module_t  ngx_http_proxy_module_ctx = {
    ngx_http_proxy_add_variables,          /* preconfiguration */
    NULL,                                  /* postconfiguration */

    ngx_http_proxy_create_main_conf,       /* create main configuration */
    NULL,                                  /* init main configuration */

    NULL,                                  /* create server configuration */
    NULL,                                  /* merge server configuration */

    ngx_http_proxy_create_loc_conf,        /* create location configuration */
    ngx_http_proxy_merge_loc_conf          /* merge location configuration */
};

// the exported module definition
ngx_module_t  ngx_http_proxy_module = {
    NGX_MODULE_V1,
    &ngx_http_proxy_module_ctx,            /* module context */
    ngx_http_proxy_commands,               /* module directives */
    NGX_HTTP_MODULE,                       /* module type */
    NULL,                                  /* init master */
    NULL,                                  /* init module */
    NULL,                                  /* init process */
    NULL,                                  /* init thread */
    NULL,                                  /* exit thread */
    NULL,                                  /* exit process */
    NULL,                                  /* exit master */
    NGX_MODULE_V1_PADDING
};
The full list of directives is shown below.
// 所有支援的操作命令定義 static ngx_command_t ngx_http_proxy_commands[] = { { ngx_string("proxy_pass"), NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_HTTP_LMT_CONF|NGX_CONF_TAKE1, ngx_http_proxy_pass, NGX_HTTP_LOC_CONF_OFFSET, 0, NULL }, { ngx_string("proxy_redirect"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12, ngx_http_proxy_redirect, NGX_HTTP_LOC_CONF_OFFSET, 0, NULL }, { ngx_string("proxy_cookie_domain"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12, ngx_http_proxy_cookie_domain, NGX_HTTP_LOC_CONF_OFFSET, 0, NULL }, { ngx_string("proxy_cookie_path"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12, ngx_http_proxy_cookie_path, NGX_HTTP_LOC_CONF_OFFSET, 0, NULL }, { ngx_string("proxy_store"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_http_proxy_store, NGX_HTTP_LOC_CONF_OFFSET, 0, NULL }, { ngx_string("proxy_store_access"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE123, ngx_conf_set_access_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.store_access), NULL }, { ngx_string("proxy_buffering"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.buffering), NULL }, { ngx_string("proxy_request_buffering"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.request_buffering), NULL }, { ngx_string("proxy_ignore_client_abort"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.ignore_client_abort), NULL }, { ngx_string("proxy_bind"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12, ngx_http_upstream_bind_set_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.local), NULL }, { ngx_string("proxy_socket_keepalive"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.socket_keepalive), NULL }, { ngx_string("proxy_connect_timeout"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_msec_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.connect_timeout), NULL }, { ngx_string("proxy_send_timeout"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_msec_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.send_timeout), NULL }, { ngx_string("proxy_send_lowat"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_size_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.send_lowat), &ngx_http_proxy_lowat_post }, { ngx_string("proxy_intercept_errors"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.intercept_errors), NULL }, { ngx_string("proxy_set_header"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2, ngx_conf_set_keyval_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, headers_source), NULL }, { ngx_string("proxy_headers_hash_max_size"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_num_slot, 
NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, headers_hash_max_size), NULL }, { ngx_string("proxy_headers_hash_bucket_size"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_num_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, headers_hash_bucket_size), NULL }, { ngx_string("proxy_set_body"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, body_source), NULL }, { ngx_string("proxy_method"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_http_set_complex_value_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, method), NULL }, { ngx_string("proxy_pass_request_headers"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_request_headers), NULL }, { ngx_string("proxy_pass_request_body"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_request_body), NULL }, { ngx_string("proxy_buffer_size"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_size_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.buffer_size), NULL }, { ngx_string("proxy_read_timeout"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_msec_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.read_timeout), NULL }, { ngx_string("proxy_buffers"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2, ngx_conf_set_bufs_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.bufs), NULL }, { ngx_string("proxy_busy_buffers_size"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_size_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.busy_buffers_size_conf), NULL }, { ngx_string("proxy_force_ranges"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.force_ranges), NULL }, { ngx_string("proxy_limit_rate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_size_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.limit_rate), NULL }, #if (NGX_HTTP_CACHE) { ngx_string("proxy_cache"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_http_proxy_cache, NGX_HTTP_LOC_CONF_OFFSET, 0, NULL }, { ngx_string("proxy_cache_key"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_http_proxy_cache_key, NGX_HTTP_LOC_CONF_OFFSET, 0, NULL }, { ngx_string("proxy_cache_path"), NGX_HTTP_MAIN_CONF|NGX_CONF_2MORE, ngx_http_file_cache_set_slot, NGX_HTTP_MAIN_CONF_OFFSET, offsetof(ngx_http_proxy_main_conf_t, caches), &ngx_http_proxy_module }, { ngx_string("proxy_cache_bypass"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, ngx_http_set_predicate_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_bypass), NULL }, { ngx_string("proxy_no_cache"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, ngx_http_set_predicate_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, 
upstream.no_cache), NULL }, { ngx_string("proxy_cache_valid"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, ngx_http_file_cache_valid_set_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_valid), NULL }, { ngx_string("proxy_cache_min_uses"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_num_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_min_uses), NULL }, { ngx_string("proxy_cache_max_range_offset"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_off_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_max_range_offset), NULL }, { ngx_string("proxy_cache_use_stale"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, ngx_conf_set_bitmask_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_use_stale), &ngx_http_proxy_next_upstream_masks }, { ngx_string("proxy_cache_methods"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, ngx_conf_set_bitmask_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_methods), &ngx_http_upstream_cache_method_mask }, { ngx_string("proxy_cache_lock"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock), NULL }, { ngx_string("proxy_cache_lock_timeout"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_msec_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_timeout), NULL }, { ngx_string("proxy_cache_lock_age"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_msec_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_age), NULL }, { ngx_string("proxy_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_revalidate), NULL }, { ngx_string("proxy_cache_convert_head"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_convert_head), NULL }, { ngx_string("proxy_cache_background_update"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_background_update), NULL }, #endif { ngx_string("proxy_temp_path"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1234, ngx_conf_set_path_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.temp_path), NULL }, { ngx_string("proxy_max_temp_file_size"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_size_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.max_temp_file_size_conf), NULL }, { ngx_string("proxy_temp_file_write_size"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_size_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.temp_file_write_size_conf), NULL }, { ngx_string("proxy_next_upstream"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, ngx_conf_set_bitmask_slot, NGX_HTTP_LOC_CONF_OFFSET, 
offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream), &ngx_http_proxy_next_upstream_masks }, { ngx_string("proxy_next_upstream_tries"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_num_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream_tries), NULL }, { ngx_string("proxy_next_upstream_timeout"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_msec_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream_timeout), NULL }, { ngx_string("proxy_pass_header"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_array_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_headers), NULL }, { ngx_string("proxy_hide_header"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_array_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.hide_headers), NULL }, { ngx_string("proxy_ignore_headers"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, ngx_conf_set_bitmask_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.ignore_headers), &ngx_http_upstream_ignore_headers_masks }, { ngx_string("proxy_http_version"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_enum_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, http_version), &ngx_http_proxy_http_version }, #if (NGX_HTTP_SSL) { ngx_string("proxy_ssl_session_reuse"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse), NULL }, { ngx_string("proxy_ssl_protocols"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, ngx_conf_set_bitmask_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, ssl_protocols), &ngx_http_proxy_ssl_protocols }, { ngx_string("proxy_ssl_ciphers"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, ssl_ciphers), NULL }, { ngx_string("proxy_ssl_name"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_http_set_complex_value_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_name), NULL }, { ngx_string("proxy_ssl_server_name"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_server_name), NULL }, { ngx_string("proxy_ssl_verify"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify), NULL }, { ngx_string("proxy_ssl_verify_depth"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_num_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, ssl_verify_depth), NULL }, { ngx_string("proxy_ssl_trusted_certificate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, ssl_trusted_certificate), NULL }, { ngx_string("proxy_ssl_crl"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, NGX_HTTP_LOC_CONF_OFFSET, 
offsetof(ngx_http_proxy_loc_conf_t, ssl_crl), NULL }, { ngx_string("proxy_ssl_certificate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate), NULL }, { ngx_string("proxy_ssl_certificate_key"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate_key), NULL }, { ngx_string("proxy_ssl_password_file"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, ngx_http_proxy_ssl_password_file, NGX_HTTP_LOC_CONF_OFFSET, 0, NULL }, #endif ngx_null_command };
The proxy module exposes a very large number of directives, which means a very large number of configurable options: the various proxying behaviours, custom variables, caching, cookies, SSL, upstream settings and so on. That is what makes this module so complex.
We are not going to cover all of it (we could not anyway). We only want the broad strokes: how the headers are set and how the request is forwarded.
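One detail worth noticing before moving on: almost every entry in the table pairs a generic setter (ngx_conf_set_flag_slot, ngx_conf_set_msec_slot, and so on) with an offsetof(...) into the module's configuration struct, so the table stays declarative. The following standalone C program is my own simplified illustration of that pattern; demo_loc_conf_t, demo_command_t and the setters are made-up names, not nginx types.

/* Standalone illustration of the offsetof-based "conf slot" pattern used by
 * the command table above: a generic setter plus a per-directive offset. */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int    buffering;        /* a flag-style option        */
    long   connect_timeout;  /* a numeric option, in msec  */
} demo_loc_conf_t;

typedef struct {
    const char *name;                                        /* directive name  */
    void (*set)(void *conf, size_t offset, const char *value);
    size_t      offset;                                      /* storage offset  */
} demo_command_t;

/* generic setters, analogous to ngx_conf_set_flag_slot / ngx_conf_set_msec_slot */
static void set_flag_slot(void *conf, size_t offset, const char *value)
{
    *(int *) ((char *) conf + offset) = (value[0] == 'o' && value[1] == 'n');
}

static void set_msec_slot(void *conf, size_t offset, const char *value)
{
    *(long *) ((char *) conf + offset) = strtol(value, NULL, 10);
}

static demo_command_t commands[] = {
    { "proxy_buffering",       set_flag_slot, offsetof(demo_loc_conf_t, buffering) },
    { "proxy_connect_timeout", set_msec_slot, offsetof(demo_loc_conf_t, connect_timeout) },
};

int main(void)
{
    demo_loc_conf_t conf = { 0, 0 };

    /* pretend the parser saw: proxy_buffering on; proxy_connect_timeout 60000; */
    commands[0].set(&conf, commands[0].offset, "on");
    commands[1].set(&conf, commands[1].offset, "60000");

    printf("buffering=%d connect_timeout=%ld\n", conf.buffering, conf.connect_timeout);
    return 0;
}

One generic setter can therefore serve dozens of directives, which is exactly why the huge table above still stays readable.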
3. The core proxy implementation
Proxy handling is one branch of content handling, so it is still driven by the core content phase (ngx_http_core_content_phase). The difference is that it is invoked through r->content_handler instead of running as an ordinary phase handler.
// http/ngx_http_core_module.c
ngx_int_t
ngx_http_core_content_phase(ngx_http_request_t *r,
    ngx_http_phase_handler_t *ph)
{
    size_t     root;
    ngx_int_t  rc;
    ngx_str_t  path;

    if (r->content_handler) {
        r->write_event_handler = ngx_http_request_empty_handler;
        // invoke content_handler, which performs the forwarding
        ngx_http_finalize_request(r, r->content_handler(r));
        return NGX_OK;
    }

    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "content phase: %ui", r->phase_handler);

    rc = ph->handler(r);

    if (rc != NGX_DECLINED) {
        ngx_http_finalize_request(r, rc);
        return NGX_OK;
    }

    /* rc == NGX_DECLINED */

    ph++;

    if (ph->checker) {
        r->phase_handler++;
        return NGX_AGAIN;
    }

    /* no content handler was found */

    if (r->uri.data[r->uri.len - 1] == '/') {

        if (ngx_http_map_uri_to_path(r, &path, &root, 0) != NULL) {
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                          "directory index of \"%s\" is forbidden", path.data);
        }

        ngx_http_finalize_request(r, NGX_HTTP_FORBIDDEN);
        return NGX_OK;
    }

    ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no handler found");

    ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND);
    return NGX_OK;
}
Given the registration above, the proxy_pass directive handler (ngx_http_proxy_pass) installs ngx_http_proxy_handler as the location's content handler (ngx_http_proxy_merge_loc_conf also installs it in some inherited cases), and for requests matching that location this becomes r->content_handler.
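For reference, here is a trimmed sketch of ngx_http_proxy_pass reduced to just the handler installation; the real function also parses the target URL, fills in plcf->vars and handles variables, so consult the source for the full body.

/* trimmed sketch: how proxy_pass wires the location to the proxy module */
static char *
ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_http_core_loc_conf_t  *clcf;

    /* ... parse the proxy_pass argument, set up the upstream / plcf->vars ... */

    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);

    /* every request matching this location is handled by the proxy module */
    clcf->handler = ngx_http_proxy_handler;

    if (clcf->name.data[clcf->name.len - 1] == '/') {
        clcf->auto_redirect = 1;
    }

    return NGX_CONF_OK;
}

From this point on, a matching request skips the usual static file handling and goes straight into the proxy module.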
// http/modules/ngx_http_proxy_module.c // 代理功能入口 static ngx_int_t ngx_http_proxy_handler(ngx_http_request_t *r) { ngx_int_t rc; ngx_http_upstream_t *u; ngx_http_proxy_ctx_t *ctx; ngx_http_proxy_loc_conf_t *plcf; #if (NGX_HTTP_CACHE) ngx_http_proxy_main_conf_t *pmcf; #endif // 建立upstream, 即轉發流準備 if (ngx_http_upstream_create(r) != NGX_OK) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_proxy_ctx_t)); if (ctx == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } // 將建立好的上下文資訊賦給 r->ctx 中 // r->ctx[ngx_http_proxy_module.ctx_index] = ctx; ngx_http_set_ctx(r, ctx, ngx_http_proxy_module); plcf = ngx_http_get_module_loc_conf(r, ngx_http_proxy_module); // 設定upstream 資訊 u = r->upstream; if (plcf->proxy_lengths == NULL) { // {key_start = {len = 21, data = 0x8000b4110 "http://localhost:8081/hello"}, // schema = {len = 7, data = 0x8000b4110 "http://localhost:8081/hello"}, // host_header = {len = 14, data = 0x8000b4117 "localhost:8081/hello"}, // port = {len = 4, data = 0x8000b4121 "8081/hello"}, // uri = {len = 6, data = 0x8000b4125 "/hello"}} ctx->vars = plcf->vars; u->schema = plcf->vars.schema; #if (NGX_HTTP_SSL) u->ssl = (plcf->upstream.ssl != NULL); #endif } else { if (ngx_http_proxy_eval(r, ctx, plcf) != NGX_OK) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } } u->output.tag = (ngx_buf_tag_t) &ngx_http_proxy_module; u->conf = &plcf->upstream; #if (NGX_HTTP_CACHE) pmcf = ngx_http_get_module_main_conf(r, ngx_http_proxy_module); u->caches = &pmcf->caches; u->create_key = ngx_http_proxy_create_key; #endif u->create_request = ngx_http_proxy_create_request; u->reinit_request = ngx_http_proxy_reinit_request; u->process_header = ngx_http_proxy_process_status_line; u->abort_request = ngx_http_proxy_abort_request; u->finalize_request = ngx_http_proxy_finalize_request; r->state = 0; // 重定向設定 if (plcf->redirects) { u->rewrite_redirect = ngx_http_proxy_rewrite_redirect; } if (plcf->cookie_domains || plcf->cookie_paths) { u->rewrite_cookie = ngx_http_proxy_rewrite_cookie; } u->buffering = plcf->upstream.buffering; u->pipe = ngx_pcalloc(r->pool, sizeof(ngx_event_pipe_t)); if (u->pipe == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } u->pipe->input_filter = ngx_http_proxy_copy_filter; u->pipe->input_ctx = r; u->input_filter_init = ngx_http_proxy_input_filter_init; u->input_filter = ngx_http_proxy_non_buffered_copy_filter; u->input_filter_ctx = r; u->accel = 1; if (!plcf->upstream.request_buffering && plcf->body_values == NULL && plcf->upstream.pass_request_body && (!r->headers_in.chunked || plcf->http_version == NGX_HTTP_VERSION_11)) { r->request_body_no_buffering = 1; } // 重要: 讀取客戶端請求資料 // ngx_http_upstream_init 被作為處理器傳入 rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init); if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { return rc; } return NGX_DONE; }
The whole flow so far looks more like preparation; we have not actually seen any forwarding happen yet. It is also clearly tied very closely to upstream, so understanding proxy will probably mean understanding upstream as well. The upstream creation itself (shown next) is very simple, so the heavier work has been pushed into ngx_http_read_client_request_body.
// http/ngx_http_upstream.c
ngx_int_t
ngx_http_upstream_create(ngx_http_request_t *r)
{
    ngx_http_upstream_t  *u;

    u = r->upstream;

    if (u && u->cleanup) {
        r->main->count++;
        ngx_http_upstream_cleanup(r);
    }

    u = ngx_pcalloc(r->pool, sizeof(ngx_http_upstream_t));
    if (u == NULL) {
        return NGX_ERROR;
    }

    r->upstream = u;

    u->peer.log = r->connection->log;
    u->peer.log_error = NGX_ERROR_ERR;

#if (NGX_HTTP_CACHE)
    r->cache = NULL;
#endif

    // header lengths start at -1 so they can be told apart later
    u->headers_in.content_length_n = -1;
    u->headers_in.last_modified_time = -1;

    return NGX_OK;
}
On to the next piece.
4. Request forwarding in detail
Once control passes to the generic HTTP upstream machinery, things change a little. The rough flow is: ngx_http_read_client_request_body -> ngx_http_upstream_init -> ngx_http_upstream_init_request -> ngx_http_upstream_connect -> ngx_http_upstream_send_request -> ngx_http_upstream_send_request_body -> ngx_handle_write_event -> write the data to the target server -> asynchronously wait for the target server's response -> respond to the client.
// http/ngx_http_request_body.c // 通用讀取請求並處理流程 ngx_int_t ngx_http_read_client_request_body(ngx_http_request_t *r, ngx_http_client_body_handler_pt post_handler) { size_t preread; ssize_t size; ngx_int_t rc; ngx_buf_t *b; ngx_chain_t out; ngx_http_request_body_t *rb; ngx_http_core_loc_conf_t *clcf; r->main->count++; if (r != r->main || r->request_body || r->discard_body) { r->request_body_no_buffering = 0; post_handler(r); return NGX_OK; } if (ngx_http_test_expect(r) != NGX_OK) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; goto done; } rb = ngx_pcalloc(r->pool, sizeof(ngx_http_request_body_t)); if (rb == NULL) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; goto done; } /* * set by ngx_pcalloc(): * * rb->bufs = NULL; * rb->buf = NULL; * rb->free = NULL; * rb->busy = NULL; * rb->chunked = NULL; */ rb->rest = -1; rb->post_handler = post_handler; r->request_body = rb; if (r->headers_in.content_length_n < 0 && !r->headers_in.chunked) { r->request_body_no_buffering = 0; // header為-1, proxy 會直接走此處, 即轉身 upstream 處理 post_handler(r); return NGX_OK; } ... } // http/ngx_http_upstream.c void ngx_http_upstream_init(ngx_http_request_t *r) { ngx_connection_t *c; c = r->connection; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "http init upstream, client timer: %d", c->read->timer_set); #if (NGX_HTTP_V2) if (r->stream) { ngx_http_upstream_init_request(r); return; } #endif if (c->read->timer_set) { ngx_del_timer(c->read); } if (ngx_event_flags & NGX_USE_CLEAR_EVENT) { if (!c->write->active) { if (ngx_add_event(c->write, NGX_WRITE_EVENT, NGX_CLEAR_EVENT) == NGX_ERROR) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } } } ngx_http_upstream_init_request(r); } // http/ngx_http_upstream.c static void ngx_http_upstream_init_request(ngx_http_request_t *r) { ngx_str_t *host; ngx_uint_t i; ngx_resolver_ctx_t *ctx, temp; ngx_http_cleanup_t *cln; ngx_http_upstream_t *u; ngx_http_core_loc_conf_t *clcf; ngx_http_upstream_srv_conf_t *uscf, **uscfp; ngx_http_upstream_main_conf_t *umcf; if (r->aio) { return; } u = r->upstream; #if (NGX_HTTP_CACHE) if (u->conf->cache) { ngx_int_t rc; rc = ngx_http_upstream_cache(r, u); if (rc == NGX_BUSY) { r->write_event_handler = ngx_http_upstream_init_request; return; } r->write_event_handler = ngx_http_request_empty_handler; if (rc == NGX_ERROR) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } if (rc == NGX_OK) { rc = ngx_http_upstream_cache_send(r, u); if (rc == NGX_DONE) { return; } if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_DECLINED; r->cached = 0; u->buffer.start = NULL; u->cache_status = NGX_HTTP_CACHE_MISS; u->request_sent = 1; } } if (rc != NGX_DECLINED) { ngx_http_finalize_request(r, rc); return; } } #endif u->store = u->conf->store; if (!u->store && !r->post_action && !u->conf->ignore_client_abort) { r->read_event_handler = ngx_http_upstream_rd_check_broken_connection; r->write_event_handler = ngx_http_upstream_wr_check_broken_connection; } if (r->request_body) { u->request_bufs = r->request_body->bufs; } // 建立代理請求, 此處為 ngx_http_proxy_create_request // {len = 3, data = 0x8000a9700 "GET /tohello/getUsers?pageNum=1&pageSize=2 HTTP/1.1\r\nHost"} if (u->create_request(r) != NGX_OK) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } if (ngx_http_upstream_set_local(r, u, u->conf->local) != NGX_OK) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } if (u->conf->socket_keepalive) { u->peer.so_keepalive = 1; } clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); u->output.alignment = 
clcf->directio_alignment; u->output.pool = r->pool; u->output.bufs.num = 1; u->output.bufs.size = clcf->client_body_buffer_size; if (u->output.output_filter == NULL) { u->output.output_filter = ngx_chain_writer; u->output.filter_ctx = &u->writer; } u->writer.pool = r->pool; if (r->upstream_states == NULL) { r->upstream_states = ngx_array_create(r->pool, 1, sizeof(ngx_http_upstream_state_t)); if (r->upstream_states == NULL) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } } else { u->state = ngx_array_push(r->upstream_states); if (u->state == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t)); } // 清理資料 cln = ngx_http_cleanup_add(r, 0); if (cln == NULL) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } cln->handler = ngx_http_upstream_cleanup; cln->data = r; u->cleanup = &cln->handler; if (u->resolved == NULL) { uscf = u->conf->upstream; } else { #if (NGX_HTTP_SSL) u->ssl_name = u->resolved->host; #endif host = &u->resolved->host; umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module); uscfp = umcf->upstreams.elts; for (i = 0; i < umcf->upstreams.nelts; i++) { uscf = uscfp[i]; if (uscf->host.len == host->len && ((uscf->port == 0 && u->resolved->no_port) || uscf->port == u->resolved->port) && ngx_strncasecmp(uscf->host.data, host->data, host->len) == 0) { goto found; } } if (u->resolved->sockaddr) { if (u->resolved->port == 0 && u->resolved->sockaddr->sa_family != AF_UNIX) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no port in upstream \"%V\"", host); ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } if (ngx_http_upstream_create_round_robin_peer(r, u->resolved) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } ngx_http_upstream_connect(r, u); return; } if (u->resolved->port == 0) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no port in upstream \"%V\"", host); ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } temp.name = *host; ctx = ngx_resolve_start(clcf->resolver, &temp); if (ctx == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } if (ctx == NGX_NO_RESOLVER) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no resolver defined to resolve %V", host); ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY); return; } ctx->name = *host; ctx->handler = ngx_http_upstream_resolve_handler; ctx->data = r; ctx->timeout = clcf->resolver_timeout; u->resolved->ctx = ctx; if (ngx_resolve_name(ctx) != NGX_OK) { u->resolved->ctx = NULL; ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } return; } found: if (uscf == NULL) { ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, "no upstream configuration"); ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } u->upstream = uscf; #if (NGX_HTTP_SSL) u->ssl_name = uscf->host; #endif // 初始化連線點資料 // 預設為: ngx_http_upstream_init_round_robin_peer if (uscf->peer.init(r, uscf) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } u->peer.start_time = ngx_current_msec; if (u->conf->next_upstream_tries && u->peer.tries > u->conf->next_upstream_tries) { u->peer.tries = u->conf->next_upstream_tries; } // 連線到upstream 中, 預設是 round-robin ngx_http_upstream_connect(r, u); } // http/ngx_http_upstream.c static void 
ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) { ngx_int_t rc; ngx_connection_t *c; r->connection->log->action = "connecting to upstream"; if (u->state && u->state->response_time == (ngx_msec_t) -1) { u->state->response_time = ngx_current_msec - u->start_time; } u->state = ngx_array_push(r->upstream_states); if (u->state == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t)); u->start_time = ngx_current_msec; u->state->response_time = (ngx_msec_t) -1; u->state->connect_time = (ngx_msec_t) -1; u->state->header_time = (ngx_msec_t) -1; // 連線無端socket, count++ rc = ngx_event_connect_peer(&u->peer); ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream connect: %i", rc); if (rc == NGX_ERROR) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } u->state->peer = u->peer.name; if (rc == NGX_BUSY) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no live upstreams"); ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_NOLIVE); return; } if (rc == NGX_DECLINED) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); return; } /* rc == NGX_OK || rc == NGX_AGAIN || rc == NGX_DONE */ c = u->peer.connection; c->requests++; c->data = r; // 讀寫事件處理器設定為 ngx_http_upstream_handler c->write->handler = ngx_http_upstream_handler; c->read->handler = ngx_http_upstream_handler; u->write_event_handler = ngx_http_upstream_send_request_handler; u->read_event_handler = ngx_http_upstream_process_header; c->sendfile &= r->connection->sendfile; u->output.sendfile = c->sendfile; if (r->connection->tcp_nopush == NGX_TCP_NOPUSH_DISABLED) { c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED; } if (c->pool == NULL) { /* we need separate pool here to be able to cache SSL connections */ c->pool = ngx_create_pool(128, r->connection->log); if (c->pool == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } } c->log = r->connection->log; c->pool->log = c->log; c->read->log = c->log; c->write->log = c->log; /* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */ u->writer.out = NULL; u->writer.last = &u->writer.out; u->writer.connection = c; u->writer.limit = 0; if (u->request_sent) { if (ngx_http_upstream_reinit(r, u) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } } if (r->request_body && r->request_body->buf && r->request_body->temp_file && r == r->main) { /* * the r->request_body->buf can be reused for one request only, * the subrequests should allocate their own temporary bufs */ u->output.free = ngx_alloc_chain_link(r->pool); if (u->output.free == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } u->output.free->buf = r->request_body->buf; u->output.free->next = NULL; u->output.allocated = 1; r->request_body->buf->pos = r->request_body->buf->start; r->request_body->buf->last = r->request_body->buf->start; r->request_body->buf->tag = u->output.tag; } u->request_sent = 0; u->request_body_sent = 0; u->request_body_blocked = 0; // 未處理完成,等待下一次事件通知 if (rc == NGX_AGAIN) { ngx_add_timer(c->write, u->conf->connect_timeout); return; } #if (NGX_HTTP_SSL) if (u->ssl && c->ssl == NULL) { ngx_http_upstream_ssl_init_connection(r, u, c); return; } #endif // 傳送請求到目標端 ngx_http_upstream_send_request(r, u, 1); } // http/ngx_http_upstream.c static void ngx_http_upstream_send_request(ngx_http_request_t *r, ngx_http_upstream_t *u, 
ngx_uint_t do_write) { ngx_int_t rc; ngx_connection_t *c; c = u->peer.connection; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http upstream send request"); if (u->state->connect_time == (ngx_msec_t) -1) { u->state->connect_time = ngx_current_msec - u->start_time; } // 測試傳送資料ok if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); return; } c->log->action = "sending request to upstream"; // 傳送資料 rc = ngx_http_upstream_send_request_body(r, u, do_write); if (rc == NGX_ERROR) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); return; } if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { ngx_http_upstream_finalize_request(r, u, rc); return; } if (rc == NGX_AGAIN) { if (!c->write->ready || u->request_body_blocked) { ngx_add_timer(c->write, u->conf->send_timeout); } else if (c->write->timer_set) { ngx_del_timer(c->write); } if (ngx_handle_write_event(c->write, u->conf->send_lowat) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } if (c->write->ready && c->tcp_nopush == NGX_TCP_NOPUSH_SET) { if (ngx_tcp_push(c->fd) == -1) { ngx_log_error(NGX_LOG_CRIT, c->log, ngx_socket_errno, ngx_tcp_push_n " failed"); ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } c->tcp_nopush = NGX_TCP_NOPUSH_UNSET; } return; } /* rc == NGX_OK */ if (c->write->timer_set) { ngx_del_timer(c->write); } if (c->tcp_nopush == NGX_TCP_NOPUSH_SET) { if (ngx_tcp_push(c->fd) == -1) { ngx_log_error(NGX_LOG_CRIT, c->log, ngx_socket_errno, ngx_tcp_push_n " failed"); ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } c->tcp_nopush = NGX_TCP_NOPUSH_UNSET; } if (!u->conf->preserve_output) { u->write_event_handler = ngx_http_upstream_dummy_handler; } // 寫事件 if (ngx_handle_write_event(c->write, 0) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } if (!u->request_body_sent) { u->request_body_sent = 1; if (u->header_sent) { return; } // 註冊讀超時定時器,避免等等超時 ngx_add_timer(c->read, u->conf->read_timeout); // 如果已響應,立即處理,否則非同步等待後續事件通知 if (c->read->ready) { ngx_http_upstream_process_header(r, u); return; } } }
That is the overall flow, and the part above runs synchronously. Writing the data to the target server and waiting for its response, however, follow an asynchronous, non-blocking I/O path: event listeners were registered earlier, and processing resumes only when those events later become ready.
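As a refresher on what that non-blocking path means at the socket level, here is a small standalone Linux sketch; it is my own illustration, not nginx code (nginx hides this behind ngx_event_connect_peer and its event modules). connect() on a non-blocking socket usually returns immediately with EINPROGRESS, and the later writability notification tells us the connection attempt has completed.

/* Standalone illustration of a non-blocking connect: start the connect,
 * then wait for writability instead of blocking the worker. */
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8081);                 /* the upstream port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1
        && errno != EINPROGRESS)
    {
        perror("connect");
        return 1;
    }
    /* errno == EINPROGRESS: the connect is still in flight; do other work */

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLOUT, .data.fd = fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);

    struct epoll_event ready;
    if (epoll_wait(ep, &ready, 1, 5000) == 1) {  /* writable => connect finished */
        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
        printf("connect %s\n", err == 0 ? "succeeded" : strerror(err));
    } else {
        printf("timed out\n");
    }

    close(fd);
    close(ep);
    return 0;
}

The worker process never sits in connect() or recv(); it simply registers interest and comes back when the kernel says the socket is ready, which is exactly what the handlers below do.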
5. Handling the follow-up asynchronous events
After that first round of proxy handling, all the context and target server information is ready. But the target server or the network may be slow, so the rest is handled asynchronously along a separate path, with ngx_http_upstream_handler as the event handler.
// http/ngx_http_upstream.c static void ngx_http_upstream_handler(ngx_event_t *ev) { ngx_connection_t *c; ngx_http_request_t *r; ngx_http_upstream_t *u; c = ev->data; r = c->data; u = r->upstream; c = r->connection; ngx_http_set_log_request(c->log, r); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, "http upstream request: \"%V?%V\"", &r->uri, &r->args); if (ev->delayed && ev->timedout) { ev->delayed = 0; ev->timedout = 0; } // 寫就緒、讀就緒 if (ev->write) { u->write_event_handler(r, u); } else { u->read_event_handler(r, u); } ngx_http_run_posted_requests(c); } // 接收目標伺服器返回值並處理 // http/ngx_http_upstream.c static void ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u) { ssize_t n; ngx_int_t rc; ngx_connection_t *c; c = u->peer.connection; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http upstream process header"); c->log->action = "reading response header from upstream"; // 超時處理 if (c->read->timedout) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_TIMEOUT); return; } if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); return; } if (u->buffer.start == NULL) { u->buffer.start = ngx_palloc(r->pool, u->conf->buffer_size); if (u->buffer.start == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } u->buffer.pos = u->buffer.start; u->buffer.last = u->buffer.start; u->buffer.end = u->buffer.start + u->conf->buffer_size; u->buffer.temporary = 1; u->buffer.tag = u->output.tag; if (ngx_list_init(&u->headers_in.headers, r->pool, 8, sizeof(ngx_table_elt_t)) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } if (ngx_list_init(&u->headers_in.trailers, r->pool, 2, sizeof(ngx_table_elt_t)) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } #if (NGX_HTTP_CACHE) if (r->cache) { u->buffer.pos += r->cache->header_start; u->buffer.last = u->buffer.pos; } #endif } for ( ;; ) { // 迴圈讀取目標伺服器傳回的資料 n = c->recv(c, u->buffer.last, u->buffer.end - u->buffer.last); if (n == NGX_AGAIN) { #if 0 ngx_add_timer(rev, u->read_timeout); #endif if (ngx_handle_read_event(c->read, 0) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } return; } if (n == 0) { ngx_log_error(NGX_LOG_ERR, c->log, 0, "upstream prematurely closed connection"); } if (n == NGX_ERROR || n == 0) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); return; } u->state->bytes_received += n; u->buffer.last += n; #if 0 u->valid_header_in = 0; u->peer.cached = 0; #endif // 處理header資訊 // 該prcocess_header由前面做好的設定, ngx_http_proxy_process_status_line rc = u->process_header(r); if (rc == NGX_AGAIN) { if (u->buffer.last == u->buffer.end) { ngx_log_error(NGX_LOG_ERR, c->log, 0, "upstream sent too big header"); ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_INVALID_HEADER); return; } continue; } break; } if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_INVALID_HEADER); return; } if (rc == NGX_ERROR) { ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } /* rc == NGX_OK */ u->state->header_time = ngx_current_msec - u->start_time; if (u->headers_in.status_n >= NGX_HTTP_SPECIAL_RESPONSE) { if (ngx_http_upstream_test_next(r, u) == NGX_OK) { return; } if (ngx_http_upstream_intercept_errors(r, u) == NGX_OK) { return; } } // 處理header資訊 if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { return; } // 響應客戶端 
ngx_http_upstream_send_response(r, u); }
For example, when the connection to the target server becomes writable, a kernel I/O event arrives, the write action fires and the request data is sent to the target server. Once that is done, a read event is registered, so when the target server finishes processing and responds, nginx again receives a readiness notification and picks the request back up. At that point all that remains is to write the target server's response back to the client (possibly adding some custom information along the way).
// http/ngx_http_upstream.c // 傳送資料到客戶端 static void ngx_http_upstream_send_response(ngx_http_request_t *r, ngx_http_upstream_t *u) { ssize_t n; ngx_int_t rc; ngx_event_pipe_t *p; ngx_connection_t *c; ngx_http_core_loc_conf_t *clcf; // 傳送header rc = ngx_http_send_header(r); if (rc == NGX_ERROR || rc > NGX_OK || r->post_action) { ngx_http_upstream_finalize_request(r, u, rc); return; } u->header_sent = 1; if (u->upgrade) { #if (NGX_HTTP_CACHE) if (r->cache) { ngx_http_file_cache_free(r->cache, u->pipe->temp_file); } #endif ngx_http_upstream_upgrade(r, u); return; } c = r->connection; if (r->header_only) { if (!u->buffering) { ngx_http_upstream_finalize_request(r, u, rc); return; } if (!u->cacheable && !u->store) { ngx_http_upstream_finalize_request(r, u, rc); return; } u->pipe->downstream_error = 1; } if (r->request_body && r->request_body->temp_file && r == r->main && !r->preserve_body && !u->conf->preserve_output) { ngx_pool_run_cleanup_file(r->pool, r->request_body->temp_file->file.fd); r->request_body->temp_file->file.fd = NGX_INVALID_FILE; } clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); if (!u->buffering) { #if (NGX_HTTP_CACHE) if (r->cache) { ngx_http_file_cache_free(r->cache, u->pipe->temp_file); } #endif if (u->input_filter == NULL) { u->input_filter_init = ngx_http_upstream_non_buffered_filter_init; u->input_filter = ngx_http_upstream_non_buffered_filter; u->input_filter_ctx = r; } u->read_event_handler = ngx_http_upstream_process_non_buffered_upstream; r->write_event_handler = ngx_http_upstream_process_non_buffered_downstream; r->limit_rate = 0; r->limit_rate_set = 1; if (u->input_filter_init(u->input_filter_ctx) == NGX_ERROR) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } if (clcf->tcp_nodelay && ngx_tcp_nodelay(c) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } n = u->buffer.last - u->buffer.pos; if (n) { u->buffer.last = u->buffer.pos; u->state->response_length += n; if (u->input_filter(u->input_filter_ctx, n) == NGX_ERROR) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } ngx_http_upstream_process_non_buffered_downstream(r); } else { u->buffer.pos = u->buffer.start; u->buffer.last = u->buffer.start; if (ngx_http_send_special(r, NGX_HTTP_FLUSH) == NGX_ERROR) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } if (u->peer.connection->read->ready || u->length == 0) { ngx_http_upstream_process_non_buffered_upstream(r, u); } } return; } /* TODO: preallocate event_pipe bufs, look "Content-Length" */ #if (NGX_HTTP_CACHE) if (r->cache && r->cache->file.fd != NGX_INVALID_FILE) { ngx_pool_run_cleanup_file(r->pool, r->cache->file.fd); r->cache->file.fd = NGX_INVALID_FILE; } switch (ngx_http_test_predicates(r, u->conf->no_cache)) { case NGX_ERROR: ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; case NGX_DECLINED: u->cacheable = 0; break; default: /* NGX_OK */ if (u->cache_status == NGX_HTTP_CACHE_BYPASS) { /* create cache if previously bypassed */ if (ngx_http_file_cache_create(r) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } break; } if (u->cacheable) { time_t now, valid; now = ngx_time(); valid = r->cache->valid_sec; if (valid == 0) { valid = ngx_http_file_cache_valid(u->conf->cache_valid, u->headers_in.status_n); if (valid) { r->cache->valid_sec = now + valid; } } if (valid) { r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); if (u->headers_in.status_n == NGX_HTTP_OK || u->headers_in.status_n == 
NGX_HTTP_PARTIAL_CONTENT) { r->cache->last_modified = u->headers_in.last_modified_time; if (u->headers_in.etag) { r->cache->etag = u->headers_in.etag->value; } else { ngx_str_null(&r->cache->etag); } } else { r->cache->last_modified = -1; ngx_str_null(&r->cache->etag); } if (ngx_http_file_cache_set_header(r, u->buffer.start) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } else { u->cacheable = 0; } } ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "http cacheable: %d", u->cacheable); if (u->cacheable == 0 && r->cache) { ngx_http_file_cache_free(r->cache, u->pipe->temp_file); } if (r->header_only && !u->cacheable && !u->store) { ngx_http_upstream_finalize_request(r, u, 0); return; } #endif p = u->pipe; p->output_filter = ngx_http_upstream_output_filter; p->output_ctx = r; p->tag = u->output.tag; p->bufs = u->conf->bufs; p->busy_size = u->conf->busy_buffers_size; p->upstream = u->peer.connection; p->downstream = c; p->pool = r->pool; p->log = c->log; p->limit_rate = u->conf->limit_rate; p->start_sec = ngx_time(); p->cacheable = u->cacheable || u->store; p->temp_file = ngx_pcalloc(r->pool, sizeof(ngx_temp_file_t)); if (p->temp_file == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } p->temp_file->file.fd = NGX_INVALID_FILE; p->temp_file->file.log = c->log; p->temp_file->path = u->conf->temp_path; p->temp_file->pool = r->pool; if (p->cacheable) { p->temp_file->persistent = 1; #if (NGX_HTTP_CACHE) if (r->cache && !r->cache->file_cache->use_temp_path) { p->temp_file->path = r->cache->file_cache->path; p->temp_file->file.name = r->cache->file.name; } #endif } else { p->temp_file->log_level = NGX_LOG_WARN; p->temp_file->warn = "an upstream response is buffered " "to a temporary file"; } p->max_temp_file_size = u->conf->max_temp_file_size; p->temp_file_write_size = u->conf->temp_file_write_size; #if (NGX_THREADS) if (clcf->aio == NGX_HTTP_AIO_THREADS && clcf->aio_write) { p->thread_handler = ngx_http_upstream_thread_handler; p->thread_ctx = r; } #endif p->preread_bufs = ngx_alloc_chain_link(r->pool); if (p->preread_bufs == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } p->preread_bufs->buf = &u->buffer; p->preread_bufs->next = NULL; u->buffer.recycled = 1; p->preread_size = u->buffer.last - u->buffer.pos; if (u->cacheable) { p->buf_to_file = ngx_calloc_buf(r->pool); if (p->buf_to_file == NULL) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } p->buf_to_file->start = u->buffer.start; p->buf_to_file->pos = u->buffer.start; p->buf_to_file->last = u->buffer.pos; p->buf_to_file->temporary = 1; } if (ngx_event_flags & NGX_USE_IOCP_EVENT) { /* the posted aio operation may corrupt a shadow buffer */ p->single_buf = 1; } /* TODO: p->free_bufs = 0 if use ngx_create_chain_of_bufs() */ p->free_bufs = 1; /* * event_pipe would do u->buffer.last += p->preread_size * as though these bytes were read */ u->buffer.last = u->buffer.pos; if (u->conf->cyclic_temp_file) { /* * we need to disable the use of sendfile() if we use cyclic temp file * because the writing a new data may interfere with sendfile() * that uses the same kernel file pages (at least on FreeBSD) */ p->cyclic_temp_file = 1; c->sendfile = 0; } else { p->cyclic_temp_file = 0; } p->read_timeout = u->conf->read_timeout; p->send_timeout = clcf->send_timeout; p->send_lowat = clcf->send_lowat; p->length = -1; if (u->input_filter_init && u->input_filter_init(p->input_ctx) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } u->read_event_handler = 
ngx_http_upstream_process_upstream; r->write_event_handler = ngx_http_upstream_process_downstream; ngx_http_upstream_process_upstream(r, u); } // 傳送客戶端資料時處理 // http/ngx_http_upstream.c static void ngx_http_upstream_process_upstream(ngx_http_request_t *r, ngx_http_upstream_t *u) { ngx_event_t *rev; ngx_event_pipe_t *p; ngx_connection_t *c; c = u->peer.connection; p = u->pipe; rev = c->read; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http upstream process upstream"); c->log->action = "reading upstream"; if (rev->timedout) { p->upstream_error = 1; ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out"); } else { if (rev->delayed) { ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http upstream delayed"); if (ngx_handle_read_event(rev, 0) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); } return; } // 管道式讀取資料響應資料,即不會一次性輸出 // 而是源源不斷地輸出 if (ngx_event_pipe(p, 0) == NGX_ABORT) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } ngx_http_upstream_process_request(r, u); } // http/ngx_http_upstream.c // 響應客戶端 static void ngx_http_upstream_process_request(ngx_http_request_t *r, ngx_http_upstream_t *u) { ngx_temp_file_t *tf; ngx_event_pipe_t *p; p = u->pipe; #if (NGX_THREADS) if (p->writing && !p->aio) { /* * make sure to call ngx_event_pipe() * if there is an incomplete aio write */ if (ngx_event_pipe(p, 1) == NGX_ABORT) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } if (p->writing) { return; } #endif if (u->peer.connection) { if (u->store) { if (p->upstream_eof || p->upstream_done) { tf = p->temp_file; if (u->headers_in.status_n == NGX_HTTP_OK && (p->upstream_done || p->length == -1) && (u->headers_in.content_length_n == -1 || u->headers_in.content_length_n == tf->offset)) { ngx_http_upstream_store(r, u); } } } #if (NGX_HTTP_CACHE) if (u->cacheable) { if (p->upstream_done) { ngx_http_file_cache_update(r, p->temp_file); } else if (p->upstream_eof) { tf = p->temp_file; if (p->length == -1 && (u->headers_in.content_length_n == -1 || u->headers_in.content_length_n == tf->offset - (off_t) r->cache->body_start)) { ngx_http_file_cache_update(r, tf); } else { ngx_http_file_cache_free(r->cache, tf); } } else if (p->upstream_error) { ngx_http_file_cache_free(r->cache, p->temp_file); } } #endif if (p->upstream_done || p->upstream_eof || p->upstream_error) { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream exit: %p", p->out); // 輸出完成,關閉連線 // 沒有輸出完成的情況,會等待下一次就緒事件,再進行處理 // 而無需等待目標伺服器完全響應,再返回本次處理 // 這就是非阻塞的優勢所在 if (p->upstream_done || (p->upstream_eof && p->length == -1)) { // 關閉連線 ngx_http_upstream_finalize_request(r, u, 0); return; } if (p->upstream_eof) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream prematurely closed connection"); } ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY); return; } } if (p->downstream_error) { ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream downstream error"); if (!u->cacheable && !u->store && u->peer.connection) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); } } }
A new term here: the result is piped out to the client, chunk by chunk.
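The idea behind this pipe-style output (ngx_event_pipe) is that the upstream response streams through a bounded set of buffers instead of being read in full before anything is sent downstream. Below is a deliberately simplified, blocking sketch of that idea; it is my own illustration, and the real event pipe is non-blocking and also juggles temp files, buffer recycling and partial writes.

/* Simplified, blocking illustration of "pipe-style" relaying: read a chunk
 * from the upstream fd, forward it to the downstream fd, repeat. */
#include <unistd.h>

static int relay(int upstream_fd, int downstream_fd)
{
    char    buf[4096];               /* bounded buffer, never the whole body */
    ssize_t n, w, off;

    for ( ;; ) {
        n = read(upstream_fd, buf, sizeof(buf));
        if (n == 0) {
            return 0;                /* upstream finished                     */
        }
        if (n < 0) {
            return -1;               /* real code would handle EAGAIN here    */
        }

        for (off = 0; off < n; off += w) {
            w = write(downstream_fd, buf + off, (size_t) (n - off));
            if (w < 0) {
                return -1;           /* real code would wait for writability  */
            }
        }
    }
}

int main(void)
{
    /* demo: relay stdin to stdout, e.g.  ./relay < response.txt */
    return relay(STDIN_FILENO, STDOUT_FILENO) == 0 ? 0 : 1;
}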
So, after all this, have we actually made nginx's proxying clear? (A question to chew on: at what point do the request headers get replaced?)
1. Parse the client URL and the body;
2. Connect to the target server;
3. Build the headers and send them;
4. Build the body and send it;
5. Asynchronously wait for the target server's response;
6. Read the target server's response;
7. Pipe the data out to the client.
Overall the picture holds up. On top of that, nginx leans on non-blocking I/O everywhere, which means a great deal of asynchronous processing, and that is where its formidable performance comes from; years of production use have given everyone plenty of confidence in it.