Installing the nginx-kafka Plugin

Posted by Wang Congcong on 2018-10-21

  • 1. Install git:
yum install -y git
  • 2. Switch to the /usr/local/src directory, then clone the Kafka C client source code (librdkafka):
cd /usr/local/src
git clone https://github.com/edenhill/librdkafka

If the clone fails with:
fatal: unable to access 'https://github.com/edenhill/librdkafka.git/': Peer reports incompatible or unsupported protocol version.
update the TLS-related libraries and retry:
yum update -y nss curl libcurl
  • 3. Enter the librdkafka directory and build it:
cd librdkafka
yum install -y gcc gcc-c++ pcre-devel zlib-devel
./configure
make && make install
  • 4. Install the nginx-Kafka integration plugin: go to /usr/local/src and clone its source code:
cd /usr/local/src
git clone https://github.com/brg-liuwei/ngx_kafka_module
  • 5. Enter the nginx source directory and compile nginx together with the plugin:
cd /usr/local/src/nginx-1.12.2
./configure --add-module=/usr/local/src/ngx_kafka_module/
make
make install
  • 6. Modify nginx's configuration file; see nginx.conf in the current directory for details.
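A minimal sketch of the relevant nginx.conf section, assuming a single broker at 127.0.0.1:9092 (substitute your own broker list) and the directive names documented in the ngx_kafka_module README (kafka, kafka_broker_list, kafka_topic):

```nginx
http {
    # Enable the Kafka producer and point it at the broker(s).
    kafka;
    kafka_broker_list 127.0.0.1:9092;

    server {
        listen 80;

        # POST bodies sent to /kafka/track are produced to the "track" topic.
        location = /kafka/track {
            kafka_topic track;
        }

        # POST bodies sent to /kafka/user are produced to the "user" topic.
        location = /kafka/user {
            kafka_topic user;
        }
    }
}
```

The location paths here match the curl test in step 11; each location maps one URL to one Kafka topic.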

  • 7. Start the ZooKeeper and Kafka cluster (and create the topics):

	/bigdata/zookeeper-3.4.9/bin/zkServer.sh start
	/bigdata/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh -daemon /bigdata/kafka_2.11-0.10.2.1/config/server.properties

Create the topics:

./kafka-topics.sh --create --zookeeper spark1:2181,spark2:2181,spark3:2181 --replication-factor 3 --partitions 3 --topic track

./kafka-topics.sh --create --zookeeper spark1:2181,spark2:2181,spark3:2181 --replication-factor 3 --partitions 3 --topic user

Start a consumer:

./kafka-console-consumer.sh --bootstrap-server spark1:9092,spark2:9092,spark3:9092 --topic user --from-beginning
  • Check whether the topic's replicas are in sync:
./kafka-topics.sh --describe --zookeeper spark1:2181,spark2:2181,spark3:2181 --topic track
  • 8. Start nginx. It fails with an error: the librdkafka.so.1 shared library cannot be found:
error while loading shared libraries: librdkafka.so.1: cannot open shared object file: No such file or directory
  • 9. Register the shared library path (librdkafka was installed to /usr/local/lib, which is not on the default linker search path):
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig
  • 10. Reload nginx, first verifying the configuration:
/usr/local/nginx/sbin/nginx -t

nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
  • 11. Test: write data to nginx, then check whether the Kafka consumer receives it:
curl localhost/kafka/track -d "message send to kafka topic"
curl localhost/kafka/track -d "老趙666"