Log Processing with log4j + Kafka + Storm + MongoDB + MySQL
I. Background
As systems are deployed in a distributed fashion, centralised log handling becomes essential. This article walks through a centralised log-processing pipeline that combines log4j, Kafka, Storm, MongoDB and MySQL. The solution described here is already running in a production environment.
log4j version: log4j-1.2.17.jar
Kafka version: kafka_2.10-0.8.2.1
ZooKeeper version: zookeeper-3.4.7.tar.gz
Storm version: apache-storm-0.9.5.tar.gz
MongoDB version: mongodb-linux-x86_64-2.6.11.tgz
MySQL version: mysql-5.5.31.tar.gz
II. Software Installation
1. Kafka installation: see http://blog.itpub.net/28624388/viewspace-1966761/
2. Storm installation: see http://blog.itpub.net/28624388/viewspace-1973943/
3. MongoDB installation: see http://blog.itpub.net/28624388/viewspace-1974449/
4. MySQL installation: see http://blog.itpub.net/28624388/viewspace-764718/
III. Integrating log4j with Kafka
See http://blog.itpub.net/28624388/viewspace-1972027/
Log format:

// One pipe-delimited line per request: action|userId|system|version|URI|timestamp|elapsed|unit
logger.warn("處理請求|" + ((null == userId || "".equals(userId.toString())) ? "0" : userId.toString())
        + "|" + system + "|" + version + "|" + request.getRequestURI()
        + "|" + GbdDateUtils.format(Calendar.getInstance().getTime(), "yyyy-MM-dd HH:mm:ss")
        + "|" + (System.currentTimeMillis() - accessTime) + "|毫秒!");
IV. Integrating Kafka, Storm, MongoDB and MySQL
1. Code structure
2. The gmap-logs-deal-center module
This module consumes the log messages collected by Kafka, performs some light processing in a Storm topology and stores the results in MongoDB; a scheduled job then aggregates the MongoDB log data into statistics and writes them to MySQL. It consists of: a) the pom.xml configuration file, b) the Start class, c) the StormKafkaBolt class, d) transaction-context.xml, and e) the GmapSystemLogsCollectSchedule class.
a) The pom.xml configuration file
<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.gemdale.gmap</groupId>
        <artifactId>GMap</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </parent>
    <groupId>com.gemdale.gmap</groupId>
    <artifactId>gmap-logs-deal-center</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>gmap-logs-deal-center</name>
    <dependencies>
        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>gmap-common</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>gmap-common-logs-bo</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>gmap-common-redis-support</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.10</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </dependency>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>18.0</version>
        </dependency>
        <dependency>
            <groupId>com.101tec</groupId>
            <artifactId>zkclient</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-core</artifactId>
        </dependency>
        <dependency>
            <groupId>javax.persistence</groupId>
            <artifactId>persistence-api</artifactId>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-beans</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-mongodb</artifactId>
        </dependency>
        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>gmap-common-dao-support</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>org.quartz-scheduler</groupId>
            <artifactId>quartz</artifactId>
        </dependency>
    </dependencies>
</project>
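One note on these dependencies: the Start class below runs the topology in a LocalCluster, so storm-core ships with the application like any other jar; if the topology were submitted to a real Storm cluster instead, storm-core would normally be moved to provided scope so it does not clash with the jars already installed on the cluster.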
b) The Start class

package com.enjoylink.gmap.logs.deal.center;

import java.util.ArrayList;
import java.util.List;

import org.apache.log4j.Logger;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.redis.core.RedisTemplate;

import com.enjoyslink.gmap.logs.deal.center.stormkafka.StormKafkaBolt;
import com.gemdale.gmap.common.redis.support.RedisUtil;
import com.gemdale.gmap.common.util.ConfigureUtil;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

/**
 * @ClassName: Start
 * @Description: Entry point that wires the Kafka spout and the StormKafkaBolt into a Storm topology and submits it.
 * @author Information Management Dept. - gengchong
 * @date 2015-10-15 17:42:52
 */
public class Start {
    private static Logger logger = Logger.getLogger(Start.class);

    @SuppressWarnings("resource")
    public static void main(String[] args) {
        // Bootstrap Spring and run the required data initialisation
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");

        // Initialise the Redis service
        logger.info("Initialising the Redis service...");
        RedisUtil.initRedisTemplate((RedisTemplate) context.getBean("redisTemplate"));

        logger.info("Initialising storm-kafka...");
        // Kafka topic to consume
        String topic = ConfigureUtil.getProperty("topic", "gmap");
        // ZooKeeper ensemble used by Kafka
        BrokerHosts hosts = new ZkHosts(ConfigureUtil.getProperty("kafka.zk"));
        // Storm spout configuration
        SpoutConfig spoutConfig = new SpoutConfig(hosts, topic, "/kafkastorm", "gmapKafka");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        // ZooKeeper used by Storm for offset bookkeeping
        List<String> zkServers = new ArrayList<>();
        zkServers.add(ConfigureUtil.getProperty("storm.zk.host"));
        spoutConfig.zkServers = zkServers;
        spoutConfig.zkPort = Integer.valueOf(ConfigureUtil.getProperty("storm.zk.port"));

        spoutConfig.forceFromStart = false;
        spoutConfig.socketTimeoutMs = 60 * 1000;
        // Build the topology
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new KafkaSpout(spoutConfig),
                Integer.valueOf(ConfigureUtil.getProperty("storm.max.thread")));
        // Bolt that processes the messages read from Kafka
        builder.setBolt("bolt", new StormKafkaBolt(), Integer.valueOf(ConfigureUtil.getProperty("storm.max.thread")))
                .shuffleGrouping("spout");
        // Submit the topology to a local cluster
        Config config = new Config();
        config.setDebug(false);
        new LocalCluster().submitTopology("topology", config, builder.createTopology());

        logger.info("Gmap log processing centre started successfully!");
    }
}
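c) The StormKafkaBolt class

The original listing for this class is not reproduced above, so what follows is only a minimal sketch of a bolt in this position: it assumes the pipe-delimited log format from section III, the 2.x Mongo Java driver of that era, and an illustrative collection name system_logs. The real implementation may differ.

package com.enjoyslink.gmap.logs.deal.center.stormkafka;

import java.util.Map;

import com.gemdale.gmap.common.util.ConfigureUtil;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

public class StormKafkaBolt extends BaseBasicBolt {
    private static final long serialVersionUID = 1L;
    // Created in prepare(): Storm serialises bolt instances when the topology
    // is submitted, so connections cannot be opened in the constructor.
    private transient DBCollection collection;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        try {
            MongoClient mongo = new MongoClient(ConfigureUtil.getProperty("mongodb.host"),
                    Integer.valueOf(ConfigureUtil.getProperty("mongodb.port")));
            collection = mongo.getDB(ConfigureUtil.getProperty("mongodb.database"))
                    .getCollection("system_logs"); // collection name is illustrative
        } catch (Exception e) {
            throw new RuntimeException("Unable to connect to MongoDB", e);
        }
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // StringScheme makes the KafkaSpout emit each Kafka message as one string field.
        String[] parts = input.getString(0).split("\\|");
        if (parts.length >= 7) {
            collection.insert(new BasicDBObject("action", parts[0])
                    .append("userId", parts[1])
                    .append("system", parts[2])
                    .append("version", parts[3])
                    .append("uri", parts[4])
                    .append("logTime", parts[5])
                    .append("elapsedMs", Long.valueOf(parts[6])));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt: nothing is emitted downstream.
    }
}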
d) transaction-context.xml

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:mongo="http://www.springframework.org/schema/data/mongo"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.0.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.0.xsd
                           http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-4.0.xsd
                           http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo-1.8.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.0.xsd">

    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="location">
            <description>datasource config</description>
            <value>classpath:context-datasource.properties</value>
        </property>
    </bean>

    <mongo:db-factory host="${mongodb.host}" port="${mongodb.port}" dbname="${mongodb.database}" username="${mongodb.username}" password="${mongodb.password}" />

    <bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
        <constructor-arg name="mongoDbFactory" ref="mongoDbFactory" />
    </bean>

    <mongo:mapping-converter id="converter" db-factory-ref="mongoDbFactory" />
    <bean id="gridFsTemplate" class="org.springframework.data.mongodb.gridfs.GridFsTemplate">
        <constructor-arg ref="mongoDbFactory" />
        <constructor-arg ref="converter" />
    </bean>

    <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
        <property name="dataSource" ref="platformTomcat" />
    </bean>

    <bean id="jdbcReadTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
        <property name="dataSource" ref="platformReadTomcat" />
    </bean>

    <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="platformTomcat" />
    </bean>

    <tx:annotation-driven transaction-manager="transactionManager" proxy-target-class="true" />
</beans>
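e) The GmapSystemLogsCollectSchedule class

This listing is likewise not reproduced above. As a sketch only of the aggregation step it performs (the jdbcTemplate and mongoTemplate beans come from transaction-context.xml; the collection, table and column names are assumptions), a Quartz-triggered job might look like:

import java.util.Date;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.jdbc.core.JdbcTemplate;

import com.mongodb.DBObject;

public class GmapSystemLogsCollectSchedule {

    @Autowired
    private MongoTemplate mongoTemplate;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Invoked by Quartz (note the quartz dependency in the pom); the trigger
     * definition itself is not shown here.
     */
    public void collectAndStore() {
        // Group the raw log documents by URI: request count and average latency.
        Aggregation agg = Aggregation.newAggregation(
                Aggregation.group("uri")
                        .count().as("requestCount")
                        .avg("elapsedMs").as("avgElapsedMs"));
        AggregationResults<DBObject> results =
                mongoTemplate.aggregate(agg, "system_logs", DBObject.class);

        // Persist one summary row per URI into MySQL (table name is illustrative).
        for (DBObject row : results.getMappedResults()) {
            jdbcTemplate.update(
                    "INSERT INTO gmap_log_stats (uri, request_count, avg_elapsed_ms, stat_time) VALUES (?, ?, ?, ?)",
                    row.get("_id"), row.get("requestCount"), row.get("avgElapsedMs"), new Date());
        }
    }
}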
3. The gmap-common-logs-bo code
The GmapSystemLogBo class
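The class body is not reproduced in the post; judging from the log format in section III, a minimal sketch of this BO would carry the following fields (all names here are guesses):

import java.io.Serializable;
import java.util.Date;

/** Sketch of the log business object; fields mirror the pipe-delimited log line. */
public class GmapSystemLogBo implements Serializable {
    private static final long serialVersionUID = 1L;

    private String userId;   // "0" when the request carries no user
    private String system;   // calling system identifier
    private String version;  // client/application version
    private String uri;      // request URI
    private Date logTime;    // when the request was handled
    private long elapsedMs;  // processing time in milliseconds

    // getters and setters omitted for brevity
}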
From the ITPUB blog: http://blog.itpub.net/28624388/viewspace-1982560/. Please credit the source when reposting.