For the requirements business developers typically face, we divide logs into operation (request) logs and system run logs. Operation (request) logs let administrators or operations staff easily query and trace, from the system UI, exactly which actions users performed, which makes it convenient to analyze user behavior statistically. System run logs are further divided into levels (Log4j2): OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL. These levels are chosen by developers as the code is written and are recorded at runtime, helping developers analyze, locate, and fix problems and find performance bottlenecks.
We can define a custom annotation and use AOP to intercept Controller requests to record operation (request) logs, while system run logs can use log4j2 or logback. Under a SpringCloud microservice architecture, the Gateway can record operation (request) logs in one place. Because microservices are deployed as distributed clusters with multiple instances of the same service, log tracing here needs SkyWalking and ELK to implement concrete trace analysis and recording.
Because of the recently disclosed log4j2 and logback vulnerabilities, be sure to pick the latest patched versions. According to many published performance comparisons, log4j2 significantly outperforms logback, so we change SpringBoot's default logger from Logback to Log4j2.
When designing the framework, we try to anticipate the log system's usage scenarios and make its implementation dynamically configurable, so that a suitable logging approach can be chosen per business requirement. Based on common needs, we implement the microservice log system as follows:
Operation logs:
- Use AOP with a custom annotation to intercept Controller requests and record operation logs.
  Advantage: simple to implement; adding the annotation is all it takes to record an operation log.
  Disadvantage: hard-coded into the source, so flexibility is poor.
- Read configuration in the Gateway and record operation logs uniformly there.
  Advantage: configurable; which operations get logged can be changed in real time.
  Disadvantage: the configuration-driven implementation is somewhat more complex.
These two implementations of operation logging each have pros and cons. Either way, the records are written through Log4j2, and via Log4j2 configuration they can be dynamically routed to files, a relational database such as MySQL, a NoSQL database such as MongoDB, a message broker such as Kafka, and so on.
System logs:
- Log4j2 writes the logs; ELK collects, analyzes, and displays them.
For system logs we simply take the common approach: write them to log files through Log4j2, then collect, analyze, and display them with ELK.
The concrete implementation steps follow:
I. Configure SkyWalking + Log4j2 to print the trace ID
Long ago the most widely used Java logging tool was log4j; later log4j's creator designed logback, which improves on log4j (see the official comparison for details), which is why SpringBoot uses logback as its default logger. In recent years Apache upgraded Log4j and released log4j2, which beats both log4j and logback in design and performance; you can benchmark the difference yourself or consult the test reports available online, so we will not elaborate here. Naturally, we want to choose the most suitable logging tool currently available.
1. Change SpringBoot's default Logback to Log4j2
Exclude the logback pulled in by spring-boot-starter-web and similar dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<!-- remove SpringBoot's default logback configuration-->
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
Add the spring-boot-starter-log4j2 dependency. Because the Log4j2 version pulled in by the corresponding SpringBoot release has known vulnerabilities, exclude the default log4j2 artifacts here and bring in the latest patched version.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-log4j2</artifactId>
<exclusions>
<exclusion>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
</exclusion>
<exclusion>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
</exclusion>
</exclusions>
</dependency>
Add the patched log4j2 dependencies:
<!-- fix the log4j2 vulnerabilities -->
<log4j2.version>2.17.1</log4j2.version>
<!-- log4j2 supports async logging via the disruptor dependency; if you do not need async logging you can drop it -->
<log4j2.disruptor.version>3.4.4</log4j2.disruptor.version>
<!-- fix the log4j2 vulnerabilities -->
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>${log4j2.version}</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>${log4j2.version}</version>
</dependency>
<!-- library that lets log4j2 read the Spring configuration -->
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-spring-boot</artifactId>
<version>${log4j2.version}</version>
</dependency>
<!-- log4j2 supports async logging via the disruptor dependency; if you do not need async logging you can drop it -->
<dependency>
<groupId>com.lmax</groupId>
<artifactId>disruptor</artifactId>
<version>${log4j2.disruptor.version}</version>
</dependency>
Because SpringBoot has many transitive dependencies, and other jars in the project also depend on logback, we need Maven to locate every jar that pulls in Logback so we can exclude them one by one. Run this Maven command in the project folder:
mvn dependency:tree -Dverbose -Dincludes="ch.qos.logback:logback-classic"
Every jar listed in the command output depends on logback; exclude them all, or they will conflict with log4j2.
2. Add the dependency that prints the SkyWalking trace ID
<!-- version of the skywalking-log4j2 trace-id toolkit -->
<skywalking.log4j2.version>6.4.0</skywalking.log4j2.version>
<!-- skywalking-log4j2 trace id -->
<dependency>
<groupId>org.apache.skywalking</groupId>
<artifactId>apm-toolkit-log4j-2.x</artifactId>
<version>${skywalking.log4j2.version}</version>
</dependency>
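Note that apm-toolkit-log4j-2.x only fills %traceId in when the service runs with the SkyWalking Java agent attached; without the agent the pattern prints a placeholder id. A typical startup command looks like the following, where the agent path, service name, and OAP address are illustrative assumptions, not values from this project:
# agent path, service name, and backend address are examples; adjust to your environment
java -javaagent:/opt/skywalking/agent/skywalking-agent.jar \
     -Dskywalking.agent.service_name=gitegg-service-system \
     -Dskywalking.collector.backend_service=127.0.0.1:11800 \
     -jar app.jar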
3. Log4j2 configuration example
Write the log4j2.xml you need; putting [%traceId] in the Pattern displays the trace ID. If you want to read values from SpringBoot's yaml configuration, you must add the log4j-spring-boot dependency.
<?xml version="1.0" encoding="UTF-8"?>
<!--log levels in priority order: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<configuration monitorInterval="5" packages="org.apache.skywalking.apm.toolkit.log.log4j.v2.x">
<!--variable definitions-->
<Properties>
<!-- output format: %date prints the date, %traceId the SkyWalking trace id, %thread the thread name, %-5level pads the level to 5 characters, %m is the log message, %n a newline-->
<!-- %c prints class details, %M the method name, %pid the pid, %line the line where the log was emitted -->
<!-- %logger{80} caps the Logger name at 80 characters -->
<!-- value="${LOCAL_IP_HOSTNAME} %date [%p] %C [%thread] pid:%pid line:%line %throwable %c{10} %m%n"/>-->
<property name="CONSOLE_LOG_PATTERN"
value="%d %highlight{%-5level [%traceId] pid:%pid-%line}{ERROR=Bright RED, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White} %style{[%t]}{bright,magenta} %style{%c{1.}.%M(%L)}{cyan}: %msg%n"/>
<property name="LOG_PATTERN"
value="%d %highlight{%-5level [%traceId] pid:%pid-%line}{ERROR=Bright RED, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White} %style{[%t]}{bright,magenta} %style{%c{1.}.%M(%L)}{cyan}: %msg%n"/>
<!-- log path; the complete example at the end of this article reads it from application.yaml via logging.file.path -->
<property name="FILE_PATH" value="/var/log"/>
<property name="FILE_STORE_MAX" value="50MB"/>
<property name="FILE_WRITE_INTERVAL" value="1"/>
<property name="LOG_MAX_HISTORY" value="60"/>
</Properties>
<appenders>
<!-- console output -->
<console name="Console" target="SYSTEM_OUT">
<!-- output format -->
<PatternLayout pattern="${CONSOLE_LOG_PATTERN}"/>
<!-- the console accepts messages at this level and above (onMatch) and rejects the rest (onMismatch) -->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
</console>
<!-- writes all messages at info level and above; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month -->
<RollingRandomAccessFile name="RollingFileInfo" fileName="${FILE_PATH}/info.log"
filePattern="${FILE_PATH}/INFO-%d{yyyy-MM-dd}_%i.log.gz">
<!-- accept messages at this level and above (onMatch), reject the rest (onMismatch)-->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all messages at debug level and above; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month-->
<RollingRandomAccessFile name="RollingFileDebug" fileName="${FILE_PATH}/debug.log"
filePattern="${FILE_PATH}/DEBUG-%d{yyyy-MM-dd}_%i.log.gz">
<!--accept messages at this level and above (onMatch), reject the rest (onMismatch)-->
<ThresholdFilter level="debug" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all messages at warn level and above; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month-->
<RollingRandomAccessFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log"
filePattern="${FILE_PATH}/WARN-%d{yyyy-MM-dd}_%i.log.gz">
<!-- accept messages at this level and above (onMatch), reject the rest (onMismatch)-->
<ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!-- the interval attribute sets how often to roll over; the default is 1 hour -->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting -->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all messages at error level and above; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month -->
<RollingRandomAccessFile name="RollingFileError" fileName="${FILE_PATH}/error.log"
filePattern="${FILE_PATH}/ERROR-%d{yyyy-MM-dd}_%i.log.gz">
<!--accept only messages at this level and above (onMatch), reject the rest (onMismatch)-->
<ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
</appenders>
<!-- Logger nodes configure logging separately, e.g. a different level for classes under a given package -->
<!-- then define loggers; an appender only takes effect when referenced by a defined logger -->
<loggers>
<!--filter out some useless DEBUG noise from spring and mybatis-->
<logger name="org.mybatis" level="info" additivity="false">
<AppenderRef ref="Console"/>
</logger>
<!--with additivity=false, a child Logger writes only to its own appenders and not to its parent's -->
<Logger name="org.springframework" level="info" additivity="false">
<AppenderRef ref="Console"/>
</Logger>
<AsyncLogger name="AsyncLogger" level="debug" additivity="false">
<AppenderRef ref="Console"/>
<AppenderRef ref="RollingFileDebug"/>
<AppenderRef ref="RollingFileInfo"/>
<AppenderRef ref="RollingFileWarn"/>
<AppenderRef ref="RollingFileError"/>
</AsyncLogger>
<root level="trace">
<appender-ref ref="Console"/>
<appender-ref ref="RollingFileDebug"/>
<appender-ref ref="RollingFileInfo"/>
<appender-ref ref="RollingFileWarn"/>
<appender-ref ref="RollingFileError"/>
</root>
</loggers>
</configuration>
4. Show colored logs in the IDEA console
The *LOG_PATTERN settings in the Log4j2.xml above already assign a different color to each log level, but by default this has no effect in IDEA. To enable it, click Edit Configurations in the run window at the top right, add -Dlog4j.skipJansi=false to VM options, and run again; the IDEA console will then show colored logs.
II. Extend custom log levels to make log storage configurable
Although Log4j2 can write logs to MySQL, MongoDB, and the like, we do not recommend using Log4j2 to store logs in such databases directly. Doing so not only adds a database connection pool to every microservice that pulls in the Log4j2 component (do not plan on reusing the business system's existing pool, since not every microservice has a log table), but under high concurrency it becomes a fatal problem for the whole business system. If traffic is so low that this approach looks attractive for reducing maintenance complexity and avoiding extra components, you should first question whether the system needs a microservice architecture at all.
Under high concurrency we recommend two ways to record operation logs: first, write logs to a message queue as a buffer in front of the database and let a consumer batch-save them, which reduces coupling and keeps logging from affecting business operations as much as possible; second, use Log4j2's asynchronous file logging together with an ELK log collection and analysis stack to store the operation logs.
1. Custom levels for operation logs and API access logs
The default log levels cannot express our operation-log and API-log needs, so we extend Log4j2 with custom levels for operation logs and API logs.
Create LogLevelConstant to define the levels:
import org.apache.logging.log4j.Level;

/**
 * Custom log levels
 * Business log levels (the higher the priority, the smaller the number): off 0, fatal 100, error 200, warn 300, info 400, debug 500
 * the custom OPERATION and API levels below sit between warn (300) and info (400)
 * @author GitEgg
 */
public class LogLevelConstant {
/**
 * operation log level
 */
public static final Level OPERATION_LEVEL = Level.forName("OPERATION", 310);
/**
 * api log level
 */
public static final Level API_LEVEL = Level.forName("API", 320);
/**
 * operation log message template
 */
public static final String OPERATION_LEVEL_MESSAGE = "{type:'operation', content:{}}";
/**
 * api log message template
 */
public static final String API_LEVEL_MESSAGE = "{type:'api', content:{}}";
}
Note that when logging you must use the @Log4j2 annotation rather than @Slf4j, because the methods @Slf4j exposes cannot take a log level. Test code:
log.log(LogLevelConstant.OPERATION_LEVEL, "operation log: {} , {}", "param1", "param2");
log.log(LogLevelConstant.API_LEVEL, "api log: {} , {}", "param1", "param2");
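For completeness, here is a minimal sketch of a class writing at the custom level (the class name and message content are made up for illustration):
import lombok.extern.log4j.Log4j2;

@Log4j2
public class OperationLogDemo {

    public void doBusiness() {
        // written at the custom OPERATION level, so it is routed to the
        // OPERATION appenders configured later in this article
        log.log(LogLevelConstant.OPERATION_LEVEL,
                LogLevelConstant.OPERATION_LEVEL_MESSAGE, "user 123 updated a profile");
    }
}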
2. Custom operation log annotations
When recording operation logs we would rather not write logging code inline, so we define custom annotations and let them drive the recording. Using Spring AOP we define three annotation types: BeforeLog (before the method executes), AfterLog (after the method executes), and AroundLog (before and after).
BeforeLog
/**
 *
 * @ClassName: BeforeLog
 * @Description: records a log before the method executes
 * @author GitEgg
 * @date 2019-04-27 15:36:29
 *
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD })
public @interface BeforeLog {
String name() default "";
}
AfterLog
/**
 *
 * @ClassName: AfterLog
 * @Description: records a log after the method executes
 * @author GitEgg
 * @date 2019-04-27 15:36:29
 *
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD })
public @interface AfterLog {
String name() default "";
}
AroundLog
/**
 *
 * @ClassName: AroundLog
 * @Description: records a log around the method execution
 * @author GitEgg
 * @date 2019-04-27 15:36:29
 *
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD })
public @interface AroundLog {
String name() default "";
}
With the annotations defined, write the LogAspect aspect that implements the log recording:
/**
 *
 * @ClassName: LogAspect
 * @Description: aspect that records operation logs
 * @author GitEgg
 * @date 2019-04-27 16:02:12
 *
 */
@Log4j2
@Aspect
@Component
public class LogAspect {
/**
 * Before pointcut
 */
@Pointcut("@annotation(com.gitegg.platform.base.annotation.log.BeforeLog)")
public void beforeAspect() {
}
/**
 * After pointcut
 */
@Pointcut("@annotation(com.gitegg.platform.base.annotation.log.AfterLog)")
public void afterAspect() {
}
/**
 * Around pointcut
 */
@Pointcut("@annotation(com.gitegg.platform.base.annotation.log.AroundLog)")
public void aroundAspect() {
}
/**
 * Before advice: records the user's operation
 *
 * @param joinPoint the join point
 */
@Before("beforeAspect()")
public void doBefore(JoinPoint joinPoint) {
try {
// handle input parameters
Object[] args = joinPoint.getArgs();
StringBuffer inParams = new StringBuffer("");
for (Object obj : args) {
if (null != obj && !(obj instanceof ServletRequest) && !(obj instanceof ServletResponse)) {
String objJson = JsonUtils.objToJson(obj);
inParams.append(objJson);
}
}
Method method = getMethod(joinPoint);
String operationName = getBeforeLogName(method);
addSysLog(joinPoint, String.valueOf(inParams), "BeforeLog", operationName);
} catch (Exception e) {
log.error("doBefore logging failed, error: {}", e.getMessage());
}
}
/**
 * After-returning advice: records the user's operation
 *
 * @param joinPoint the join point
 */
@AfterReturning(value = "afterAspect()", returning = "returnObj")
public void doAfter(JoinPoint joinPoint, Object returnObj) {
try {
// handle the return value
String outParams = JsonUtils.objToJson(returnObj);
Method method = getMethod(joinPoint);
String operationName = getAfterLogName(method);
addSysLog(joinPoint, "AfterLog", outParams, operationName);
} catch (Exception e) {
log.error("doAfter logging failed, error: {}", e.getMessage());
}
}
/**
 * Around advice: intercepts and records the user's operation
 *
 * @param joinPoint the join point
 * @throws Throwable
 */
@Around("aroundAspect()")
public Object doAround(ProceedingJoinPoint joinPoint) throws Throwable {
// return value
Object value = null;
// whether the intercepted method has already run
boolean execute = false;
// input arguments
Object[] args = joinPoint.getArgs();
try {
// handle input parameters
StringBuffer inParams = new StringBuffer();
for (Object obj : args) {
if (null != obj && !(obj instanceof ServletRequest) && !(obj instanceof ServletResponse)) {
String objJson = JsonUtils.objToJson(obj);
inParams.append(objJson);
}
}
execute = true;
// invoke the target method
value = joinPoint.proceed(args);
// handle the return value
String outParams = JsonUtils.objToJson(value);
Method method = getMethod(joinPoint);
String operationName = getAroundLogName(method);
// record the log
addSysLog(joinPoint, String.valueOf(inParams), String.valueOf(outParams), operationName);
} catch (Exception e) {
log.error("around logging failed, error: {}", e.getMessage());
// if the target has not run yet, run it now; a logging failure must not interrupt the business flow
if (!execute) {
value = joinPoint.proceed(args);
}
throw e;
}
return value;
}
/**
 * addSysLog: persists the operation log
 *
 * @Title: addSysLog
 * @Description: writes the operation record at the custom OPERATION level
 * @param joinPoint
 * @param inParams
 * @param outParams
 * @param operationName
 * @return void
 */
@SneakyThrows
public void addSysLog(JoinPoint joinPoint, String inParams, String outParams, String operationName) throws Exception {
try {
HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder.getRequestAttributes())
.getRequest();
String ip = request.getRemoteAddr();
GitEggLog gitEggLog = new GitEggLog();
gitEggLog.setMethodName(joinPoint.getSignature().getName());
gitEggLog.setInParams(String.valueOf(inParams));
gitEggLog.setOutParams(String.valueOf(outParams));
gitEggLog.setOperationIp(ip);
gitEggLog.setOperationName(operationName);
log.log(LogLevelConstant.OPERATION_LEVEL,LogLevelConstant.OPERATION_LEVEL_MESSAGE, JsonUtils.objToJson(gitEggLog));
} catch (Exception e) {
log.error("addSysLog logging failed, error: {}", e.getMessage());
throw e;
}
}
/**
 * Resolves the Method object for the intercepted join point
 *
 * @param joinPoint the join point
 * @return the matched method
 * @throws Exception
 */
public Method getMethod(JoinPoint joinPoint) throws Exception {
String targetName = joinPoint.getTarget().getClass().getName();
String methodName = joinPoint.getSignature().getName();
Object[] arguments = joinPoint.getArgs();
Class<?> targetClass = Class.forName(targetName);
Method[] methods = targetClass.getMethods();
Method methodReturn = null;
for (Method method : methods) {
if (method.getName().equals(methodName)) {
Class<?>[] clazzs = method.getParameterTypes();
if (clazzs.length == arguments.length) {
methodReturn = method;
break;
}
}
}
return methodReturn;
}
/**
 *
 * getBeforeLogName (gets the name set on the BeforeLog annotation)
 *
 * @Title: getBeforeLogName
 * @Description:
 * @param method
 * @return String
 */
public String getBeforeLogName(Method method) {
String name = method.getAnnotation(BeforeLog.class).name();
return name;
}
/**
 *
 * getAfterLogName (gets the name set on the AfterLog annotation)
 *
 * @Title: getAfterLogName
 * @Description:
 * @param method
 * @return String
 */
public String getAfterLogName(Method method) {
String name = method.getAnnotation(AfterLog.class).name();
return name;
}
/**
 *
 * getAroundLogName (gets the name set on the AroundLog annotation)
 * @Title: getAroundLogName
 * @Description:
 * @param method
 * @return String
 *
 */
public String getAroundLogName(Method method) {
String name = method.getAnnotation(AroundLog.class).name();
return name;
}
}
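With the aspect registered as a @Component, recording an operation log then only requires annotating a Controller method. A minimal illustrative sketch (the controller, mapping, and name are examples, not code from this project):
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/user")
public class UserController {

    @AroundLog(name = "update user")
    @PostMapping("/update")
    public String updateUser(@RequestBody String user) {
        // input and output are captured and logged by LogAspect
        return "success";
    }
}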
Once the code above is in place, configure the custom levels in log4j2.xml so that the custom logs are written to their own files:
<!-- writes all OPERATION-level messages; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month -->
<RollingRandomAccessFile name="RollingFileOperation" fileName="${FILE_PATH}/operation.log"
filePattern="${FILE_PATH}/OPERATION-%d{yyyy-MM-dd}_%i.log.gz">
<!--accept only OPERATION-level messages (onMatch), reject everything else (onMismatch)-->
<LevelRangeFilter minLevel="OPERATION" maxLevel="OPERATION" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all API-level messages; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month -->
<RollingRandomAccessFile name="RollingFileApi" fileName="${FILE_PATH}/api.log"
filePattern="${FILE_PATH}/API-%d{yyyy-MM-dd}_%i.log.gz">
<!--accept only API-level messages (onMatch), reject everything else (onMismatch)-->
<LevelRangeFilter minLevel="API" maxLevel="API" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<loggers>
<AsyncLogger name="AsyncLogger" level="debug" additivity="false">
<AppenderRef ref="Console"/>
<AppenderRef ref="RollingFileDebug"/>
<AppenderRef ref="RollingFileInfo"/>
<AppenderRef ref="RollingFileWarn"/>
<AppenderRef ref="RollingFileError"/>
<AppenderRef ref="RollingFileOperation"/>
<AppenderRef ref="RollingFileApi"/>
</AsyncLogger>
<root level="trace">
<appender-ref ref="Console"/>
<appender-ref ref="RollingFileDebug"/>
<appender-ref ref="RollingFileInfo"/>
<appender-ref ref="RollingFileWarn"/>
<appender-ref ref="RollingFileError"/>
<AppenderRef ref="RollingFileOperation"/>
<AppenderRef ref="RollingFileApi"/>
</root>
</loggers>
3. Save logs to Kafka
The configuration so far already covers our basic logging needs. Here we go one step further and use Log4j2 configuration to dynamically choose whether logs are written to specific files or to a message broker.
For Log4j2 to send log messages to Kafka we need the Kafka client jar, so first add the kafka-clients dependency:
<!-- dependency log4j2 needs to write to kafka -->
<kafka.clients.version>3.1.0</kafka.clients.version>
<!-- log4j2 kafka appender -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.clients.version}</version>
</dependency>
Modify log4j2.xml to send the operation logs to Kafka. Note that the Log4j2 manual points out you must set the org.apache.kafka logger to a normal level (as in the <Logger name="org.apache.kafka" level="INFO"/> entry below) to avoid recursive logging:
<Kafka name="KafkaOperationLog" topic="operation_log" ignoreExceptions="false">
<LevelRangeFilter minLevel="OPERATION" maxLevel="OPERATION" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Property name="bootstrap.servers">172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092</Property>
<Property name="max.block.ms">2000</Property>
</Kafka>
<Kafka name="KafkaApiLog" topic="api_log" ignoreExceptions="false">
<LevelRangeFilter minLevel="API" maxLevel="API" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Property name="bootstrap.servers">172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092</Property>
<Property name="max.block.ms">2000</Property>
</Kafka>
<!-- Logger nodes configure logging separately, e.g. a different level for classes under a given package -->
<!-- then define loggers; an appender only takes effect when referenced by a defined logger -->
<loggers>
<!--filter out some useless DEBUG noise from spring and mybatis-->
<logger name="org.mybatis" level="info" additivity="false">
<AppenderRef ref="Console"/>
</logger>
<!--with additivity=false, a child Logger writes only to its own appenders and not to its parent's -->
<Logger name="org.springframework" level="info" additivity="false">
<AppenderRef ref="Console"/>
</Logger>
<!-- avoid recursive logging -->
<Logger name="org.apache.kafka" level="INFO" />
<AsyncLogger name="AsyncLogger" level="debug" additivity="false">
<AppenderRef ref="Console"/>
<AppenderRef ref="RollingFileDebug"/>
<AppenderRef ref="RollingFileInfo"/>
<AppenderRef ref="RollingFileWarn"/>
<AppenderRef ref="RollingFileError"/>
<AppenderRef ref="RollingFileOperation"/>
<AppenderRef ref="RollingFileApi"/>
<AppenderRef ref="KafkaOperationLog"/>
<AppenderRef ref="KafkaApiLog"/>
</AsyncLogger>
<root level="trace">
<appender-ref ref="Console"/>
<appender-ref ref="RollingFileDebug"/>
<appender-ref ref="RollingFileInfo"/>
<appender-ref ref="RollingFileWarn"/>
<appender-ref ref="RollingFileError"/>
<AppenderRef ref="RollingFileOperation"/>
<AppenderRef ref="RollingFileApi"/>
<AppenderRef ref="KafkaOperationLog"/>
<AppenderRef ref="KafkaApiLog"/>
</root>
</loggers>
Putting it all together, the complete modified log4j2.xml follows; via configuration you can also choose not to write the operation logs to files:
<?xml version="1.0" encoding="UTF-8"?>
<!--log levels in priority order: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<configuration monitorInterval="5" packages="org.apache.skywalking.apm.toolkit.log.log4j.v2.x">
<!--variable definitions-->
<Properties>
<!-- output format: %date prints the date, %traceId the SkyWalking trace id, %thread the thread name, %-5level pads the level to 5 characters, %m is the log message, %n a newline-->
<!-- %c prints class details, %M the method name, %pid the pid, %line the line where the log was emitted -->
<!-- %logger{80} caps the Logger name at 80 characters -->
<!-- value="${LOCAL_IP_HOSTNAME} %date [%p] %C [%thread] pid:%pid line:%line %throwable %c{10} %m%n"/>-->
<property name="CONSOLE_LOG_PATTERN"
value="%d %highlight{%-5level [%traceId] pid:%pid-%line}{ERROR=Bright RED, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White} %style{[%t]}{bright,magenta} %style{%c{1.}.%M(%L)}{cyan}: %msg%n"/>
<property name="LOG_PATTERN"
value="%d %highlight{%-5level [%traceId] pid:%pid-%line}{ERROR=Bright RED, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White} %style{[%t]}{bright,magenta} %style{%c{1.}.%M(%L)}{cyan}: %msg%n"/>
<!-- read the log path configured in application.yaml via logging.file.path-->
<Property name="FILE_PATH">${spring:logging.file.path}</Property>
<!-- <property name="FILE_PATH">D:\\log4j2_cloud</property> -->
<property name="applicationName">${spring:spring.application.name}</property>
<property name="FILE_STORE_MAX" value="50MB"/>
<property name="FILE_WRITE_INTERVAL" value="1"/>
<property name="LOG_MAX_HISTORY" value="60"/>
</Properties>
</Properties>
<appenders>
<!-- console output -->
<console name="Console" target="SYSTEM_OUT">
<!-- output format -->
<PatternLayout pattern="${CONSOLE_LOG_PATTERN}"/>
<!-- the console accepts messages at this level and above (onMatch) and rejects the rest (onMismatch) -->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
</console>
<!-- writes all messages at info level and above; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month -->
<RollingRandomAccessFile name="RollingFileInfo" fileName="${FILE_PATH}/info.log"
filePattern="${FILE_PATH}/INFO-%d{yyyy-MM-dd}_%i.log.gz">
<!-- accept messages at this level and above (onMatch), reject the rest (onMismatch)-->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all messages at debug level and above; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month-->
<RollingRandomAccessFile name="RollingFileDebug" fileName="${FILE_PATH}/debug.log"
filePattern="${FILE_PATH}/DEBUG-%d{yyyy-MM-dd}_%i.log.gz">
<!--accept messages at this level and above (onMatch), reject the rest (onMismatch)-->
<ThresholdFilter level="debug" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all messages at warn level and above; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month-->
<RollingRandomAccessFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log"
filePattern="${FILE_PATH}/WARN-%d{yyyy-MM-dd}_%i.log.gz">
<!-- accept messages at this level and above (onMatch), reject the rest (onMismatch)-->
<ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!-- the interval attribute sets how often to roll over; the default is 1 hour -->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting -->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all messages at error level and above; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month -->
<RollingRandomAccessFile name="RollingFileError" fileName="${FILE_PATH}/error.log"
filePattern="${FILE_PATH}/ERROR-%d{yyyy-MM-dd}_%i.log.gz">
<!--accept only messages at this level and above (onMatch), reject the rest (onMismatch)-->
<ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all OPERATION-level messages; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month -->
<RollingRandomAccessFile name="RollingFileOperation" fileName="${FILE_PATH}/operation.log"
filePattern="${FILE_PATH}/OPERATION-%d{yyyy-MM-dd}_%i.log.gz">
<!--accept only OPERATION-level messages (onMatch), reject everything else (onMismatch)-->
<LevelRangeFilter minLevel="OPERATION" maxLevel="OPERATION" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<!-- writes all API-level messages; each time the size limit is exceeded, that chunk of log is compressed and archived into a folder created per year-month -->
<RollingRandomAccessFile name="RollingFileApi" fileName="${FILE_PATH}/api.log"
filePattern="${FILE_PATH}/API-%d{yyyy-MM-dd}_%i.log.gz">
<!--accept only API-level messages (onMatch), reject everything else (onMismatch)-->
<LevelRangeFilter minLevel="API" maxLevel="API" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Policies>
<!--the interval attribute sets how often to roll over; the default is 1 hour-->
<TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
<SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
</Policies>
<!-- if DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting-->
<DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
</RollingRandomAccessFile>
<Kafka name="KafkaOperationLog" topic="operation_log" ignoreExceptions="false">
<LevelRangeFilter minLevel="OPERATION" maxLevel="OPERATION" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Property name="bootstrap.servers">172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092</Property>
<Property name="max.block.ms">2000</Property>
</Kafka>
<Kafka name="KafkaApiLog" topic="api_log" ignoreExceptions="false">
<LevelRangeFilter minLevel="API" maxLevel="API" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${LOG_PATTERN}"/>
<Property name="bootstrap.servers">172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092</Property>
<Property name="max.block.ms">2000</Property>
</Kafka>
</appenders>
<!-- Logger nodes configure logging separately, e.g. a different level for classes under a given package -->
<!-- then define loggers; an appender only takes effect when referenced by a defined logger -->
<loggers>
<!--filter out some useless DEBUG noise from spring and mybatis-->
<logger name="org.mybatis" level="info" additivity="false">
<AppenderRef ref="Console"/>
</logger>
<!--with additivity=false, a child Logger writes only to its own appenders and not to its parent's -->
<Logger name="org.springframework" level="info" additivity="false">
<AppenderRef ref="Console"/>
</Logger>
<!-- avoid recursive logging -->
<Logger name="org.apache.kafka" level="INFO" />
<AsyncLogger name="AsyncLogger" level="debug" additivity="false">
<AppenderRef ref="Console"/>
<AppenderRef ref="RollingFileDebug"/>
<AppenderRef ref="RollingFileInfo"/>
<AppenderRef ref="RollingFileWarn"/>
<AppenderRef ref="RollingFileError"/>
<AppenderRef ref="RollingFileOperation"/>
<AppenderRef ref="RollingFileApi"/>
<AppenderRef ref="KafkaOperationLog"/>
<AppenderRef ref="KafkaApiLog"/>
</AsyncLogger>
<root level="trace">
<appender-ref ref="Console"/>
<appender-ref ref="RollingFileDebug"/>
<appender-ref ref="RollingFileInfo"/>
<appender-ref ref="RollingFileWarn"/>
<appender-ref ref="RollingFileError"/>
<AppenderRef ref="RollingFileOperation"/>
<AppenderRef ref="RollingFileApi"/>
<AppenderRef ref="KafkaOperationLog"/>
<AppenderRef ref="KafkaApiLog"/>
</root>
</loggers>
</configuration>
With the configuration above complete, test the logging and verify that records reach both the asynchronous log files and Kafka. Start a consumer on the Kafka server to watch in real time whether logs are being pushed to Kafka:
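For example, the console consumer script that ships with Kafka can watch the operation_log topic (broker address as configured above):
bin/kafka-console-consumer.sh --bootstrap-server 172.16.20.220:9092 --topic operation_log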
4. Configurable request logging in the Gateway
Besides operation logs, business development usually also needs API logs, so the system can make statistical analyses of API requests. The gateway is responsible for forwarding requests to each microservice, which makes it the natural place to collect API logs.
We inevitably face the questions of which services need API log collection and which kinds of API logs to collect, so the design must make API log collection flexibly configurable. For simplicity we keep these settings in the Nacos configuration center; if more detailed, customized needs arise, you could build a configuration UI and keep the settings in a Redis cache.
Because a request's RequestBody and ResponseBody can each be read only once, the data has to be handled in a filter. Gateway does provide AdaptCachedBodyGlobalFilter for caching the RequestBody, but besides some custom request handling we may also need the ResponseBody, so it is better to write our own filters.
The open-source plugin spring-cloud-gateway-plugin implements a very complete set of request-logging filters for Gateway, so we reuse its implementation directly. Since the plugin also carries features we do not need and is tied to a SpringCloud version, we take only its logging part and adapt it to our needs.
1. Add the following items to our configuration file:
- a switch for the log plugin
- a switch for recording request parameters
- a switch for recording response parameters
- the list of microservice IDs whose API logs should be recorded
- the list of URLs whose API logs should be recorded
spring:
cloud:
gateway:
plugin:
config:
# whether to enable the Gateway log plugin
enable: true
# requestLog==true && responseLog==false records only request parameters; responseLog==true records both request and response parameters.
# record input; when requestLog==false nothing is logged
requestLog: true
# in production try to record only the input, because response data is large and mostly meaningless
# record output
responseLog: true
# all: everything; configure: intersection of serviceIdList and pathList; serviceId: only the serviceIdList entries; pathList: only the pathList entries
logType: all
serviceIdList:
- "gitegg-oauth"
- "gitegg-service-system"
pathList:
- "/gitegg-oauth/oauth/token"
- "/gitegg-oauth/oauth/user/info"
2. GatewayPluginConfig: a configuration class that decides from the configuration items which filters to enable and initialize; adapted from spring-cloud-gateway-plugin's GatewayPluginConfig.java.
/**
* Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin
*
* Gateway Plugin Config
* @author chenggang
* @date 2019/01/29
*/
@Slf4j
@Configuration
public class GatewayPluginConfig {
@Bean
@ConditionalOnMissingBean(GatewayPluginProperties.class)
@ConfigurationProperties(GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX)
public GatewayPluginProperties gatewayPluginProperties(){
return new GatewayPluginProperties();
}
@Bean
@ConditionalOnBean(GatewayPluginProperties.class)
@ConditionalOnMissingBean(GatewayRequestContextFilter.class)
@ConditionalOnProperty(prefix = GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX, value = { "enable", "requestLog" },havingValue = "true")
public GatewayRequestContextFilter gatewayContextFilter(@Autowired GatewayPluginProperties gatewayPluginProperties , @Autowired(required = false) ContextExtraDataGenerator contextExtraDataGenerator){
GatewayRequestContextFilter gatewayContextFilter = new GatewayRequestContextFilter(gatewayPluginProperties, contextExtraDataGenerator);
log.debug("Load GatewayContextFilter Config Bean");
return gatewayContextFilter;
}
@Bean
@ConditionalOnMissingBean(GatewayResponseContextFilter.class)
@ConditionalOnProperty(prefix = GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX, value = { "enable", "responseLog" }, havingValue = "true")
public GatewayResponseContextFilter responseLogFilter(){
GatewayResponseContextFilter responseLogFilter = new GatewayResponseContextFilter();
log.debug("Load Response Log Filter Config Bean");
return responseLogFilter;
}
@Bean
@ConditionalOnBean(GatewayPluginProperties.class)
@ConditionalOnMissingBean(RemoveGatewayContextFilter.class)
@ConditionalOnProperty(prefix = GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX, value = { "enable" }, havingValue = "true")
public RemoveGatewayContextFilter removeGatewayContextFilter(){
RemoveGatewayContextFilter gatewayContextFilter = new RemoveGatewayContextFilter();
log.debug("Load RemoveGatewayContextFilter Config Bean");
return gatewayContextFilter;
}
@Bean
@ConditionalOnMissingBean(RequestLogFilter.class)
@ConditionalOnProperty(prefix = GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX, value = { "enable" },havingValue = "true")
public RequestLogFilter requestLogFilter(@Autowired GatewayPluginProperties gatewayPluginProperties){
RequestLogFilter requestLogFilter = new RequestLogFilter(gatewayPluginProperties);
log.debug("Load Request Log Filter Config Bean");
return requestLogFilter;
}
}
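The config above wires filters whose order comes from a FilterOrderEnum that the article does not list. The constraint implied by the code is that the request-context filter runs first, then the response decorator, then the log filter; the numeric values below are assumptions:
public enum FilterOrderEnum {

    // cache the request body before anything else consumes it
    GATEWAY_CONTEXT_FILTER(-100),
    // decorate the response early enough to capture its body
    RESPONSE_DATA_FILTER(-99),
    // log after the other filters have populated GatewayContext
    REQUEST_LOG_FILTER(-98);

    private final int order;

    FilterOrderEnum(int order) {
        this.order = order;
    }

    public int getOrder() {
        return order;
    }
}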
3. GatewayRequestContextFilter: the filter that handles request parameters; adapted from spring-cloud-gateway-plugin's GatewayContextFilter.java.
/**
* Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin
*
* Gateway Context Filter
* @author chenggang
* @date 2019/01/29
*/
@Slf4j
@AllArgsConstructor
public class GatewayRequestContextFilter implements GlobalFilter, Ordered {
private GatewayPluginProperties gatewayPluginProperties;
private ContextExtraDataGenerator contextExtraDataGenerator;
private static final AntPathMatcher ANT_PATH_MATCHER = new AntPathMatcher();
/**
* default HttpMessageReader
*/
private static final List<HttpMessageReader<?>> MESSAGE_READERS = HandlerStrategies.withDefaults().messageReaders();
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
ServerHttpRequest request = exchange.getRequest();
GatewayContext gatewayContext = new GatewayContext();
gatewayContext.setReadRequestData(shouldReadRequestData(exchange));
gatewayContext.setReadResponseData(gatewayPluginProperties.getResponseLog());
HttpHeaders headers = request.getHeaders();
gatewayContext.setRequestHeaders(headers);
if(Objects.nonNull(contextExtraDataGenerator)){
GatewayContextExtraData gatewayContextExtraData = contextExtraDataGenerator.generateContextExtraData(exchange);
gatewayContext.setGatewayContextExtraData(gatewayContextExtraData);
}
if(!gatewayContext.getReadRequestData()){
exchange.getAttributes().put(GatewayContext.CACHE_GATEWAY_CONTEXT, gatewayContext);
log.debug("[GatewayContext]Properties Set To Not Read Request Data");
return chain.filter(exchange);
}
gatewayContext.getAllRequestData().addAll(request.getQueryParams());
/*
* save gateway context into exchange
*/
exchange.getAttributes().put(GatewayContext.CACHE_GATEWAY_CONTEXT, gatewayContext);
MediaType contentType = headers.getContentType();
if(headers.getContentLength()>0){
if(MediaType.APPLICATION_JSON.equals(contentType) || MediaType.APPLICATION_JSON_UTF8.equals(contentType)){
return readBody(exchange, chain,gatewayContext);
}
if(MediaType.APPLICATION_FORM_URLENCODED.equals(contentType)){
return readFormData(exchange, chain,gatewayContext);
}
}
log.debug("[GatewayContext]ContentType:{},Gateway context is set with {}",contentType, gatewayContext);
return chain.filter(exchange);
}
@Override
public int getOrder() {
return FilterOrderEnum.GATEWAY_CONTEXT_FILTER.getOrder();
}
/**
* check should read request data whether or not
* @return boolean
*/
private boolean shouldReadRequestData(ServerWebExchange exchange){
if(gatewayPluginProperties.getRequestLog()
&& GatewayLogTypeEnum.ALL.getType().equals(gatewayPluginProperties.getLogType())){
log.debug("[GatewayContext]Properties Set Read All Request Data");
return true;
}
boolean serviceFlag = false;
boolean pathFlag = false;
boolean lbFlag = false;
List<String> readRequestDataServiceIdList = gatewayPluginProperties.getServiceIdList();
List<String> readRequestDataPathList = gatewayPluginProperties.getPathList();
if(!CollectionUtils.isEmpty(readRequestDataPathList)
&& (GatewayLogTypeEnum.PATH.getType().equals(gatewayPluginProperties.getLogType())
|| GatewayLogTypeEnum.CONFIGURE.getType().equals(gatewayPluginProperties.getLogType()))){
String requestPath = exchange.getRequest().getPath().pathWithinApplication().value();
for(String path : readRequestDataPathList){
if(ANT_PATH_MATCHER.match(path,requestPath)){
log.debug("[GatewayContext]Properties Set Read Specific Request Data With Request Path:{},Math Pattern:{}", requestPath, path);
pathFlag = true;
break;
}
}
}
Route route = exchange.getAttribute(ServerWebExchangeUtils.GATEWAY_ROUTE_ATTR);
URI routeUri = route.getUri();
if(!"lb".equalsIgnoreCase(routeUri.getScheme())){
lbFlag = true;
}
String routeServiceId = routeUri.getHost().toLowerCase();
if(!CollectionUtils.isEmpty(readRequestDataServiceIdList)
&& (GatewayLogTypeEnum.SERVICE.getType().equals(gatewayPluginProperties.getLogType())
|| GatewayLogTypeEnum.CONFIGURE.getType().equals(gatewayPluginProperties.getLogType()))){
if(readRequestDataServiceIdList.contains(routeServiceId)){
log.debug("[GatewayContext]Properties Set Read Specific Request Data With ServiceId:{}",routeServiceId);
serviceFlag = true;
}
}
if (GatewayLogTypeEnum.CONFIGURE.getType().equals(gatewayPluginProperties.getLogType())
&& serviceFlag && pathFlag && !lbFlag)
{
return true;
}
else if (GatewayLogTypeEnum.SERVICE.getType().equals(gatewayPluginProperties.getLogType())
&& serviceFlag && !lbFlag)
{
return true;
}
else if (GatewayLogTypeEnum.PATH.getType().equals(gatewayPluginProperties.getLogType())
&& pathFlag)
{
return true;
}
return false;
}
/**
* ReadFormData
* @param exchange
* @param chain
* @return
*/
private Mono<Void> readFormData(ServerWebExchange exchange, GatewayFilterChain chain, GatewayContext gatewayContext){
HttpHeaders headers = exchange.getRequest().getHeaders();
return exchange.getFormData()
.doOnNext(multiValueMap -> {
gatewayContext.setFormData(multiValueMap);
gatewayContext.getAllRequestData().addAll(multiValueMap);
log.debug("[GatewayContext]Read FormData Success");
})
.then(Mono.defer(() -> {
Charset charset = headers.getContentType().getCharset();
charset = charset == null? StandardCharsets.UTF_8:charset;
String charsetName = charset.name();
MultiValueMap<String, String> formData = gatewayContext.getFormData();
/*
* formData is empty just return
*/
if(null == formData || formData.isEmpty()){
return chain.filter(exchange);
}
StringBuilder formDataBodyBuilder = new StringBuilder();
String entryKey;
List<String> entryValue;
try {
/*
* repackage form data
*/
for (Map.Entry<String, List<String>> entry : formData.entrySet()) {
entryKey = entry.getKey();
entryValue = entry.getValue();
if (entryValue.size() > 1) {
for(String value : entryValue){
formDataBodyBuilder.append(entryKey).append("=").append(URLEncoder.encode(value, charsetName)).append("&");
}
} else {
formDataBodyBuilder.append(entryKey).append("=").append(URLEncoder.encode(entryValue.get(0), charsetName)).append("&");
}
}
}catch (UnsupportedEncodingException e){ /* ignored: the charset was resolved above, so its name is always supported */ }
/*
* substring with the last char '&'
*/
String formDataBodyString = "";
if(formDataBodyBuilder.length()>0){
formDataBodyString = formDataBodyBuilder.substring(0, formDataBodyBuilder.length() - 1);
}
/*
* get data bytes
*/
byte[] bodyBytes = formDataBodyString.getBytes(charset);
int contentLength = bodyBytes.length;
HttpHeaders httpHeaders = new HttpHeaders();
httpHeaders.putAll(exchange.getRequest().getHeaders());
httpHeaders.remove(HttpHeaders.CONTENT_LENGTH);
/*
* in case of content-length not matched
*/
httpHeaders.setContentLength(contentLength);
/*
* use BodyInserter to InsertFormData Body
*/
BodyInserter<String, ReactiveHttpOutputMessage> bodyInserter = BodyInserters.fromObject(formDataBodyString);
CachedBodyOutputMessage cachedBodyOutputMessage = new CachedBodyOutputMessage(exchange, httpHeaders);
log.debug("[GatewayContext]Rewrite Form Data :{}",formDataBodyString);
return bodyInserter.insert(cachedBodyOutputMessage, new BodyInserterContext())
.then(Mono.defer(() -> {
ServerHttpRequestDecorator decorator = new ServerHttpRequestDecorator(
exchange.getRequest()) {
@Override
public HttpHeaders getHeaders() {
return httpHeaders;
}
@Override
public Flux<DataBuffer> getBody() {
return cachedBodyOutputMessage.getBody();
}
};
return chain.filter(exchange.mutate().request(decorator).build());
}));
}));
}
/**
* ReadJsonBody
* @param exchange
* @param chain
* @return
*/
private Mono<Void> readBody(ServerWebExchange exchange, GatewayFilterChain chain, GatewayContext gatewayContext){
return DataBufferUtils.join(exchange.getRequest().getBody())
.flatMap(dataBuffer -> {
/*
* read the body Flux<DataBuffer>, and release the buffer
* //TODO when SpringCloudGateway Version Release To G.SR2,this can be update with the new version's feature
* see PR https://github.com/spring-cloud/spring-cloud-gateway/pull/1095
*/
byte[] bytes = new byte[dataBuffer.readableByteCount()];
dataBuffer.read(bytes);
DataBufferUtils.release(dataBuffer);
Flux<DataBuffer> cachedFlux = Flux.defer(() -> {
DataBuffer buffer = exchange.getResponse().bufferFactory().wrap(bytes);
DataBufferUtils.retain(buffer);
return Mono.just(buffer);
});
/*
* repackage ServerHttpRequest
*/
ServerHttpRequest mutatedRequest = new ServerHttpRequestDecorator(exchange.getRequest()) {
@Override
public Flux<DataBuffer> getBody() {
return cachedFlux;
}
};
ServerWebExchange mutatedExchange = exchange.mutate().request(mutatedRequest).build();
return ServerRequest.create(mutatedExchange, MESSAGE_READERS)
.bodyToMono(String.class)
.doOnNext(objectValue -> {
gatewayContext.setRequestBody(objectValue);
log.debug("[GatewayContext]Read JsonBody Success");
}).then(chain.filter(mutatedExchange));
});
}
}
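The GatewayContext shared through the exchange attributes is likewise not listed; its shape follows from the calls above. A sketch (the attribute key value is an assumption, Lombok assumed; GatewayContextExtraData comes from the plugin):
import lombok.Data;
import org.springframework.http.HttpHeaders;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;

@Data
public class GatewayContext {

    // attribute key under which the context is cached on the exchange (value assumed)
    public static final String CACHE_GATEWAY_CONTEXT = "cachedGatewayContext";

    /** whether the request body should be read and logged */
    private Boolean readRequestData;
    /** whether the response body should be read and logged */
    private Boolean readResponseData;
    /** cached JSON request body */
    private String requestBody;
    /** cached response body */
    private Object responseBody;
    /** original request headers */
    private HttpHeaders requestHeaders;
    /** cached form data */
    private MultiValueMap<String, String> formData;
    /** query parameters plus form data */
    private MultiValueMap<String, String> allRequestData = new LinkedMultiValueMap<>();
    /** extra data produced by an optional ContextExtraDataGenerator */
    private GatewayContextExtraData gatewayContextExtraData;
}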
4. GatewayResponseContextFilter: the filter that handles the response body; adapted from spring-cloud-gateway-plugin's ResponseLogFilter.java.
/**
* Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin
*
*
* @author: chenggang
* @createTime: 2019-04-11
* @version: v1.2.0
*/
@Slf4j
public class GatewayResponseContextFilter implements GlobalFilter, Ordered {
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
GatewayContext gatewayContext = exchange.getAttribute(GatewayContext.CACHE_GATEWAY_CONTEXT);
if(!gatewayContext.getReadResponseData()){
log.debug("[ResponseLogFilter]Properties Set Not To Read Response Data");
return chain.filter(exchange);
}
ServerHttpResponseDecorator responseDecorator = new ServerHttpResponseDecorator(exchange.getResponse()) {
@Override
public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
return DataBufferUtils.join(Flux.from(body))
.flatMap(dataBuffer -> {
byte[] bytes = new byte[dataBuffer.readableByteCount()];
dataBuffer.read(bytes);
DataBufferUtils.release(dataBuffer);
Flux<DataBuffer> cachedFlux = Flux.defer(() -> {
DataBuffer buffer = exchange.getResponse().bufferFactory().wrap(bytes);
DataBufferUtils.retain(buffer);
return Mono.just(buffer);
});
BodyInserter<Flux<DataBuffer>, ReactiveHttpOutputMessage> bodyInserter = BodyInserters.fromDataBuffers(cachedFlux);
CachedBodyOutputMessage outputMessage = new CachedBodyOutputMessage(exchange, exchange.getResponse().getHeaders());
DefaultClientResponse clientResponse = new DefaultClientResponse(new ResponseAdapter(cachedFlux, exchange.getResponse().getHeaders()), ExchangeStrategies.withDefaults());
Optional<MediaType> optionalMediaType = clientResponse.headers().contentType();
if(!optionalMediaType.isPresent()){
log.debug("[ResponseLogFilter]Response ContentType Is Not Exist");
return Mono.defer(()-> bodyInserter.insert(outputMessage, new BodyInserterContext())
.then(Mono.defer(() -> {
Flux<DataBuffer> messageBody = cachedFlux;
HttpHeaders headers = getDelegate().getHeaders();
if (!headers.containsKey(HttpHeaders.TRANSFER_ENCODING)) {
messageBody = messageBody.doOnNext(data -> headers.setContentLength(data.readableByteCount()));
}
return getDelegate().writeWith(messageBody);
})));
}
MediaType contentType = optionalMediaType.get();
if(!contentType.equals(MediaType.APPLICATION_JSON) && !contentType.equals(MediaType.APPLICATION_JSON_UTF8)){
log.debug("[ResponseLogFilter]Response ContentType Is Not APPLICATION_JSON Or APPLICATION_JSON_UTF8");
return Mono.defer(()-> bodyInserter.insert(outputMessage, new BodyInserterContext())
.then(Mono.defer(() -> {
Flux<DataBuffer> messageBody = cachedFlux;
HttpHeaders headers = getDelegate().getHeaders();
if (!headers.containsKey(HttpHeaders.TRANSFER_ENCODING)) {
messageBody = messageBody.doOnNext(data -> headers.setContentLength(data.readableByteCount()));
}
return getDelegate().writeWith(messageBody);
})));
}
return clientResponse.bodyToMono(Object.class)
.doOnNext(originalBody -> {
GatewayContext gatewayContext = exchange.getAttribute(GatewayContext.CACHE_GATEWAY_CONTEXT);
gatewayContext.setResponseBody(originalBody);
log.debug("[ResponseLogFilter]Read Response Data To Gateway Context Success");
})
.then(Mono.defer(()-> bodyInserter.insert(outputMessage, new BodyInserterContext())
.then(Mono.defer(() -> {
Flux<DataBuffer> messageBody = cachedFlux;
HttpHeaders headers = getDelegate().getHeaders();
if (!headers.containsKey(HttpHeaders.TRANSFER_ENCODING)) {
messageBody = messageBody.doOnNext(data -> headers.setContentLength(data.readableByteCount()));
}
return getDelegate().writeWith(messageBody);
}))));
});
}
@Override
public Mono<Void> writeAndFlushWith(Publisher<? extends Publisher<? extends DataBuffer>> body) {
return writeWith(Flux.from(body)
.flatMapSequential(p -> p));
}
};
return chain.filter(exchange.mutate().response(responseDecorator).build());
}
@Override
public int getOrder() {
return FilterOrderEnum.RESPONSE_DATA_FILTER.getOrder();
}
public class ResponseAdapter implements ClientHttpResponse {
private final Flux<DataBuffer> flux;
private final HttpHeaders headers;
public ResponseAdapter(Publisher<? extends DataBuffer> body, HttpHeaders headers) {
this.headers = headers;
if (body instanceof Flux) {
flux = (Flux) body;
} else {
flux = ((Mono)body).flux();
}
}
@Override
public Flux<DataBuffer> getBody() {
return flux;
}
@Override
public HttpHeaders getHeaders() {
return headers;
}
@Override
public HttpStatus getStatusCode() {
return null;
}
@Override
public int getRawStatusCode() {
return 0;
}
@Override
public MultiValueMap<String, ResponseCookie> getCookies() {
return null;
}
}
}
5. RemoveGatewayContextFilter: a filter that removes the cached request data; adapted from spring-cloud-gateway-plugin's RemoveGatewayContextFilter.java.
/**
* Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin
*
* remove gatewayContext Attribute
* @author chenggang
* @date 2019/06/19
*/
@Slf4j
public class RemoveGatewayContextFilter implements GlobalFilter, Ordered {
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
return chain.filter(exchange).doFinally(s -> exchange.getAttributes().remove(GatewayContext.CACHE_GATEWAY_CONTEXT));
}
@Override
public int getOrder() {
return HIGHEST_PRECEDENCE;
}
}
6. RequestLogFilter: the filter that writes the log records; adapted from spring-cloud-gateway-plugin's RequestLogFilter.java.
/**
* Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin
*
* Filter To Log Request And Response(exclude response body)
* @author chenggang
* @date 2019/01/29
*/
@Log4j2
@AllArgsConstructor
public class RequestLogFilter implements GlobalFilter, Ordered {
private static final String START_TIME = "startTime";
private static final String HTTP_SCHEME = "http";
private static final String HTTPS_SCHEME = "https";
private GatewayPluginProperties gatewayPluginProperties;
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
ServerHttpRequest request = exchange.getRequest();
URI requestURI = request.getURI();
String scheme = requestURI.getScheme();
GatewayContext gatewayContext = exchange.getAttribute(GatewayContext.CACHE_GATEWAY_CONTEXT);
/*
* not http or https scheme
*/
if ((!HTTP_SCHEME.equalsIgnoreCase(scheme) && !HTTPS_SCHEME.equals(scheme)) || !gatewayContext.getReadRequestData()){
return chain.filter(exchange);
}
long startTime = System.currentTimeMillis();
exchange.getAttributes().put(START_TIME, startTime);
// when the plugin switch is on, log the API request after the filter chain completes
if (gatewayPluginProperties.getEnable())
{
return chain.filter(exchange).then(Mono.fromRunnable(() -> logApiRequest(exchange)));
}
else {
return chain.filter(exchange);
}
}
@Override
public int getOrder() {
return FilterOrderEnum.REQUEST_LOG_FILTER.getOrder();
}
/**
* log api request
* @param exchange
*/
private Mono<Void> logApiRequest(ServerWebExchange exchange){
ServerHttpRequest request = exchange.getRequest();
URI requestURI = request.getURI();
String scheme = requestURI.getScheme();
Long startTime = exchange.getAttribute(START_TIME);
Long endTime = System.currentTimeMillis();
Long duration = ( endTime - startTime);
ServerHttpResponse response = exchange.getResponse();
GatewayApiLog gatewayApiLog = new GatewayApiLog();
gatewayApiLog.setClientHost(requestURI.getHost());
gatewayApiLog.setClientIp(IpUtils.getIP(request));
gatewayApiLog.setStartTime(startTime);
gatewayApiLog.setEndTime(endTime);
gatewayApiLog.setDuration(duration);
gatewayApiLog.setMethod(request.getMethodValue());
gatewayApiLog.setScheme(scheme);
gatewayApiLog.setRequestUri(requestURI.getPath());
gatewayApiLog.setResponseCode(String.valueOf(response.getRawStatusCode()));
GatewayContext gatewayContext = exchange.getAttribute(GatewayContext.CACHE_GATEWAY_CONTEXT);
// record the request parameters
if (gatewayPluginProperties.getRequestLog())
{
MultiValueMap<String, String> queryParams = request.getQueryParams();
if(!queryParams.isEmpty()){
queryParams.forEach((key,value)-> log.debug("[RequestLogFilter](Request)Query Param :Key->({}),Value->({})",key,value));
gatewayApiLog.setQueryParams(JsonUtils.mapToJson(queryParams));
}
HttpHeaders headers = request.getHeaders();
MediaType contentType = headers.getContentType();
long length = headers.getContentLength();
log.debug("[RequestLogFilter](Request)ContentType:{},Content Length:{}",contentType,length);
if(length>0 && null != contentType && (contentType.includes(MediaType.APPLICATION_JSON)
||contentType.includes(MediaType.APPLICATION_JSON_UTF8))){
log.debug("[RequestLogFilter](Request)JsonBody:{}",gatewayContext.getRequestBody());
gatewayApiLog.setRequestBody(gatewayContext.getRequestBody());
}
if(length>0 && null != contentType && contentType.includes(MediaType.APPLICATION_FORM_URLENCODED)){
log.debug("[RequestLogFilter](Request)FormData:{}",gatewayContext.getFormData());
gatewayApiLog.setRequestBody(JsonUtils.mapToJson(gatewayContext.getFormData()));
}
}
// record the response parameters
if (gatewayPluginProperties.getResponseLog())
{
log.debug("[RequestLogFilter](Response)HttpStatus:{}",response.getStatusCode());
HttpHeaders headers = response.getHeaders();
headers.forEach((key,value)-> log.debug("[RequestLogFilter]Headers:Key->{},Value->{}",key,value));
MediaType contentType = headers.getContentType();
long length = headers.getContentLength();
log.info("[RequestLogFilter](Response)ContentType:{},Content Length:{}", contentType, length);
log.debug("[RequestLogFilter](Response)Response Body:{}", gatewayContext.getResponseBody());
try {
gatewayApiLog.setResponseBody(JsonUtils.objToJson(gatewayContext.getResponseBody()));
} catch (Exception e) {
log.error("failed to convert the API response body to JSON for logging: {}", e);
}
log.debug("[RequestLogFilter](Response)Original Path:{},Cost:{} ms", exchange.getRequest().getURI().getPath(), duration);
}
Route route = exchange.getAttribute(ServerWebExchangeUtils.GATEWAY_ROUTE_ATTR);
URI routeUri = route.getUri();
String routeServiceId = routeUri.getHost().toLowerCase();
// write the API log at the custom API level
try {
log.log(LogLevelConstant.API_LEVEL,"{\"serviceId\":{}, \"data\":{}}", routeServiceId, JsonUtils.objToJson(gatewayApiLog));
} catch (Exception e) {
log.error("failed to write the API log record: {}", e);
}
return Mono.empty();
}
}
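GatewayApiLog itself is a plain data object; its fields can be read off the setter calls above. A sketch (Lombok assumed):
import lombok.Data;

@Data
public class GatewayApiLog {
    private String clientHost;
    private String clientIp;
    private Long startTime;
    private Long endTime;
    private Long duration;
    private String method;
    private String scheme;
    private String requestUri;
    private String responseCode;
    private String queryParams;
    private String requestBody;
    private String responseBody;
}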
7. Start the services and test: start a Kafka console consumer and check whether messages arrive on the api_log topic:
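Again the bundled console consumer works for a quick check (broker address as configured above):
bin/kafka-console-consumer.sh --bootstrap-server 172.16.20.220:9092 --topic api_log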
8. Storing and processing the log data
Once log messages are stored in files or Kafka, we need to decide how to process the data. In a microservice cluster of any real scale, saving logs to a relational database like MySQL is discouraged or outright forbidden; if you genuinely need it, you can consume the log messages with Spring Cloud Stream, as introduced in the previous article, and save them to the database of your choice. The next article covers building an ELK log analysis system to process, analyze, and extract value from this huge volume of log data.