Android OkHttp Source Code Analysis Primer (II)

Posted by Focusing on 2019-01-22

Article links:

Android OkHttp Source Code Analysis Primer: Synchronous and Asynchronous Requests (I)

Android OkHttp Source Code Analysis Primer: Interceptors and the Chain of Responsibility (II)

Preface

The previous article covered OkHttp's basic usage and walked through the source code for synchronous and asynchronous requests, so you should now have a rough picture of the internal flow and what each piece does. Recall that we mentioned the getResponseWithInterceptorChain responsibility chain; this article dives into the source of OkHttp's five built-in interceptors (Interceptors) and its chain-of-responsibility pattern. Let's get into it.

Main Text

First, let's be clear about what OkHttp interceptors (Interceptors) are for. Here is the official explanation:

Interceptors are a powerful mechanism that can monitor, rewrite, and retry calls

How should we understand this? Roughly: imagine you are traveling to the capital for the imperial exam, carrying your travel money, and you run into five bands of robbers along the way; strangely, each band is after something different. In this metaphor, you are a network request in flight and the robbers are interceptors: each one takes the parameters your request carries, modifies, inspects, or validates them, and then lets you pass. In effect this implements AOP (aspect-oriented programming); anyone who has worked with the Spring framework will find the idea very familiar.

Let's look at the official interceptor diagram:

Interceptors

There are two kinds of interceptors: application-layer interceptors (Application Interception) and network interceptors (Network Interception). This article focuses on OkHttp core, the five built-in interceptors.

So which interceptors does OkHttp ship with internally, and what does each one do?

Following on from the previous article, let's go into the getResponseWithInterceptorChain() source and see how these interceptors do their work.

  Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    // The responsibility chain
    List<Interceptor> interceptors = new ArrayList<>();
    // Add user-defined interceptors (Application Interception)
    interceptors.addAll(client.interceptors()); 
    
    // Add RetryAndFollowUpInterceptor, which handles errors, failure retries, and redirects
    interceptors.add(retryAndFollowUpInterceptor);
  
    // Add BridgeInterceptor, which fills in required request headers the user's request lacks and handles compression
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    
    // Add CacheInterceptor, which handles caching
    interceptors.add(new CacheInterceptor(client.internalCache()));
    
    // Add ConnectInterceptor, which establishes the connection to the server
    interceptors.add(new ConnectInterceptor(client));
    
    if (!forWebSocket) {
     // Add network interceptors (Network Interception)
      interceptors.addAll(client.networkInterceptors()); 
    }
    // Add CallServerInterceptor, which writes request data to the server and reads the response data back
    interceptors.add(new CallServerInterceptor(forWebSocket));
    
    // Pass the interceptors list and related parameters into the RealInterceptorChain constructor to build the chain
    Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
        originalRequest, this, eventListener, client.connectTimeoutMillis(),
        client.readTimeoutMillis(), client.writeTimeoutMillis());
    // Kick off the responsibility chain
    return chain.proceed(originalRequest);
  }

As you can see, getResponseWithInterceptorChain first creates a series of interceptors and collects them into a single Interceptor list, with each interceptor responsible for a different piece of functionality. The list is then handed to the RealInterceptorChain constructor to build the interceptor chain; the chain-of-responsibility pattern is what manages these interceptors. As for the pattern itself, simplified versions of it show up all over everyday code:


 @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode == RESULT_OK) {
            switch (requestCode) {
                case 1:
                      System.out.println("I am the first interceptor: " + requestCode);
                    break;
                case 2:
                      System.out.println("I am the second interceptor: " + requestCode);
                    break;
                case 3:
                      System.out.println("I am the third interceptor: " + requestCode);
                    break;
                default:
                    break;
            }
        }
    }

Seeing this code, you can probably tell what the chain-of-responsibility pattern does: handle different business logic depending on the state of requestCode, and pass anything that isn't yours on to the next handler. Of course, this is a heavily simplified version that bends many of the pattern's formal definitions; it is only meant to give you a feel for the flow. To apply the pattern properly in real development, you will want to read the relevant documentation — you know what I mean.
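The switch above only gestures at the idea; a closer-to-textbook chain of responsibility passes the request down explicitly. A minimal standalone sketch (all class names are hypothetical, unrelated to OkHttp):

```java
// Each handler either consumes the request code or forwards it to its successor.
public class ChainDemo {
    abstract static class Handler {
        private final Handler next;
        Handler(Handler next) { this.next = next; }
        final String handle(int requestCode) {
            if (canHandle(requestCode)) return name() + " handled " + requestCode;
            if (next != null) return next.handle(requestCode);  // pass it on
            return "nobody handled " + requestCode;             // end of the chain
        }
        abstract boolean canHandle(int requestCode);
        abstract String name();
    }

    // Small factory so each link only declares the code it owns.
    static Handler interceptor(int code, String name, Handler next) {
        return new Handler(next) {
            boolean canHandle(int c) { return c == code; }
            String name() { return name; }
        };
    }

    public static Handler buildChain() {
        // Linked back to front: first -> second -> third
        return interceptor(1, "first",
                interceptor(2, "second",
                        interceptor(3, "third", null)));
    }

    public static void main(String[] args) {
        System.out.println(buildChain().handle(2)); // second handled 2
        System.out.println(buildChain().handle(9)); // nobody handled 9
    }
}
```

Each handler only knows its successor, so adding or removing links never touches the others — the same property OkHttp gets from its interceptor list.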

Back to the main topic. As noted above, getResponseWithInterceptorChain ultimately calls RealInterceptorChain's proceed method, so let's continue from the source:

public interface Interceptor {
  // Each interceptor uses the Chain to trigger the next interceptor, until the last one stops the chain
  // Once every interceptor has run in turn, the results produced along the way are assembled
  Response intercept(Chain chain) throws IOException;

  interface Chain {
    // Returns the request
    Request request();
    
    // Processes the request
    Response proceed(Request request) throws IOException;
  }
}

public final class RealInterceptorChain implements Interceptor.Chain {
  private final List<Interceptor> interceptors;
  private final StreamAllocation streamAllocation;
  private final HttpCodec httpCodec;
  private final RealConnection connection;
  private final int index;
  private final Request request;
  private final Call call;
  private final EventListener eventListener;
  private final int connectTimeout;
  private final int readTimeout;
  private final int writeTimeout;
  private int calls;

  public RealInterceptorChain(List<Interceptor> interceptors, StreamAllocation streamAllocation,
      HttpCodec httpCodec, RealConnection connection, int index, Request request, Call call,
      EventListener eventListener, int connectTimeout, int readTimeout, int writeTimeout) {
    this.interceptors = interceptors;
    this.connection = connection;
    this.streamAllocation = streamAllocation;
    this.httpCodec = httpCodec;
    this.index = index;
    this.request = request;
    this.call = call;
    this.eventListener = eventListener;
    this.connectTimeout = connectTimeout;
    this.readTimeout = readTimeout;
    this.writeTimeout = writeTimeout;
  }
   @Override public Response proceed(Request request) throws IOException {
    return proceed(request, streamAllocation, httpCodec, connection);
  }
}

From the code above, Interceptor is an interface and RealInterceptorChain implements Interceptor.Chain. Here we also see the parameters we passed in during initialization, confirming that what actually runs is RealInterceptorChain's proceed method. Let's keep going:

  public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec,
      RealConnection connection) throws IOException {
    if (index >= interceptors.size()) throw new AssertionError();

    calls++;

    // Some checks and bookkeeping omitted here; only the core code is kept

    // Call the next interceptor in the chain.

    // Create the next interceptor chain: index + 1 means the chain can only be entered from the
    // next interceptor onward, never from the current one. Each new chain is one interceptor
    // shorter than the last, which prevents any interceptor from running twice
    RealInterceptorChain next = new RealInterceptorChain(interceptors, streamAllocation, httpCodec,
        connection, index + 1, request, call, eventListener, connectTimeout, readTimeout,
        writeTimeout);
    // Get the current interceptor
    Interceptor interceptor = interceptors.get(index);
    // Run the current interceptor, handing it the next chain
    Response response = interceptor.intercept(next);

    return response;
  }

At this point you may feel a little lost. After a first read, your questions probably look like this:

1. What is index + 1 for?

2. How does the current interceptor get executed?

3. How do the interceptors in the chain get called one after another?

4. Even if the three points above are clear, each interceptor returns a different result, so how do I end up with a single final result?

First, be clear about the flow of the chain-of-responsibility pattern. As in the story above, a scholar sets out for the capital carrying books, silver, and family, and meets bandits along the way: the first band robs him of his books and lets him pass, the second takes the silver, and so on until he has nothing left. The scholar is the initial RealInterceptorChain and the bandits are the Interceptors. index + 1 creates a fresh chain each time, so the journey looks like: scholar (books, silver, family) → book bandits → scholar (silver, family) → silver bandits → scholar (family) → ... In other words, by repeatedly creating new RealInterceptorChain links that step through the interceptors in interceptors, we form a responsibility (robbery) chain; once every interceptor in the chain has finished, the final result is returned.

Look at Interceptor interceptor = interceptors.get(index): as index + 1 keeps advancing, we get list.get(0), list.get(1), ... so every interceptor in interceptors is fetched in turn. As mentioned earlier, Interceptor is an interface, and all of OkHttp's built-in interceptors implement it to handle their own business.

So each time Interceptor interceptor = interceptors.get(index) runs, interceptor.intercept(next) executes that interceptor's intercept implementation to handle its business, with the freshly created next chain passed in; this is what keeps any single interceptor from running twice.
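The index-advancing recursion described above can be reproduced in a few dozen lines. A standalone sketch with simplified types (strings stand in for Request/Response; this is illustrative, not OkHttp code):

```java
import java.util.Arrays;
import java.util.List;

// Minimal re-creation of RealInterceptorChain's cursor mechanism.
public class MiniChain {
    interface Interceptor { String intercept(Chain chain); }
    interface Chain { String request(); String proceed(String request); }

    static class RealChain implements Chain {
        private final List<Interceptor> interceptors;
        private final int index;
        private final String request;
        RealChain(List<Interceptor> interceptors, int index, String request) {
            this.interceptors = interceptors; this.index = index; this.request = request;
        }
        public String request() { return request; }
        public String proceed(String request) {
            // Each call advances the cursor: the next chain starts at index + 1,
            // so no interceptor ever runs twice.
            RealChain next = new RealChain(interceptors, index + 1, request);
            Interceptor interceptor = interceptors.get(index);
            return interceptor.intercept(next);
        }
    }

    public static String run() {
        Interceptor a = chain -> "A(" + chain.proceed(chain.request()) + ")";
        Interceptor b = chain -> "B(" + chain.proceed(chain.request()) + ")";
        Interceptor last = chain -> "response";  // the last interceptor does not call proceed
        return new RealChain(Arrays.asList(a, b, last), 0, "request").proceed("request");
    }

    public static void main(String[] args) {
        System.out.println(run()); // A(B(response))
    }
}
```

Note how each interceptor wraps the result coming back up the chain — this nesting is exactly why the final Response you receive has passed back through every interceptor in reverse order.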

RetryAndFollowUpInterceptor (retry and redirect interceptor)

Handles errors, failure retries, and redirects

/**
 * This interceptor recovers from failures and follows redirects as necessary. It may throw an
 * {@link IOException} if the call was canceled.
 */
public final class RetryAndFollowUpInterceptor implements Interceptor {
  /**
   * How many redirects and auth challenges should we attempt? Chrome follows 21 redirects; Firefox,
   * curl, and wget follow 20; Safari follows 16; and HTTP/1.0 recommends 5.
   */
  // Maximum number of follow-ups (retries/redirects):
  private static final int MAX_FOLLOW_UPS = 20;

  public RetryAndFollowUpInterceptor(OkHttpClient client, boolean forWebSocket) {
    this.client = client;
    this.forWebSocket = forWebSocket;
  }

@Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();
    // Create the object that carries out the HTTP request. It is passed down the chain untouched
    // until ConnectInterceptor actually uses it, mainly to ① obtain a Connection to the server and
    // ② obtain the input/output streams used to exchange data with the server. Built from:
    // 1. the global connection pool, 2. the route Address, 3. the call stack trace object
    streamAllocation = new StreamAllocation(
        client.connectionPool(), createAddress(request.url()), callStackTrace);

    int followUpCount = 0;
    Response priorResponse = null;  // the eventual combined response
    while (true) {
      if (canceled) {
        streamAllocation.release();
        throw new IOException("Canceled");
      }

      Response response = null;
      boolean releaseConnection = true;
      try {
        // Run the next interceptor, i.e. BridgeInterceptor, handing the freshly initialized
        // connection object down the chain via proceed. The response returned here is whatever the
        // downstream interceptors produce; combined with priorResponse each iteration, it becomes
        // the result finally returned to us
        response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null);
        releaseConnection = false;
      } catch (RouteException e) {
        // The attempt to connect via a route failed. The request will not have been sent.
        // If there was an exception, decide whether to recover
        if (!recover(e.getLastConnectException(), false, request)) {
          throw e.getLastConnectException();
        }
        releaseConnection = false;
        continue;
      } catch (IOException e) {
        // An attempt to communicate with a server failed. The request may have been sent.
        boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
        if (!recover(e, requestSendStarted, request)) throw e;
        releaseConnection = false;
        continue;
      } finally {
        // We're throwing an unchecked exception. Release any resources.
        if (releaseConnection) {
          streamAllocation.streamFailed(null);
          streamAllocation.release();
        }
      }

      // Attach the prior response if it exists. Such responses never have a body.
      if (priorResponse != null) {
        response = response.newBuilder()
            .priorResponse(priorResponse.newBuilder()
                    .body(null)
                    .build())
            .build();
      }
       // Check whether a follow-up (e.g. a redirect) is required
      Request followUp = followUpRequest(response);

      if (followUp == null) {
        if (!forWebSocket) {
          streamAllocation.release();
        }
        // Return the result
        return response;
      }
       // A follow-up is required, so close the current response stream
      closeQuietly(response.body());
       // Past the maximum number of follow-ups?
      if (++followUpCount > MAX_FOLLOW_UPS) {
        streamAllocation.release();
        throw new ProtocolException("Too many follow-up requests: " + followUpCount);
      }

      if (followUp.body() instanceof UnrepeatableRequestBody) {
        streamAllocation.release();
        throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
      }
       // Does the follow-up reuse the same connection?
      if (!sameConnection(response, followUp.url())) {
        streamAllocation.release();
        streamAllocation = new StreamAllocation(
            client.connectionPool(), createAddress(followUp.url()), callStackTrace);
      } else if (streamAllocation.codec() != null) {
        throw new IllegalStateException("Closing the body of " + response
            + " didn't close its backing stream. Bad interceptor?");
      }

      request = followUp;
      priorResponse = response;
    }
  }

Here we see response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null): proceed runs the new interceptor chain we passed in, which is how the responsibility chain forms and how the interceptors run one after another. RetryAndFollowUpInterceptor's main job is retrying failed requests, but note that not every failed request may be retried; internally it inspects the exception and the response code to decide whether a retry is allowed.

StreamAllocation: creates the objects needed to carry out an HTTP request. It is mainly used to:

1. obtain a Connection to the server; 2. obtain the input/output streams used to exchange data with the server.

It travels down the responsibility chain untouched until ConnectInterceptor actually uses it, and it is built from:

(1) the global connection pool, (2) the connection route Address, (3) the call stack trace object.

In streamAllocation = new StreamAllocation(client.connectionPool(), createAddress(request.url()), callStackTrace), the call createAddress(request.url()) builds from the URL an Address object for an Okio-based socket connection. The control flow goes like this:

1. Enter the while (true) loop. If the request has been canceled, release the streamAllocation resources and throw an exception to finish.

2. Run the next interceptor chain. If an exception occurs, control goes to the catch blocks, which decide whether to recover and continue, or to bail out and release resources.

3. If priorResponse is non-null, combine the current Response with the previously returned Response (this is why the final result is one complete Response).

4. Call followUpRequest to see whether the response requires a follow-up (e.g. a redirect); if not, return the current response.

5. Increment followUpCount and check it against the maximum number of follow-ups; past the limit, release streamAllocation and throw.

6. Use sameConnection to check whether the follow-up targets the same connection; if not, release the StreamAllocation and rebuild it.

7. Set request to the follow-up, save the current Response into priorResponse, and continue the while loop.

From this, RetryAndFollowUpInterceptor's main flow is:

1) create the StreamAllocation object;

2) call RealInterceptorChain.proceed(...) to perform the network request;

3) decide from the exception or the response whether the request must be re-issued;

4) call the next interceptor, process its response, and return it to the interceptor above.
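The follow-up loop above can be sketched without any networking. In this standalone toy (all names hypothetical, not OkHttp code), "requests" are integers, a 3xx "response" redirects to the next one, and the loop caps the number of follow-ups the same way MAX_FOLLOW_UPS does:

```java
import java.util.function.IntUnaryOperator;

// Hedged sketch of the RetryAndFollowUpInterceptor follow-up loop (simplified):
// keep re-issuing the request until no follow-up is needed or the cap is hit.
public class FollowUpDemo {
    static final int MAX_FOLLOW_UPS = 20;  // same cap OkHttp uses

    // responseFor maps a request id to a response code; 3xx means "redirect to id + 1".
    static int execute(int firstRequest, IntUnaryOperator responseFor) {
        int request = firstRequest;
        int followUpCount = 0;
        while (true) {
            int code = responseFor.applyAsInt(request);
            if (code < 300 || code >= 400) return code;   // no follow-up needed: done
            if (++followUpCount > MAX_FOLLOW_UPS)
                throw new RuntimeException("Too many follow-up requests: " + followUpCount);
            request = request + 1;                        // follow the redirect
        }
    }

    public static void main(String[] args) {
        // Requests 0 and 1 redirect; request 2 answers 200.
        System.out.println(execute(0, id -> id < 2 ? 302 : 200));
    }
}
```

The real interceptor additionally rebuilds the StreamAllocation when the redirect changes host and combines each round's response into priorResponse, but the loop skeleton is the same.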

BridgeInterceptor (bridge interceptor)

Sets the encoding, adds headers and the Keep-Alive connection, and converts between application-layer and network-layer request/response types

/**
 * Bridges from application code to network code. First it builds a network request from a user
 * request. Then it proceeds to call the network. Finally it builds a user response from the network
 * response.
 */
public final class BridgeInterceptor implements Interceptor {
  private final CookieJar cookieJar;

  public BridgeInterceptor(CookieJar cookieJar) {
    this.cookieJar = cookieJar;
  }

  @Override
      // 1. Cookie handling; 2. gzip compression
    public Response intercept(Interceptor.Chain chain) throws IOException {
        Request userRequest = chain.request();
        Request.Builder requestBuilder = userRequest.newBuilder();
        RequestBody body = userRequest.body();
        if (body != null) {
            MediaType contentType = body.contentType();
            if (contentType != null) {
                requestBuilder.header("Content-Type", contentType.toString());
            }
            long contentLength = body.contentLength();
            if (contentLength != -1) {
                requestBuilder.header("Content-Length", Long.toString(contentLength));
                requestBuilder.removeHeader("Transfer-Encoding");
            } else {
                requestBuilder.header("Transfer-Encoding", "chunked");
                requestBuilder.removeHeader("Content-Length");
            }
        }
        if (userRequest.header("Host") == null) {
            requestBuilder.header("Host", hostHeader(userRequest.url(), false));
        }
        if (userRequest.header("Connection") == null) {
            requestBuilder.header("Connection", "Keep-Alive");
        }
        // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
        // the transfer stream.
        boolean transparentGzip = false;
        if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
            transparentGzip = true;
            requestBuilder.header("Accept-Encoding", "gzip");
        }
         // The CookieJar must not return null cookies, or this line throws a NullPointerException
        List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
        if (!cookies.isEmpty()) {
            // The cookieJar configured when the OkHttpClient was built
            requestBuilder.header("Cookie", cookieHeader(cookies));
        }
        if (userRequest.header("User-Agent") == null) {
            requestBuilder.header("User-Agent", Version.userAgent());
        }

        // Everything above is header processing before the request is sent
        Response networkResponse = chain.proceed(requestBuilder.build());
         // Below: processing the headers after the response has come back.
         // Response headers; cookies are not parsed unless a CookieJar was configured
        HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());
        Response.Builder responseBuilder = networkResponse.newBuilder()
                .request(userRequest);
      
    }

As you can see, before the request goes out, BridgeInterceptor takes the plain Request we passed in and adds the necessary headers — Content-Type, Content-Length, Transfer-Encoding, Host, Connection (Keep-Alive by default), Accept-Encoding, User-Agent — turning it into a Request that can actually be sent over the network. Now look at HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers()), which calls the static receiveHeaders method on HttpHeaders as part of turning the server's raw Response into one the application can use:

public static void receiveHeaders(CookieJar cookieJar, HttpUrl url, Headers headers) {
    // No CookieJar configured: nothing to parse
    if (cookieJar == CookieJar.NO_COOKIES) return;
    // Parse all cookies from the headers
    List<Cookie> cookies = Cookie.parseAll(url, headers);
    if (cookies.isEmpty()) return;
    // Then store them via the user-defined CookieJar
    cookieJar.saveFromResponse(url, cookies);
  }

Once we configure our own CookieJar, receiveHeaders parses the cookies out of the response headers and saves them for us:

    // After the headers are handled, check whether the response body is gzip-compressed; if so, let Okio decode it
        if (transparentGzip
                && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
                && HttpHeaders.hasBody(networkResponse)) {
            GzipSource responseBody = new GzipSource(networkResponse.body().source());
            Headers strippedHeaders = networkResponse.headers().newBuilder()
                    .removeAll("Content-Encoding")
                    .removeAll("Content-Length")
                    .build();
            responseBuilder.headers(strippedHeaders);
            // When done, build a new response
            responseBuilder.body(new RealResponseBody(strippedHeaders, Okio.buffer(responseBody)));
        }
        return responseBuilder.build();
}
  1. transparentGzip records that we added the Accept-Encoding: gzip header ourselves;

  2. the response's Content-Encoding header must actually be gzip;

  3. the response must have a body.

When all three hold, the Response.body input stream is wrapped in a GzipSource, yielding a decompressed data stream; the Content-Encoding and Content-Length headers are then removed, and a new response is built and returned.

From this, BridgeInterceptor's main flow is:

1) turn the user-built Request into a request that can go over the network;

2) send that request down the chain;

3) convert the network Response (including transparent gzip decompression) into a Response the user can consume.
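The transparent-gzip step summarized above can be tried out with the JDK alone. A hedged sketch using java.util.zip (OkHttp itself uses Okio's GzipSource; everything here is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// If the response arrives with Content-Encoding: gzip, the body stream is
// wrapped in a gzip-decoding source before the caller sees it.
public class GzipDemo {
    // Stands in for what the server sends when it honors Accept-Encoding: gzip.
    static byte[] gzip(String text) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (GZIPOutputStream out = new GZIPOutputStream(bytes)) {
                out.write(text.getBytes(StandardCharsets.UTF_8));
            }
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Stands in for the BridgeInterceptor-style decoding of the body stream.
    static String gunzip(byte[] compressed) {
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[512];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] body = gzip("hello");       // compressed wire format
        System.out.println(gunzip(body));  // decoded body: hello
    }
}
```

This also shows why OkHttp strips Content-Length after decoding: the decompressed byte count no longer matches the header.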

CacheInterceptor (cache interceptor)

Handles caching

/** Serves requests from the cache and writes responses to the cache. */
public final class CacheInterceptor implements Interceptor {
  final InternalCache cache;

  public CacheInterceptor(InternalCache cache) {
    this.cache = cache;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    // Use the Request to look up a Response in the cache
    Response cacheCandidate = cache != null
        ? cache.get(chain.request())
        : null;

    long now = System.currentTimeMillis(); // current system time
     // The cache strategy class decides whether to use the cache or go to the network.
     // It reads the user's caching headers and derives networkRequest and cacheResponse
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
     // The network request; null means no network request is needed
    Request networkRequest = strategy.networkRequest;
     // The Response held by the CacheStrategy; null means the cache will not be used
    Response cacheResponse = strategy.cacheResponse;

    if (cache != null) { // Per the cache strategy, update the metrics: request count, network-request count, cache-use count
      cache.trackResponse(strategy);
    }

    if (cacheCandidate != null && cacheResponse == null) {
     // The cache will not be read, so cacheCandidate is unusable: close it
      closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
    }

    // If we're forbidden from using the network and the cache is insufficient, fail.
    // If the network is forbidden and the cache is insufficient, return 504
    if (networkRequest == null && cacheResponse == null) {
      return new Response.Builder()
          .request(chain.request())
          .protocol(Protocol.HTTP_1_1)
          .code(504)
          .message("Unsatisfiable Request (only-if-cached)")
          .body(Util.EMPTY_RESPONSE)
          .sentRequestAtMillis(-1L)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build();
    }

    // If we don't need the network, we're done.
    // No network request needed and a cache entry exists: return the response directly
    if (networkRequest == null) {
      return cacheResponse.newBuilder()
          .cacheResponse(stripBody(cacheResponse))
          .build();
    }


If you are not yet comfortable with the HTTP caching headers, it is worth reviewing them first; the walkthrough will make much more sense. From the code above, CacheInterceptor's job is cache management, which also involves checking the network state and updating the cache. The flow:

1. First use the Request to look up a cached Response (Chain is the nested interface inside Interceptor whose methods handle the request and produce its result).

2. Take the current timestamp, and obtain a caching strategy from the CacheStrategy.Factory factory class.

public final class CacheStrategy {
  /** The request to send on the network, or null if this call doesn't use the network. */
  public final @Nullable Request networkRequest;

  /** The cached response to return or validate; or null if this call doesn't use a cache. */
  public final @Nullable Response cacheResponse;

  CacheStrategy(Request networkRequest, Response cacheResponse) {
    this.networkRequest = networkRequest;
    this.cacheResponse = cacheResponse;
  }

  public CacheStrategy get() {
      CacheStrategy candidate = getCandidate();

      if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
        // We're forbidden from using the network and the cache is insufficient.
        return new CacheStrategy(null, null);
      }

      return candidate;
    }

    /** Returns a strategy to use assuming the request can use the network. */
    private CacheStrategy getCandidate() {
      // If the cache missed (cacheResponse is null), the network request needs no cache headers
      if (cacheResponse == null) {
      // With nothing cached, the request goes straight to the network
        return new CacheStrategy(request, null);
      }

      // Drop the cached response if it's missing a required handshake.
    // If the cached response is missing its required TLS handshake, go to the network directly
      if (request.isHttps() && cacheResponse.handshake() == null) {
        return new CacheStrategy(request, null);
      }

      // Check the response code, the expiry time, and any no-cache directive
      if (!isCacheable(cacheResponse, request)) {
        return new CacheStrategy(request, null);
      }

      CacheControl requestCaching = request.cacheControl();
      // If the request says not to use a cached response, or this is a conditional GET
      if (requestCaching.noCache() || hasConditions(request)) {
        // Hit the network again
        return new CacheStrategy(request, null);
      }

      CacheControl responseCaching = cacheResponse.cacheControl();
      // If the cached response is marked immutable, skip the network
      if (responseCaching.immutable()) {
        return new CacheStrategy(null, cacheResponse);
      }

      long ageMillis = cacheResponseAge();
      long freshMillis = computeFreshnessLifetime();

      if (requestCaching.maxAgeSeconds() != -1) {
        freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
      }

      long minFreshMillis = 0;
      if (requestCaching.minFreshSeconds() != -1) {
        minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
      }

      long maxStaleMillis = 0;
      if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
        maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
      }

      if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
        Response.Builder builder = cacheResponse.newBuilder();
        if (ageMillis + minFreshMillis >= freshMillis) {
          builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
        }
        long oneDayMillis = 24 * 60 * 60 * 1000L;
        if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
          builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
        }
        return new CacheStrategy(null, builder.build());
      }

      Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
      Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);

      Request conditionalRequest = request.newBuilder()
          .headers(conditionalRequestHeaders.build())
          .build();
      return new CacheStrategy(conditionalRequest, cacheResponse);
    }

CacheStrategy maintains two fields, networkRequest and cacheResponse. Its inner factory class Factory has a getCandidate method that compares the candidates and picks the best strategy: a null networkRequest means no network request will be made, while a null cacheResponse means there is no valid cache entry.
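The heart of getCandidate's freshness decision is the comparison ageMillis + minFreshMillis < freshMillis + maxStaleMillis. Isolated as a sketch (simplified; the real method also weighs no-cache, immutable, and revalidation conditions):

```java
import java.util.concurrent.TimeUnit;

// A cached response may be served without the network while its age, padded by
// the client's min-fresh demand, stays under the freshness lifetime plus any
// max-stale tolerance.
public class FreshnessDemo {
    static boolean servableFromCache(long ageMillis, long freshMillis,
                                     long minFreshMillis, long maxStaleMillis) {
        return ageMillis + minFreshMillis < freshMillis + maxStaleMillis;
    }

    public static void main(String[] args) {
        long fresh = TimeUnit.SECONDS.toMillis(60);  // e.g. Cache-Control: max-age=60
        // 30s old: still fresh.
        System.out.println(servableFromCache(TimeUnit.SECONDS.toMillis(30), fresh, 0, 0));
        // 90s old: stale, must revalidate or refetch.
        System.out.println(servableFromCache(TimeUnit.SECONDS.toMillis(90), fresh, 0, 0));
    }
}
```

When this check fails, getCandidate falls through to building a conditional request (If-Modified-Since / If-None-Match) instead of serving the cache outright.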

3. Continuing the flow: if there is a cache, call trackResponse (per the strategy, update the metrics: request count, network-request count, cache-use count).

4. If the cache entry is unusable, close it.

5. If networkRequest and cacheResponse are both null, we were told not to use the network and there is no cache, so return 504: the request fails.

6. If no network request is needed and a cache entry exists, return the cached response directly.

    // Continuing from the code above
    Response networkResponse = null;
    try {
    // Run the next interceptor
      networkResponse = chain.proceed(networkRequest);
    } finally {
      // If we're crashing on I/O or otherwise, don't leak the cache body.
      if (networkResponse == null && cacheCandidate != null) {
        closeQuietly(cacheCandidate.body());
      }
    }

    // If we have a cache response too, then we're doing a conditional get.
    if (cacheResponse != null) {
      if (networkResponse.code() == HTTP_NOT_MODIFIED) {
        Response response = cacheResponse.newBuilder()
            .headers(combine(cacheResponse.headers(), networkResponse.headers()))
            .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
            .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
        networkResponse.body().close();

        // Update the cache after combining headers but before stripping the
        // Content-Encoding header (as performed by initContentStream()).
        cache.trackConditionalCacheHit();
        cache.update(cacheResponse, response);
        return response;
      } else {
        closeQuietly(cacheResponse.body());
      }
    }

    Response response = networkResponse.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build();

    if (cache != null) {
      if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
        // Offer this request to the cache.
        CacheRequest cacheRequest = cache.put(response);
        return cacheWritingResponse(cacheRequest, response);
      }

      if (HttpMethod.invalidatesCache(networkRequest.method())) {
        try {
          cache.remove(networkRequest);
        } catch (IOException ignored) {
          // The cache cannot be written.
        }
      }
    }

    return response;
  }

The second half of the code does the following:

1) run the next interceptor, i.e. ConnectInterceptor;

2) when the chain below finishes, we have the final response data; if it is null (e.g. no network), close the cache candidate so its body does not leak;

3) if cacheResponse is non-null and the network response code is 304, answer from the cache (after merging headers and updating the cache); otherwise close the cached body;

4) with a network response in hand, build the final response directly from it;

5) if the response has a body and the cache strategy says it is cacheable, write it into the Cache so the next call can use it;

6) if the request method invalidates the cache (e.g. a POST or PUT), remove the matching entry from the Cache;

7) return the Response.

ConnectInterceptor

Establishes the connection to the server

/** Opens a connection to the target server and proceeds to the next interceptor. */
public final class ConnectInterceptor implements Interceptor {
  public final OkHttpClient client;

  public ConnectInterceptor(OkHttpClient client) {
    this.client = client;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    // Fetch the object needed to carry out the HTTP request, mainly used to
    // ① obtain a Connection to the server and ② the input/output streams for data transfer
    StreamAllocation streamAllocation = realChain.streamAllocation();

    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();

    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}

When we covered the retry/redirect interceptor above, we saw that RetryAndFollowUpInterceptor creates and initializes the StreamAllocation but never uses it; it just travels down the interceptor chain until it reaches ConnectInterceptor. From the code, ConnectInterceptor's main flow is:

1. take the StreamAllocation handed down the chain and call streamAllocation.newStream;

2. pass the freshly created RealConnection (the object used for network I/O) and the HttpCodec (the key object for talking to the server) on to the interceptors that follow.

ConnectInterceptor's intercept code is short, but the real work happens inside streamAllocation.newStream(...), where the whole connection is established:

public HttpCodec newStream(
      OkHttpClient client, Interceptor.Chain chain, boolean doExtensiveHealthChecks) {
    int connectTimeout = chain.connectTimeoutMillis();
    int readTimeout = chain.readTimeoutMillis();
    int writeTimeout = chain.writeTimeoutMillis();
    int pingIntervalMillis = client.pingIntervalMillis();
    boolean connectionRetryEnabled = client.retryOnConnectionFailure();

    try {
      RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
          writeTimeout, pingIntervalMillis, connectionRetryEnabled, doExtensiveHealthChecks);
      HttpCodec resultCodec = resultConnection.newCodec(client, chain, this);

      synchronized (connectionPool) {
        codec = resultCodec;
        return resultCodec;
      }
    } catch (IOException e) {
      throw new RouteException(e);
    }
  }

The flow here is:

1. call findHealthyConnection to obtain a RealConnection for the actual network connection (reusing one when possible, creating one otherwise);

2. use the obtained RealConnection to create the HttpCodec object and return it from inside the synchronized block.

Let's look into findHealthyConnection:

/**
   * Finds a connection and returns it if it is healthy. If it is unhealthy the process is repeated
   * until a healthy connection is found.
   */
  private RealConnection findHealthyConnection(int connectTimeout, int readTimeout,
      int writeTimeout, boolean connectionRetryEnabled, boolean doExtensiveHealthChecks)
      throws IOException {
    while (true) {
      RealConnection candidate = findConnection(connectTimeout, readTimeout, writeTimeout,
          connectionRetryEnabled);

      // If this is a brand new connection, we can skip the extensive health checks.
      synchronized (connectionPool) {
        if (candidate.successCount == 0) {
          return candidate;
        }
      }

      // Do a (potentially slow) check to confirm that the pooled connection is still good. If it
      // isn't, take it out of the pool and start again.
      if (!candidate.isHealthy(doExtensiveHealthChecks)) {
        noNewStreams();
        continue;
      }

      return candidate;
    }
  }

1. Open the while (true) loop, repeatedly asking findConnection for a Connection object.

2. Inside the synchronized block, if candidate.successCount == 0 (the connection is brand new and has never carried a stream), return it and skip the extensive health checks.

3. Otherwise, if the candidate is unhealthy, tear it down (noNewStreams) and loop back to findConnection for another Connection. A RealConnection counts as unhealthy when:

its socket is closed; the socket's input stream is shut down; the socket's output stream is shut down; or, for HTTP/2, the connection is shut down.
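The unhealthy conditions listed above map onto plain java.net.Socket predicates. A simplified standalone sketch (the real RealConnection.isHealthy also checks the HTTP/2 connection state and may probe the stream with a read):

```java
import java.io.IOException;
import java.net.Socket;

// Reduced form of the socket-level health check in findHealthyConnection().
public class HealthDemo {
    static boolean isHealthy(Socket socket) {
        return !socket.isClosed()            // socket still open
                && !socket.isInputShutdown() // read half still usable
                && !socket.isOutputShutdown(); // write half still usable
    }

    // Helper: a socket that has been closed must report unhealthy.
    static boolean healthyAfterClose() {
        Socket socket = new Socket();
        try { socket.close(); } catch (IOException ignored) { }
        return isHealthy(socket);
    }

    public static void main(String[] args) {
        Socket socket = new Socket();          // not yet connected, but open
        System.out.println(isHealthy(socket)); // healthy so far
        System.out.println(healthyAfterClose()); // a closed socket is unhealthy
    }
}
```

Pooled connections can go stale between uses (server timeouts, dropped NATs), which is why OkHttp re-checks health before reusing one rather than trusting the pool blindly.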

Next, let's see what findConnection actually does:

 /**
   * Returns a connection to host a new stream. This prefers the existing connection if it exists,
   * then the pool, finally building a new connection.
   */
  private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
      boolean connectionRetryEnabled) throws IOException {
    boolean foundPooledConnection = false;
    RealConnection result = null;
    Route selectedRoute = null;
    Connection releasedConnection;
    Socket toClose;
    synchronized (connectionPool) {
      if (released) throw new IllegalStateException("released");
      if (codec != null) throw new IllegalStateException("codec != null");
      if (canceled) throw new IOException("Canceled");

      // Attempt to use an already-allocated connection. We need to be careful here because our
      // already-allocated connection may have been restricted from creating new streams.
      // First try to reuse the existing Connection
      releasedConnection = this.connection;
      toClose = releaseIfNoNewStreams();

      // Is there a reusable Connection?
      if (this.connection != null) {
        // We had an already-allocated connection and it's good.
        result = this.connection;
        releasedConnection = null;
      }
      if (!reportedAcquired) {
        // If the connection was never reported acquired, don't report it as released!
        // 如果連線從未被報告獲得,不要報告它被釋放
        releasedConnection = null;
      }

      // If the RealConnection cannot be reused, try the connection pool
      if (result == null) { 
        // Attempt to get a connection from the pool.
        Internal.instance.get(connectionPool, address, this, null);
        if (connection != null) {
          foundPooledConnection = true;
          result = connection;
        } else {
          selectedRoute = route;
        }
      }
    }
    closeQuietly(toClose);

    if (releasedConnection != null) {
      eventListener.connectionReleased(call, releasedConnection);
    }
    if (foundPooledConnection) {
      eventListener.connectionAcquired(call, result);
    }
    if (result != null) {
      // If we found an already-allocated or pooled connection, we're done.
      return result;
    }

    // If we need a route selection, make one. This is a blocking operation.
    boolean newRouteSelection = false;
    if (selectedRoute == null && (routeSelection == null || !routeSelection.hasNext())) {
      newRouteSelection = true;
      routeSelection = routeSelector.next();
    }

    synchronized (connectionPool) {
      if (canceled) throw new IOException("Canceled");

      if (newRouteSelection) {
        // Now that we have a set of IP addresses, make another attempt at getting a connection from
        // the pool. This could match due to connection coalescing.
        // Iterate over all route addresses and try the ConnectionPool again
        List<Route> routes = routeSelection.getAll();
        for (int i = 0, size = routes.size(); i < size; i++) {
          Route route = routes.get(i);
          Internal.instance.get(connectionPool, address, this, route);
          if (connection != null) {
            foundPooledConnection = true;
            result = connection;
            this.route = route;
            break;
          }
        }
      }

      if (!foundPooledConnection) {
        if (selectedRoute == null) {
          selectedRoute = routeSelection.next();
        }

        // Create a connection and assign it to this allocation immediately. This makes it possible
        // for an asynchronous cancel() to interrupt the handshake we're about to do.
        route = selectedRoute;
        refusedStreamCount = 0;
        result = new RealConnection(connectionPool, selectedRoute);
        acquire(result, false);
      }
    }

    // If we found a pooled connection on the 2nd time around, we're done.
    if (foundPooledConnection) {
      eventListener.connectionAcquired(call, result);
      return result;
    }

    // Do TCP + TLS handshakes. This is a blocking operation.
    // Perform the actual network connection
    result.connect(
        connectTimeout, readTimeout, writeTimeout, connectionRetryEnabled, call, eventListener);
    routeDatabase().connected(result.route());

    Socket socket = null;
    synchronized (connectionPool) {
      reportedAcquired = true;

      // Pool the connection.
      // On success, put the new connection into the pool
      Internal.instance.put(connectionPool, result);

      // If another multiplexed connection to the same address was created concurrently, then
      // release this connection and acquire that one.
      if (result.isMultiplexed()) {
        socket = Internal.instance.deduplicate(connectionPool, address, this);
        result = connection;
      }
    }
    closeQuietly(socket);

    eventListener.connectionAcquired(call, result);
    return result;
  }

That's a lot of code, so I've marked the key parts to make it easier to follow. The overall flow is roughly:

1. First check whether the current StreamAllocation already holds a Connection object; if so, return it (reuse whenever possible)

2. If step 1 finds nothing, try to get one from the ConnectionPool

3. If step 2 also fails, iterate over all route addresses and try the ConnectionPool again

4. If step 3 still fails, create a new Connection and perform the actual network connection

5. Add the new Connection to the ConnectionPool and return it
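The five steps above can be condensed into a plain-Java sketch. Everything here (Connection, FindConnectionSketch, the host-only eligibility check) is a hypothetical stand-in, not OkHttp's RealConnection or StreamAllocation; the point is only the "reuse, then pool, then create" order:

```java
// Minimal sketch of findConnection's lookup order; all names are stand-ins.
import java.util.ArrayDeque;
import java.util.Deque;

class Connection {
    final String host;
    Connection(String host) { this.host = host; }
    boolean isEligible(String host) { return this.host.equals(host); }
}

class FindConnectionSketch {
    private final Deque<Connection> pool = new ArrayDeque<>();
    private Connection current;  // the already-allocated connection, if any

    Connection find(String host) {
        // 1. Reuse the already-allocated connection if it is still eligible
        if (current != null && current.isEligible(host)) {
            return current;
        }
        // 2-3. Otherwise look for a pooled connection (OkHttp retries this per route)
        for (Connection c : pool) {
            if (c.isEligible(host)) {
                current = c;
                return c;
            }
        }
        // 4-5. Otherwise create a new connection and pool it for later reuse
        Connection created = new Connection(host);
        pool.add(created);
        current = created;
        return created;
    }
}
```

A second call for the same host returns the same object; a different host forces a new connection, mirroring steps 1 and 4.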

As the flow shows, findConnection lives up to its name: most of the work is finding a reusable Connection (reuse it when possible, rebuild it when not). Here we focus on result.connect(), which turns a Connection into an actual network connection:

  public void connect(int connectTimeout, int readTimeout, int writeTimeout,
      int pingIntervalMillis, boolean connectionRetryEnabled, Call call,
      EventListener eventListener) {
    // Check whether the connection is already established; the protocol field identifies
    // the request protocol (covered earlier with the OkHttpClient.Builder parameters)
    if (protocol != null) throw new IllegalStateException("already connected");

    RouteException routeException = null;
    // Socket connection configurations
    List<ConnectionSpec> connectionSpecs = route.address().connectionSpecs();
    // Used to choose the connection type (tunnel connection vs. plain socket)
    ConnectionSpecSelector connectionSpecSelector = new ConnectionSpecSelector(connectionSpecs);

    while (true) {
      try {
         // Whether a tunnel connection needs to be established
        if (route.requiresTunnel()) {
          connectTunnel(connectTimeout, readTimeout, writeTimeout, call, eventListener);
          if (rawSocket == null) {
            // We were unable to connect the tunnel but properly closed down our resources.
            break;
          }
        } else {
          connectSocket(connectTimeout, readTimeout, call, eventListener);
        }
        establishProtocol(connectionSpecSelector, pingIntervalMillis, call, eventListener);
        eventListener.connectEnd(call, route.socketAddress(), route.proxy(), protocol);
        break;
      } catch (IOException e) {
        closeQuietly(socket);
        closeQuietly(rawSocket);
        socket = null;
        rawSocket = null;
        source = null;
        sink = null;
        handshake = null;
        protocol = null;
        http2Connection = null;

        eventListener.connectFailed(call, route.socketAddress(), route.proxy(), null, e);
      }
    }
  }

Here we walk through just the key parts of the code:

1. First check whether protocol is null; if it is not, the connection already exists, so an exception is thrown

2. Build the list of Socket connection configurations (ConnectionSpec)

3. Construct a ConnectionSpecSelector from that list (used to choose between a tunnel connection and a plain socket connection)

4. Enter the while loop: route.requiresTunnel() decides whether a tunnel connection must be built; otherwise a socket connection is built

5. Call establishProtocol to set the protocol and carry out the network request
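The loop's fallback behavior in step 4 can be sketched on its own. Connector and ConnectSketch are hypothetical stand-ins (OkHttp's real loop works through ConnectionSpecSelector); the point is the try/catch-and-continue shape:

```java
// Sketch of connect()'s fallback loop: attempt each option in turn,
// cleaning up and moving on when an attempt throws IOException.
import java.io.IOException;
import java.util.List;

interface Connector {
    void connect() throws IOException;
}

class ConnectSketch {
    /** Returns the index of the first option that connects, or -1 if all fail. */
    static int connectWithFallback(List<Connector> options) {
        for (int i = 0; i < options.size(); i++) {
            try {
                options.get(i).connect();
                return i;  // success: leave the loop (OkHttp breaks here)
            } catch (IOException e) {
                // On failure OkHttp resets sockets/streams/handshake state,
                // then falls back to the next option
            }
        }
        return -1;
    }
}
```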

ConnectionPool

Whether the protocol relies on HTTP/1.1's Keep-Alive mechanism or HTTP/2's multiplexing, a connection pool is needed to maintain the connections. OkHttp models a client-server connection as a Connection, with RealConnection as its concrete implementation. ConnectionPool manages all Connections: within a time limit it decides whether a Connection should be reused or kept open, and evicts it promptly once it times out.
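The two limits the pool enforces, the maximum number of idle connections and the keep-alive duration, are configurable when building the client. This uses OkHttp 3's public API; the values shown simply restate OkHttp's defaults:

```java
import java.util.concurrent.TimeUnit;

import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

// At most 5 idle connections, each kept alive for up to 5 minutes;
// these match OkHttp's defaults, written out here for illustration.
ConnectionPool pool = new ConnectionPool(5, 5, TimeUnit.MINUTES);
OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(pool)
        .build();
```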

The get method

// Whenever a reusable Connection is wanted, ConnectionPool's get method is called
 @Nullable RealConnection get(Address address, StreamAllocation streamAllocation, Route route) {
    assert (Thread.holdsLock(this));
    for (RealConnection connection : connections) {
      if (connection.isEligible(address, route)) {
        streamAllocation.acquire(connection, true);
        return connection;
      }
    }
    return null;
  }

The for loop iterates over the connections deque, and isEligible(address, route) decides whether a connection can be reused for the given address and route. If it can, streamAllocation.acquire is executed and the connection is returned. This also clarifies how ConnectionPool looks up and returns a reusable Connection, as we saw earlier in findConnection.

  /**
   * Use this allocation to hold {@code connection}. Each call to this must be paired with a call to
   * {@link #release} on the same connection.
   */
  public void acquire(RealConnection connection, boolean reportedAcquired) {
    assert (Thread.holdsLock(connectionPool));
    if (this.connection != null) throw new IllegalStateException();

    this.connection = connection;
    this.reportedAcquired = reportedAcquired;
    connection.allocations.add(new StreamAllocationReference(this, callStackTrace));
  }
 public final List<Reference<StreamAllocation>> allocations = new ArrayList<>();

First, the RealConnection obtained from the pool is assigned to the StreamAllocation's connection field. Then a weak reference to the StreamAllocation is added to the RealConnection's allocations list. The size of that list tells how many StreamAllocations the connection currently holds, and is used to decide whether a connection's load exceeds the maximum number of streams OkHttp allows.
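This weak-reference bookkeeping can be sketched in plain Java. AllocationTracker is a hypothetical stand-in for the connection's allocations list, and calling clear() simulates the GC collecting a leaked StreamAllocation:

```java
// Sketch of counting live allocations through weak references:
// a cleared reference means the allocation was collected (or leaked).
import java.lang.ref.Reference;
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

class AllocationTracker {
    final List<Reference<Object>> allocations = new ArrayList<>();

    void acquire(Object streamAllocation) {
        allocations.add(new WeakReference<>(streamAllocation));
    }

    /** Drops dead references and returns how many allocations are still live. */
    int pruneAndCount() {
        allocations.removeIf(ref -> ref.get() == null);
        return allocations.size();
    }
}
```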

The put method

As mentioned when discussing findConnection, if ConnectionPool cannot find a reusable Connection, a new one is created and added to the ConnectionPool via the put method:

 void put(RealConnection connection) {
    assert (Thread.holdsLock(this));
    if (!cleanupRunning) {
      cleanupRunning = true;
      // Start the asynchronous cleanup task
      executor.execute(cleanupRunnable);
    }
    connections.add(connection);
  }
  private final Deque<RealConnection> connections = new ArrayDeque<>();

Here we can see that before adding to the deque, the code first checks cleanupRunning (whether a cleanup pass is already in progress). If not, it submits cleanupRunnable, the asynchronous task that evicts invalid connections from the pool. How does it work internally?

private final Runnable cleanupRunnable = new Runnable() {
    @Override public void run() {
      while (true) {
        // Interval until the next cleanup pass
        long waitNanos = cleanup(System.nanoTime());
        if (waitNanos == -1) return;
        if (waitNanos > 0) {
          long waitMillis = waitNanos / 1000000L;
          waitNanos -= (waitMillis * 1000000L);
          synchronized (ConnectionPool.this) {
            try {
              // wait() releases the lock and sleeps for the computed interval
              ConnectionPool.this.wait(waitMillis, (int) waitNanos);
            } catch (InterruptedException ignored) {
            }
          }
        }
      }
    }
  };

The key part is how the cleanup method evicts invalid connections:

long cleanup(long now) {
    int inUseConnectionCount = 0;
    int idleConnectionCount = 0;
    RealConnection longestIdleConnection = null;
    long longestIdleDurationNs = Long.MIN_VALUE;

    // Find either a connection to evict, or the time that the next eviction is due.
    synchronized (this) {
      // Iterate over every RealConnection
      for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
        RealConnection connection = i.next();

        // If the connection is in use, keep searching.
        // If the connection is in use, increment inUseConnectionCount;
        // otherwise increment idleConnectionCount
        if (pruneAndGetAllocationCount(connection, now) > 0) {
          // Number of in-use connections
          inUseConnectionCount++;
          continue;
        }
        // Number of idle connections
        idleConnectionCount++;

        // If the connection is ready to be evicted, we're done.
        long idleDurationNs = now - connection.idleAtNanos;
        if (idleDurationNs > longestIdleDurationNs) {
          longestIdleDurationNs = idleDurationNs;
          longestIdleConnection = connection;
        }
      }
       
      // Evict when the keep-alive time or the idle-connection limit is exceeded
      if (longestIdleDurationNs >= this.keepAliveDurationNs
          || idleConnectionCount > this.maxIdleConnections) {
        // We've found a connection to evict. Remove it from the list, then close it below (outside
        // of the synchronized block).
        connections.remove(longestIdleConnection);
      } else if (idleConnectionCount > 0) {
        // Idle connections exist: return the keep-alive time minus the longest idle time
        // A connection will be ready to evict soon.
        return keepAliveDurationNs - longestIdleDurationNs;
      } else if (inUseConnectionCount > 0) {
        // All connections are in use: wait a full keep-alive duration before the next pass
        // All connections are in use. It'll be at least the keep alive duration 'til we run again.
        return keepAliveDurationNs;
      } else {
       // No connections at all: exit the endless loop
        // No connections, idle or in use.
        cleanupRunning = false;
        return -1;
      }
    }

    closeQuietly(longestIdleConnection.socket());

    // Cleanup again immediately.
    return 0;
  }
  1. The for loop iterates over the connections deque; if a connection is currently in use, inUseConnectionCount is incremented (active connections) and the loop moves on, otherwise idleConnectionCount is incremented (idle connections)
  2. Track whichever idle connection has been idle the longest
  3. If the longest idle time exceeds the keep-alive limit (5 minutes by default), or there are more than 5 idle connections, remove that connection from the pool, close its underlying socket, and return a wait time of 0 so the pool is scanned again immediately (this runs in an endless loop)
  4. Otherwise, if there are idle connections, return the keep-alive duration minus the longest idle time
  5. Otherwise, if there are in-use connections, return the keep-alive duration
  6. Otherwise the pool holds no connections at all: clear cleanupRunning and return -1 to break out of the loop

This can be considered the core of the code. It borrows the mark-and-sweep idea from Java GC algorithms: the least active connection is marked and then swept away. Whenever a new Connection is added, ConnectionPool walks the whole connections deque, marks inactive connections, and evicts them once they cross the limits. This is the heart of OkHttp's connection reuse.
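Stripping away the synchronization and sockets, cleanup's decision logic reduces to a standalone sketch. PooledConn and CleanupSketch are simplified stand-ins, with the limits mirroring OkHttp's defaults of 5 idle connections and a 5-minute keep-alive:

```java
// Simplified version of ConnectionPool.cleanup's return-value logic.
import java.util.Deque;

class PooledConn {
    int allocationCount;  // > 0 means the connection is in use
    long idleAtNanos;     // when the connection became idle
}

class CleanupSketch {
    static final long KEEP_ALIVE_NS = 5L * 60 * 1_000_000_000L;  // 5 minutes
    static final int MAX_IDLE = 5;

    /** Returns nanos to wait before the next pass, 0 to rescan now, -1 to stop. */
    static long cleanup(Deque<PooledConn> connections, long now) {
        int inUse = 0, idle = 0;
        PooledConn longestIdle = null;
        long longestIdleNs = Long.MIN_VALUE;

        for (PooledConn c : connections) {
            if (c.allocationCount > 0) { inUse++; continue; }
            idle++;
            long idleNs = now - c.idleAtNanos;
            if (idleNs > longestIdleNs) { longestIdleNs = idleNs; longestIdle = c; }
        }

        if (longestIdleNs >= KEEP_ALIVE_NS || idle > MAX_IDLE) {
            connections.remove(longestIdle);       // evict, then rescan immediately
            return 0;
        } else if (idle > 0) {
            return KEEP_ALIVE_NS - longestIdleNs;  // sleep until the oldest expires
        } else if (inUse > 0) {
            return KEEP_ALIVE_NS;                  // all busy: wait a full period
        } else {
            return -1;                             // empty pool: stop the cleanup loop
        }
    }
}
```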

One more thing to note: pruneAndGetAllocationCount is what decides whether a connection is idle or in use.

private int pruneAndGetAllocationCount(RealConnection connection, long now) {
    List<Reference<StreamAllocation>> references = connection.allocations;
    for (int i = 0; i < references.size(); ) {
      Reference<StreamAllocation> reference = references.get(i);

      if (reference.get() != null) {
        i++;
        continue;
      }

      // We've discovered a leaked allocation. This is an application bug.
      StreamAllocation.StreamAllocationReference streamAllocRef =
          (StreamAllocation.StreamAllocationReference) reference;
      String message = "A connection to " + connection.route().address().url()
          + " was leaked. Did you forget to close a response body?";
      Platform.get().logCloseableLeak(message, streamAllocRef.callStackTrace);

      references.remove(i);
      connection.noNewStreams = true;

      // If this was the last allocation, the connection is eligible for immediate eviction.
      if (references.isEmpty()) {
        connection.idleAtNanos = now - keepAliveDurationNs;
        return 0;
      }
    }

    return references.size();
  }

1. The for loop iterates over the RealConnection's allocations list and checks whether each StreamAllocation reference is still alive. If reference.get() returns null, nothing references that StreamAllocation any more (a leaked allocation), so it is removed, a leak warning is logged, and the next entry is checked (there is a subtle pitfall here; consider it an easter egg)

2. If the removals in step 1 empty the list, no object references the connection at all, so it is marked idle (backdated by a full keep-alive period) and 0 is returned

3. Otherwise, the number of remaining references is returned

CallServerInterceptor

Responsible for sending the network request and receiving the server's response.

/** This is the last interceptor in the chain. It makes a network call to the server. */
public final class CallServerInterceptor implements Interceptor {
  private final boolean forWebSocket;

  public CallServerInterceptor(boolean forWebSocket) {
    this.forWebSocket = forWebSocket;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    // The interceptor chain
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    // HttpCodec: wraps the low-level I/O stream components used to send and receive data
    HttpCodec httpCodec = realChain.httpStream();
    // StreamAllocation: supplies the components the HTTP request needs
    StreamAllocation streamAllocation = realChain.streamAllocation();
    // The concrete implementation of Connection
    RealConnection connection = (RealConnection) realChain.connection();
    Request request = realChain.request();

    long sentRequestMillis = System.currentTimeMillis();

    realChain.eventListener().requestHeadersStart(realChain.call());
    // Write the request headers to the socket first
    httpCodec.writeRequestHeaders(request);
    realChain.eventListener().requestHeadersEnd(realChain.call(), request);

    Response.Builder responseBuilder = null;
    // Check whether the method permits sending a request body
    if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
      // If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
      // Continue" response before transmitting the request body. If we don't get that, return
      // what we did get (such as a 4xx response) without ever transmitting the request body.
      // Special case: if the request carries an "Expect: 100-continue" header,
      // try to read the response headers before sending the body
      if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
        httpCodec.flushRequest();
        realChain.eventListener().responseHeadersStart(realChain.call());
        responseBuilder = httpCodec.readResponseHeaders(true);
      }

      if (responseBuilder == null) {
        // Write the request body if the "Expect: 100-continue" expectation was met.
        realChain.eventListener().requestBodyStart(realChain.call());
        long contentLength = request.body().contentLength();
        CountingSink requestBodyOut =
            new CountingSink(httpCodec.createRequestBody(request, contentLength));
        BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
        
        // Write the request body to the socket
        request.body().writeTo(bufferedRequestBody);
        bufferedRequestBody.close();
        realChain.eventListener()
            .requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
      } else if (!connection.isMultiplexed()) {
        // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
        // from being reused. Otherwise we're still obligated to transmit the request body to
        // leave the connection in a consistent state.
        streamAllocation.noNewStreams();
      }
    }
    // Finish writing the request
    httpCodec.finishRequest();

    if (responseBuilder == null) {
      realChain.eventListener().responseHeadersStart(realChain.call());
      // Read the response headers
      responseBuilder = httpCodec.readResponseHeaders(false);
    }

    Response response = responseBuilder
        .request(request)
        .handshake(streamAllocation.connection().handshake())
        .sentRequestAtMillis(sentRequestMillis)
        .receivedResponseAtMillis(System.currentTimeMillis())
        .build();

    realChain.eventListener()
        .responseHeadersEnd(realChain.call(), response);

    int code = response.code();
    if (forWebSocket && code == 101) {
      // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
      response = response.newBuilder()
          .body(Util.EMPTY_RESPONSE)
          .build();
    } else {
      response = response.newBuilder()
          .body(httpCodec.openResponseBody(response))
          .build();
    }
    // Honor "Connection: close" on either side by preventing reuse
    if ("close".equalsIgnoreCase(response.request().header("Connection"))
        || "close".equalsIgnoreCase(response.header("Connection"))) {
      streamAllocation.noNewStreams();
    }

    if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
      throw new ProtocolException(
          "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
    }

    return response;
  }

The code is actually quite simple; here we only explain the OkHttp side. For HttpCodec (the interface wrapping the low-level I/O stream components used to send and receive data), see the related references. The flow is:

1. Initialize the objects and call httpCodec.writeRequestHeaders(request) to write the header information to the socket

2. Check whether the method permits a request body. If it does, handle the special case: when the request header carries "Expect: 100-continue" (a handshake with the server), flush the request and try to read the response headers directly

3. If no early response arrived (responseBuilder is null), write the request body; otherwise, if the 100-continue handshake failed and the connection is not multiplexed, call noNewStreams() to stop the connection from being reused

4. Finish writing the request

5. If the response headers have not been read yet (i.e. no special case occurred), read them now and build a Response via the builder pattern, carrying the original request, the handshake, the request timestamp, and the response timestamp

6. Based on the status code and whether this is a WebSocket, either attach an empty body (a 101 upgrade) or read the real response body

7. If "Connection: close" is set on the request or the response, prevent reuse by shutting down new streams on the Connection

8. Return the Response

At this point, the roles of OkHttp's five internal interceptors have all been covered; the overall flow is roughly as follows.
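To close the loop with the first article, the chain-of-responsibility pattern behind getResponseWithInterceptorChain can be boiled down to a few lines of plain Java. These are simplified, hypothetical types; OkHttp's real Chain also carries the request, the stream, and the connection:

```java
// Minimal chain-of-responsibility in the style of getResponseWithInterceptorChain:
// each interceptor may act before and after delegating to chain.proceed().
import java.util.List;

interface Interceptor {
    String intercept(Chain chain);
}

interface Chain {
    String proceed();
}

class RealChain implements Chain {
    private final List<Interceptor> interceptors;
    private final int index;

    RealChain(List<Interceptor> interceptors, int index) {
        this.interceptors = interceptors;
        this.index = index;
    }

    @Override public String proceed() {
        // Hand off to the next interceptor, passing a chain that points one past it
        return interceptors.get(index).intercept(new RealChain(interceptors, index + 1));
    }
}

class ChainDemo {
    static String run() {
        List<Interceptor> interceptors = List.<Interceptor>of(
            chain -> "retry(" + chain.proceed() + ")",   // in the spirit of RetryAndFollowUpInterceptor
            chain -> "bridge(" + chain.proceed() + ")",  // in the spirit of BridgeInterceptor
            chain -> "response"                          // the last interceptor talks to the server
        );
        return new RealChain(interceptors, 0).proceed();
    }
}
```

Running the demo wraps the result outward through each interceptor, which is exactly how a response travels back up OkHttp's chain.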

Flow diagram
