[gRPC via C#] Exploring the Essence of gRPC in Practice

Posted by internalnet on 2022-03-03

Since there is a lot of content, here is the TL;DR up front:

  • gRPC calls are just HTTP requests with slightly special request & response streams; compared with WebAPI, the only performance gain comes from Protobuf.

The full experiment code is attached: GrpcWithOutSDK.zip

Also attached is a small demo, an online chat room built on Controller and HttpClient: ChatRoomOnController.zip


This article is fairly long and touches on quite a few fundamentals. Some conclusions may be stated directly, without the surrounding context; for reasons of length they are not elaborated here. If anything is unclear, feel free to discuss or search the internet.

This article only reflects my own experiments and opinions; it may be biased, so judge for yourself.


1. Background

I often see articles online touting "high-performance gRPC". I have also had the chance to interview a few peers, and when asked to compare gRPC with WebAPI, the answer is always "faster, higher performance". As for how much faster, the answers vary wildly, from several times to dozens of times, but they roughly agree: "gRPC is much faster." When it comes to where exactly it is faster, though, the answers strike me as much less precise.

So let's explore: what exactly is the difference between gRPC and WebAPI, and where is gRPC faster?

2. Verifying the Request Model

This is just the standard procedure for using gRPC in asp.net core.

Creating the server

  • Create an asp.net core gRPC project

img1

  • Add a test reverse.proto used to exercise the various gRPC communication modes, and generate the server side from it
syntax = "proto3";

option csharp_namespace = "GrpcWithOutSDK";

package reverse;

service Reverse {
 rpc Simple (Request) returns (Reply);
 rpc ClientSide (stream Request) returns (Reply);
 rpc ServerSide (Request) returns (stream Reply);
 rpc Bidirectional (stream Request) returns (stream Reply);
}

message Request {
 string message = 1;
}

message Reply {
 string message = 1;
}
  • Create ReverseService.cs and implement the concrete method logic
public class ReverseService : Reverse.ReverseBase
{
   private readonly ILogger<ReverseService> _logger;

   public ReverseService(ILogger<ReverseService> logger)
   {
       _logger = logger;
   }

   private static Reply CreateReplay(Request request)
   {
       return new Reply
       {
           Message = new string(request.Message.Reverse().ToArray())
       };
   }

   private void DisplayReceivedMessage(Request request, [CallerMemberName] string? methodName = null)
   {
       _logger.LogInformation($"{methodName} Received: {request.Message}");
   }

   public override async Task Bidirectional(IAsyncStreamReader<Request> requestStream, IServerStreamWriter<Reply> responseStream, ServerCallContext context)
   {
       while (await requestStream.MoveNext())
       {
           DisplayReceivedMessage(requestStream.Current);
           await responseStream.WriteAsync(CreateReplay(requestStream.Current));
       }
   }

   public override async Task<Reply> ClientSide(IAsyncStreamReader<Request> requestStream, ServerCallContext context)
   {
       var total = 0;
       while (await requestStream.MoveNext())
       {
           total++;
           DisplayReceivedMessage(requestStream.Current);
       }
       return new Reply
       {
           Message = $"{nameof(ServerSide)} Received Over. Total: {total}"
       };
   }

   public override async Task ServerSide(Request request, IServerStreamWriter<Reply> responseStream, ServerCallContext context)
   {
       DisplayReceivedMessage(request);

       for (int i = 0; i < 5; i++)
       {
           await responseStream.WriteAsync(CreateReplay(request));
       }
   }

   public override Task<Reply> Simple(Request request, ServerCallContext context)
   {
       return Task.FromResult(CreateReplay(request));
   }
}

Finally, remember to call app.MapGrpcService<ReverseService>();
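
For reference, the host setup in Program.cs would look roughly like the sketch below (a minimal sketch based on the default asp.net core gRPC template; it assumes .NET 6 minimal hosting and that Kestrel is configured for HTTP/2 on the unencrypted endpoint, matching the http://localhost:5035 address used later):

var builder = WebApplication.CreateBuilder(args);

// register the gRPC framework services
builder.Services.AddGrpc();

var app = builder.Build();

// map the generated Reverse service to its /reverse.Reverse/* endpoints
app.MapGrpcService<ReverseService>();

app.Run();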

Creating the client

  • Create a new console project and add references to the Google.Protobuf, Grpc.Net.Client and Grpc.Tools packages
  • Reference the reverse.proto written earlier and generate the client side from it
  • Write a few methods to test each communication mode
private static async Task Bidirectional(Reverse.ReverseClient client)
{
   var stream = client.Bidirectional();

   var sendTask = Task.Run(async () =>
   {
       for (int i = 0; i < 10; i++)
       {
           await stream.RequestStream.WriteAsync(new() { Message = $"{nameof(Bidirectional)}-{i}" });
       }
       await stream.RequestStream.CompleteAsync();
   });

   var receiveTask = Task.Run(async () =>
   {
       while (await stream.ResponseStream.MoveNext(default))
       {
           DisplayReceivedMessage(stream.ResponseStream.Current);
       }
   });

   await Task.WhenAll(sendTask, receiveTask);
}

private static async Task ClientSide(Reverse.ReverseClient client)
{
   var stream = client.ClientSide();

   for (int i = 0; i < 5; i++)
   {
       await stream.RequestStream.WriteAsync(new() { Message = $"{nameof(ClientSide)}-{i}" });
   }

   await stream.RequestStream.CompleteAsync();

   var reply = await stream.ResponseAsync;

   DisplayReceivedMessage(reply);
}

private static async Task Sample(Reverse.ReverseClient client)
{
   var reply = await client.SimpleAsync(new() { Message = nameof(Sample) });
   DisplayReceivedMessage(reply);
}

private static async Task ServerSide(Reverse.ReverseClient client)
{
   var stream = client.ServerSide(new() { Message = nameof(ServerSide) });

   while (await stream.ResponseStream.MoveNext(default))
   {
       DisplayReceivedMessage(stream.ResponseStream.Current);
   }
}
  • Test code
const string Host = "http://localhost:5035";
var channel = GrpcChannel.ForAddress(Host);
var grpcClient = new Reverse.ReverseClient(channel);

await Sample(grpcClient);
await ClientSide(grpcClient);
await ServerSide(grpcClient);
await Bidirectional(grpcClient);

Running the verification

  • Set the server's Microsoft.AspNetCore log level to Information so request logs are printed
  • Run the server and the client
  • Barring surprises, the server should show output like the following (grouped by method for readability; unimportant lines omitted)
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
     Request starting HTTP/2 POST http://localhost:5035/reverse.Reverse/Simple application/grpc -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
     Executing endpoint 'gRPC - /reverse.Reverse/Simple'
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
     Executed endpoint 'gRPC - /reverse.Reverse/Simple'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
     Request finished HTTP/2 POST http://localhost:5035/reverse.Reverse/Simple application/grpc - - 200 - application/grpc 99.1956ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/2 POST http://localhost:5035/reverse.Reverse/ClientSide application/grpc -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
      Executing endpoint 'gRPC - /reverse.Reverse/ClientSide'
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
      Executed endpoint 'gRPC - /reverse.Reverse/ClientSide'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
      Request finished HTTP/2 POST http://localhost:5035/reverse.Reverse/ClientSide application/grpc - - 200 - application/grpc 21.9445ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/2 POST http://localhost:5035/reverse.Reverse/ServerSide application/grpc -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
      Executing endpoint 'gRPC - /reverse.Reverse/ServerSide'
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
      Executed endpoint 'gRPC - /reverse.Reverse/ServerSide'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
      Request finished HTTP/2 POST http://localhost:5035/reverse.Reverse/ServerSide application/grpc - - 200 - application/grpc 12.7054ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/2 POST http://localhost:5035/reverse.Reverse/Bidirectional application/grpc -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
      Executing endpoint 'gRPC - /reverse.Reverse/Bidirectional'
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
      Executed endpoint 'gRPC - /reverse.Reverse/Bidirectional'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
      Request finished HTTP/2 POST http://localhost:5035/reverse.Reverse/Bidirectional application/grpc - - 200 - application/grpc 41.2414ms

Analyzing the logs a little, we can observe that:

  • All gRPC communication modes follow the same execution logic: each one is a complete HTTP request cycle;
  • The requests use HTTP/2;
  • The method is always POST;
  • Every gRPC method is mapped to the corresponding endpoint /{package name}.{service name}/{method name};
  • Both the request and the response have a ContentType of application/grpc.

3. Verifying the Request Model Further

If the analysis above is correct, the data can only be carried in the request stream and the response stream. We can try to capture the data in those streams and analyze the details further.

Dumping the request & response data

With asp.net core middleware, it is fairly easy to dump the contents of the request stream & response stream.

The request stream is read-only and the response stream is write-only, so we need two proxy streams to replace the originals, dump the data, and save it into a MemoryStream for inspection.

The two streams are ReadCacheProxyStream.cs and WriteCacheProxyStream.cs. Straight to the code:

public class ReadCacheProxyStream : Stream
{
    private readonly Stream _innerStream;

    public MemoryStream CachedStream { get; } = new MemoryStream(1024);

    public override bool CanRead => _innerStream.CanRead;

    public override bool CanSeek => false;

    public override bool CanWrite => false;

    public override long Length => _innerStream.Length;

    public override long Position { get => _innerStream.Length; set => throw new NotSupportedException(); }

    public ReadCacheProxyStream(Stream innerStream)
    {
        _innerStream = innerStream;
    }

    public override void Flush() => throw new NotSupportedException();

    public override Task FlushAsync(CancellationToken cancellationToken) => _innerStream.FlushAsync(cancellationToken);

    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();

    public override async ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken = default)
    {
        var len = await _innerStream.ReadAsync(buffer, cancellationToken);
        if (len > 0)
        {
            CachedStream.Write(buffer.Span.Slice(0, len));
        }
        return len;
    }

    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();

    public override void SetLength(long value) => throw new NotSupportedException();

    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}

public class WriteCacheProxyStream : Stream
{
    private readonly Stream _innerStream;

    public MemoryStream CachedStream { get; } = new MemoryStream(1024);

    public override bool CanRead => false;

    public override bool CanSeek => false;

    public override bool CanWrite => _innerStream.CanWrite;

    public override long Length => _innerStream.Length;

    public override long Position { get => _innerStream.Length; set => throw new NotSupportedException(); }

    public WriteCacheProxyStream(Stream innerStream)
    {
        _innerStream = innerStream;
    }

    public override void Flush() => throw new NotSupportedException();

    public override Task FlushAsync(CancellationToken cancellationToken) => _innerStream.FlushAsync(cancellationToken);

    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();

    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();

    public override void SetLength(long value) => throw new NotSupportedException();

    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();

    public override async ValueTask WriteAsync(ReadOnlyMemory<byte> buffer, CancellationToken cancellationToken = default)
    {
        await _innerStream.WriteAsync(buffer, cancellationToken);
        CachedStream.Write(buffer.Span);
    }
}
  • Replace the streams in the request pipeline
    Add the following middleware at the very beginning of the request pipeline
app.Use(async (context, next) =>
{
    var originRequestBody = context.Request.Body;
    var originResponseBody = context.Response.Body;
    var requestCacheStream = new ReadCacheProxyStream(originRequestBody);
    var responseCacheStream = new WriteCacheProxyStream(originResponseBody);

    context.Request.Body = requestCacheStream;
    context.Response.Body = responseCacheStream;
    try
    {
        await next();
    }
    finally
    {
        await context.Response.CompleteAsync();

        // whether the original streams need to be put back is not discussed here
        context.Request.Body = originRequestBody;
        context.Response.Body = originResponseBody;

        var requestData = requestCacheStream.CachedStream.ToArray();
        var responseData = responseCacheStream.CachedStream.ToArray();
    }
});
  • Next, set a breakpoint at the end of the finally block, then run the server and the client; in the middleware you can now observe the exchanged data via requestData and responseData.

Analyzing the data structure


In theory we could simply parse this with Protobuf, but the goal here is to hand-write an extremely simple codec...
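
As an aside, a minimal sketch of my own showing what the "parse it directly with Protobuf" route might look like, assuming requestData holds exactly one gRPC data frame and using the Request type generated from reverse.proto (the 5-byte offset skips the frame header identified below):

// hypothetical helper, not part of the experiment code:
// strip the 5-byte gRPC frame header, then let Protobuf parse the message body
static string ParseWithProtobuf(byte[] requestData)
{
    var request = Request.Parser.ParseFrom(requestData.AsSpan(5).ToArray());
    return request.Message;
}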


Run the Sample method from the client and capture requestData and responseData on the server.

Analyzing requestData

img2

This is not very readable. Since our Request message definition has only a single string field, if the earlier guess is right, this data must contain the corresponding string. Let's try it directly:

img3

Sure enough, the string Sample is there. Let's try stripping the extra data and see:

img4

So what are the first 7 bytes for? Let's change the request message from Sample to Sample1 and analyze again:

img5

Now it is much clearer. With a little analysis we can make a preliminary summary: the 5th byte is the total message length, the 6th byte looks like some kind of field descriptor (fixed at 10 for this message body), and the 7th byte is the length of the Request.message field.

That is a bit hasty, though: a byte maxes out at 255, so let's explore what the structure looks like when the content exceeds 255. Change Sample to 50 repetitions of Sample and analyze again:

img6

Things suddenly got more complicated... Still, the 6th byte is again 10, so the first 5 bytes should describe the total message length. What is the relationship between [0,0,0,1,47] and the length 303 (note: 308-5)? After a little experimenting: assume for now that the 1st byte of the data is fixed at 0, and bytes 2-5 are a big-endian uint32 declaring the total message length (img7). But how the 7th and 8th bytes translate into 300 is harder to work out... Fine, let's not handle the oversized-content case for now (the actual encoding rules can be found in protocol-buffers-encoding).
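
To make the 7th and 8th bytes concrete: Protobuf encodes field lengths as varints, where each byte contributes 7 payload bits (least-significant group first) and the high bit marks a continuation. A small illustration of my own, not part of the experiment code:

// decode a Protobuf varint: 7 payload bits per byte, low group first,
// the high bit of each byte signals that another byte follows
static int ReadVarint(ReadOnlySpan<byte> data, out int bytesRead)
{
    int value = 0, shift = 0;
    bytesRead = 0;
    byte b;
    do
    {
        b = data[bytesRead++];
        value |= (b & 0x7F) << shift;
        shift += 7;
    } while ((b & 0x80) != 0);
    return value;
}

// 0xAC (1010_1100): payload 0101100 = 44, continuation bit set
// 0x02 (0000_0010): payload 2, no continuation
// value = 44 + (2 << 7) = 300, exactly the length of the 300-character message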

Analyzing responseData

Inspection shows the structure is identical to requestData (because the Request and Reply messages have the same declared structure), so I won't elaborate here; feel free to debug and look for yourself.

Analyzing requestData and responseData for streaming requests

Analysis shows that each of the multiple messages in a streaming request has the same structure as a single message, written one after another into the request or response stream. Again, I won't elaborate; debug it yourself if you like. Here is decoder code based on the summary above:

public static IEnumerable<string> ReadMessages(byte[] originData)
{
    var slice = originData.AsMemory();

    while (!slice.IsEmpty)
    {
        var messageLen = BinaryPrimitives.ReadInt32BigEndian(slice.Slice(1, 4).Span);

        var messageData = slice.Slice(5, messageLen);
        slice = slice.Slice(5 + messageLen);

        int len = messageData.Span[1];
        var content = Encoding.UTF8.GetString(messageData.Slice(2, len).Span);

        yield return content;
    }
}

Then display the contents in the middleware:

TempMessageCodecUtil.DisplayMessages(requestData);
TempMessageCodecUtil.DisplayMessages(responseData);

Run the program again and the request and response message contents are printed correctly, straight to the console, looking something like:
img8

4. Using a Controller to Implement a Server That Can Talk to the gRPC Client SDK

Based on the earlier analysis, in theory we only need to satisfy:

 - the request uses `HTTP/2`;
 - the method is `POST`;
 - every gRPC method maps to the corresponding endpoint `/{package name}.{service name}/{method name}`;
 - request & response `ContentType` is `application/grpc`;

Then, as long as we correctly parse the data structures from the request stream and write correct data structures to the response stream, we can answer requests from the gRPC Client.

  • Now we need an encoder that can encode a string into the Reply message format, and a decoder that reads Request messages from the request stream. Straight to the code. The encoder:
public static byte[] BuildMessage(string message)
{
    var contentData = Encoding.UTF8.GetBytes(message);
    if (contentData.Length > 127)
    {
        throw new ArgumentException();
    }
    var messageData = new byte[contentData.Length + 7];
    Array.Copy(contentData, 0, messageData, 7, contentData.Length);
    messageData[5] = 10;
    messageData[6] = (byte)contentData.Length;
    BinaryPrimitives.WriteInt32BigEndian(messageData.AsSpan().Slice(1), contentData.Length + 2);
    return messageData;
}
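
As a quick sanity check (my own addition, based on the byte layout observed earlier), BuildMessage("Sample") should reproduce the 13 bytes we dumped for the SDK request:

var data = TempMessageCodecUtil.BuildMessage("Sample");
// expected: 00 00 00 00 08 | 0A 06 | 53 61 6D 70 6C 65 ("Sample")
//           frame header     tag+len  UTF-8 content
Console.WriteLine(Convert.ToHexString(data));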

The decoder:

private async IAsyncEnumerable<string> ReadMessageAsync([EnumeratorCancellation] CancellationToken cancellationToken)
{
    var pipeReader = Request.BodyReader;

    while (!cancellationToken.IsCancellationRequested)
    {
        var readResult = await pipeReader.ReadAsync(cancellationToken);

        var buffer = readResult.Buffer;

        if (readResult.IsCompleted
            && buffer.IsEmpty)
        {
            yield break;
        }

        if (buffer.Length < 5)
        {
            pipeReader.AdvanceTo(buffer.Start, buffer.End);
            continue;
        }

        var messageBuffer = buffer.IsSingleSegment
                            ? buffer.First
                            : buffer.ToArray();

        var messageLen = BinaryPrimitives.ReadInt32BigEndian(messageBuffer.Slice(1, 4).Span);

        if (buffer.Length < messageLen + 5)
        {
            pipeReader.AdvanceTo(buffer.Start, buffer.End);
            continue;
        }

        messageBuffer = messageBuffer.Slice(5);

        int len = messageBuffer.Span[1];
        var content = Encoding.UTF8.GetString(messageBuffer.Slice(2, len).Span);

        yield return content;

        pipeReader.AdvanceTo(readResult.Buffer.GetPosition(7 + len));
    }
}
  • Implement a ReverseController.cs that maps the corresponding methods from reverse.proto and implements the same logic as ReverseService.cs. The code:
[Route("reverse.Reverse")]
[ApiController]
public class ReverseController : ControllerBase
{
    [HttpPost]
    [Route(nameof(Bidirectional))]
    public async Task Bidirectional()
    {
        await foreach (var item in ReadMessageAsync(HttpContext.RequestAborted))
        {
            DisplayReceivedMessage(item);
            await ReplayReverseAsync(item);
        }
    }

    [HttpPost]
    [Route(nameof(ClientSide))]
    public async Task ClientSide()
    {
        var total = 0;

        await foreach (var item in ReadMessageAsync(HttpContext.RequestAborted))
        {
            total++;
            DisplayReceivedMessage(item);
        }

        await ReplayAsync($"{nameof(ServerSide)} Received Over. Total: {total}");
    }

    [HttpPost]
    [Route(nameof(ServerSide))]
    public async Task ServerSide()
    {
        string message = null!;
        await foreach (var item in ReadMessageAsync(HttpContext.RequestAborted))
        {
            message = item;
        }

        DisplayReceivedMessage(message);

        for (int i = 0; i < 5; i++)
        {
            await ReplayReverseAsync(message);
        }
    }

    [HttpPost]
    [Route(nameof(Simple))]
    public async Task Simple()
    {
        string message = null!;
        await foreach (var item in ReadMessageAsync(HttpContext.RequestAborted))
        {
            message = item;
        }

        DisplayReceivedMessage(message);
        await ReplayReverseAsync(message);
    }

    private async Task ReplayAsync(string message)
    {
        if (!Response.HasStarted)
        {
            Response.Headers.ContentType = "application/grpc";
            Response.AppendTrailer("grpc-status", "0");

            await Response.StartAsync();
        }

        await Response.Body.WriteAsync(TempMessageCodecUtil.BuildMessage(message));
    }

    private Task ReplayReverseAsync(string rawMessage) => ReplayAsync(new string(rawMessage.Reverse().ToArray()));

    // other members omitted
}

Finally, remember services.AddControllers() and app.MapControllers(), and remove the gRPC service mapping; a minimal sketch follows.
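
A minimal Program.cs sketch of that swap (my assumption here: the Kestrel configuration from the gRPC template that forces HTTP/2 on the unencrypted endpoint stays in place, since the gRPC client still talks h2c to http://localhost:5035):

var builder = WebApplication.CreateBuilder(args);

// replace builder.Services.AddGrpc() with Controller support
builder.Services.AddControllers();

var app = builder.Build();

// the dump middleware from part 3 can stay at the front of the pipeline if desired

// replace app.MapGrpcService<ReverseService>() with the Controller endpoints
app.MapControllers();

app.Run();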

Now run the server with the Controller and with the GrpcService respectively and check the client logs; the results are identical, as shown:
img9

5. Using HttpClient to Implement a Client That Can Talk to a gRPC Server

Above we used a plain Controller to implement a server that the client runs against normally; now let's implement a client that can talk to the server without using the gRPC SDK.

  • On the server, getting hold of the request and response streams is straightforward; HttpClient, however, currently offers no direct way to get the request stream, so we have to grab the real request stream from HttpContent's SerializeToStreamAsync method. I won't go into the details here; straight to the code:
class LongAliveHttpContent : HttpContent
{
    private readonly TaskCompletionSource<Stream> _streamGetCompletionSource = new(TaskCreationOptions.RunContinuationsAsynchronously);
    private readonly TaskCompletionSource _taskCompletionSource = new(TaskCreationOptions.RunContinuationsAsynchronously);

    public LongAliveHttpContent()
    {
        Headers.ContentType = new MediaTypeHeaderValue("application/grpc");
    }

    protected override Task SerializeToStreamAsync(Stream stream, TransportContext? context)
    {
        _streamGetCompletionSource.SetResult(stream);
        return _taskCompletionSource.Task;
    }

    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return false;
    }

    public void Complete()
    {
        _taskCompletionSource.TrySetResult();
    }

    public Task<Stream> GetStreamAsync()
    {
        return _streamGetCompletionSource.Task;
    }
}
  • The client likewise needs to meet the same request requirements:
 - the request uses `HTTP/2`;
 - the method is `POST`;
 - every gRPC method maps to the corresponding endpoint `/{package name}.{service name}/{method name}`;
 - request & response `ContentType` is `application/grpc`;

Straight to the code: use HttpClient to issue the request and obtain the request stream & response stream.

private static (Task<Stream> RequestStreamGetTask, Task<Stream> ResponseStreamGetTask, LongAliveHttpContent HttpContent) CreateStreamGetTasksAsync(HttpClient client, string path)
{
    var content = new LongAliveHttpContent();

    var httpRequestMessage = new HttpRequestMessage()
    {
        Method = HttpMethod.Post,
        RequestUri = new Uri(path, UriKind.Relative),
        Content = content,
        Version = HttpVersion.Version20,
        VersionPolicy = HttpVersionPolicy.RequestVersionExact,
    };

    var responseStreamGetTask = client.SendAsync(httpRequestMessage, HttpCompletionOption.ResponseHeadersRead)
                                      .ContinueWith(m => m.Result.Content.ReadAsStreamAsync())
                                      .Unwrap();

    return (content.GetStreamAsync(), responseStreamGetTask, content);
}
  • Implement the same execution logic as the gRPC client. The code:
private static async Task BidirectionalWithOutSDK(HttpClient client)
{
    var (requestStreamGetTask, responseStreamGetTask, httpContent) = CreateStreamGetTasksAsync(client, "reverse.Reverse/Bidirectional");

    var requestStream = await requestStreamGetTask;

    var sendTask = Task.Run(async () =>
    {
        for (int i = 0; i < 10; i++)
        {
            await requestStream.WriteAsync(TempMessageCodecUtil.BuildMessage($"{nameof(Bidirectional)}-{i}"));
        }

        httpContent.Complete();
    });

    var receiveTask = DisplayReceivedMessageAsync(responseStreamGetTask);

    await Task.WhenAll(sendTask, receiveTask);
}

private static async Task ClientSideWithOutSDK(HttpClient client)
{
    var (requestStreamGetTask, responseStreamGetTask, httpContent) = CreateStreamGetTasksAsync(client, "reverse.Reverse/ClientSide");

    var requestStream = await requestStreamGetTask;

    for (int i = 0; i < 5; i++)
    {
        await requestStream.WriteAsync(TempMessageCodecUtil.BuildMessage($"{nameof(ClientSide)}-{i}"));

        await requestStream.FlushAsync();
    }

    httpContent.Complete();

    await DisplayReceivedMessageAsync(responseStreamGetTask);
}

private static async Task SampleWithOutSDK(HttpClient client)
{
    var (requestStreamGetTask, responseStreamGetTask, httpContent) = CreateStreamGetTasksAsync(client, "reverse.Reverse/Simple");

    var requestStream = await requestStreamGetTask;

    await requestStream.WriteAsync(TempMessageCodecUtil.BuildMessage(nameof(Sample)));

    httpContent.Complete();

    await DisplayReceivedMessageAsync(responseStreamGetTask);
}

private static async Task ServerSideWithOutSDK(HttpClient client)
{
    var (requestStreamGetTask, responseStreamGetTask, httpContent) = CreateStreamGetTasksAsync(client, "reverse.Reverse/ServerSide");

    var requestStream = await requestStreamGetTask;

    await requestStream.WriteAsync(TempMessageCodecUtil.BuildMessage(nameof(ServerSide)));

    httpContent.Complete();

    await DisplayReceivedMessageAsync(responseStreamGetTask);
}

Now run the following tests:

  • Run the server with the GrpcService and send requests from both the SDK client and the HttpClient client;
  • Run the server with the Controller and send requests from both the SDK client and the HttpClient client;

The client output is identical in both cases, as follows:

Sample Received: elpmaS
ClientSide Received: ServerSide Received Over. Total: 5
ServerSide Received: ediSrevreS
ServerSide Received: ediSrevreS
ServerSide Received: ediSrevreS
ServerSide Received: ediSrevreS
ServerSide Received: ediSrevreS
Bidirectional Received: 0-lanoitceridiB
Bidirectional Received: 1-lanoitceridiB
Bidirectional Received: 2-lanoitceridiB
Bidirectional Received: 3-lanoitceridiB
Bidirectional Received: 4-lanoitceridiB
Bidirectional Received: 5-lanoitceridiB
Bidirectional Received: 6-lanoitceridiB
Bidirectional Received: 7-lanoitceridiB
Bidirectional Received: 8-lanoitceridiB
Bidirectional Received: 9-lanoitceridiB
  ----------------- WithOutSDK -----------------
SampleWithOutSDK Received: elpmaS
ClientSideWithOutSDK Received: ServerSide Received Over. Total: 5
ServerSideWithOutSDK Received: ediSrevreS
ServerSideWithOutSDK Received: ediSrevreS
ServerSideWithOutSDK Received: ediSrevreS
ServerSideWithOutSDK Received: ediSrevreS
ServerSideWithOutSDK Received: ediSrevreS
BidirectionalWithOutSDK Received: 0-lanoitceridiB
BidirectionalWithOutSDK Received: 1-lanoitceridiB
BidirectionalWithOutSDK Received: 2-lanoitceridiB
BidirectionalWithOutSDK Received: 3-lanoitceridiB
BidirectionalWithOutSDK Received: 4-lanoitceridiB
BidirectionalWithOutSDK Received: 5-lanoitceridiB
BidirectionalWithOutSDK Received: 6-lanoitceridiB
BidirectionalWithOutSDK Received: 7-lanoitceridiB
BidirectionalWithOutSDK Received: 8-lanoitceridiB
BidirectionalWithOutSDK Received: 9-lanoitceridiB

6. Conclusions

At this point, with a little analysis and summary, we can conclude:

  • Every type of gRPC method call is an ordinary HTTP request; only the request and response contents are Protobuf-encoded data;

Extending this a little further, we can draw more conclusions:

  • Multiplexing, header compression and the like are optimizations brought by HTTP/2; they are not tied to gRPC, and a regular WebAPI accessed over HTTP/2 enjoys the same benefits (a small sketch follows this list);
  • gRPC's Unary mode works logically just like WebAPI; the Server streaming and Client streaming modes can both be implemented over HTTP/1.1 (though without multiplexing, each request occupying its own connection); Bidirectional streaming relies on binary framing and can only be implemented on HTTP/2 or later;
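
To illustrate the first point, a small sketch of my own (the URL is hypothetical): an ordinary WebAPI call can request HTTP/2 from HttpClient and get multiplexing and header compression without any gRPC involvement.

using System.Net;

using var client = new HttpClient();

// ask for HTTP/2 on a plain WebAPI endpoint
var request = new HttpRequestMessage(HttpMethod.Get, "https://localhost:5001/api/values")
{
    Version = HttpVersion.Version20,
    VersionPolicy = HttpVersionPolicy.RequestVersionOrHigher,
};

var response = await client.SendAsync(request);
Console.WriteLine($"Negotiated HTTP version: {response.Version}");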

Based on the conclusions above, let's summarize where gRPC's advantages over WebAPI lie:

  • Faster execution (under certain conditions): Protobuf's binary encoding is more efficient than a text-based encoding such as JSON when the data volume is large, at the cost of direct readability. (I did not benchmark this; that is the theory. If it could not beat JSON there would be no point in its existing, and in theory the gap grows with the data volume.)
  • Less data on the wire: JSON is self-describing, so every field carries its name, and when serializing a List that overhead multiplies with every repeated object (though that is also where the readability comes from); Protobuf has none of that overhead, plus some other optimizations, see protocol-buffers-encoding;
  • Faster development: the SDK generates both server and client directly from the proto file, making it quick to get started and quick to generate clients across languages (this one is debatable, since WebAPI has similar tooling);

And what are gRPC's disadvantages compared with traditional WebAPI:

  • Readability: without tooling, gRPC message contents cannot be read directly;
  • Hard binding to HTTP/2: WebAPI can run over older protocol versions, which is sometimes more convenient;
  • Dependence on the gRPC SDK: the SDK already covers most mainstream languages, but if the language a particular requirement calls for has no SDK, things get awkward; text-based WebAPI is more universal in comparison;
  • Its types do not fully cover some languages' primitive types, requiring extra coding (methods cannot directly accept/return primitives, Nullable, and so on);
  • Protobuf imposes a strict format, and adding or removing fields requires care;
  • Extra learning cost.

Finally, based on these conclusions, here are some gRPC usage patterns that I consider problematic:

  • Treating gRPC as a mere packing/unpacking tool: putting something like JSON inside the message body and deserializing it a second time after receiving the message... why bother? Writing a length-prefixed framing scheme (message length declared in a header) directly on top of raw HTTP takes little work (see the sketch after this list) and pulls in nothing gRPC-specific; this usage also runs counter to "gRPC is high-performance", since it adds an extra serialization/deserialization layer. (I am not talking about nacos here.)
  • Rolling separate authentication logic: a gRPC call is an HTTP request, so headers work exactly as they do for WebAPI; gRPC requests can therefore reuse existing HTTP authentication and header-handling code, even the whole request pipeline. Isn't implementing the same features again via extra custom messages redundant? (I am not talking about nacos here either.)
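
For the first point, a rough sketch of the kind of length-prefixed framing I mean (my own illustration, not taken from the experiment code): a 4-byte big-endian length followed by the payload, which can be written to and read from any request or response stream without involving gRPC.

// write one frame: 4-byte big-endian length, then the payload
static async Task WriteFrameAsync(Stream stream, ReadOnlyMemory<byte> payload)
{
    var header = new byte[4];
    BinaryPrimitives.WriteInt32BigEndian(header, payload.Length);
    await stream.WriteAsync(header);
    await stream.WriteAsync(payload);
}

// read one frame, or return null when the stream has ended
static async Task<byte[]?> ReadFrameAsync(Stream stream)
{
    var header = new byte[4];
    if (!await TryReadExactAsync(stream, header)) return null;

    var payload = new byte[BinaryPrimitives.ReadInt32BigEndian(header)];
    if (!await TryReadExactAsync(stream, payload)) return null;
    return payload;
}

// keep reading until the buffer is full or the stream ends
static async Task<bool> TryReadExactAsync(Stream stream, Memory<byte> buffer)
{
    var read = 0;
    while (read < buffer.Length)
    {
        var len = await stream.ReadAsync(buffer.Slice(read));
        if (len == 0) return false;
        read += len;
    }
    return true;
}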

In summary, my personal view is this: don't adopt gRPC just because others say it is high-performance and assume it crushes traditional WebAPI. Understand how it works first, think it over, and confirm it can actually deliver the results you expect; sometimes hand-writing your own variant of HTTP request handling can meet the need faster and better.


Further ideas

If you have time to spare, in theory you could even build the following toys:

  • A WebAPI-to-gRPC compatibility layer, so a Controller can serve gRPC while still handling ordinary requests; from the Controller definitions, reverse-generate the proto message definitions for the DTOs and the proto definition of the whole service;
  • A gRPC-to-WebAPI compatibility layer, so a gRPC service can behave like a Controller and speak JSON to the outside world.
