
文档

原文:https://grpc.io/docs/

​ 了解关键的 gRPC 概念,尝试快速入门,找到支持的所有语言平台的教程和参考资料:

官方支持

​ 以下是官方支持的 gRPC 语言、平台和操作系统版本:

| 语言 | 操作系统 | 编译器 / SDK |
| --- | --- | --- |
| C/C++ | Linux、Mac | GCC 7.3.1+、Clang 6+ |
| C/C++ | Windows 10+ | Visual Studio 2019+ |
| C# | Linux、Mac | .NET Core、Mono 4+ |
| C# | Windows 10+ | .NET Core、.NET 4.5+ |
| Dart | Windows、Linux、Mac | Dart 2.12+ |
| Go | Windows、Linux、Mac | Go 1.13+ |
| Java | Windows、Linux、Mac | Java 8+(Android KitKat+) |
| Kotlin | Windows、Linux、Mac | Kotlin 1.3+ |
| Node.js | Windows、Linux、Mac | Node v8+ |
| Objective-C | macOS 10.10+、iOS 9.0+ | Xcode 12+ |
| PHP | Linux、Mac | PHP 7.0+ |
| Python | Windows、Linux、Mac | Python 3.7+ |
| Ruby | Windows、Linux、Mac | Ruby 2.3+ |

1 - Protocol Buffer 编译器安装

Protocol Buffer Compiler Installation - Protocol Buffer 编译器安装

​ 如何安装 Protocol Buffer 编译器?

​ 虽然不是强制要求,但 gRPC 应用程序通常使用 Protocol Buffers 来进行服务定义和数据序列化。本站的大部分示例代码使用 Protocol Buffer 语言的版本 3(proto3)。

​ Protocol Buffer 编译器 protoc 用于编译包含服务和消息定义的 .proto 文件。请按照以下方法之一安装 protoc。

使用软件包管理器安装

​ 你可以使用软件包管理器在 Linux 或 macOS 上安装 protocol 编译器 —— protoc,请使用以下命令:

警告

​ 安装后,请检查 protoc 的版本(如下所示)以确保它足够新。某些软件包管理器安装的 protoc 版本可能相当旧。

​ 按照下一节中所示的预编译二进制文件进行安装,是确保您使用最新版本的 protoc 的最佳方式。

  • Linux,请使用 apt 或 apt-get,例如:

    $ apt install -y protobuf-compiler
    $ protoc --version  # Ensure compiler version is 3+
    
  • macOS,请使用 Homebrew:

    $ brew install protobuf
    $ protoc --version  # Ensure compiler version is 3+
    

安装预编译二进制文件(任何操作系统)

​ 要从预编译的二进制文件安装 Protocol 编译器的最新版本,请按照以下说明进行操作:

  1. 从 github.com/google/protobuf/releases 手动下载与您的操作系统和计算机架构相对应的 zip 文件(protoc-<version>-<os>-<arch>.zip),或使用类似以下的命令来获取文件:

    $ PB_REL="https://github.com/protocolbuffers/protobuf/releases"
    $ curl -LO $PB_REL/download/v3.15.8/protoc-3.15.8-linux-x86_64.zip
    
  2. 将文件解压缩到 $HOME/.local 或您选择的目录下。例如:

    $ unzip protoc-3.15.8-linux-x86_64.zip -d $HOME/.local
    
  3. 更新您的环境变量路径,将 protoc 可执行文件的路径包含在其中。例如:

    $ export PATH="$PATH:$HOME/.local/bin"
    

其他安装选项

​ 如果您想从源代码构建 Protocol 编译器,或者访问旧版本的预编译二进制文件,请参阅 Download Protocol Buffers

2 - What is gRPC?

​ 新手入门 gRPC?从以下页面开始

2.1 - gRPC 简介

gRPC 和协议缓冲区(protocol buffers)的简介。

Introduction to gRPC - gRPC 简介

原文:https://grpc.io/docs/what-is-grpc/introduction/

​ gRPC 和协议缓冲区(protocol buffers)的简介。

​ 本页面向您介绍了 gRPC 和协议缓冲区。gRPC 可以将协议缓冲区用作其接口定义语言(IDL)和基础消息交换格式。如果您对 gRPC 和/或协议缓冲区还不熟悉,请先阅读本文!如果您只想立即深入了解 gRPC 并进行实践,选择一种语言,并尝试其中的快速入门

概述

​ 在 gRPC 中,客户端应用程序可以直接调用不同机器上的服务端应用程序的方法,就像调用本地对象一样,这使得您更容易创建分布式应用程序和服务。与许多 RPC 系统一样,gRPC 基于定义服务的思想,指定可远程调用的方法及其参数和返回类型。在服务端,服务端实现此接口并运行 gRPC 服务端来处理客户端调用。在客户端,客户端具有一个存根(stub)(在某些语言中仅称为客户端),它提供与服务端相同的方法。

Concept Diagram
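上面"像调用本地对象一样调用远程方法"的思路,可以用一个仅使用标准库的 Go 草图来示意。以下 Greeter、server、stub 均为本文假设的演示名称,并非 protoc 生成的真实代码;真实的存根会把调用编码后通过网络发给服务端,这里用本地转发只为说明"存根与服务端提供相同方法"这一概念。

```go
package main

import "fmt"

// Greeter:服务定义中的方法集,这里用一个普通 Go 接口示意(演示代码,并非生成代码)。
type Greeter interface {
	SayHello(name string) string
}

// server:服务端对该接口的实现。
type server struct{}

func (server) SayHello(name string) string { return "Hello " + name }

// stub:客户端存根,提供与服务端相同的方法。
// 真实 gRPC 存根会在此处完成序列化与网络传输;本示例直接本地转发,仅作概念说明。
type stub struct{ backend Greeter }

func (s stub) SayHello(name string) string { return s.backend.SayHello(name) }

func main() {
	var client Greeter = stub{backend: server{}}
	fmt.Println(client.SayHello("world")) // 客户端像调用本地方法一样发起调用
}
```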

​ gRPC 客户端和服务端可以在各种环境中运行和相互通信,从 Google 内部的服务器到您自己的笔记本电脑,并且可以使用 gRPC 支持的任何语言进行编写。因此,例如,您可以轻松地在 Java 中创建一个 gRPC 服务端,并使用 Go、Python 或 Ruby 编写客户端。此外,最新的 Google API 将具有 gRPC 版本的接口,让您可以轻松地将 Google 功能集成到您的应用程序中。

使用协议缓冲区

​ 默认情况下,gRPC 使用 协议缓冲区(Protocol Buffers),这是 Google 成熟的开源机制(Google’s mature open source mechanism),用于序列化结构化数据(尽管它也可以与其他数据格式如 JSON 一起使用)。下面是它的工作原理的简要介绍。如果您已经熟悉协议缓冲区,请随时跳到下一节。

​ 使用协议缓冲区的第一步,是在 proto 文件(一个带有 .proto 扩展名的普通文本文件)中定义要序列化的数据结构。协议缓冲区数据被组织为 消息(messages),每个消息都是一条小型的逻辑信息记录,包含一系列称为 字段(fields) 的名值对。下面是一个简单的例子:

message Person {
  string name = 1;
  int32 id = 2;
  bool has_ponycopter = 3;
}

​ 一旦您指定了数据结构,您就可以使用协议缓冲区编译器 protoc 根据您的 proto 定义生成所需语言的数据访问类。这些类提供了对每个字段的简单访问器,如 name()set_name(),以及将整个结构序列化/解析为原始字节的方法。例如,如果您选择的语言是 C++,在上面的示例上运行编译器将生成一个名为 Person 的类。然后,您可以在应用程序中使用此类来填充(populate)、序列化(serialize)和检索(retrieve) Person 协议缓冲区消息。
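下面用一个纯手写的 Go 草图示意生成代码中访问器的风格(Go 生成代码使用 GetName() 这类对 nil 安全的 getter)。结构体与方法均为演示用途,并非 protoc 的真实产物;真实类型应由 protoc --go_out 生成,序列化由 protobuf 运行时完成,此处从略:

```go
package main

import "fmt"

// 手写示意:模仿 protoc 为 message Person 生成的 Go 访问器风格(演示代码)。
type Person struct {
	Name          string
	Id            int32
	HasPonycopter bool
}

// 生成的 Go 代码中的 getter 对 nil 接收者是安全的,返回对应零值。
func (p *Person) GetName() string {
	if p == nil {
		return ""
	}
	return p.Name
}

func (p *Person) GetId() int32 {
	if p == nil {
		return 0
	}
	return p.Id
}

func main() {
	p := &Person{Name: "Ann", Id: 1, HasPonycopter: true} // 填充(populate)
	fmt.Println(p.GetName(), p.GetId())                   // 检索(retrieve)
	var missing *Person
	fmt.Println(missing.GetName() == "") // nil 安全:true
}
```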

​ 您可以在普通的 proto 文件中定义 gRPC 服务,其中 RPC 方法的参数和返回类型被指定为协议缓冲区消息:

// The greeter service definition. - Greeter 服务定义。
service Greeter {
  // Sends a greeting 发送问候
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name. 包含用户 name 的请求消息。
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings 包含问候语的响应消息。
message HelloReply {
  string message = 1;
}

​ gRPC 使用带有特殊 gRPC 插件的 protoc 从您的 proto 文件生成代码:您将获得生成的 gRPC 客户端和服务端代码,以及用于填充(populating)、序列化(serializing)和检索(retrieving )消息类型的常规协议缓冲区代码。要了解有关协议缓冲区的更多信息,包括如何在所选语言中安装带有 gRPC 插件的 protoc,请参阅 协议缓冲区文档

协议缓冲区版本

​ 虽然协议缓冲区已经面向开源用户提供了一段时间,但本站的大多数示例使用协议缓冲区版本 3(proto3),它具有稍微简化的语法、一些有用的新功能,并支持更多语言。proto3 目前可用于 Java、C++、Dart、Python、Objective-C、C#、轻量级运行时(Android Java)、Ruby 和 JavaScript,这些实现可从 协议缓冲区 GitHub 仓库获取;还有一个来自 golang/protobuf 官方包 的 Go 语言生成器,更多语言也正在开发中。您可以在每种语言的 proto3 语言指南和参考文档中了解更多信息,参考文档还包括 .proto 文件格式的 正式规范。

​ 总的来说,尽管您可以使用 proto2(当前默认的协议缓冲区版本),但我们建议您在使用 gRPC 时使用 proto3,因为它可以让您使用 gRPC 支持的全部编程语言,同时避免 proto2 客户端与 proto3 服务端之间的兼容性问题。

2.2 - 核心概念

介绍关键的 gRPC 概念,概述 gRPC 的架构和 RPC 生命周期。

Core concepts, architecture and lifecycle 核心概念、架构和生命周期

原文:https://grpc.io/docs/what-is-grpc/core-concepts/

​ 介绍关键的 gRPC 概念,概述 gRPC 的架构和 RPC 生命周期。

​ 对 gRPC 不熟悉吗?那么请首先阅读gRPC简介。对于特定语言的详细信息,请参阅您选择的编程语言的快速入门、教程和参考文档。

概述

服务定义

​ 与许多 RPC 系统类似,gRPC的基本思想是定义一个服务,指定可以远程调用的方法以及它们的参数和返回类型。默认情况下,gRPC 使用协议缓冲区作为接口定义语言 (IDL),用于描述服务接口和有效负载消息的结构。如果需要,也可以使用其他替代方案。

service HelloService {
  rpc SayHello (HelloRequest) returns (HelloResponse);
}

message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}

​ gRPC 允许您定义四种类型的服务方法:

  • 一元 RPCs(Unary RPCs):客户端向服务端发送单个请求并获取单个响应,就像普通函数调用一样。

    rpc SayHello(HelloRequest) returns (HelloResponse);
    
  • 服务端流式 RPCs:客户端向服务端发送请求并获取一个流以读取一系列的消息。客户端从返回的流中读取消息,直到没有更多的消息为止。gRPC 保证在单个 RPC 调用中消息的顺序。

    rpc LotsOfReplies(HelloRequest) returns (stream HelloResponse);
    
  • 客户端流式 RPCs:客户端写入一系列消息并将它们发送到服务端,再次使用提供的流。客户端完成写入这些消息后,等待服务端读取它们并返回响应。同样,gRPC 保证在单个 RPC 调用中消息的顺序。

    rpc LotsOfGreetings(stream HelloRequest) returns (HelloResponse);
    
  • 双向流式 RPCs:双方都使用读写流发送一系列的消息。两个流独立运行,因此客户端和服务端可以按任何顺序进行读写操作:例如,服务端可以在写入响应之前等待接收所有客户端消息,或者它可以交替读取消息然后写入消息,或者进行其他读写组合。每个流中的消息顺序保持不变。

    rpc BidiHello(stream HelloRequest) returns (stream HelloResponse);
    

​ 您将在下面的RPC 生命周期章节中了解更多关于不同类型的 RPC。

使用 API

​ 从 .proto 文件中的服务定义开始,gRPC 提供了协议缓冲区编译器插件,用于生成客户端和服务端代码。gRPC 用户们通常在客户端调用这些 API,并在服务端实现相应的 API。

  • 在服务端,服务端实现了服务声明的方法,并运行 gRPC 服务端来处理客户端调用。gRPC 基础设施(The gRPC infrastructure)会解码传入的请求,执行服务方法,并对服务响应进行编码。
  • 在客户端,客户端有一个称为 存根(stub)(对于某些语言,首选术语是 客户端(client))的本地对象,实现了与服务相同的方法。然后,客户端只需在本地对象上调用这些方法,方法会将调用的参数封装为适当的协议缓冲区消息类型,将请求发送到服务端,并返回服务端的协议缓冲区响应。

同步 vs. 异步

​ 同步的 RPC(Synchronous RPC)调用会阻塞,直到从服务端接收到响应,这是最接近 RPC 所追求的"过程调用"抽象的形式。另一方面,网络本质上是异步的,在许多场景中,能够在不阻塞当前线程的情况下发起 RPC 非常有用。

​ 大多数语言中的 gRPC 编程 API 都提供同步(synchronous )和异步(asynchronous )两种风格(flavors)。您可以在各编程语言的教程和参考文档中了解更多信息(完整的参考文档即将推出)。

RPC 生命周期

​ 在本节中,您将更详细地了解当 gRPC 客户端调用 gRPC 服务端方法时发生的情况。有关完整的实现细节,请参阅特定编程语言的页面。

一元 RPC

​ 首先考虑最简单类型的 RPC,即客户端发送一个请求并收到一个响应。

  1. 一旦客户端调用存根(stub )方法后,服务端就会收到有关此调用的客户端元数据(metadata)、方法名称以及指定的截止时间(deadline)(如果适用)的通知。
  2. 然后,服务端可以立即发送自己的初始元数据(initial metadata)(必须在任何响应之前发送),或者等待客户端的请求消息。哪个先发生取决于应用程序的特定实现。
  3. 一旦服务端收到客户端的请求消息,它会执行必要的工作来创建和填充响应。然后,将响应(如果成功)与状态详细信息(状态码和可选状态消息)以及可选的尾部元数据(optional trailing metadata)一起返回给客户端。
  4. 如果响应状态为 OK,则客户端接收到响应,这样在客户端上就完成了调用。

服务端流式 RPC

​ 服务端流式 RPC 类似于一元 RPC,不同之处在于服务端以流式方式返回一系列消息作为对客户端请求的响应。在发送完所有消息后,服务端将其状态详细信息(状态码和可选状态消息)以及可选的尾部元数据(optional trailing metadata)发送给客户端。这样就完成了在服务端的处理。客户端在接收到所有服务端的消息后完成。

客户端流式 RPC

​ 客户端流式 RPC 类似于一元 RPC,不同之处在于客户端向服务端发送一系列消息,而不是单个消息。服务端以单个消息作为响应(连同其状态详细信息和可选的尾部元数据(optional trailing metadata))进行响应,通常在接收到所有客户端的消息后响应,但不一定是这样。

双向流式 RPC

​ 在双向流式 RPC 中,调用由客户端发起方法调用,服务端接收客户端的元数据、方法名称和截止时间。服务端可以选择发送初始元数据,或者等待客户端开始流式传输消息。

​ 客户端和服务端流的处理方式是特定于应用程序的。由于这两个流是独立的,客户端和服务端可以按任何顺序读取和写入消息。例如,服务端可以等待接收到所有客户端消息后再写入其消息,或者服务端和客户端可以进行"乒乓(ping-pong)"式的交互:服务端接收一个请求,然后发送一个响应,客户端再根据该响应发送另一个请求,依此类推。

截止时间/超时

​ gRPC 允许客户端指定在 RPC 完成之前愿意等待的时间长度,如果 RPC 在此时间内未完成,将以 DEADLINE_EXCEEDED 错误终止。在服务端,服务端可以查询特定 RPC 是否已超时,或者还剩多少时间来完成该 RPC。

​ 指定截止时间或超时是特定于编程语言的:某些编程语言的 API 使用超时 timeout(时间段(durations of time)),而某些编程语言的 API 使用截止时间 deadline (固定时间点(a fixed point in time)),并且可能有默认的截止时间,也可能没有。

RPC 终止

​ 在 gRPC 中,客户端和服务端对调用是否成功做出独立且本地的判断,它们的结论可能不一致。这意味着,例如,某个 RPC 可能在服务端成功完成("我已发送完所有响应!"),但在客户端失败("响应在我的截止时间之后才到达!")。服务端也可以在客户端发送完所有请求之前决定结束调用。

取消 RPC

​ 客户端或服务端可以随时取消 RPC。取消操作会立即终止 RPC,不再进行进一步的工作。

警告

​ 在取消操作之前所做的更改不会被回滚。

元数据

​ 元数据(metadata)是关于特定 RPC 调用的信息(例如 认证细节),以键值对的形式表示,其中键是字符串,值通常是字符串,但也可以是二进制数据。

​ 键不区分大小写,由 ASCII 字母、数字和特殊字符 -、_、. 组成,且不能以 grpc- 开头(该前缀为 gRPC 保留)。二进制值的键以 -bin 结尾,而 ASCII 值的键则不以 -bin 结尾。
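为说明这些键规则,下面是一个仅用标准库写成的小校验器草图(validMetadataKey、isBinaryKey 均为本文假设的演示函数,gRPC 库本身并不提供它们):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// 键由 ASCII 字母、数字及 - _ . 组成(不区分大小写,统一按小写检查)。
var keyChars = regexp.MustCompile(`^[a-z0-9_.\-]+$`)

// validMetadataKey 按正文规则检查元数据键(演示函数)。
func validMetadataKey(key string) bool {
	k := strings.ToLower(key)
	if strings.HasPrefix(k, "grpc-") {
		return false // grpc- 前缀为 gRPC 保留
	}
	return keyChars.MatchString(k)
}

// isBinaryKey 判断键对应的值是否为二进制(以 -bin 结尾)。
func isBinaryKey(key string) bool {
	return strings.HasSuffix(strings.ToLower(key), "-bin")
}

func main() {
	fmt.Println(validMetadataKey("Authorization")) // true:不区分大小写
	fmt.Println(validMetadataKey("grpc-internal")) // false:保留前缀
	fmt.Println(isBinaryKey("trace-bin"))          // true:二进制值的键
}
```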

​ gRPC 本身不使用用户定义的元数据,这使得客户端可以借助元数据向服务端提供与本次调用相关的信息,反之亦然。

​ 访问元数据是与编程语言相关的。

通道

​ gRPC 通道(channel)提供到指定主机和端口上的 gRPC 服务端的连接,在创建客户端存根(stub)时使用。客户端可以指定通道参数来修改 gRPC 的默认行为,例如打开或关闭消息压缩(message compression)。通道具有状态,包括 connected 和 idle。

​ gRPC 如何处理关闭通道取决于编程语言。某些编程语言还允许查询通道状态。

2.3 - FAQ

FAQ - 常见问题

https://grpc.io/docs/what-is-grpc/faq/

​ 以下是一些常见问题。希望你能在这里找到答案 :-)

What is gRPC? 什么是 gRPC?

​ gRPC 是一个现代的、开源的远程过程调用 (RPC) 框架,可以运行在任何地方。它使客户端和服务端应用程序能够透明地进行通信,并且更容易构建连接的系统。

​ 阅读更详细的 动机和设计原则(Motivation & Design Principles) 文章,了解我们为什么创建gRPC的背景。

gRPC 是什么的缩写?

gRPC Remote Procedure Calls, of course!

为什么我要使用 gRPC?

​ 主要的使用场景包括:

  • 低延迟、高可扩展性的分布式系统。
  • 开发与云服务端通信的移动客户端。
  • 设计一个需要准确、高效和语言无关的新协议。
  • 分层设计以实现扩展,例如认证、负载均衡、日志记录和监控等。

谁在使用它以及为什么?

​ gRPC 是 云原生计算基金会(Cloud Native Computing Foundation) (CNCF) 的项目。

​ Google 长期以来一直在使用许多 gRPC 的底层技术和概念。当前的实现被应用于 Google 的多个云产品和 Google 外部 API。它还被 SquareNetflixCoreOSDockerCockroachDBCiscoJuniper Networks 等许多组织和个人所使用。

支持哪些编程语言?

​ 请参阅 官方支持的语言和平台(Officially supported languages and platforms)

如何开始使用 gRPC?

​ 你可以按照这里的说明安装 gRPC。或者前往 gRPC GitHub 组织页面,选择你感兴趣的运行时或语言,并按照 README 中的说明进行操作。

gRPC 使用哪种许可证?

​ 所有实现都使用 Apache 2.0 许可证

我如何做出贡献?

​ 非常欢迎 贡献者,代码库托管在 GitHub 上。我们期待社区的反馈、补充和错误报告。个人贡献者和公司贡献者都需要签署我们的贡献者许可协议 (CLA)。如果你有关于 gRPC 的项目想法,请阅读指南并提交到 这里。我们在 GitHub 上的 gRPC 生态系统 组织下有一个不断增长的项目列表。

文档在哪里?

​ 请查阅 grpc.io 上的文档

What is the road map?

​ gRPC 项目有一个 RFC 过程,通过该过程设计和批准新功能的实现。这些功能在 该代码库 中进行跟踪。

gRPC 的版本支持有多久?

​ gRPC 项目没有长期支持 (LTS) 版本。根据上述滚动发布模型,我们支持当前的最新版本和上一个版本。这里的支持意味着修复错误和安全问题。

gRPC 的版本控制策略是什么?

​ 请参阅 gRPC 的版本控制策略 此处

最新的 gRPC 版本是多少?

​ 最新的发布标签是 v1.55.0。

gRPC 的发布时间是什么时候?

​ gRPC 项目采用的模式是主分支的 tip 在任何时候都是稳定的。该项目(在各种运行时中)的目标是以尽力而为的方式每隔 6 周发布一个 checkpoint 版本。请参阅此处的发布计划。

我如何报告 gRPC 中的安全漏洞?

​ 如果要报告 gRPC 中的安全漏洞,请按照此处的文档流程进行操作。

是否可以在浏览器中使用它?

gRPC-Web 项目已经正式可用。

我可以在 gRPC 中使用我喜欢的数据格式(JSON、Protobuf、Thrift、XML)吗?

​ 可以。gRPC 的设计目标是支持多种内容类型的可扩展性。初始版本包含对 Protobuf 的支持,并在不同程度上支持其他内容类型,如 FlatBuffers 和 Thrift,但这些支持可能具有不同的成熟度水平。

我可以在服务网格(service mesh)中使用 gRPC 吗?

​ 可以。gRPC 应用程序可以像其他应用程序一样部署在服务网格中。gRPC 还支持 xDS API,可以在服务网格中部署 gRPC 应用程序而无需使用 sidecar 代理。gRPC 支持的无代理服务网格功能可以在 这里 查看。

gRPC 如何在移动应用程序开发中发挥作用?

​ gRPC 和 Protobuf 提供了一种简便的方法来精确定义服务,并自动为 iOS、Android 客户端以及提供后端服务的服务端生成可靠的库。客户端可以利用高级的流式传输和连接功能,这些功能有助于节省带宽、用更少的 TCP 连接完成更多工作,并节省 CPU 使用和电池寿命。

为什么 gRPC 比使用 HTTP/2 传输的二进制数据块更好?

​ 在传输层面上,gRPC 大体上就是这样。但 gRPC 同时也是一组库,在各个平台上提供通常为常见 HTTP 库所不具备的高级功能。此类功能的示例包括:

  • 与应用层流量控制的交互
  • 级联的调用取消
  • 负载均衡和故障转移

为什么 gRPC 比 REST 更好/更差?

​ gRPC 在很大程度上遵循基于 HTTP/2 的 HTTP 语义,但我们明确允许全双工流式传输。出于性能考虑,我们在调用分派时使用静态路径,这与典型的 REST 约定有所不同,因为从路径、查询参数和有效载荷主体中解析调用参数会增加延迟和复杂性。我们还定义了一套正式的错误集,我们认为它比 HTTP 状态码更贴合 API 的使用场景。

gRPC 的发音是怎样的?

​ Jee-Arr-Pee-See.

3 - 编程语言

Supported languages - 支持的编程语言

Each gRPC language / platform has links to the following pages and more:

每种 gRPC 语言/平台都有以下页面的链接和更多内容:

  • Quick start 快速入门
  • Tutorials 教程
  • API reference API 参考

Select a language to get started:

选择一种语言开始:

3.1 - go

Go

https://grpc.io/docs/languages/go/

快速入门

​ 几分钟内运行您的第一个 Go gRPC 应用程序!

基础教程

​ 了解 Go gRPC 的基础知识。


学习更多

参考

其他

开发者故事和演讲

3.1.1 - 快速入门

Quick start - 快速入门

https://grpc.io/docs/languages/go/quickstart/

​ 本指南将通过一个简单的工作示例帮助您入门使用 Go 中的 gRPC。

先决条件

  • Go,三个最新主要版本 中任意一个版本的 Go 发行版。

    有关安装说明,请参阅 Go 的 入门指南

  • Protocol Buffers 编译器 protoc,版本 3。

    有关安装说明,请参阅 Protocol Buffer 编译器安装

  • 协议编译器的 Go 插件:

    1. 使用以下命令安装 Go 的协议编译器插件

      $ go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28
      $ go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2
      
    2. 更新您的 PATH,以便 protoc 编译器可以找到这些插件:

      $ export PATH="$PATH:$(go env GOPATH)/bin"
      

获取示例代码

​ 该示例代码是 grpc-go 仓库的一部分。

  1. 下载仓库的 zip 文件 并解压,或者克隆仓库:

    $ git clone -b v1.55.0 --depth 1 https://github.com/grpc/grpc-go
    
  2. 切换到快速入门示例目录:

    $ cd grpc-go/examples/helloworld
    

运行示例

​ 从 examples/helloworld 目录开始:

  1. 编译并执行服务端代码:

    $ go run greeter_server/main.go
    
  2. 从另一个终端编译并执行客户端代码,并查看客户端输出:

    $ go run greeter_client/main.go
    Greeting: Hello world
    

​ 恭喜!您刚刚使用 gRPC 运行了一个客户端-服务端(client-server )应用程序。

更新 gRPC 服务

In this section you’ll update the application with an extra server method. The gRPC service is defined using protocol buffers. To learn more about how to define a service in a .proto file see Basics tutorial. For now, all you need to know is that both the server and the client stub have a SayHello() RPC method that takes a HelloRequest parameter from the client and returns a HelloReply from the server, and that the method is defined like this:

​ 在本节中,您将使用额外的服务端方法更新应用程序。gRPC 服务使用Protocol Buffers定义。要了解有关如何在 .proto 文件中定义服务的更多信息,请参阅基础教程。目前,您只需要知道服务端和客户端存根都有一个 SayHello() 的 RPC 方法,该方法从客户端接收一个 HelloRequest 参数,并从服务端返回一个 HelloReply,该方法定义如下:

// greeting 服务的定义。
service Greeter {
  // Sends a greeting 发送问候
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// 包含用户名称的请求消息。
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings 包含 greetings 的响应消息
message HelloReply {
  string message = 1;
}

​ 打开 helloworld/helloworld.proto 文件,并添加一个新的 SayHelloAgain() 方法,使用相同的请求和响应类型:

// greeting 服务的定义。
service Greeter {
  // Sends a greeting 发送问候
  rpc SayHello (HelloRequest) returns (HelloReply) {}
  // Sends another greeting 发送另一个问候
  rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}

// 包含用户名称的请求消息。
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings 包含 greetings 的响应消息。
message HelloReply {
  string message = 1;
}

​ 记得保存文件!

重新生成 gRPC 代码

​ 在使用新的服务方法之前,您需要重新编译已更新的 .proto 文件。

​ 仍然位于 examples/helloworld 目录中,运行以下命令:

$ protoc --go_out=. --go_opt=paths=source_relative \
    --go-grpc_out=. --go-grpc_opt=paths=source_relative \
    helloworld/helloworld.proto

​ 这将重新生成 helloworld/helloworld.pb.gohelloworld/helloworld_grpc.pb.go 文件,其中包含:

  • 用于填充、序列化和检索 HelloRequestHelloReply 消息类型的代码。
  • 生成的客户端和服务端代码。

更新并运行应用程序

​ 您已经重新生成了服务端和客户端代码,但仍需要在该示例应用程序的人工编写部分中实现和调用新的方法。

更新服务端

​ 打开 greeter_server/main.go,并添加以下方法:

func (s *server) SayHelloAgain(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
        return &pb.HelloReply{Message: "Hello again " + in.GetName()}, nil
}

更新客户端

​ 打开 greeter_client/main.go,并在 main() 函数体的末尾添加以下代码:

r, err = c.SayHelloAgain(ctx, &pb.HelloRequest{Name: *name})
if err != nil {
        log.Fatalf("could not greet: %v", err)
}
log.Printf("Greeting: %s", r.GetMessage())

​ 记得保存您的更改。

运行!

​ 像之前一样运行客户端和服务端。从 examples/helloworld 目录执行以下命令:

  1. 运行该服务端:

    $ go run greeter_server/main.go
    
  2. 在另一个终端中运行该客户端。这次,在命令行参数中添加一个名称。

    $ go run greeter_client/main.go --name=Alice
    

    您将看到以下输出:

    Greeting: Hello Alice
    Greeting: Hello again Alice
    

下一步

3.1.2 - 基础教程

Basics tutorial - 基础教程

https://grpc.io/docs/languages/go/basics/

​ 这是关于在 Go 中使用 gRPC 的基础教程。

​ 本教程为Go程序员提供了使用gRPC的基础介绍。

​ 通过完成这个示例,您将学会如何:

  • .proto 文件中定义一个服务。
  • 使用协议缓冲区编译器生成服务端和客户端代码。
  • 使用 Go gRPC API 为您的服务编写一个简单的客户端和服务端。

​ 本教程假设您已经阅读了gRPC 简介并熟悉protocol buffers(协议缓冲区)。请注意,本教程中的示例使用的是协议缓冲区语言的proto3版本:您可以在proto3语言指南Go生成代码指南中了解更多信息。

为什么使用gRPC?

​ 我们的示例是一个简单的路由映射应用,它允许客户端获取其路由上的 feature 信息,创建其路由的摘要,并与服务端及其他客户端交换路由信息(如路况更新)。

​ 使用 gRPC,我们可以在一个 .proto 文件中定义服务,并用 gRPC 支持的任何语言生成客户端和服务端,它们可以在从大型数据中心服务器到您自己的平板电脑等各种环境中运行,不同语言和环境之间通信的复杂性都由 gRPC 为您处理。我们还获得了使用协议缓冲区的所有优势,包括高效的序列化、简单的 IDL(Interface Definition Language,接口定义语言)和易于更新的接口。

准备

​ 您应该已经安装了生成客户端和服务端接口代码所需的工具 —— 如果尚未安装,请参阅快速入门中的先决条件章节的安装说明。

获取示例代码

​ 该示例代码是grpc-go存储库的一部分。

  1. 下载该存储库的zip文件并解压,或者克隆存储库:

    $ git clone -b v1.55.0 --depth 1 https://github.com/grpc/grpc-go
    
  2. 切换到该示例目录:

    $ cd grpc-go/examples/route_guide
    

定义服务

​ 我们的第一步(正如您从gRPC 简介中了解到的)是使用protocol buffers定义gRPC的*服务(service)以及方法的请求(request)响应(response)*类型。有关完整的.proto文件,请参见routeguide/route_guide.proto

​ 要定义一个服务(service),请在您的.proto文件中指定一个命名的service

service RouteGuide {
   ...
}

​ 然后,在服务(service )定义内部定义一些rpc方法,指定其请求(request )和响应(response )类型。gRPC允许您定义四种类型的服务(service )方法,所有这些方法都可在该RouteGuide服务(service )中使用:

  • 简单的RPC(simple RPC),其中客户端使用存根(stub )发送请求到服务端并等待响应返回,就像普通的函数调用一样。

    // 获取给定位置的 feature 。
    rpc GetFeature(Point) returns (Feature) {}
    
  • 服务端流式RPC(server-side streaming RPC),其中的客户端向其服务端发送请求并获得一个流以读取一系列返回的消息。该客户端从返回的流中读取,直到没有更多的消息为止。如我们的示例所示,通过在*响应(response)*类型之前放置stream关键字来指定服务端流式方法(server-side streaming method)。

    // Obtains the Features available within the given Rectangle.  Results are
    // streamed rather than returned at once (e.g. in a response message with a
    // repeated field), as the rectangle may cover a large area and contain a
    // huge number of features.
    // 获取给定矩形范围内的可用 Features 。
    // 结果以流式方式传输,而不是一次性返回(例如在带有重复(repeated)字段的响应消息中),
    // 因为该矩形可能涵盖了一个大范围并包含大量 features 。
    rpc ListFeatures(Rectangle) returns (stream Feature) {}
    
  • 客户端流式RPC(client-side streaming RPC),其中的客户端写入一系列消息并将它们发送到其服务端,再次使用提供的流。该客户端完成写入消息后,等待其服务端读取所有消息并返回其响应。通过在*请求(request)*类型之前放置stream关键字来指定客户端流式方法(client-side streaming method)。

    // Accepts a stream of Points on a route being traversed, returning a
    // RouteSummary when traversal is completed.
    // 在遍历 route 时,接受一系列 Points 的流,当遍历完成时返回一个RouteSummary。
    rpc RecordRoute(stream Point) returns (RouteSummary) {}
    
  • 双向流式RPC(bidirectional streaming RPC),双方都使用读写流(read-write stream)发送一系列消息。这两个流独立运行,因此客户端和服务端可以按任意顺序读取和写入:例如,服务端可以在写入响应之前等待接收所有客户端消息,也可以交替地读取一条消息再写入一条消息,或采用其他读写组合。每个流中消息的顺序保持不变。通过在*请求(request)响应(response)*之前都放置stream关键字来指定这种类型的方法。

    // Accepts a stream of RouteNotes sent while a route is being traversed,
    // while receiving other RouteNotes (e.g. from other users).
    // 在遍历 route 时接收一系列发送的 RouteNote,同时接收其他 RouteNote(例如来自其他用户)。
    rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
    

​ 我们的.proto文件还包含用于服务方法(service methods)中使用的所有请求和响应类型的protocol buffer消息类型定义 —— 例如,这是Point消息类型的定义:

// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
// Points 用E7表示法表示为纬度-经度对(度乘以10**7并四舍五入到最接近的整数)。
// 纬度(latitudes)应在+/- 90度范围内,经度(longitude)应在+/- 180度范围内(包括边界)。
message Point {
  int32 latitude = 1;
  int32 longitude = 2;
}

生成客户端和服务端代码

​ 接下来,我们需要从.proto服务定义中生成gRPC客户端和服务端接口。我们使用带有特殊gRPC Go插件的protocol buffer编译器protoc来实现这一点。这类似于我们在快速入门中所做的。

​ 在examples/route_guide目录中运行以下命令:

$ protoc --go_out=. --go_opt=paths=source_relative \
    --go-grpc_out=. --go-grpc_opt=paths=source_relative \
    routeguide/route_guide.proto

​ 运行此命令将在routeguide目录中生成以下文件:

  • route_guide.pb.go,其中包含用于填充(populate)、序列化(serialize)和检索(retrieve )请求和响应消息类型的所有protocol buffer代码。
  • route_guide_grpc.pb.go,其中包含以下内容:
    • 一个接口类型(或存根(stub)),供客户端调用,其中定义了RouteGuide服务中的方法。
    • 一个接口类型,供服务端实现,其中同样定义了RouteGuide服务中的方法。

创建服务端

​ 首先让我们看看如何创建RouteGuide服务端。如果您只对创建gRPC客户端感兴趣,可以跳过本节,直接阅读创建客户端(当然您可能还是会觉得有趣!)。

​ 使我们的RouteGuide服务发挥作用有两个部分(parts)工作要做:

  • 实现从我们的服务定义生成的服务接口:它是执行我们的服务的实际"工作(work)"。
  • 运行gRPC服务端以侦听来自客户端的请求并将其分派给正确的服务实现。

​ 您可以在server/server.go中找到我们的示例RouteGuide服务端。让我们更详细地看看它是如何工作的。

实现RouteGuide

​ 正如您所看到的,我们的服务端有一个routeGuideServer结构类型,实现了生成的RouteGuideServer接口:

type routeGuideServer struct {
        ...
}
...

func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) {
        ...
}
...

func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error {
        ...
}
...

func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error {
        ...
}
...

func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
        ...
}
...

简单 RPC

​ 该routeGuideServer实现了我们所有的服务方法。让我们先看最简单的类型,GetFeature,它只是从客户端获取一个Point,并从其数据库中(以Feature的形式)返回对应的feature信息。

func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) {
  for _, feature := range s.savedFeatures {
    if proto.Equal(feature.Location, point) {
      return feature, nil
    }
  }
  // 未找到feature,返回一个未命名feature
  return &pb.Feature{Location: point}, nil
}

​ 该方法接收一个用于RPC的上下文对象和客户端的Point协议缓冲区请求。它返回一个带有响应信息的Feature协议缓冲区对象和一个error。在该方法中,我们使用适当的信息填充Feature,然后将其与nil错误一起return,告诉gRPC我们已经完成了对该RPC的处理,并且该Feature可以返回给客户端。

服务端流式 RPC

​ 现在让我们来看一个流式 RPC 的例子。ListFeatures 是一个服务端流式 RPC,因此我们需要向客户端发送多个 Feature

func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error {
  for _, feature := range s.savedFeatures {
    if inRange(feature.Location, rect) {
      if err := stream.Send(feature); err != nil {
        return err
      }
    }
  }
  return nil
}

​ 如您所见,与在我们的方法参数中获得简单的请求和响应对象不同,这次我们获得了一个请求对象(客户端希望在其中找到FeatureRectangle)和一个特殊的RouteGuide_ListFeaturesServer对象来编写我们的响应。

​ 在这个方法中,我们填充了我们需要返回的许多Feature对象,并使用RouteGuide_ListFeaturesServerSend()方法将它们写入。最后,就像在我们简单的RPC中一样,我们返回一个nil错误,告诉gRPC我们已经完成了响应的编写。如果在此调用中发生任何错误,我们将返回一个非nil错误;gRPC层将把它(即非nil错误)转换为适当的RPC状态,以发送到网络上。

客户端流式 RPC

​ 现在让我们看一些稍微复杂一点的东西:客户端流式方法 RecordRoute,我们从客户端获取一系列的 Point,并返回一个包含有关他们行程信息的单个 RouteSummary。如您所见,这次该方法根本没有请求参数。相反,它获取了一个 RouteGuide_RecordRouteServer 流,服务端可以使用该流来读取和写入消息 —— 它可以使用其 Recv() 方法接收客户端消息,并使用其 SendAndClose() 方法返回单个响应。

func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error {
  var pointCount, featureCount, distance int32
  var lastPoint *pb.Point
  startTime := time.Now()
  for {
    point, err := stream.Recv()
    if err == io.EOF {
      endTime := time.Now()
      return stream.SendAndClose(&pb.RouteSummary{
        PointCount:   pointCount,
        FeatureCount: featureCount,
        Distance:     distance,
        ElapsedTime:  int32(endTime.Sub(startTime).Seconds()),
      })
    }
    if err != nil {
      return err
    }
    pointCount++
    for _, feature := range s.savedFeatures {
      if proto.Equal(feature.Location, point) {
        featureCount++
      }
    }
    if lastPoint != nil {
      distance += calcDistance(lastPoint, point)
    }
    lastPoint = point
  }
}

​ 在该方法体中,我们使用 RouteGuide_RecordRouteServerRecv() 方法重复地将客户端的请求读取到一个请求对象(在本例中为 Point),直到没有更多的消息为止:服务端需要在每次调用后检查从 Recv() 返回的错误。如果该错误是 nil,则流仍然有效,可以继续读取;如果是 io.EOF,则消息流已结束,服务端可以返回其 RouteSummary。如果它有任何其他值,我们将原样返回该错误,以便由 gRPC 层将其转换为 RPC 状态。

双向流式 RPC

​ 最后,让我们来看一下双向流式 RPC RouteChat()

func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
  for {
    in, err := stream.Recv()
    if err == io.EOF {
      return nil
    }
    if err != nil {
      return err
    }
    key := serialize(in.Location)
                ... // look for notes to be sent to client 查找要发送给客户端的留言
    for _, note := range s.routeNotes[key] {
      if err := stream.Send(note); err != nil {
        return err
      }
    }
  }
}

​ 这次我们获取一个 RouteGuide_RouteChatServer 流,就像在客户端流式示例中一样,它可以用于读取和写入消息。但是,这次我们是在客户端仍向其消息流写入消息的同时,通过方法的流返回值。

​ 在这里,读取和写入的语法与我们的客户端流式方法非常相似,只是服务端使用流的 Send() 方法而不是 SendAndClose(),因为它需要写入多个响应。尽管每一方始终会按照写入的顺序接收到对方的消息,但客户端和服务端都可以按任何顺序读取和写入 —— 这些流操作完全独立。

启动服务端

​ 一旦我们实现了所有的方法,我们还需要启动一个 gRPC 服务端,以便客户端可以真正使用我们的服务。下面的代码片段展示了我们如何为我们的 RouteGuide 服务做到这一点:

flag.Parse()
lis, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", *port))
if err != nil {
  log.Fatalf("failed to listen: %v", err)
}
var opts []grpc.ServerOption
...
grpcServer := grpc.NewServer(opts...)
pb.RegisterRouteGuideServer(grpcServer, newServer())
grpcServer.Serve(lis)

​ 要构建和启动服务端,我们需要:

  1. 通过 lis, err := net.Listen(...) 指定要用于侦听客户端请求的端口。
  2. 使用 grpc.NewServer(...) 创建一个 gRPC 服务端的实例。
  3. 将我们的服务实现注册到 gRPC 服务端中。
  4. 使用我们的端口详细信息调用服务端的 Serve() 方法,以进行阻塞等待,直到其进程被终止或调用了 Stop()

创建客户端

​ 在本节中,我们将介绍如何为我们的 RouteGuide 服务创建一个 Go 客户端。您可以在 grpc-go/examples/route_guide/client/client.go 中查看我们完整的示例客户端代码。

创建存根

​ 要调用服务方法,我们首先需要创建一个 gRPC 通道(channel),用于与服务端进行通信。我们通过将服务端地址和端口号传递给 grpc.Dial() 来创建通道,代码如下所示:

var opts []grpc.DialOption
...
conn, err := grpc.Dial(*serverAddr, opts...)
if err != nil {
  ...
}
defer conn.Close()

​ 当服务需要认证凭据(例如,TLS、GCE 凭据或 JWT 凭据)时,您可以使用 DialOptionsgrpc.Dial 中设置这些认证凭据。RouteGuide 服务不需要任何凭据。

​ 一旦设置了 gRPC 通道(channel),我们就需要一个客户端 存根(stub) 来执行 RPC 调用。我们可以使用从示例 .proto 文件生成的 pb 包提供的 NewRouteGuideClient 方法来获取它(即存根(stub))。

client := pb.NewRouteGuideClient(conn)

调用服务方法

​ 现在让我们来看一下如何调用我们的服务方法。请注意,在 gRPC-Go 中,RPC 在阻塞/同步模式下运行,这意味着 RPC 调用会等待服务端响应,并且要么返回响应,要么返回错误。

简单 RPC

​ 调用简单 RPC GetFeature 几乎与调用本地方法一样简单。

feature, err := client.GetFeature(context.Background(), &pb.Point{Latitude: 409146138, Longitude: -746188906})
if err != nil {
  ...
}

​ 如您所见,我们在之前获取的存根上调用了该方法。在我们的方法参数中,我们创建并填充了一个请求协议缓冲区对象(在本例中是 Point)。我们还传递了一个 context.Context 对象,它允许我们在需要时更改 RPC 的行为,比如超时/取消正在进行的 RPC。如果该调用没有返回错误,那么我们可以从第一个返回值中读取来自服务端的响应信息。

log.Println(feature)

服务端流式 RPC

​ 接下来是调用服务端流式方法 ListFeatures,它返回一系列地理(geographical ) Feature 流。如果您已经阅读过创建服务端的内容,这其中一些内容可能会非常熟悉 —— 这两者都以类似的方式实现了流式RPC。

rect := &pb.Rectangle{ ... }  // initialize a pb.Rectangle
stream, err := client.ListFeatures(context.Background(), rect)
if err != nil {
  ...
}
for {
    feature, err := stream.Recv()
    if err == io.EOF {
        break
    }
    if err != nil {
        log.Fatalf("%v.ListFeatures(_) = _, %v", client, err)
    }
    log.Println(feature)
}

​ 与简单的 RPC 类似,我们将上下文和请求传递给该方法。但是,我们不会收到一个响应对象,而是会收到一个RouteGuide_ListFeaturesClient的实例。客户端可以使用 RouteGuide_ListFeaturesClient 流来读取服务端的响应。

​ 我们使用RouteGuide_ListFeaturesClientRecv()方法重复读取服务端的响应到一个响应协议缓冲区对象(在本例中为Feature)中,直到没有更多的消息为止:客户端需要在每次调用后检查Recv()返回的错误err。如果是 nil,则流仍然有效,可以继续读取;如果是 io.EOF,则消息流已经结束;否则,必定存在一个RPC错误,该错误通过err传递。

客户端流式 RPC

​ 客户端流式方法 RecordRoute 与服务端方法类似,只是我们只传递上下文,并获得一个 RouteGuide_RecordRouteClient 流,我们用它来同时写入和读取消息。

// Create a random number of random points 创建随机数量的随机点
r := rand.New(rand.NewSource(time.Now().UnixNano()))
pointCount := int(r.Int31n(100)) + 2 // Traverse at least two points 至少遍历两个点
var points []*pb.Point
for i := 0; i < pointCount; i++ {
  points = append(points, randomPoint(r))
}
log.Printf("Traversing %d points.", len(points))
stream, err := client.RecordRoute(context.Background())
if err != nil {
  log.Fatalf("%v.RecordRoute(_) = _, %v", client, err)
}
for _, point := range points {
  if err := stream.Send(point); err != nil {
    log.Fatalf("%v.Send(%v) = %v", stream, point, err)
  }
}
reply, err := stream.CloseAndRecv()
if err != nil {
  log.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil)
}
log.Printf("Route summary: %v", reply)

RouteGuide_RecordRouteClient 有一个 Send() 方法,我们可以使用它向服务端发送请求。当使用 Send() 将客户端的请求写入流中后,我们需要在流上调用 CloseAndRecv(),以告知 gRPC 我们已经完成写入并期望接收响应。我们从 CloseAndRecv() 返回的 err 中获取 RPC 的状态。如果状态是 nil,那么 CloseAndRecv() 的第一个返回值将是一个有效的服务端响应。

双向流式 RPC

​ 最后,让我们看一下双向流式 RPC RouteChat()。与 RecordRoute 类似,我们只向该方法传递一个上下文对象,并获得一个可用于同时写入和读取消息的流。但是,这次我们是在服务端仍向其消息流写入消息的同时,通过方法的流返回值。

stream, err := client.RouteChat(context.Background())
waitc := make(chan struct{})
go func() {
  for {
    in, err := stream.Recv()
    if err == io.EOF {
      // read done.
      close(waitc)
      return
    }
    if err != nil {
      log.Fatalf("Failed to receive a note : %v", err)
    }
    log.Printf("Got message %s at point(%d, %d)", in.Message, in.Location.Latitude, in.Location.Longitude)
  }
}()
for _, note := range notes {
  if err := stream.Send(note); err != nil {
    log.Fatalf("Failed to send a note: %v", err)
  }
}
stream.CloseSend()
<-waitc

​ 在这里,读取和写入的语法与我们的客户端流式方法非常相似,只是在调用完成后,我们使用流的 CloseSend() 方法。尽管每一方始终按照写入的顺序获取对方的消息,但客户端和服务端都可以按任意顺序读取和写入 —— 流操作完全独立。
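读写两端的独立性也可以用标准库的 channel 与 goroutine 单独示意。下面的 pingPong 是本文假设的演示函数,两个 channel 分别模拟两个方向的流,close(c2s) 对应正文中客户端的 CloseSend():

```go
package main

import "fmt"

// pingPong 用两个 channel 模拟双向流:c2s 为客户端→服务端,s2c 为服务端→客户端(演示函数)。
func pingPong(requests []string) []string {
	c2s := make(chan string, len(requests))
	s2c := make(chan string, len(requests))

	go func() { // "服务端":收到一条请求便写回一条响应(乒乓式)
		for msg := range c2s {
			s2c <- "ack: " + msg
		}
		close(s2c) // 服务端结束写入
	}()

	for _, m := range requests {
		c2s <- m // 客户端写入自己的流
	}
	close(c2s) // 相当于客户端调用 CloseSend()

	var replies []string
	for r := range s2c { // 客户端独立地读取响应流
		replies = append(replies, r)
	}
	return replies
}

func main() {
	fmt.Println(pingPong([]string{"first", "second"}))
}
```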

试一试!

​ 在 examples/route_guide 目录执行以下命令:

  1. 运行服务端:

    $ go run server/server.go
    
  2. 在另一个终端中运行客户端:

    $ go run client/client.go
    

​ 你将看到类似以下的输出:

Getting feature for point (409146138, -746188906)
name:"Berkshire Valley Management Area Trail, Jefferson, NJ, USA" location:<latitude:409146138 longitude:-746188906 >
Getting feature for point (0, 0)
location:<>
Looking for features within lo:<latitude:400000000 longitude:-750000000 > hi:<latitude:420000000 longitude:-730000000 >
name:"Patriots Path, Mendham, NJ 07945, USA" location:<latitude:407838351 longitude:-746143763 >
...
name:"3 Hasta Way, Newton, NJ 07860, USA" location:<latitude:410248224 longitude:-747127767 >
Traversing 56 points.
Route summary: point_count:56 distance:497013163
Got message First message at point(0, 1)
Got message Second message at point(0, 2)
Got message Third message at point(0, 3)
Got message First message at point(0, 1)
Got message Fourth message at point(0, 1)
Got message Second message at point(0, 2)
Got message Fifth message at point(0, 2)
Got message Third message at point(0, 3)
Got message Sixth message at point(0, 3)

注意

​ 我们在本页面展示的客户端和服务端跟踪输出中省略了时间戳。

3.1.3 - ALTS

ALTS authentication ALTS身份验证

An overview of gRPC authentication in Go using Application Layer Transport Security (ALTS).

使用应用层传输安全(Application Layer Transport Security,ALTS)在Go中进行gRPC身份验证的概述。

Overview 概述

Application Layer Transport Security (ALTS) is a mutual authentication and transport encryption system developed by Google. It is used for securing RPC communications within Google’s infrastructure. ALTS is similar to mutual TLS but has been designed and optimized to meet the needs of Google’s production environments. For more information, take a look at the ALTS whitepaper.

应用层传输安全(ALTS)是由Google开发的相互认证和传输加密系统,用于保护Google基础架构内的RPC通信。ALTS类似于相互TLS,但经过设计和优化以满足Google生产环境的需求。有关更多信息,请参阅ALTS白皮书

ALTS in gRPC has the following features:

gRPC中的ALTS具有以下功能:

  • Create gRPC servers & clients with ALTS as the transport security protocol.
  • ALTS connections are end-to-end protected with privacy and integrity.
  • Applications can access peer information such as the peer service account.
  • Client authorization and server authorization support.
  • Minimal code changes to enable ALTS.
  • 使用ALTS作为传输安全协议创建gRPC服务器和客户端。
  • ALTS连接具有端到端的隐私和完整性保护。
  • 应用程序可以访问对等方信息,例如对等方服务帐号。
  • 支持客户端授权和服务器授权。
  • 最小的代码更改以启用ALTS。

gRPC users can configure their applications to use ALTS as a transport security protocol with few lines of code.

gRPC用户可以配置其应用程序以使用ALTS作为传输安全协议,只需几行代码。

Note that ALTS is fully functional if the application runs on Google Cloud Platform. ALTS could be run on any platforms with a pluggable ALTS handshaker service.

请注意,只有当应用程序运行在 Google Cloud Platform 上时,ALTS 才具备完整功能。借助可插拔的 ALTS 握手服务,ALTS 也可以在任何平台上运行。

gRPC Client with ALTS Transport Security Protocol 使用ALTS传输安全协议的gRPC客户端

gRPC clients can use ALTS credentials to connect to servers, as illustrated in the following code excerpt:

gRPC客户端可以使用ALTS凭据连接到服务器,如下面的代码摘录所示:

import (
  "google.golang.org/grpc"
  "google.golang.org/grpc/credentials/alts"
)

altsTC := alts.NewClientCreds(alts.DefaultClientOptions())
conn, err := grpc.Dial(serverAddr, grpc.WithTransportCredentials(altsTC))

gRPC Server with ALTS Transport Security Protocol 使用ALTS传输安全协议的gRPC服务器

gRPC servers can use ALTS credentials to allow clients to connect to them, as illustrated next:

gRPC服务器可以使用ALTS凭据允许客户端连接到它们,如下所示:

import (
  "google.golang.org/grpc"
  "google.golang.org/grpc/credentials/alts"
)

altsTC := alts.NewServerCreds(alts.DefaultServerOptions())
server := grpc.NewServer(grpc.Creds(altsTC))

Server Authorization 服务器授权

gRPC has built-in server authorization support using ALTS. A gRPC client using ALTS can set the expected server service accounts prior to establishing a connection. Then, at the end of the handshake, server authorization guarantees that the server identity matches one of the service accounts specified by the client. Otherwise, the connection fails.

gRPC使用ALTS具有内置的服务器授权支持。使用ALTS的gRPC客户端可以在建立连接之前设置预期的服务器服务帐号。然后,在握手结束时,服务器授权保证服务器标识与客户端指定的服务帐号之一匹配。否则,连接将失败。

import (
  "google.golang.org/grpc"
  "google.golang.org/grpc/credentials/alts"
)

clientOpts := alts.DefaultClientOptions()
clientOpts.TargetServiceAccounts = []string{expectedServerSA}
altsTC := alts.NewClientCreds(clientOpts)
conn, err := grpc.Dial(serverAddr, grpc.WithTransportCredentials(altsTC))

Client Authorization 客户端授权

On a successful connection, the peer information (e.g., client’s service account) is stored in the AltsContext. gRPC provides a utility library for client authorization check. Assuming that the server knows the expected client identity (e.g., foo@iam.gserviceaccount.com), it can run the following example code to authorize the incoming RPC.

在成功建立连接后,对等方信息(例如,客户端的服务帐号)将存储在AltsContext中。gRPC提供了一个用于客户端授权检查的实用库。假设服务器知道预期的客户端身份(例如,foo@iam.gserviceaccount.com),它可以运行以下示例代码来对传入的RPC进行授权。

import (
  "google.golang.org/grpc"
  "google.golang.org/grpc/credentials/alts"
)

err := alts.ClientAuthorizationCheck(ctx, []string{"foo@iam.gserviceaccount.com"})

3.1.4 - API

API

文档

概述

​ grpc 包实现了一个名为 gRPC 的远程过程调用 (RPC) 系统。

​ 有关 gRPC 的更多信息,请访问 grpc.io

常量


const (
	SupportPackageIsVersion3 = true
	SupportPackageIsVersion4 = true
	SupportPackageIsVersion5 = true
	SupportPackageIsVersion6 = true
	SupportPackageIsVersion7 = true
)

The SupportPackageIsVersion variables are referenced from generated protocol buffer files to ensure compatibility with the gRPC version used. The latest support package version is 7.

SupportPackageIsVersion 变量在生成的协议缓冲区文件中被引用,以确保与使用的 gRPC 版本兼容。最新的支持包版本为 7。

Older versions are kept for compatibility.

旧版本保留以确保兼容性。

These constants should not be referenced from any other code.

这些常量不应从任何其他代码中引用。


const PickFirstBalancerName = "pick_first"

PickFirstBalancerName is the name of the pick_first balancer.

PickFirstBalancerName 是 pick_first 负载均衡器的名称。


const Version = "1.55.0"

Version is the current grpc version.

Version 是当前的 gRPC 版本。

变量


var DefaultBackoffConfig = BackoffConfig{
	MaxDelay: 120 * time.Second,
}

DefaultBackoffConfig uses values specified for backoff in https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md.

DefaultBackoffConfig 使用在 https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md 中指定的退避值。

Deprecated: use ConnectParams instead. Will be supported throughout 1.x.

已弃用:请改用 ConnectParams。在 1.x 版本中将继续支持。


var EnableTracing bool

EnableTracing controls whether to trace RPCs using the golang.org/x/net/trace package. This should only be set before any RPCs are sent or received by this program.

EnableTracing 控制是否使用 golang.org/x/net/trace 包跟踪 RPC。这应该在该程序发送或接收任何 RPC 之前设置。


var (
	// ErrClientConnClosing indicates that the operation is illegal because
	// the ClientConn is closing.
	//
	// Deprecated: this error should not be relied upon by users; use the status
	// code of Canceled instead.
	// ErrClientConnClosing 表示操作非法,因为 ClientConn 正在关闭。
	//
	// 已弃用:用户不应依赖此错误;请使用 Canceled 状态代码。
	ErrClientConnClosing = status.Error(codes.Canceled, "grpc: the client connection is closing")
)


var ErrClientConnTimeout = errors.New("grpc: timed out when dialing")

ErrClientConnTimeout indicates that the ClientConn cannot establish the underlying connections within the specified timeout.

ErrClientConnTimeout 表示 ClientConn 无法在指定的超时时间内建立底层连接。

Deprecated: This error is never returned by grpc and should not be referenced by users.

已弃用:该错误从未被 grpc 返回过,用户不应引用此错误。


var ErrServerStopped = errors.New("grpc: the server has been stopped")

ErrServerStopped indicates that the operation is now illegal because of the server being stopped.

ErrServerStopped 表示由于服务器已停止,该操作现在是非法的。

函数

func ClientSupportedCompressors <- v1.54.0

func ClientSupportedCompressors(ctx context.Context) ([]string, error)

ClientSupportedCompressors returns compressor names advertised by the client via grpc-accept-encoding header.

ClientSupportedCompressors 返回客户端通过 grpc-accept-encoding 头字段所通告的压缩器名称。

The context provided must be the context passed to the server’s handler.

提供的上下文必须是传递给服务器处理程序的上下文。

Experimental 实验性的

Notice: This function is EXPERIMENTAL and may be changed or removed in a later release.

注意:此函数是实验性的,可能会在以后的版本中更改或移除。

func Code DEPRECATED

func ErrorDesc DEPRECATED

func Errorf DEPRECATED

func Invoke

func Invoke(ctx context.Context, method string, args, reply interface{}, cc *ClientConn, opts ...CallOption) error

Invoke sends the RPC request on the wire and returns after response is received. This is typically called by generated code.

Invoke 将 RPC 请求发送到网络并在接收到响应后返回。通常由生成的代码调用。

DEPRECATED: Use ClientConn.Invoke instead.

已弃用:请改用 ClientConn.Invoke。

func Method <- v1.11.2

func Method(ctx context.Context) (string, bool)

Method returns the method string for the server context. The returned string is in the format of “/service/method”.

Method 返回服务器上下文的方法字符串。返回的字符串的格式为 “/service/method”。

func MethodFromServerStream <- v1.8.0

func MethodFromServerStream(stream ServerStream) (string, bool)

MethodFromServerStream returns the method string for the input stream. The returned string is in the format of “/service/method”.

MethodFromServerStream 返回输入流的方法字符串。返回的字符串的格式为 “/service/method”。

func NewContextWithServerTransportStream <- v1.11.0

func NewContextWithServerTransportStream(ctx context.Context, stream ServerTransportStream) context.Context

NewContextWithServerTransportStream creates a new context from ctx and attaches stream to it.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func SendHeader

func SendHeader(ctx context.Context, md metadata.MD) error

SendHeader sends header metadata. It may be called at most once, and may not be called after any event that causes headers to be sent (see SetHeader for a complete list). The provided md and headers set by SetHeader() will be sent.

The error returned is compatible with the status package. However, the status code will often not match the RPC status as seen by the client application, and therefore, should not be relied upon for this purpose.

func SetHeader <- v1.0.3

func SetHeader(ctx context.Context, md metadata.MD) error

SetHeader sets the header metadata to be sent from the server to the client. The context provided must be the context passed to the server’s handler.

Streaming RPCs should prefer the SetHeader method of the ServerStream.

When called multiple times, all the provided metadata will be merged. All the metadata will be sent out when one of the following happens:

  • grpc.SendHeader is called, or for streaming handlers, stream.SendHeader.
  • The first response message is sent. For unary handlers, this occurs when the handler returns; for streaming handlers, this can happen when stream’s SendMsg method is called.
  • An RPC status is sent out (error or success). This occurs when the handler returns.

SetHeader will fail if called after any of the events above.

The error returned is compatible with the status package. However, the status code will often not match the RPC status as seen by the client application, and therefore, should not be relied upon for this purpose.

func SetSendCompressor <- v1.54.0

func SetSendCompressor(ctx context.Context, name string) error

SetSendCompressor sets a compressor for outbound messages from the server. It must not be called after any event that causes headers to be sent (see ServerStream.SetHeader for the complete list). Provided compressor is used when below conditions are met:

  • compressor is registered via encoding.RegisterCompressor
  • compressor name must exist in the client advertised compressor names sent in grpc-accept-encoding header. Use ClientSupportedCompressors to get client supported compressor names.

The context provided must be the context passed to the server’s handler. It must be noted that compressor name encoding.Identity disables the outbound compression. By default, server messages will be sent using the same compressor with which request messages were sent.

It is not safe to call SetSendCompressor concurrently with SendHeader and SendMsg.

Experimental

Notice: This function is EXPERIMENTAL and may be changed or removed in a later release.

func SetTrailer

func SetTrailer(ctx context.Context, md metadata.MD) error

SetTrailer sets the trailer metadata that will be sent when an RPC returns. When called more than once, all the provided metadata will be merged.

The error returned is compatible with the status package. However, the status code will often not match the RPC status as seen by the client application, and therefore, should not be relied upon for this purpose.

类型

type BackoffConfig DEPRECATED

type CallOption

type CallOption interface {
	// contains filtered or unexported methods
}

CallOption configures a Call before it starts or extracts information from a Call after it completes.

func CallContentSubtype <- v1.10.0

func CallContentSubtype(contentSubtype string) CallOption

CallContentSubtype returns a CallOption that will set the content-subtype for a call. For example, if content-subtype is “json”, the Content-Type over the wire will be “application/grpc+json”. The content-subtype is converted to lowercase before being included in Content-Type. See Content-Type on https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for more details.

If ForceCodec is not also used, the content-subtype will be used to look up the Codec to use in the registry controlled by RegisterCodec. See the documentation on RegisterCodec for details on registration. The lookup of content-subtype is case-insensitive. If no such Codec is found, the call will result in an error with code codes.Internal.

If ForceCodec is also used, that Codec will be used for all request and response messages, with the content-subtype set to the given contentSubtype here for requests.

func CallCustomCodec DEPRECATED

func FailFast DEPRECATED

func ForceCodec <- v1.19.0

func ForceCodec(codec encoding.Codec) CallOption

ForceCodec returns a CallOption that will set codec to be used for all request and response messages for a call. The result of calling Name() will be used as the content-subtype after converting to lowercase, unless CallContentSubtype is also used.

See Content-Type on https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for more details. Also see the documentation on RegisterCodec and CallContentSubtype for more details on the interaction between Codec and content-subtype.

This function is provided for advanced users; prefer to use only CallContentSubtype to select a registered codec instead.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func Header

func Header(md *metadata.MD) CallOption

Header returns a CallOption that retrieves the header metadata for a unary RPC.

func MaxCallRecvMsgSize <- v1.4.0

func MaxCallRecvMsgSize(bytes int) CallOption

MaxCallRecvMsgSize returns a CallOption which sets the maximum message size in bytes the client can receive. If this is not set, gRPC uses the default 4MB.

func MaxCallSendMsgSize <- v1.4.0

func MaxCallSendMsgSize(bytes int) CallOption

MaxCallSendMsgSize returns a CallOption which sets the maximum message size in bytes the client can send. If this is not set, gRPC uses the default math.MaxInt32.

func MaxRetryRPCBufferSize <- v1.14.0

func MaxRetryRPCBufferSize(bytes int) CallOption

MaxRetryRPCBufferSize returns a CallOption that limits the amount of memory used for buffering this RPC’s requests for retry purposes.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func OnFinish <- v1.54.0

func OnFinish(onFinish func(err error)) CallOption

OnFinish returns a CallOption that configures a callback to be called when the call completes. The error passed to the callback is the status of the RPC, and may be nil. The onFinish callback provided will only be called once by gRPC. This is mainly used to be used by streaming interceptors, to be notified when the RPC completes along with information about the status of the RPC.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func Peer <- v1.2.0

func Peer(p *peer.Peer) CallOption

Peer returns a CallOption that retrieves peer information for a unary RPC. The peer field will be populated after the RPC completes.

func PerRPCCredentials <- v1.4.0

func PerRPCCredentials(creds credentials.PerRPCCredentials) CallOption

PerRPCCredentials returns a CallOption that sets credentials.PerRPCCredentials for a call.

func Trailer

func Trailer(md *metadata.MD) CallOption

Trailer returns a CallOption that retrieves the trailer metadata for a unary RPC.

func UseCompressor <- v1.8.0

func UseCompressor(name string) CallOption

UseCompressor returns a CallOption which sets the compressor used when sending the request. If WithCompressor is also set, UseCompressor has higher priority.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func WaitForReady <- v1.18.0

func WaitForReady(waitForReady bool) CallOption

WaitForReady configures the action to take when an RPC is attempted on broken connections or unreachable servers. If waitForReady is false and the connection is in the TRANSIENT_FAILURE state, the RPC will fail immediately. Otherwise, the RPC client will block the call until a connection is available (or the call is canceled or times out) and will retry the call if it fails due to a transient error. gRPC will not retry if data was written to the wire unless the server indicates it did not process the data. Please refer to https://github.com/grpc/grpc/blob/master/doc/wait-for-ready.md.

By default, RPCs don’t “wait for ready”.

type ClientConn

type ClientConn struct {
	// contains filtered or unexported fields
}

ClientConn represents a virtual connection to a conceptual endpoint, to perform RPCs.

A ClientConn is free to have zero or more actual connections to the endpoint based on configuration, load, etc. It is also free to determine which actual endpoints to use and may change it every RPC, permitting client-side load balancing.

A ClientConn encapsulates a range of functionality including name resolution, TCP connection establishment (with retries and backoff) and TLS handshakes. It also handles errors on established connections by re-resolving the name and reconnecting.

func Dial

func Dial(target string, opts ...DialOption) (*ClientConn, error)

Dial creates a client connection to the given target.

func DialContext <- v1.0.2

func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error)

DialContext creates a client connection to the given target. By default, it’s a non-blocking dial (the function won’t wait for connections to be established, and connecting happens in the background). To make it a blocking dial, use WithBlock() dial option.

In the non-blocking case, the ctx does not act against the connection. It only controls the setup steps.

In the blocking case, ctx can be used to cancel or expire the pending connection. Once this function returns, the cancellation and expiration of ctx will be noop. Users should call ClientConn.Close to terminate all the pending operations after this function returns.

The target name syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md. e.g. to use dns resolver, a “dns:///” prefix should be applied to the target.

func (*ClientConn) Close

func (cc *ClientConn) Close() error

Close tears down the ClientConn and all underlying connections.

func (*ClientConn) Connect <- v1.41.0

func (cc *ClientConn) Connect()

Connect causes all subchannels in the ClientConn to attempt to connect if the channel is idle. Does not wait for the connection attempts to begin before returning.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func (*ClientConn) GetMethodConfig <- v1.4.0

func (cc *ClientConn) GetMethodConfig(method string) MethodConfig

GetMethodConfig gets the method config of the input method. If there’s an exact match for input method (i.e. /service/method), we return the corresponding MethodConfig. If there isn’t an exact match for the input method, we look for the service’s default config under the service (i.e /service/) and then for the default for all services (empty string).

If there is a default MethodConfig for the service, we return it. Otherwise, we return an empty MethodConfig.

func (*ClientConn) GetState <- v1.5.2

func (cc *ClientConn) GetState() connectivity.State

GetState returns the connectivity.State of ClientConn.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func (*ClientConn) Invoke <- v1.8.0

func (cc *ClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...CallOption) error

Invoke sends the RPC request on the wire and returns after response is received. This is typically called by generated code.

All errors returned by Invoke are compatible with the status package.

func (*ClientConn) NewStream <- v1.8.0

func (cc *ClientConn) NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error)

NewStream creates a new Stream for the client side. This is typically called by generated code. ctx is used for the lifetime of the stream.

To ensure resources are not leaked due to the stream returned, one of the following actions must be performed:

  1. Call Close on the ClientConn.
  2. Cancel the context provided.
  3. Call RecvMsg until a non-nil error is returned. A protobuf-generated client-streaming RPC, for instance, might use the helper function CloseAndRecv (note that CloseSend does not Recv, therefore is not guaranteed to release all resources).
  4. Receive a non-nil, non-io.EOF error from Header or SendMsg.

If none of the above happen, a goroutine and a context will be leaked, and grpc will not call the optionally-configured stats handler with a stats.End message.

func (*ClientConn) ResetConnectBackoff <- v1.15.0

func (cc *ClientConn) ResetConnectBackoff()

ResetConnectBackoff wakes up all subchannels in transient failure and causes them to attempt another connection immediately. It also resets the backoff times used for subsequent attempts regardless of the current state.

In general, this function should not be used. Typical service or network outages result in a reasonable client reconnection strategy by default. However, if a previously unavailable network becomes available, this may be used to trigger an immediate reconnect.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func (*ClientConn) Target <- v1.14.0

func (cc *ClientConn) Target() string

Target returns the target string of the ClientConn.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func (*ClientConn) WaitForStateChange <- v1.5.2

func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool

WaitForStateChange waits until the connectivity.State of ClientConn changes from sourceState or ctx expires. A true value is returned in the former case and false in the latter.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

type ClientConnInterface <- v1.27.0

type ClientConnInterface interface {
	// Invoke performs a unary RPC and returns after the response is received
	// into reply.
	Invoke(ctx context.Context, method string, args interface{}, reply interface{}, opts ...CallOption) error
	// NewStream begins a streaming RPC.
	NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error)
}

ClientConnInterface defines the functions clients need to perform unary and streaming RPCs. It is implemented by *ClientConn, and is only intended to be referenced by generated code.

type ClientStream

type ClientStream interface {
	// Header returns the header metadata received from the server if there
	// is any. It blocks if the metadata is not ready to read.
	Header() (metadata.MD, error)
	// Trailer returns the trailer metadata from the server, if there is any.
	// It must only be called after stream.CloseAndRecv has returned, or
	// stream.Recv has returned a non-nil error (including io.EOF).
	Trailer() metadata.MD
	// CloseSend closes the send direction of the stream. It closes the stream
	// when non-nil error is met. It is also not safe to call CloseSend
	// concurrently with SendMsg.
	CloseSend() error
	// Context returns the context for this stream.
	//
	// It should not be called until after Header or RecvMsg has returned. Once
	// called, subsequent client-side retries are disabled.
	Context() context.Context
	// SendMsg is generally called by generated code. On error, SendMsg aborts
	// the stream. If the error was generated by the client, the status is
	// returned directly; otherwise, io.EOF is returned and the status of
	// the stream may be discovered using RecvMsg.
	//
	// SendMsg blocks until:
	//   - There is sufficient flow control to schedule m with the transport, or
	//   - The stream is done, or
	//   - The stream breaks.
	//
	// SendMsg does not wait until the message is received by the server. An
	// untimely stream closure may result in lost messages. To ensure delivery,
	// users should ensure the RPC completed successfully using RecvMsg.
	//
	// It is safe to have a goroutine calling SendMsg and another goroutine
	// calling RecvMsg on the same stream at the same time, but it is not safe
	// to call SendMsg on the same stream in different goroutines. It is also
	// not safe to call CloseSend concurrently with SendMsg.
	SendMsg(m interface{}) error
	// RecvMsg blocks until it receives a message into m or the stream is
	// done. It returns io.EOF when the stream completes successfully. On
	// any other error, the stream is aborted and the error contains the RPC
	// status.
	//
	// It is safe to have a goroutine calling SendMsg and another goroutine
	// calling RecvMsg on the same stream at the same time, but it is not
	// safe to call RecvMsg on the same stream in different goroutines.
	RecvMsg(m interface{}) error
}

ClientStream defines the client-side behavior of a streaming RPC.

All errors returned from ClientStream methods are compatible with the status package.

func NewClientStream

func NewClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error)

NewClientStream is a wrapper for ClientConn.NewStream.

type Codec DEPRECATED

type Compressor DEPRECATED

type CompressorCallOption <- v1.11.0

type CompressorCallOption struct {
	CompressorType string
}

CompressorCallOption is a CallOption that indicates the compressor to use.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type ConnectParams <- v1.25.0

type ConnectParams struct {
	// Backoff specifies the configuration options for connection backoff.
	Backoff backoff.Config
	// MinConnectTimeout is the minimum amount of time we are willing to give a
	// connection to complete.
	MinConnectTimeout time.Duration
}

ConnectParams defines the parameters for connecting and retrying. Users are encouraged to use this instead of the BackoffConfig type defined above. See here for more details: https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type ContentSubtypeCallOption <- v1.11.0

type ContentSubtypeCallOption struct {
	ContentSubtype string
}

ContentSubtypeCallOption is a CallOption that indicates the content-subtype used for marshaling messages.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type CustomCodecCallOption <- v1.11.0

type CustomCodecCallOption struct {
	Codec Codec
}

CustomCodecCallOption is a CallOption that indicates the codec used for marshaling messages.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type Decompressor DEPRECATED

type DialOption

type DialOption interface {
	// contains filtered or unexported methods
}

DialOption configures how we set up the connection.

func FailOnNonTempDialError <- v1.0.5

func FailOnNonTempDialError(f bool) DialOption

FailOnNonTempDialError returns a DialOption that specifies if gRPC fails on non-temporary dial errors. If f is true, and dialer returns a non-temporary error, gRPC will fail the connection to the network address and won’t try to reconnect. The default value of FailOnNonTempDialError is false.

FailOnNonTempDialError only affects the initial dial, and does not do anything useful unless you are also using WithBlock().

Use of this feature is not recommended. For more information, please see: https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func WithAuthority <- v1.2.0

func WithAuthority(a string) DialOption

WithAuthority returns a DialOption that specifies the value to be used as the :authority pseudo-header and as the server name in authentication handshake.

func WithBackoffConfig DEPRECATED

func WithBackoffMaxDelay DEPRECATED

func WithBlock

func WithBlock() DialOption

WithBlock returns a DialOption which makes callers of Dial block until the underlying connection is up. Without this, Dial returns immediately and connecting the server happens in background.

Use of this feature is not recommended. For more information, please see: https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md

func WithChainStreamInterceptor <- v1.21.0

func WithChainStreamInterceptor(interceptors ...StreamClientInterceptor) DialOption

WithChainStreamInterceptor returns a DialOption that specifies the chained interceptor for streaming RPCs. The first interceptor will be the outer most, while the last interceptor will be the inner most wrapper around the real call. All interceptors added by this method will be chained, and the interceptor defined by WithStreamInterceptor will always be prepended to the chain.

func WithChainUnaryInterceptor <- v1.21.0

func WithChainUnaryInterceptor(interceptors ...UnaryClientInterceptor) DialOption

WithChainUnaryInterceptor returns a DialOption that specifies the chained interceptor for unary RPCs. The first interceptor will be the outer most, while the last interceptor will be the inner most wrapper around the real call. All interceptors added by this method will be chained, and the interceptor defined by WithUnaryInterceptor will always be prepended to the chain.

func WithChannelzParentID <- v1.12.0

func WithChannelzParentID(id *channelz.Identifier) DialOption

WithChannelzParentID returns a DialOption that specifies the channelz ID of current ClientConn’s parent. This function is used in nested channel creation (e.g. grpclb dial).

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func WithCodec DEPRECATED

func WithCompressor DEPRECATED

func WithConnectParams <- v1.25.0

func WithConnectParams(p ConnectParams) DialOption

WithConnectParams configures the ClientConn to use the provided ConnectParams for creating and maintaining connections to servers.

The backoff configuration specified as part of the ConnectParams overrides all defaults specified in https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md. Consider using the backoff.DefaultConfig as a base, in cases where you want to override only a subset of the backoff configuration.

func WithContextDialer <- v1.19.0

func WithContextDialer(f func(context.Context, string) (net.Conn, error)) DialOption

WithContextDialer returns a DialOption that sets a dialer to create connections. If FailOnNonTempDialError() is set to true, and an error is returned by f, gRPC checks the error’s Temporary() method to decide if it should try to reconnect to the network address.

func WithCredentialsBundle <- v1.16.0

func WithCredentialsBundle(b credentials.Bundle) DialOption

WithCredentialsBundle returns a DialOption to set a credentials bundle for the ClientConn. This should not be used together with WithTransportCredentials.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func WithDecompressor DEPRECATED

func WithDefaultCallOptions <- v1.4.0

func WithDefaultCallOptions(cos ...CallOption) DialOption

WithDefaultCallOptions returns a DialOption which sets the default CallOptions for calls over the connection.

func WithDefaultServiceConfig <- v1.20.0

func WithDefaultServiceConfig(s string) DialOption

WithDefaultServiceConfig returns a DialOption that configures the default service config, which will be used in cases where:

  1. WithDisableServiceConfig is also used, or
  2. The name resolver does not provide a service config or provides an invalid service config.

The parameter s is the JSON representation of the default service config. For more information about service configs, see: https://github.com/grpc/grpc/blob/master/doc/service_config.md For a simple example of usage, see: examples/features/load_balancing/client/main.go

func WithDialer DEPRECATED

func WithDisableHealthCheck <- v1.17.0

func WithDisableHealthCheck() DialOption

WithDisableHealthCheck disables the LB channel health checking for all SubConns of this ClientConn.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func WithDisableRetry <- v1.14.0

func WithDisableRetry() DialOption

WithDisableRetry returns a DialOption that disables retries, even if the service config enables them. This does not impact transparent retries, which will happen automatically if no data is written to the wire or if the RPC is unprocessed by the remote server.

func WithDisableServiceConfig <- v1.12.0

func WithDisableServiceConfig() DialOption

WithDisableServiceConfig returns a DialOption that causes gRPC to ignore any service config provided by the resolver and provides a hint to the resolver to not fetch service configs.

Note that this dial option only disables service config from resolver. If default service config is provided, gRPC will use the default service config.

func WithInitialConnWindowSize <- v1.4.0

func WithInitialConnWindowSize(s int32) DialOption

WithInitialConnWindowSize returns a DialOption which sets the value for initial window size on a connection. The lower bound for window size is 64K and any value smaller than that will be ignored.

func WithInitialWindowSize <- v1.4.0

func WithInitialWindowSize(s int32) DialOption

WithInitialWindowSize returns a DialOption which sets the value for initial window size on a stream. The lower bound for window size is 64K and any value smaller than that will be ignored.

func WithInsecure DEPRECATED

func WithKeepaliveParams <- v1.2.0

func WithKeepaliveParams(kp keepalive.ClientParameters) DialOption

WithKeepaliveParams returns a DialOption that specifies keepalive parameters for the client transport.

func WithMaxHeaderListSize <- v1.14.0

func WithMaxHeaderListSize(s uint32) DialOption

WithMaxHeaderListSize returns a DialOption that specifies the maximum (uncompressed) size of header list that the client is prepared to accept.

func WithMaxMsgSize DEPRECATED

func WithNoProxy <- v1.29.0

func WithNoProxy() DialOption

WithNoProxy returns a DialOption which disables the use of proxies for this ClientConn. This is ignored if WithDialer or WithContextDialer are used.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func WithPerRPCCredentials

func WithPerRPCCredentials(creds credentials.PerRPCCredentials) DialOption

WithPerRPCCredentials returns a DialOption which sets credentials and places auth state on each outbound RPC.

func WithReadBufferSize <- v1.7.0

func WithReadBufferSize(s int) DialOption

WithReadBufferSize lets you set the size of the read buffer; this determines how much data can be read at most for each read syscall.

The default value for this buffer is 32KB. Zero or negative values will disable the read buffer for a connection so the data framer can access the underlying conn directly.

func WithResolvers <- v1.27.0

func WithResolvers(rs ...resolver.Builder) DialOption

WithResolvers allows a list of resolver implementations to be registered locally with the ClientConn without needing to be globally registered via resolver.Register. They will be matched against the scheme used for the current Dial only, and will take precedence over the global registry.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func WithReturnConnectionError <- v1.30.0

func WithReturnConnectionError() DialOption

WithReturnConnectionError returns a DialOption which makes the client connection return a string containing both the last connection error that occurred and the context.DeadlineExceeded error. Implies WithBlock()

Use of this feature is not recommended. For more information, please see: https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func WithServiceConfig DEPRECATED

func WithStatsHandler <- v1.2.0

func WithStatsHandler(h stats.Handler) DialOption

WithStatsHandler returns a DialOption that specifies the stats handler for all the RPCs and underlying network connections in this ClientConn.

func WithStreamInterceptor <- v1.0.2

func WithStreamInterceptor(f StreamClientInterceptor) DialOption

WithStreamInterceptor returns a DialOption that specifies the interceptor for streaming RPCs.

func WithTimeout DEPRECATED

func WithTransportCredentials

func WithTransportCredentials(creds credentials.TransportCredentials) DialOption

WithTransportCredentials returns a DialOption which configures connection-level security credentials (e.g., TLS/SSL). This should not be used together with WithCredentialsBundle.

func WithUnaryInterceptor <- v1.0.2

func WithUnaryInterceptor(f UnaryClientInterceptor) DialOption

WithUnaryInterceptor returns a DialOption that specifies the interceptor for unary RPCs.

func WithUserAgent

func WithUserAgent(s string) DialOption

WithUserAgent returns a DialOption that specifies a user agent string for all the RPCs.

func WithWriteBufferSize <- v1.7.0

func WithWriteBufferSize(s int) DialOption

WithWriteBufferSize determines how much data can be batched before doing a write on the wire. The corresponding memory allocation for this buffer will be twice the size to keep syscalls low. The default value for this buffer is 32KB.

Zero or negative values will disable the write buffer such that each write will be on underlying connection. Note: A Send call may not directly translate to a write.

type EmptyCallOption <- v1.4.0

type EmptyCallOption struct{}

EmptyCallOption does not alter the Call configuration. It can be embedded in another structure to carry satellite data for use by interceptors.

type EmptyDialOption <- v1.14.0

type EmptyDialOption struct{}

EmptyDialOption does not alter the dial configuration. It can be embedded in another structure to build custom dial options.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type EmptyServerOption <- v1.21.0

type EmptyServerOption struct{}

EmptyServerOption does not alter the server configuration. It can be embedded in another structure to build custom server options.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type FailFastCallOption <- v1.11.0

type FailFastCallOption struct {
	FailFast bool
}

FailFastCallOption is a CallOption for indicating whether an RPC should fail fast or not.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type ForceCodecCallOption <- v1.19.0

type ForceCodecCallOption struct {
	Codec encoding.Codec
}

ForceCodecCallOption is a CallOption that indicates the codec used for marshaling messages.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type HeaderCallOption <- v1.11.0

type HeaderCallOption struct {
	HeaderAddr *metadata.MD
}

HeaderCallOption is a CallOption for collecting response header metadata. The metadata field will be populated after the RPC completes.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type MaxRecvMsgSizeCallOption <- v1.11.0

type MaxRecvMsgSizeCallOption struct {
	MaxRecvMsgSize int
}

MaxRecvMsgSizeCallOption is a CallOption that indicates the maximum message size in bytes the client can receive.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type MaxRetryRPCBufferSizeCallOption <- v1.14.0

type MaxRetryRPCBufferSizeCallOption struct {
	MaxRetryRPCBufferSize int
}

MaxRetryRPCBufferSizeCallOption is a CallOption indicating the amount of memory to be used for caching this RPC for retry purposes.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type MaxSendMsgSizeCallOption <- v1.11.0

type MaxSendMsgSizeCallOption struct {
	MaxSendMsgSize int
}

MaxSendMsgSizeCallOption is a CallOption that indicates the maximum message size in bytes the client can send.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type MethodConfig DEPRECATED

type MethodDesc

type MethodDesc struct {
	MethodName string
	Handler    methodHandler
}

MethodDesc represents an RPC service’s method specification.

type MethodInfo

type MethodInfo struct {
	// Name is the method name only, without the service name or package name.
	Name string
	// IsClientStream indicates whether the RPC is a client streaming RPC.
	IsClientStream bool
	// IsServerStream indicates whether the RPC is a server streaming RPC.
	IsServerStream bool
}

MethodInfo contains the information of an RPC including its method name and type.

type OnFinishCallOption <- v1.54.0

type OnFinishCallOption struct {
	OnFinish func(error)
}

OnFinishCallOption is a CallOption that indicates a callback to be called when the call completes.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type PeerCallOption <- v1.11.0

type PeerCallOption struct {
	PeerAddr *peer.Peer
}

PeerCallOption is a CallOption for collecting the identity of the remote peer. The peer field will be populated after the RPC completes.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type PerRPCCredsCallOption <- v1.11.0

type PerRPCCredsCallOption struct {
	Creds credentials.PerRPCCredentials
}

PerRPCCredsCallOption is a CallOption that indicates the per-RPC credentials to use for the call.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type PreparedMsg <- v1.21.0

type PreparedMsg struct {
	// contains filtered or unexported fields
}

PreparedMsg is responsible for creating a Marshalled and Compressed object.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

func (*PreparedMsg) Encode <- v1.21.0

func (p *PreparedMsg) Encode(s Stream, msg interface{}) error

Encode marshals and compresses the message using the codec and compressor for the stream.

type Server

type Server struct {
	// contains filtered or unexported fields
}

Server is a gRPC server to serve RPC requests.

func NewServer

func NewServer(opt ...ServerOption) *Server

NewServer creates a gRPC server which has no service registered and has not started to accept requests yet.

func (*Server) GetServiceInfo

func (s *Server) GetServiceInfo() map[string]ServiceInfo

GetServiceInfo returns a map from service names to ServiceInfo. Service names include the package names, in the form of <package>.<service>.

func (*Server) GracefulStop <- v1.0.2

func (s *Server) GracefulStop()

GracefulStop stops the gRPC server gracefully. It stops the server from accepting new connections and RPCs and blocks until all the pending RPCs are finished.

func (*Server) RegisterService

func (s *Server) RegisterService(sd *ServiceDesc, ss interface{})

RegisterService registers a service and its implementation to the gRPC server. It is called from the IDL generated code. This must be called before invoking Serve. If ss is non-nil (for legacy code), its type is checked to ensure it implements sd.HandlerType.

func (*Server) Serve

func (s *Server) Serve(lis net.Listener) error

Serve accepts incoming connections on the listener lis, creating a new ServerTransport and service goroutine for each. The service goroutines read gRPC requests and then call the registered handlers to reply to them. Serve returns when lis.Accept fails with fatal errors. lis will be closed when this method returns. Serve will return a non-nil error unless Stop or GracefulStop is called.

func (*Server) ServeHTTP

func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request)

ServeHTTP implements the Go standard library’s http.Handler interface by responding to the gRPC request r, by looking up the requested gRPC method in the gRPC server s.

The provided HTTP request must have arrived on an HTTP/2 connection. When using the Go standard library’s server, practically this means that the Request must also have arrived over TLS.

To share one port (such as 443 for https) between gRPC and an existing http.Handler, use a root http.Handler such as:

if r.ProtoMajor == 2 && strings.HasPrefix(
	r.Header.Get("Content-Type"), "application/grpc") {
	grpcServer.ServeHTTP(w, r)
} else {
	yourMux.ServeHTTP(w, r)
}

Note that ServeHTTP uses Go’s HTTP/2 server implementation which is totally separate from grpc-go’s HTTP/2 server. Performance and features may vary between the two paths. ServeHTTP does not support some gRPC features available through grpc-go’s HTTP/2 server.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func (*Server) Stop

func (s *Server) Stop()

Stop stops the gRPC server. It immediately closes all open connections and listeners. It cancels all active RPCs on the server side and the corresponding pending RPCs on the client side will get notified by connection errors.

type ServerOption

type ServerOption interface {
	// contains filtered or unexported methods
}

A ServerOption sets options such as credentials, codec and keepalive parameters, etc.

func ChainStreamInterceptor <- v1.28.0

func ChainStreamInterceptor(interceptors ...StreamServerInterceptor) ServerOption

ChainStreamInterceptor returns a ServerOption that specifies the chained interceptor for streaming RPCs. The first interceptor will be the outer most, while the last interceptor will be the inner most wrapper around the real call. All stream interceptors added by this method will be chained.

func ChainUnaryInterceptor <- v1.28.0

func ChainUnaryInterceptor(interceptors ...UnaryServerInterceptor) ServerOption

ChainUnaryInterceptor returns a ServerOption that specifies the chained interceptor for unary RPCs. The first interceptor will be the outer most, while the last interceptor will be the inner most wrapper around the real call. All unary interceptors added by this method will be chained.

func ConnectionTimeout <- v1.7.3

func ConnectionTimeout(d time.Duration) ServerOption

ConnectionTimeout returns a ServerOption that sets the timeout for connection establishment (up to and including HTTP/2 handshaking) for all new connections. If this is not set, the default is 120 seconds. A zero or negative value will result in an immediate timeout.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func Creds

func Creds(c credentials.TransportCredentials) ServerOption

Creds returns a ServerOption that sets credentials for server connections.

func CustomCodec DEPRECATED

func ForceServerCodec <- v1.38.0

func ForceServerCodec(codec encoding.Codec) ServerOption

ForceServerCodec returns a ServerOption that sets a codec for message marshaling and unmarshaling.

This will override any lookups by content-subtype for Codecs registered with RegisterCodec.

See Content-Type on https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for more details. Also see the documentation on RegisterCodec and CallContentSubtype for more details on the interaction between encoding.Codec and content-subtype.

This function is provided for advanced users; prefer to register codecs using encoding.RegisterCodec. The server will automatically use registered codecs based on the incoming requests’ headers. See also https://github.com/grpc/grpc-go/blob/master/Documentation/encoding.md#using-a-codec. Will be supported throughout 1.x.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func HeaderTableSize <- v1.25.0

func HeaderTableSize(s uint32) ServerOption

HeaderTableSize returns a ServerOption that sets the size of dynamic header table for stream.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func InTapHandle <- v1.0.5

func InTapHandle(h tap.ServerInHandle) ServerOption

InTapHandle returns a ServerOption that sets the tap handle for all the server transport to be created. Only one can be installed.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func InitialConnWindowSize <- v1.4.0

func InitialConnWindowSize(s int32) ServerOption

InitialConnWindowSize returns a ServerOption that sets window size for a connection. The lower bound for window size is 64K and any value smaller than that will be ignored.

func InitialWindowSize <- v1.4.0

func InitialWindowSize(s int32) ServerOption

InitialWindowSize returns a ServerOption that sets window size for stream. The lower bound for window size is 64K and any value smaller than that will be ignored.

func KeepaliveEnforcementPolicy <- v1.3.0

func KeepaliveEnforcementPolicy(kep keepalive.EnforcementPolicy) ServerOption

KeepaliveEnforcementPolicy returns a ServerOption that sets keepalive enforcement policy for the server.

func KeepaliveParams <- v1.3.0

func KeepaliveParams(kp keepalive.ServerParameters) ServerOption

KeepaliveParams returns a ServerOption that sets keepalive and max-age parameters for the server.

func MaxConcurrentStreams

func MaxConcurrentStreams(n uint32) ServerOption

MaxConcurrentStreams returns a ServerOption that will apply a limit on the number of concurrent streams to each ServerTransport.

func MaxHeaderListSize <- v1.14.0

func MaxHeaderListSize(s uint32) ServerOption

MaxHeaderListSize returns a ServerOption that sets the max (uncompressed) size of header list that the server is prepared to accept.

func MaxMsgSize DEPRECATED

func MaxRecvMsgSize <- v1.4.0

func MaxRecvMsgSize(m int) ServerOption

MaxRecvMsgSize returns a ServerOption to set the max message size in bytes the server can receive. If this is not set, gRPC uses the default 4MB.

func MaxSendMsgSize <- v1.4.0

func MaxSendMsgSize(m int) ServerOption

MaxSendMsgSize returns a ServerOption to set the max message size in bytes the server can send. If this is not set, gRPC uses the default math.MaxInt32.

func NumStreamWorkers <- v1.30.0

func NumStreamWorkers(numServerWorkers uint32) ServerOption

NumStreamWorkers returns a ServerOption that sets the number of worker goroutines that should be used to process incoming streams. Setting this to zero (default) will disable workers and spawn a new goroutine for each stream.

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

func RPCCompressor DEPRECATED

func RPCDecompressor DEPRECATED

func ReadBufferSize <- v1.7.0

func ReadBufferSize(s int) ServerOption

ReadBufferSize lets you set the size of the read buffer; this determines how much data can be read at most for one read syscall. The default value for this buffer is 32KB. Zero or negative values will disable the read buffer for a connection so the data framer can access the underlying conn directly.

func StatsHandler <- v1.2.0

func StatsHandler(h stats.Handler) ServerOption

StatsHandler returns a ServerOption that sets the stats handler for the server.

func StreamInterceptor

func StreamInterceptor(i StreamServerInterceptor) ServerOption

StreamInterceptor returns a ServerOption that sets the StreamServerInterceptor for the server. Only one stream interceptor can be installed.

func UnaryInterceptor

func UnaryInterceptor(i UnaryServerInterceptor) ServerOption

UnaryInterceptor returns a ServerOption that sets the UnaryServerInterceptor for the server. Only one unary interceptor can be installed. The construction of multiple interceptors (e.g., chaining) can be implemented at the caller.

func UnknownServiceHandler <- v1.2.0

func UnknownServiceHandler(streamHandler StreamHandler) ServerOption

UnknownServiceHandler returns a ServerOption that allows for adding a custom unknown service handler. The provided method is a bidi-streaming RPC service handler that will be invoked instead of returning the “unimplemented” gRPC error whenever a request is received for an unregistered service or method. The handling function and stream interceptor (if set) have full access to the ServerStream, including its Context.

func WriteBufferSize <- v1.7.0

func WriteBufferSize(s int) ServerOption

WriteBufferSize determines how much data can be batched before doing a write on the wire. The corresponding memory allocation for this buffer will be twice the size to keep syscalls low. The default value for this buffer is 32KB. Zero or negative values will disable the write buffer such that each write will be on underlying connection. Note: A Send call may not directly translate to a write.

type ServerStream

type ServerStream interface {
	// SetHeader sets the header metadata. It may be called multiple times.
	// When called multiple times, all the provided metadata will be merged.
	// All the metadata will be sent out when one of the following happens:
	//  - ServerStream.SendHeader() is called;
	//  - The first response is sent out;
	//  - An RPC status is sent out (error or success).
	SetHeader(metadata.MD) error
	// SendHeader sends the header metadata.
	// The provided md and headers set by SetHeader() will be sent.
	// It fails if called multiple times.
	SendHeader(metadata.MD) error
	// SetTrailer sets the trailer metadata which will be sent with the RPC status.
	// When called more than once, all the provided metadata will be merged.
	SetTrailer(metadata.MD)
	// Context returns the context for this stream.
	Context() context.Context
	// SendMsg sends a message. On error, SendMsg aborts the stream and the
	// error is returned directly.
	//
	// SendMsg blocks until:
	//   - There is sufficient flow control to schedule m with the transport, or
	//   - The stream is done, or
	//   - The stream breaks.
	//
	// SendMsg does not wait until the message is received by the client. An
	// untimely stream closure may result in lost messages.
	//
	// It is safe to have a goroutine calling SendMsg and another goroutine
	// calling RecvMsg on the same stream at the same time, but it is not safe
	// to call SendMsg on the same stream in different goroutines.
	//
	// It is not safe to modify the message after calling SendMsg. Tracing
	// libraries and stats handlers may use the message lazily.
	SendMsg(m interface{}) error
	// RecvMsg blocks until it receives a message into m or the stream is
	// done. It returns io.EOF when the client has performed a CloseSend. On
	// any non-EOF error, the stream is aborted and the error contains the
	// RPC status.
	//
	// It is safe to have a goroutine calling SendMsg and another goroutine
	// calling RecvMsg on the same stream at the same time, but it is not
	// safe to call RecvMsg on the same stream in different goroutines.
	RecvMsg(m interface{}) error
}

ServerStream defines the server-side behavior of a streaming RPC.

Errors returned from ServerStream methods are compatible with the status package. However, the status code will often not match the RPC status as seen by the client application, and therefore, should not be relied upon for this purpose.

type ServerTransportStream <- v1.11.0

type ServerTransportStream interface {
	Method() string
	SetHeader(md metadata.MD) error
	SendHeader(md metadata.MD) error
	SetTrailer(md metadata.MD) error
}

ServerTransportStream is a minimal interface that a transport stream must implement. This can be used to mock an actual transport stream for tests of handler code that use, for example, grpc.SetHeader (which requires some stream to be in context).

See also NewContextWithServerTransportStream.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

func ServerTransportStreamFromContext <- v1.12.0

func ServerTransportStreamFromContext(ctx context.Context) ServerTransportStream

ServerTransportStreamFromContext returns the ServerTransportStream saved in ctx. Returns nil if the given context has no stream associated with it (which implies it is not an RPC invocation context).

Experimental

Notice: This API is EXPERIMENTAL and may be changed or removed in a later release.

type ServiceConfig DEPRECATED

type ServiceDesc

type ServiceDesc struct {
	ServiceName string
	// The pointer to the service interface. Used to check whether the user
	// provided implementation satisfies the interface requirements.
	HandlerType interface{}
	Methods     []MethodDesc
	Streams     []StreamDesc
	Metadata    interface{}
}

ServiceDesc represents an RPC service’s specification.

type ServiceInfo

type ServiceInfo struct {
	Methods []MethodInfo
	// Metadata is the metadata specified in ServiceDesc when registering service.
	Metadata interface{}
}

ServiceInfo contains unary RPC method info, streaming RPC method info and metadata for a service.

type ServiceRegistrar <- v1.32.0

type ServiceRegistrar interface {
	// RegisterService registers a service and its implementation to the
	// concrete type implementing this interface.  It may not be called
	// once the server has started serving.
	// desc describes the service and its methods and handlers. impl is the
	// service implementation which is passed to the method handlers.
	RegisterService(desc *ServiceDesc, impl interface{})
}

ServiceRegistrar wraps a single method that supports service registration. It enables users to pass concrete types other than grpc.Server to the service registration methods exported by the IDL generated code.

type Stream DEPRECATED

type StreamClientInterceptor <- v1.0.2

type StreamClientInterceptor func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error)

StreamClientInterceptor intercepts the creation of a ClientStream. Stream interceptors can be specified as a DialOption, using WithStreamInterceptor() or WithChainStreamInterceptor(), when creating a ClientConn. When a stream interceptor(s) is set on the ClientConn, gRPC delegates all stream creations to the interceptor, and it is the responsibility of the interceptor to call streamer.

desc contains a description of the stream. cc is the ClientConn on which the RPC was invoked. streamer is the handler to create a ClientStream and it is the responsibility of the interceptor to call it. opts contain all applicable call options, including defaults from the ClientConn as well as per-call options.

StreamClientInterceptor may return a custom ClientStream to intercept all I/O operations. The returned error must be compatible with the status package.

type StreamDesc

type StreamDesc struct {
	// StreamName and Handler are only used when registering handlers on a
	// server.
	StreamName string        // the name of the method excluding the service
	Handler    StreamHandler // the handler called for the method

	// ServerStreams and ClientStreams are used for registering handlers on a
	// server as well as defining RPC behavior when passed to NewClientStream
	// and ClientConn.NewStream.  At least one must be true.
	ServerStreams bool // indicates the server can perform streaming sends
	ClientStreams bool // indicates the client can perform streaming sends
}

StreamDesc represents a streaming RPC service’s method specification. Used on the server when registering services and on the client when initiating new streams.

type StreamHandler

type StreamHandler func(srv interface{}, stream ServerStream) error

StreamHandler defines the handler called by gRPC server to complete the execution of a streaming RPC.

If a StreamHandler returns an error, it should either be produced by the status package, or be one of the context errors. Otherwise, gRPC will use codes.Unknown as the status code and err.Error() as the status message of the RPC.

type StreamServerInfo

type StreamServerInfo struct {
	// FullMethod is the full RPC method string, i.e., /package.service/method.
	FullMethod string
	// IsClientStream indicates whether the RPC is a client streaming RPC.
	IsClientStream bool
	// IsServerStream indicates whether the RPC is a server streaming RPC.
	IsServerStream bool
}

StreamServerInfo consists of various information about a streaming RPC on server side. All per-rpc information may be mutated by the interceptor.

type StreamServerInterceptor

type StreamServerInterceptor func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error

StreamServerInterceptor provides a hook to intercept the execution of a streaming RPC on the server. info contains all the information of this RPC the interceptor can operate on. And handler is the service method implementation. It is the responsibility of the interceptor to invoke handler to complete the RPC.

type Streamer <- v1.0.2

type Streamer func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error)

Streamer is called by StreamClientInterceptor to create a ClientStream.

type TrailerCallOption <- v1.11.0

type TrailerCallOption struct {
	TrailerAddr *metadata.MD
}

TrailerCallOption is a CallOption for collecting response trailer metadata. The metadata field will be populated after the RPC completes.

Experimental

Notice: This type is EXPERIMENTAL and may be changed or removed in a later release.

type UnaryClientInterceptor <- v1.0.2

type UnaryClientInterceptor func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error

UnaryClientInterceptor intercepts the execution of a unary RPC on the client. Unary interceptors can be specified as a DialOption, using WithUnaryInterceptor() or WithChainUnaryInterceptor(), when creating a ClientConn. When a unary interceptor(s) is set on a ClientConn, gRPC delegates all unary RPC invocations to the interceptor, and it is the responsibility of the interceptor to call invoker to complete the processing of the RPC.

method is the RPC name. req and reply are the corresponding request and response messages. cc is the ClientConn on which the RPC was invoked. invoker is the handler to complete the RPC and it is the responsibility of the interceptor to call it. opts contain all applicable call options, including defaults from the ClientConn as well as per-call options.

The returned error must be compatible with the status package.

type UnaryHandler

type UnaryHandler func(ctx context.Context, req interface{}) (interface{}, error)

UnaryHandler defines the handler invoked by UnaryServerInterceptor to complete the normal execution of a unary RPC.

If a UnaryHandler returns an error, it should either be produced by the status package, or be one of the context errors. Otherwise, gRPC will use codes.Unknown as the status code and err.Error() as the status message of the RPC.

type UnaryInvoker <- v1.0.2

type UnaryInvoker func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error

UnaryInvoker is called by UnaryClientInterceptor to complete RPCs.

type UnaryServerInfo

type UnaryServerInfo struct {
	// Server is the service implementation the user provides. This is read-only.
	Server interface{}
	// FullMethod is the full RPC method string, i.e., /package.service/method.
	FullMethod string
}

UnaryServerInfo consists of various information about a unary RPC on server side. All per-rpc information may be mutated by the interceptor.

type UnaryServerInterceptor

type UnaryServerInterceptor func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (resp interface{}, err error)

UnaryServerInterceptor provides a hook to intercept the execution of a unary RPC on the server. info contains all the information of this RPC the interceptor can operate on. And handler is the wrapper of the service method implementation. It is the responsibility of the interceptor to invoke handler to complete the RPC.

3.1.5 - Generated-code reference

Source: https://grpc.io/docs/languages/go/generated-code/

This page describes the code generated with the grpc plugin, protoc-gen-go-grpc, when compiling .proto files with protoc.

You can find out how to define a gRPC service in a .proto file in Service definition.

Thread-safety: note that client-side RPC invocations and server-side RPC handlers are thread-safe and are meant to be run on concurrent goroutines. But also note that for individual streams, incoming and outgoing data is bi-directional but serial; so e.g. individual streams do not support concurrent reads or concurrent writes (but reads are safely concurrent with writes).


Methods on generated server interfaces

On the server side, each service Bar in the .proto file results in the function:

func RegisterBarServer(s *grpc.Server, srv BarServer)

The application can define a concrete implementation of the BarServer interface and register it with a grpc.Server instance using this function (before starting the server instance).

Unary methods

These methods have the following signature on the generated service interface:

Foo(context.Context, *MsgA) (*MsgB, error)

In this context, MsgA is the protobuf message sent from the client, and MsgB is the protobuf message sent back from the server.

Server-streaming methods

These methods have the following signature on the generated service interface:

Foo(*MsgA, <ServiceName>_FooServer) error

In this context, MsgA is the single request from the client, and the <ServiceName>_FooServer parameter represents the server-to-client stream of MsgB messages.

<ServiceName>_FooServer has an embedded grpc.ServerStream and the following interface:

type <ServiceName>_FooServer interface {
	Send(*MsgB) error
	grpc.ServerStream
}

The server-side handler can send a stream of protobuf messages to the client through this parameter's Send method. The server-to-client stream is ended by the return of the handler method.

Client-streaming methods

These methods have the following signature on the generated service interface:

Foo(<ServiceName>_FooServer) error

In this context, <ServiceName>_FooServer can be used both to read the client-to-server message stream and to send the single server response message.

<ServiceName>_FooServer has an embedded grpc.ServerStream and the following interface:

type <ServiceName>_FooServer interface {
	SendAndClose(*MsgA) error
	Recv() (*MsgB, error)
	grpc.ServerStream
}

​ 服务端处理程序可以重复调用此参数上的Recv方法,以接收来自客户端的完整消息流。一旦达到流的末尾,Recv将返回(nil, io.EOF)。通过在<ServiceName>_FooServer参数上调用SendAndClose方法,可以发送来自服务端的单个响应消息。请注意,SendAndClose方法必须且只能被调用一次。
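上面描述的"循环 Recv 直到 io.EOF,最后恰好调用一次 SendAndClose"的服务端处理模式可以示意如下。注意:这里手写的 fooServer 接口、fakeFooServer 内存实现以及 MsgA/MsgB 字段都是假设的占位代码,仅用于在没有网络的情况下演示该模式,并非真实生成的代码。

```go
package main

import "io"

// 假设的消息类型:MsgB 是客户端流中的每条消息,MsgA 是服务端的单个响应。
type MsgA struct{ Sum int32 }
type MsgB struct{ Value int32 }

// 与生成的 <ServiceName>_FooServer 形状一致的手写接口(仅为演示)。
type fooServer interface {
	Recv() (*MsgB, error)
	SendAndClose(*MsgA) error
}

// handleFoo 演示客户端流式处理的典型模式:
// 反复 Recv 直到 io.EOF,然后调用且只调用一次 SendAndClose。
func handleFoo(stream fooServer) error {
	var sum int32
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			// 客户端流结束:发送唯一的响应并结束 RPC。
			return stream.SendAndClose(&MsgA{Sum: sum})
		}
		if err != nil {
			return err
		}
		sum += msg.Value
	}
}

// fakeFooServer 是一个内存实现,便于不经网络演示上述模式。
type fakeFooServer struct {
	in  []*MsgB
	pos int
	out *MsgA
}

func (f *fakeFooServer) Recv() (*MsgB, error) {
	if f.pos >= len(f.in) {
		return nil, io.EOF
	}
	m := f.in[f.pos]
	f.pos++
	return m, nil
}

func (f *fakeFooServer) SendAndClose(resp *MsgA) error {
	f.out = resp
	return nil
}
```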

双向流式方法

​ 这些方法在生成的服务接口上具有以下签名:

Foo(<ServiceName>_FooServer) error

​ 在这个上下文中,<ServiceName>_FooServer可用于访问客户端到服务端的消息流和服务端到客户端的消息流。<ServiceName>_FooServer嵌入了grpc.ServerStream和以下接口:

type <ServiceName>_FooServer interface {
	Send(*MsgA) error
	Recv() (*MsgB, error)
	grpc.ServerStream
}

​ 服务端处理程序可以重复调用此参数上的Recv方法,以读取客户端到服务端的消息流。一旦达到客户端到服务端流的末尾,Recv方法将返回(nil, io.EOF)。通过重复调用<ServiceName>_FooServer参数上的Send方法,可以发送服务端到客户端的响应消息流。服务端到客户端的流通过双向方法处理程序的return语句来结束。
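双向流式处理程序最常见的形式是一个"回显循环":读到 io.EOF 即通过 return 结束流。下面是一个最小示意;bidiServer 接口、fakeBidi 内存实现和消息字段都是为演示而假设的占位代码,并非真实生成的代码。

```go
package main

import "io"

// 假设的消息类型:MsgB 来自客户端,MsgA 发往客户端。
type MsgA struct{ Text string }
type MsgB struct{ Text string }

// 与生成的 <ServiceName>_FooServer 形状一致的手写接口(仅为演示)。
type bidiServer interface {
	Recv() (*MsgB, error)
	Send(*MsgA) error
}

// handleEcho 演示双向流式处理的典型回显循环:
// Recv 返回 io.EOF 时,直接 return 以结束服务端到客户端的流。
func handleEcho(stream bidiServer) error {
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if err := stream.Send(&MsgA{Text: in.Text}); err != nil {
			return err
		}
	}
}

// fakeBidi 是一个内存实现,便于不经网络演示上述循环。
type fakeBidi struct {
	in   []*MsgB
	pos  int
	sent []*MsgA
}

func (f *fakeBidi) Recv() (*MsgB, error) {
	if f.pos >= len(f.in) {
		return nil, io.EOF
	}
	m := f.in[f.pos]
	f.pos++
	return m, nil
}

func (f *fakeBidi) Send(m *MsgA) error {
	f.sent = append(f.sent, m)
	return nil
}
```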

生成的客户端接口上的方法

​ 对于客户端用法,.proto文件中的每个service Bar还会生成函数:func NewBarClient(cc *grpc.ClientConn) BarClient,该函数返回BarClient接口的具体实现(该具体实现也位于生成的.pb.go文件中)。

一元方法

​ 在生成的客户端存根(stub)上,这些方法具有以下签名:

(ctx context.Context, in *MsgA, opts ...grpc.CallOption) (*MsgB, error)

​ 在这个上下文中,MsgA是客户端发送到服务端的单个请求,MsgB包含了服务端发送回来的响应。

服务端流式方法

​ 在生成的客户端存根(stub)上,这些方法具有以下签名:

Foo(ctx context.Context, in *MsgA, opts ...grpc.CallOption) (<ServiceName>_FooClient, error)

In this context, <ServiceName>_FooClient represents the server-to-client stream of MsgB messages.

​ 在这个上下文中,<ServiceName>_FooClient表示服务端到客户端的stream,其中包含MsgB消息。

​ 这个流嵌入了grpc.ClientStream和以下接口:

type <ServiceName>_FooClient interface {
	Recv() (*MsgB, error)
	grpc.ClientStream
}

​ 当客户端在存根(stub)上调用Foo方法时,该流即开始。随后,客户端可以重复调用返回的<ServiceName>_FooClient流上的Recv方法,以读取服务端到客户端的响应流。一旦完全读取了服务端到客户端的流,Recv方法将返回(nil, io.EOF)。
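客户端读取服务端流时的标准循环可以示意如下。这里手写的 fooClient 接口、fakeClient 内存实现和 MsgB 字段都是为演示而假设的占位代码,并非真实生成的存根。

```go
package main

import "io"

// 假设的响应消息类型。
type MsgB struct{ Text string }

// 与生成的 <ServiceName>_FooClient 形状一致的手写接口(仅为演示)。
type fooClient interface {
	Recv() (*MsgB, error)
}

// readAll 演示客户端读取服务端流的典型循环:
// Recv 返回 io.EOF 表示流已被完全读取,属于正常结束,不是错误。
func readAll(stream fooClient) ([]string, error) {
	var out []string
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			return out, nil
		}
		if err != nil {
			return nil, err
		}
		out = append(out, msg.Text)
	}
}

// fakeClient 是一个内存实现,便于不经网络演示上述循环。
type fakeClient struct {
	msgs []*MsgB
	pos  int
}

func (f *fakeClient) Recv() (*MsgB, error) {
	if f.pos >= len(f.msgs) {
		return nil, io.EOF
	}
	m := f.msgs[f.pos]
	f.pos++
	return m, nil
}
```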

客户端流式方法

​ 在生成的客户端存根(stub)上,这些方法具有以下签名:

Foo(ctx context.Context, opts ...grpc.CallOption) (<ServiceName>_FooClient, error)

In this context, <ServiceName>_FooClient represents the client-to-server stream of MsgA messages.

​ 在这个上下文中,<ServiceName>_FooClient表示客户端到服务端的stream,其中包含MsgA消息。

<ServiceName>_FooClient嵌入了grpc.ClientStream和以下接口:

type <ServiceName>_FooClient interface {
	Send(*MsgA) error
	CloseAndRecv() (*MsgB, error)
	grpc.ClientStream
}

​ 当客户端在存根(stub)上调用Foo方法时,该流即开始。随后,客户端可以重复调用返回的<ServiceName>_FooClient流上的Send方法,来发送客户端到服务端的消息流。要关闭客户端到服务端的流并接收服务端的单个响应消息,必须且只能调用一次CloseAndRecv方法。
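"反复 Send,最后恰好调用一次 CloseAndRecv"的客户端流式调用模式可以示意如下。这里手写的 fooClient 接口、fakeFooClient 内存实现和 MsgA/MsgB 字段都是为演示而假设的占位代码,并非真实生成的存根。

```go
package main

import "errors"

// 假设的消息类型:MsgA 是客户端流中的每条消息,MsgB 是服务端的单个响应。
type MsgA struct{ Value int32 }
type MsgB struct{ Sum int32 }

// 与生成的 <ServiceName>_FooClient 形状一致的手写接口(仅为演示)。
type fooClient interface {
	Send(*MsgA) error
	CloseAndRecv() (*MsgB, error)
}

// sendAll 演示客户端流式调用的典型模式:
// 逐条 Send,然后调用且只调用一次 CloseAndRecv,半关闭流并等待响应。
func sendAll(stream fooClient, values []int32) (*MsgB, error) {
	for _, v := range values {
		if err := stream.Send(&MsgA{Value: v}); err != nil {
			return nil, err
		}
	}
	return stream.CloseAndRecv()
}

// fakeFooClient 是一个内存实现,CloseAndRecv 之后再 Send 会报错,
// 以体现"只能调用一次 CloseAndRecv"的约束。
type fakeFooClient struct {
	sum    int32
	closed bool
}

func (f *fakeFooClient) Send(m *MsgA) error {
	if f.closed {
		return errors.New("send after CloseAndRecv")
	}
	f.sum += m.Value
	return nil
}

func (f *fakeFooClient) CloseAndRecv() (*MsgB, error) {
	f.closed = true
	return &MsgB{Sum: f.sum}, nil
}
```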

双向流式方法

​ 在生成的客户端存根(stub)上,这些方法具有以下签名:

Foo(ctx context.Context, opts ...grpc.CallOption) (<ServiceName>_FooClient, error)

​ 在这个上下文中,<ServiceName>_FooClient表示客户端到服务端和服务端到客户端的消息流。

<ServiceName>_FooClient嵌入了grpc.ClientStream和以下接口:

type <ServiceName>_FooClient interface {
	Send(*MsgA) error
	Recv() (*MsgB, error)
	grpc.ClientStream
}

​ 当客户端在存根(stub)上调用Foo方法时,该流即开始。随后,客户端可以重复调用返回的<ServiceName>_FooClient流上的Send方法,来发送客户端到服务端的消息流。客户端还可以重复调用此流上的Recv方法,来接收完整的服务端到客户端的消息流。

​ 对于服务端到客户端的流,当流的Recv方法返回(nil, io.EOF)时,表示该流结束。对于客户端到服务端的流,客户端可以通过调用流上的CloseSend方法来表示流结束。

包和命名空间

When the protoc compiler is invoked with --go_out=plugins=grpc:, the proto package to Go package translation works the same as when the protoc-gen-go plugin is used without the grpc plugin.

​ 当使用protoc编译器调用--go_out=plugins=grpc:时,proto package 到 Go 包的转换方式与在没有使用 grpc 插件的情况下使用 protoc-gen-go 插件时相同。

​ 例如,如果foo.proto声明其位于package foo中,则生成的foo.pb.go文件也将位于Go包foo中。

4 - 平台

Supported platforms 支持的平台

gRPC is supported across different software and hardware platforms.

gRPC 在不同的软件和硬件平台上都得到支持。

Each gRPC language / platform has links to the following pages and more: quick start, tutorials, API reference.

每个 gRPC 语言 / 平台都有链接到以下页面和更多内容:快速入门、教程、API 参考。

New sections coming soon:

即将推出的新章节:

  • Flutter
    • Docs coming soon 文档即将推出
  • Mobile:
    • iOS – docs coming soon 文档即将推出

Select a development or target platform to get started:

选择一个开发或目标平台开始使用:

4.1 - web

https://grpc.io/docs/platforms/web/

Quick start

This guide gets you started with gRPC-Web with a simple working example.

这个指南通过一个简单的工作示例让你开始使用 gRPC-Web。

Basics tutorial

A basic tutorial introduction to gRPC-web.

gRPC-Web 的基础教程介绍。

4.1.1 - 快速入门

Quick start - 快速入门

​ 这个指南通过一个简单的工作示例让你开始使用 gRPC-Web。

先决条件

获取示例代码

​ 该示例代码是 grpc-web 仓库中的一部分。

  1. 下载该仓库的 zip 文件 并解压,或者克隆该仓库:

    
    $ git clone https://github.com/grpc/grpc-web
    
  2. 切换到该仓库的根目录:

    
    $ cd grpc-web
    

从浏览器中运行 Echo 示例!

​ 在 grpc-web 目录:

  1. 获取所需的包和工具:

    
    $ docker-compose pull prereqs node-server envoy commonjs-client
    

    注意

    收到以下警告?你可以忽略它,以便运行该示例应用程序:

    WARNING: Some service image(s) must be built from source
    
  2. 启动服务作为后台进程:

    
    $ docker-compose up -d node-server envoy commonjs-client
    
  3. 在你的浏览器中:

    你将在输入框下方看到服务器返回的消息。

​ 恭喜!你刚刚使用 gRPC 运行了一个客户端-服务端(client-server)应用程序。

​ 完成后,您可以运行以下命令关闭之前启动的服务:

$ docker-compose down

发生了什么?

​ 这个示例应用程序有三个关键组件(components):

  1. node-server 是一个使用 Node 实现的标准 gRPC 服务端。该服务端在端口 :9090 上监听,并实现应用程序的业务逻辑(回显客户端消息)。
  2. envoy 是 Envoy 代理。它在端口 :8080 上监听,并将浏览器的 gRPC-Web 请求转发到端口 :9090
  3. commonjs-client:该组件使用 protoc-gen-grpc-web protoc 插件生成客户端存根(stub)类,使用 webpack 编译所有的 JS 依赖项,并使用一个简单的 Web 服务器在端口 :8081 上托管静态内容(echotest.htmldist/main.js)。从网页中输入的用户消息将作为 gRPC-web 请求发送到 Envoy 代理。

下一步

4.1.2 - 基础教程

Basics tutorial 基础教程

​ gRPC-web 的基础教程介绍。

​ 本教程提供了如何在浏览器中使用 gRPC-Web 的基本介绍。

​ 通过学习本示例,您将了解如何:

  • .proto 文件中定义一个服务。
  • 使用协议缓冲区编译器生成客户端代码。
  • 使用 gRPC-Web API 编写一个简单的服务客户端。

​ 本教程假设您对协议缓冲区有基本了解。

为什么使用 gRPC 和 gRPC-Web?

​ 使用 gRPC,您可以在 .proto 文件中定义一次您的服务,并在gRPC支持的任何语言中实现客户端和服务端,这些客户端和服务端可以在从大型数据中心的服务器到您自己的平板电脑等各种环境中运行 —— gRPC 为不同编程语言和环境之间的通信复杂性提供了便利。您还可以享受使用协议缓冲区的所有优势,包括高效的序列化、简单的接口定义语言和易于更新接口。gRPC-Web 允许您使用符合惯例的 API 从浏览器访问以这种方式构建的 gRPC 服务。

定义服务

​ 创建 gRPC 服务的第一步是使用协议缓冲区定义服务方法以及它们的请求和响应消息类型。在本示例中,我们在名为 echo.proto 的文件中定义了我们的 EchoService。有关协议缓冲区和 proto3 语法的更多信息,请参阅 protobuf 文档

message EchoRequest {
  string message = 1;
}

message EchoResponse {
  string message = 1;
}

service EchoService {
  rpc Echo(EchoRequest) returns (EchoResponse);
}

实现 gRPC 后端服务端

​ 接下来,我们使用Node在后端实现了EchoService接口,创建了gRPC的EchoServer。它将处理来自客户端的请求。有关详细信息,请参阅文件 node-server/server.js

​ 您可以使用 gRPC 支持的任何编程语言来实现服务端。有关详细信息,请参阅主页

function doEcho(call, callback) {
  callback(null, {message: call.request.message});
}

配置 Envoy 代理

​ 在这个示例中,我们将使用 Envoy 代理将 gRPC 浏览器请求转发到后端服务端。您可以在 envoy.yaml 文件中查看完整的配置文件。

​ 为了将 gRPC 请求转发到后端服务端,我们需要添加以下配置块:

  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: echo_service }
          http_filters:
          - name: envoy.grpc_web
          - name: envoy.router
  clusters:
  - name: echo_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: node-server, port_value: 9090 }}]

​ 您可能还需要添加一些 CORS 设置,以确保浏览器可以请求跨域内容(cross-origin content)。

​ 在这个简单示例中,浏览器向端口 :8080 发出 gRPC 请求。Envoy将请求转发到在端口9090上监听的后端gRPC服务端。

生成 Protobuf 消息和服务客户端 Stub

​ 要从我们的 echo.proto 生成 protobuf 消息类,请运行以下命令:

$ protoc -I=$DIR echo.proto \
  --js_out=import_style=commonjs:$OUT_DIR

​ 传递给--js_out标志的import_style选项,确保生成的文件将具有CommonJS样式的require()语句。

​ 要生成gRPC-Web服务的客户端存根(stub),首先需要gRPC-Web protoc插件。要编译protoc-gen-grpc-web插件,需要从存储库的根目录运行以下命令:

$ cd grpc-web
$ sudo make install-plugin

​ 要生成该服务的客户端存根(stub) 文件,请运行以下命令:

$ protoc -I=$DIR echo.proto \
  --grpc-web_out=import_style=commonjs,mode=grpcwebtext:$OUT_DIR

​ 在上述的 --grpc-web_out 参数中:

  • mode 可以是 grpcwebtext(默认)或 grpcweb
  • import_style 可以是 closure(默认)或 commonjs

​ 我们的命令会生成客户端存根 (stub),默认情况下会生成到文件 echo_grpc_web_pb.js 中。

编写 JS 客户端代码

​ 现在,您可以编写一些 JS 客户端代码了。将以下代码放入一个名为 client.js 的文件中。

const {EchoRequest, EchoResponse} = require('./echo_pb.js');
const {EchoServiceClient} = require('./echo_grpc_web_pb.js');

var echoService = new EchoServiceClient('http://localhost:8080');

var request = new EchoRequest();
request.setMessage('Hello World!');

echoService.echo(request, {}, function(err, response) {
  // ...
});

​ 您将需要一个 package.json 文件:

{
  "name": "grpc-web-commonjs-example",
  "dependencies": {
    "google-protobuf": "^3.6.1",
    "grpc-web": "^0.4.0"
  },
  "devDependencies": {
    "browserify": "^16.2.2",
    "webpack": "^4.16.5",
    "webpack-cli": "^3.1.0"
  }
}

编译 JS 库

​ 最后,将所有这些内容放在一起,我们可以将所有相关的 JS 文件编译为一个单一(可以在浏览器中使用)的 JS 库。

$ npm install
$ npx webpack client.js

​ 现在将 dist/main.js 嵌入到您的项目中,并查看它的运行情况!

5 - 指南

Guides 指南

Task-oriented walkthroughs of common use cases

常见用例的任务导向式演示

The documentation covers the following techniques:

文档涵盖以下技术:


Authentication 认证

An overview of gRPC authentication, including built-in auth mechanisms, and how to plug in your own authentication systems.

gRPC 认证的概述,包括内置认证机制以及如何插入自己的认证系统。

Benchmarking 基准测试

gRPC is designed to support high-performance open-source RPCs in many languages. This page describes performance benchmarking tools, scenarios considered by tests, and the testing infrastructure.

gRPC 旨在支持多种语言中的高性能开源 RPC。该页面介绍了性能基准测试工具、测试考虑的场景以及测试基础架构。

Compression 压缩

How to compress the data sent over the wire while using gRPC.

如何在使用 gRPC 时压缩在传输中发送的数据。

Custom Load Balancing Policies 自定义负载均衡策略

Explains how custom load balancing policies can help optimize load balancing under unique circumstances.

解释了如何使用自定义负载均衡策略来优化特定情况下的负载均衡。

Deadlines 超时

Explains how deadlines can be used to effectively deal with unreliable backends.

解释了如何使用超时处理来有效处理不可靠的后端。

Error handling 错误处理

How gRPC deals with errors, and gRPC error codes.

gRPC 如何处理错误以及 gRPC 错误代码。

Flow Control 流量控制

Explains what flow control is and how you can manually control it.

解释了流量控制是什么以及如何手动控制它。

Keepalive

How to use HTTP/2 PING-based keepalives in gRPC.

如何在 gRPC 中使用基于 HTTP/2 PING 的保持活动状态。

Performance Best Practices 性能最佳实践

A user guide of both general and language-specific best practices to improve performance.

通用和特定语言的性能最佳实践用户指南。

Wait-for-Ready 等待就绪

Explains how to configure RPCs to wait for the server to be ready before sending the request.

解释了如何配置 RPC,在发送请求之前等待服务器准备就绪。

5.1 - 认证

Authentication 认证

https://grpc.io/docs/guides/auth/

An overview of gRPC authentication, including built-in auth mechanisms, and how to plug in your own authentication systems.

gRPC 认证概述,包括内置的认证机制以及如何插入自己的认证系统。

Overview 概述

gRPC is designed to work with a variety of authentication mechanisms, making it easy to safely use gRPC to talk to other systems. You can use our supported mechanisms - SSL/TLS with or without Google token-based authentication - or you can plug in your own authentication system by extending our provided code.

gRPC 设计用于与各种认证机制配合工作,使得通过 gRPC 安全地与其他系统进行通信变得容易。您可以使用我们支持的机制 - SSL/TLS(可选使用 Google 基于令牌的认证) - 或者通过扩展我们提供的代码来插入自己的认证系统。

gRPC also provides a simple authentication API that lets you provide all the necessary authentication information as Credentials when creating a channel or making a call.

gRPC 还提供了一个简单的认证 API,允许您在创建通道或进行调用时提供所有必要的认证信息作为 Credentials

Supported auth mechanisms 支持的认证机制

The following authentication mechanisms are built-in to gRPC:

以下认证机制已内置到 gRPC 中:

  • SSL/TLS: gRPC has SSL/TLS integration and promotes the use of SSL/TLS to authenticate the server, and to encrypt all the data exchanged between the client and the server. Optional mechanisms are available for clients to provide certificates for mutual authentication.
  • SSL/TLS: gRPC 具有 SSL/TLS 集成,并推广使用 SSL/TLS 对服务器进行身份验证,并对客户端和服务器之间交换的所有数据进行加密。对于客户端提供证书以进行互相认证,可提供可选机制。
  • ALTS: gRPC supports ALTS as a transport security mechanism, if the application is running on Google Cloud Platform (GCP). For details, see one of the following language-specific pages: ALTS in C++, ALTS in Go, ALTS in Java, ALTS in Python.
  • ALTS: 如果应用程序在 Google Cloud Platform (GCP) 上运行,gRPC 支持 ALTS 作为传输安全机制。有关详细信息,请参阅以下特定语言的页面:C++ 中的 ALTSGo 中的 ALTSJava 中的 ALTSPython 中的 ALTS
  • Token-based authentication with Google: gRPC provides a generic mechanism (described below) to attach metadata based credentials to requests and responses. Additional support for acquiring access tokens (typically OAuth2 tokens) while accessing Google APIs through gRPC is provided for certain auth flows: you can see how this works in our code examples below. In general this mechanism must be used as well as SSL/TLS on the channel - Google will not allow connections without SSL/TLS, and most gRPC language implementations will not let you send credentials on an unencrypted channel.
  • 使用 Google 的基于令牌的身份验证: gRPC 提供了一种通用机制(下文描述),用于将基于元数据的凭据附加到请求和响应中。在通过 gRPC 访问 Google API 时,还针对某些认证流程提供了获取访问令牌(通常是 OAuth2 令牌)的额外支持:您可以在下面的代码示例中看到其工作原理。通常,此机制必须与通道上的 SSL/TLS 一同使用:Google 不允许没有 SSL/TLS 的连接,而且大多数 gRPC 语言实现也不允许您在未加密的通道上发送凭据。

Warning 警告

Google credentials should only be used to connect to Google services. Sending a Google issued OAuth2 token to a non-Google service could result in this token being stolen and used to impersonate the client to Google services.

Google 凭据应仅用于连接到 Google 服务。将由 Google 发行的 OAuth2 令牌发送到非 Google 服务可能导致该令牌被窃取并用于冒充客户端访问 Google 服务。

Authentication API 认证 API

gRPC provides a simple authentication API based around the unified concept of Credentials objects, which can be used when creating an entire gRPC channel or an individual call.

gRPC 提供了一个简单的认证 API,基于 Credentials 对象的统一概念,可以在创建整个 gRPC 通道或单个调用时使用。

Credential types 凭据类型

Credentials can be of two types:

凭据可以分为两种类型:

  • Channel credentials, which are attached to a Channel, such as SSL credentials.
  • Call credentials, which are attached to a call (or ClientContext in C++).
  • 通道凭据,附加到 Channel 上的凭据,例如 SSL 凭据。
  • 调用凭据,附加到调用(或 C++ 中的 ClientContext)上的凭据。

You can also combine these in a CompositeChannelCredentials, allowing you to specify, for example, SSL details for the channel along with call credentials for each call made on the channel. A CompositeChannelCredentials associates a ChannelCredentials and a CallCredentials to create a new ChannelCredentials. The result will send the authentication data associated with the composed CallCredentials with every call made on the channel.

​ 您还可以将二者组合成 CompositeChannelCredentials,例如在为通道指定 SSL 详细信息的同时,为通道上发出的每个调用附加调用凭据。CompositeChannelCredentials 将一个 ChannelCredentials 和一个 CallCredentials 关联起来,创建出新的 ChannelCredentials;其结果是,在该通道上发出的每个调用都会随之发送与所组合的 CallCredentials 相关联的身份验证数据。

For example, you could create a ChannelCredentials from an SslCredentials and an AccessTokenCredentials. The result when applied to a Channel would send the appropriate access token for each call on this channel.

例如,您可以使用 SslCredentialsAccessTokenCredentials 创建一个 ChannelCredentials。将其应用于通道时,结果将为该通道上的每个调用发送适当的访问令牌。

Individual CallCredentials can also be composed using CompositeCallCredentials. The resulting CallCredentials when used in a call will trigger the sending of the authentication data associated with the two CallCredentials.

还可以使用 CompositeCallCredentials 组合单独的 CallCredentials。在调用中使用生成的 CallCredentials 将触发与这两个 CallCredentials 相关联的身份验证数据的发送。

Using client-side SSL/TLS 使用客户端 SSL/TLS

Now let’s look at how Credentials work with one of our supported auth mechanisms. This is the simplest authentication scenario, where a client just wants to authenticate the server and encrypt all data. The example is in C++, but the API is similar for all languages: you can see how to enable SSL/TLS in more languages in our Examples section below.

现在让我们看一下 Credentials 如何与我们支持的一种认证机制配合工作。这是最简单的认证场景,其中客户端只想对服务器进行认证并加密所有数据。以下示例是用 C++ 编写的,但 API 在所有语言中的用法类似:您可以在下面的示例部分中查看如何在其他语言中启用 SSL/TLS。

// Create a default SSL ChannelCredentials object. 创建一个默认的 SSL ChannelCredentials 对象。
auto channel_creds = grpc::SslCredentials(grpc::SslCredentialsOptions());
// Create a channel using the credentials created in the previous step. 使用前面创建的凭据创建一个通道。
auto channel = grpc::CreateChannel(server_name, channel_creds);
// Create a stub on the channel. 在通道上创建一个存根。
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
// Make actual RPC calls on the stub. 在存根上进行实际的 RPC 调用。
grpc::Status s = stub->sayHello(&context, *request, response);

For advanced use cases such as modifying the root CA or using client certs, the corresponding options can be set in the SslCredentialsOptions parameter passed to the factory method.

对于诸如修改根 CA 或使用客户端证书之类的高级用例,可以在传递给工厂方法的 SslCredentialsOptions 参数中设置相应的选项。

Note 注意

Non-POSIX-compliant systems (such as Windows) need to specify the root certificates in SslCredentialsOptions, since the defaults are only configured for POSIX filesystems.

非 POSIX 兼容的系统(例如 Windows)需要在 SslCredentialsOptions 中指定根证书,因为默认值仅针对 POSIX 文件系统进行了配置。

Using Google token-based authentication 使用基于 Google 令牌的身份验证

gRPC applications can use a simple API to create a credential that works for authentication with Google in various deployment scenarios. Again, our example is in C++ but you can find examples in other languages in our Examples section.

gRPC 应用程序可以使用简单的 API 创建适用于在各种部署场景下与 Google 进行身份验证的凭据。以下示例再次以 C++ 为例,但您可以在我们的示例部分中找到其他语言的示例。

auto creds = grpc::GoogleDefaultCredentials();
// Create a channel, stub and make RPC calls (same as in the previous example) 创建一个通道、存根并进行 RPC 调用(与前面的示例相同)
auto channel = grpc::CreateChannel(server_name, creds);
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
grpc::Status s = stub->sayHello(&context, *request, response);

This channel credentials object works for applications using Service Accounts as well as for applications running in Google Compute Engine (GCE). In the former case, the service account’s private keys are loaded from the file named in the environment variable GOOGLE_APPLICATION_CREDENTIALS. The keys are used to generate bearer tokens that are attached to each outgoing RPC on the corresponding channel.

此通道凭据对象适用于使用服务账号以及在 Google Compute Engine (GCE) 上运行的应用程序。对于前一种情况,服务账号的私钥将从环境变量 GOOGLE_APPLICATION_CREDENTIALS 中指定的文件中加载。这些密钥用于生成附加到相应通道上的每个传出 RPC 的承载令牌。

For applications running in GCE, a default service account and corresponding OAuth2 scopes can be configured during VM setup. At run-time, this credential handles communication with the authentication systems to obtain OAuth2 access tokens and attaches them to each outgoing RPC on the corresponding channel.

对于在 GCE 上运行的应用程序,可以在虚拟机设置期间配置默认服务账号和相应的 OAuth2 范围。在运行时,该凭据与身份验证系统进行通信,获取 OAuth2 访问令牌,并将其附加到相应通道上的每个传出 RPC。

Extending gRPC to support other authentication mechanisms 扩展 gRPC 支持其他身份验证机制

The Credentials plugin API allows developers to plug in their own type of credentials. This consists of:

Credentials 插件 API 允许开发人员插入自己的凭据类型。这包括以下内容:

  • The MetadataCredentialsPlugin abstract class, which contains the pure virtual GetMetadata method that needs to be implemented by a sub-class created by the developer.
  • The MetadataCredentialsFromPlugin function, which creates a CallCredentials from the MetadataCredentialsPlugin.
  • MetadataCredentialsPlugin 抽象类,其中包含由开发人员创建的子类需要实现的纯虚拟 GetMetadata 方法。
  • MetadataCredentialsFromPlugin 函数,从 MetadataCredentialsPlugin 创建一个 CallCredentials

Here is example of a simple credentials plugin which sets an authentication ticket in a custom header.

以下是一个简单的凭据插件示例,它在自定义标头中设置身份验证票据。

class MyCustomAuthenticator : public grpc::MetadataCredentialsPlugin {
 public:
  MyCustomAuthenticator(const grpc::string& ticket) : ticket_(ticket) {}

  grpc::Status GetMetadata(
      grpc::string_ref service_url, grpc::string_ref method_name,
      const grpc::AuthContext& channel_auth_context,
      std::multimap<grpc::string, grpc::string>* metadata) override {
    metadata->insert(std::make_pair("x-custom-auth-ticket", ticket_));
    return grpc::Status::OK;
  }

 private:
  grpc::string ticket_;
};

auto call_creds = grpc::MetadataCredentialsFromPlugin(
    std::unique_ptr<grpc::MetadataCredentialsPlugin>(
        new MyCustomAuthenticator("super-secret-ticket")));

A deeper integration can be achieved by plugging in a gRPC credentials implementation at the core level. gRPC internals also allow switching out SSL/TLS with other encryption mechanisms.

通过在核心级别插入 gRPC 凭据实现,可以实现更深入的集成。gRPC 内部还允许将 SSL/TLS 替换为其他加密机制。

Examples 示例

These authentication mechanisms will be available in all gRPC’s supported languages. The following sections demonstrate how authentication and authorization features described above appear in each language: more languages are coming soon.

这些身份验证机制将适用于所有 gRPC 支持的语言。以下部分展示了每种语言中上述身份验证和授权功能的示例:更多语言即将推出。

Go

Base case - no encryption or authentication 基本情况 - 无加密或身份验证

Client:

conn, _ := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
// error handling omitted
client := pb.NewGreeterClient(conn)
// ...

Server:

s := grpc.NewServer()
lis, _ := net.Listen("tcp", "localhost:50051")
// error handling omitted
s.Serve(lis)
With server authentication SSL/TLS 使用服务器身份验证 SSL/TLS

Client:

creds, _ := credentials.NewClientTLSFromFile(certFile, "")
conn, _ := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(creds))
// error handling omitted 省略错误处理
client := pb.NewGreeterClient(conn)
// ...

Server:

creds, _ := credentials.NewServerTLSFromFile(certFile, keyFile)
s := grpc.NewServer(grpc.Creds(creds))
lis, _ := net.Listen("tcp", "localhost:50051")
// error handling omitted 省略错误处理
s.Serve(lis)
Authenticate with Google 使用 Google 进行身份验证
pool, _ := x509.SystemCertPool()
// error handling omitted 省略错误处理
creds := credentials.NewClientTLSFromCert(pool, "")
perRPC, _ := oauth.NewServiceAccountFromFile("service-account.json", scope)
conn, _ := grpc.Dial(
	"greeter.googleapis.com",
	grpc.WithTransportCredentials(creds),
	grpc.WithPerRPCCredentials(perRPC),
)
// error handling omitted 省略错误处理
client := pb.NewGreeterClient(conn)
// ...

Ruby

Base case - no encryption or authentication 基本情况 - 无加密或身份验证
stub = Helloworld::Greeter::Stub.new('localhost:50051', :this_channel_is_insecure)
...
With server authentication SSL/TLS 使用服务器身份验证 SSL/TLS
creds = GRPC::Core::ChannelCredentials.new(load_certs)  # load_certs typically loads a CA roots file   - load_certs 通常加载 CA 根证书文件
stub = Helloworld::Greeter::Stub.new('myservice.example.com', creds)
Authenticate with Google 使用 Google 进行身份验证
require 'googleauth'  # from http://www.rubydoc.info/gems/googleauth/0.1.0
...
ssl_creds = GRPC::Core::ChannelCredentials.new(load_certs)  # load_certs typically loads a CA roots file - load_certs 通常加载 CA 根证书文件
authentication = Google::Auth.get_application_default()
call_creds = GRPC::Core::CallCredentials.new(authentication.updater_proc)
combined_creds = ssl_creds.compose(call_creds)
stub = Helloworld::Greeter::Stub.new('greeter.googleapis.com', combined_creds)

C++

Base case - no encryption or authentication 基本情况 - 无加密或身份验证
auto channel = grpc::CreateChannel("localhost:50051", InsecureChannelCredentials());
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
...
With server authentication SSL/TLS 使用服务器身份验证 SSL/TLS
auto channel_creds = grpc::SslCredentials(grpc::SslCredentialsOptions());
auto channel = grpc::CreateChannel("myservice.example.com", channel_creds);
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
...
Authenticate with Google 使用 Google 进行身份验证
auto creds = grpc::GoogleDefaultCredentials();
auto channel = grpc::CreateChannel("greeter.googleapis.com", creds);
std::unique_ptr<Greeter::Stub> stub(Greeter::NewStub(channel));
...

Python

Base case - No encryption or authentication 基本情况 - 无加密或身份验证
import grpc
import helloworld_pb2

channel = grpc.insecure_channel('localhost:50051')
stub = helloworld_pb2.GreeterStub(channel)
With server authentication SSL/TLS 使用服务器身份验证 SSL/TLS

Client:

import grpc
import helloworld_pb2

with open('roots.pem', 'rb') as f:
    creds = grpc.ssl_channel_credentials(f.read())
channel = grpc.secure_channel('myservice.example.com:443', creds)
stub = helloworld_pb2.GreeterStub(channel)

Server:

import grpc
import helloworld_pb2
from concurrent import futures

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
with open('key.pem', 'rb') as f:
    private_key = f.read()
with open('chain.pem', 'rb') as f:
    certificate_chain = f.read()
server_credentials = grpc.ssl_server_credentials( ( (private_key, certificate_chain), ) )
# Adding GreeterServicer to server omitted
server.add_secure_port('myservice.example.com:443', server_credentials)
server.start()
# Server sleep omitted
Authenticate with Google using a JWT 使用 JWT 向 Google 进行身份验证
import grpc
import helloworld_pb2

from google import auth as google_auth
from google.auth import jwt as google_auth_jwt
from google.auth.transport import grpc as google_auth_transport_grpc

credentials, _ = google_auth.default()
jwt_creds = google_auth_jwt.OnDemandCredentials.from_signing_credentials(
    credentials)
channel = google_auth_transport_grpc.secure_authorized_channel(
    jwt_creds, None, 'greeter.googleapis.com:443')
stub = helloworld_pb2.GreeterStub(channel)
Authenticate with Google using an Oauth2 token 使用 OAuth2 令牌向 Google 进行身份验证
import grpc
import helloworld_pb2

from google import auth as google_auth
from google.auth.transport import grpc as google_auth_transport_grpc
from google.auth.transport import requests as google_auth_transport_requests

credentials, _ = google_auth.default(scopes=(scope,))
request = google_auth_transport_requests.Request()
channel = google_auth_transport_grpc.secure_authorized_channel(
    credentials, request, 'greeter.googleapis.com:443')
stub = helloworld_pb2.GreeterStub(channel)
With server authentication SSL/TLS and a custom header with token 使用服务器身份验证 SSL/TLS 和带有令牌的自定义标头

Client:

import grpc
import helloworld_pb2

class GrpcAuth(grpc.AuthMetadataPlugin):
    def __init__(self, key):
        self._key = key

    def __call__(self, context, callback):
        callback((('rpc-auth-header', self._key),), None)

with open('path/to/root-cert', 'rb') as fh:
    root_cert = fh.read()

channel = grpc.secure_channel(
    'myservice.example.com:443',
    grpc.composite_channel_credentials(
        grpc.ssl_channel_credentials(root_cert),
        grpc.metadata_call_credentials(
            GrpcAuth('access_key')
        )
    )
)

stub = helloworld_pb2.GreeterStub(channel)

Server:

from concurrent import futures

import grpc
import helloworld_pb2

class AuthInterceptor(grpc.ServerInterceptor):
    def __init__(self, key):
        self._valid_metadata = ('rpc-auth-header', key)

        def deny(_, context):
            context.abort(grpc.StatusCode.UNAUTHENTICATED, 'Invalid key')

        self._deny = grpc.unary_unary_rpc_method_handler(deny)

    def intercept_service(self, continuation, handler_call_details):
        meta = handler_call_details.invocation_metadata

        if meta and meta[0] == self._valid_metadata:
            return continuation(handler_call_details)
        else:
            return self._deny

server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=10),
    interceptors=(AuthInterceptor('access_key'),)
)
with open('key.pem', 'rb') as f:
    private_key = f.read()
with open('chain.pem', 'rb') as f:
    certificate_chain = f.read()
server_credentials = grpc.ssl_server_credentials( ( (private_key, certificate_chain), ) )
# Adding GreeterServicer to server omitted
server.add_secure_port('myservice.example.com:443', server_credentials)
server.start()
# Server sleep omitted

Java

Base case - no encryption or authentication 基本情况 - 无加密或身份验证
ManagedChannel channel = Grpc.newChannelBuilder(
        "localhost:50051", InsecureChannelCredentials.create())
    .build();
GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);
With server authentication SSL/TLS 使用服务器身份验证 SSL/TLS

In Java we recommend that you use netty-tcnative with BoringSSL when using gRPC over TLS. You can find details about installing and using netty-tcnative and other required libraries for both Android and non-Android Java in the gRPC Java Security documentation.

在 Java 中,我们建议您在使用 TLS 的 gRPC 时使用 netty-tcnative 和 BoringSSL。您可以在 gRPC Java Security 文档中找到有关安装和使用 netty-tcnative 以及其他所需库的详细信息,适用于 Android 和非 Android Java。

To enable TLS on a server, a certificate chain and private key need to be specified in PEM format. Such private key should not be using a password. The order of certificates in the chain matters: more specifically, the certificate at the top has to be the host CA, while the one at the very bottom has to be the root CA. The standard TLS port is 443, but we use 8443 below to avoid needing extra permissions from the OS.

要在服务器上启用 TLS,需要以 PEM 格式指定证书链和私钥。此类私钥不应使用密码。证书链中证书的顺序很重要:具体来说,顶部的证书必须是主机 CA,而底部的证书必须是根 CA。标准的 TLS 端口是 443,但为了避免需要额外的操作系统权限,下面使用了 8443。

ServerCredentials creds = TlsServerCredentials.create(certChainFile, privateKeyFile);
Server server = Grpc.newServerBuilderForPort(8443, creds)
    .addService(TestServiceGrpc.bindService(serviceImplementation))
    .build();
server.start();

If the issuing certificate authority is not known to the client then it can be configured using TlsChannelCredentials.newBuilder().

如果客户端不知道颁发证书的机构,则可以使用 TlsChannelCredentials.newBuilder() 进行配置。

On the client side, server authentication with SSL/TLS looks like this:

在客户端上,使用 SSL/TLS 进行服务器身份验证的代码如下:

// With server authentication SSL/TLS
ManagedChannel channel = Grpc.newChannelBuilder(
        "myservice.example.com:443", TlsChannelCredentials.create())
    .build();
GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);

// With server authentication SSL/TLS; custom CA root certificates
ChannelCredentials creds = TlsChannelCredentials.newBuilder()
    .trustManager(new File("roots.pem"))
    .build();
ManagedChannel channel = Grpc.newChannelBuilder("myservice.example.com:443", creds)
    .build();
GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);
Authenticate with Google 使用 Google 进行身份验证

The following code snippet shows how you can call the Google Cloud PubSub API using gRPC with a service account. The credentials are loaded from a key stored in a well-known location or by detecting that the application is running in an environment that can provide one automatically, e.g. Google Compute Engine. While this example is specific to Google and its services, similar patterns can be followed for other service providers.

以下代码片段显示了如何使用 gRPC 调用 Google Cloud PubSub API,并使用服务帐号进行身份验证。凭据从存储在众所周知的位置的密钥加载,或者通过检测应用程序在可以自动提供凭据的环境中运行(例如 Google Compute Engine)进行加载。尽管此示例特定于 Google 及其服务,但可以针对其他服务提供商采用类似的模式。

ChannelCredentials creds = CompositeChannelCredentials.create(
    TlsChannelCredentials.create(),
    MoreCallCredentials.from(GoogleCredentials.getApplicationDefault()));
ManagedChannel channel = ManagedChannelBuilder.forTarget("greeter.googleapis.com", creds)
    .build();
GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);

Node.js

Base case - No encryption/authentication 基本情况 - 无加密/身份验证
var stub = new helloworld.Greeter('localhost:50051', grpc.credentials.createInsecure());
With server authentication SSL/TLS 使用服务器身份验证 SSL/TLS
const root_cert = fs.readFileSync('path/to/root-cert');
const ssl_creds = grpc.credentials.createSsl(root_cert);
const stub = new helloworld.Greeter('myservice.example.com', ssl_creds);
Authenticate with Google 使用 Google 进行身份验证
// Authenticating with Google 使用 Google 进行身份验证
var GoogleAuth = require('google-auth-library'); // from https://www.npmjs.com/package/google-auth-library
...
var ssl_creds = grpc.credentials.createSsl(root_certs);
(new GoogleAuth()).getApplicationDefault(function(err, auth) {
  var call_creds = grpc.credentials.createFromGoogleCredential(auth);
  var combined_creds = grpc.credentials.combineChannelCredentials(ssl_creds, call_creds);
  var stub = new helloworld.Greeter('greeter.googleapis.com', combined_creds);
});
Authenticate with Google using Oauth2 token (legacy approach) 使用 OAuth2 令牌对 Google 进行身份验证(传统方法)
var GoogleAuth = require('google-auth-library'); // from https://www.npmjs.com/package/google-auth-library
...
var ssl_creds = grpc.credentials.createSsl(root_certs); // load_certs typically loads a CA roots file - load_certs 通常加载一个 CA 根证书文件
var scope = 'https://www.googleapis.com/auth/grpc-testing';
(new GoogleAuth()).getApplicationDefault(function(err, auth) {
  if (auth.createScopedRequired()) {
    auth = auth.createScoped(scope);
  }
  var call_creds = grpc.credentials.createFromGoogleCredential(auth);
  var combined_creds = grpc.credentials.combineChannelCredentials(ssl_creds, call_creds);
  var stub = new helloworld.Greeter('greeter.googleapis.com', combined_creds);
});
With server authentication SSL/TLS and a custom header with token 使用服务器身份验证 SSL/TLS 和带有令牌的自定义标头
const rootCert = fs.readFileSync('path/to/root-cert');
const channelCreds = grpc.credentials.createSsl(rootCert);
const metaCallback = (_params, callback) => {
    const meta = new grpc.Metadata();
    meta.add('custom-auth-header', 'token');
    callback(null, meta);
}
const callCreds = grpc.credentials.createFromMetadataGenerator(metaCallback);
const combCreds = grpc.credentials.combineChannelCredentials(channelCreds, callCreds);
const stub = new helloworld.Greeter('myservice.example.com', combCreds);

PHP

Base case - No encryption/authorization 基本情况 - 无加密/授权
$client = new helloworld\GreeterClient('localhost:50051', [
    'credentials' => Grpc\ChannelCredentials::createInsecure(),
]);
With server authentication SSL/TLS 使用服务器身份验证 SSL/TLS
$client = new helloworld\GreeterClient('myservice.example.com', [
    'credentials' => Grpc\ChannelCredentials::createSsl(file_get_contents('roots.pem')),
]);
Authenticate with Google 使用 Google 进行身份验证
function updateAuthMetadataCallback($context)
{
    $auth_credentials = ApplicationDefaultCredentials::getCredentials();
    return $auth_credentials->updateMetadata($metadata = [], $context->service_url);
}
$channel_credentials = Grpc\ChannelCredentials::createComposite(
    Grpc\ChannelCredentials::createSsl(file_get_contents('roots.pem')),
    Grpc\CallCredentials::createFromPlugin('updateAuthMetadataCallback')
);
$opts = [
  'credentials' => $channel_credentials
];
$client = new helloworld\GreeterClient('greeter.googleapis.com', $opts);
Authenticate with Google using Oauth2 token (legacy approach) 使用 Google 进行身份验证,使用 OAuth2 令牌(传统方法)
// the environment variable "GOOGLE_APPLICATION_CREDENTIALS" needs to be set
$scope = "https://www.googleapis.com/auth/grpc-testing";
$auth = Google\Auth\ApplicationDefaultCredentials::getCredentials($scope);
$opts = [
  'credentials' => Grpc\ChannelCredentials::createSsl(file_get_contents('roots.pem')),
  'update_metadata' => $auth->getUpdateMetadataFunc(),
];
$client = new helloworld\GreeterClient('greeter.googleapis.com', $opts);

Dart

Base case - no encryption or authentication 基本情况 - 无加密或身份验证
final channel = new ClientChannel('localhost',
      port: 50051,
      options: const ChannelOptions(
          credentials: const ChannelCredentials.insecure()));
final stub = new GreeterClient(channel);
With server authentication SSL/TLS 使用服务器身份验证 SSL/TLS
// Load a custom roots file. 加载自定义的根证书文件。
final trustedRoot = new File('roots.pem').readAsBytesSync();
final channelCredentials =
    new ChannelCredentials.secure(certificates: trustedRoot);
final channelOptions = new ChannelOptions(credentials: channelCredentials);
final channel = new ClientChannel('myservice.example.com',
    options: channelOptions);
final client = new GreeterClient(channel);
Authenticate with Google 使用 Google 进行身份验证
// Uses publicly trusted roots by default. 默认情况下使用公开信任的根证书。
final channel = new ClientChannel('greeter.googleapis.com');
final serviceAccountJson =
     new File('service-account.json').readAsStringSync();
final credentials = new JwtServiceAccountAuthenticator(serviceAccountJson);
final client =
    new GreeterClient(channel, options: credentials.toCallOptions);
Authenticate a single RPC call 对单个 RPC 调用进行身份验证
// Uses publicly trusted roots by default. 默认情况下使用公开信任的根证书。
final channel = new ClientChannel('greeter.googleapis.com');
final client = new GreeterClient(channel);
...
final serviceAccountJson =
     new File('service-account.json').readAsStringSync();
final credentials = new JwtServiceAccountAuthenticator(serviceAccountJson);
final response =
    await client.sayHello(request, options: credentials.toCallOptions);

5.2 - 性能基准测试

Benchmarking 性能基准测试

gRPC is designed to support high-performance open-source RPCs in many languages. This page describes performance benchmarking tools, scenarios considered by tests, and the testing infrastructure.

gRPC旨在支持许多语言中高性能的开源RPC。本页面介绍了性能基准测试工具、测试中考虑的场景以及测试基础架构。

Overview 概述

gRPC is designed for both high-performance and high-productivity design of distributed applications. Continuous performance benchmarking is a critical part of the gRPC development workflow. Multi-language performance tests run every few hours against the master branch, and these numbers are reported to a dashboard for visualization.

gRPC旨在实现高性能和高生产力的分布式应用程序设计。持续的性能基准测试是gRPC开发工作流程的关键部分。每隔几个小时,多语言性能测试会针对主干分支运行,并将这些数据报告给仪表板进行可视化显示。

Performance testing design 性能测试设计

Each language implements a performance testing worker that implements a gRPC WorkerService. This service directs the worker to act as either a client or a server for the actual benchmark test, represented as BenchmarkService. That service has two methods:

每种语言都实现了一个性能测试工作器,它实现了一个gRPC WorkerService。该服务指示工作器在实际的基准测试中充当客户端或服务器,表示为BenchmarkService。该服务有两个方法:

  • UnaryCall – a unary RPC of a simple request that specifies the number of bytes to return in the response.
  • StreamingCall – a streaming RPC that allows repeated ping-pongs of request and response messages akin to the UnaryCall.
  • UnaryCall - 一个简单请求的一元RPC,该请求指定要在响应中返回的字节数。
  • StreamingCall - 一个流式 RPC,允许像 UnaryCall 一样重复进行请求与响应消息的 ping-pong 交互。

gRPC performance testing worker diagram

These workers are controlled by a driver that takes as input a scenario description (in JSON format) and an environment variable specifying the host:port of each worker process.

这些工作器由一个驱动程序控制,该驱动程序以JSON格式的场景描述和一个环境变量作为输入,该环境变量指定每个工作器进程的主机:端口。

Languages under test 测试的语言

The following languages have continuous performance testing as both clients and servers at master:

以下语言在主干分支上都具有连续的性能测试,既作为客户端也作为服务器:

  • C++
  • Java
  • Go
  • C#
  • Node.js
  • Python
  • Ruby

In addition to running as both the client-side and server-side of performance tests, all languages are tested as clients against a C++ server, and as servers against a C++ client. This test aims to provide the current upper bound of performance for a given language’s client or server implementation without testing the other side.

除了同时作为性能测试的客户端和服务器两端外,所有语言都会作为客户端针对C++服务器进行测试,并作为服务器针对C++客户端进行测试。此测试旨在在不测试另一端的情况下,为给定语言的客户端或服务器实现提供当前性能的上限。

Although PHP or mobile environments do not support a gRPC server (which is needed for our performance tests), their client-side performance can be benchmarked using a proxy WorkerService written in another language. This code is implemented for PHP but is not yet in continuous testing mode.

虽然PHP或移动环境不支持gRPC服务器(这是我们性能测试所需的),但可以使用另一种语言编写的代理WorkerService来对其客户端性能进行基准测试。这段代码已经针对PHP实现,但尚未处于持续测试模式。

Scenarios under test 测试的场景

There are several important scenarios under test and displayed in the dashboards above, including the following:

上述仪表板中进行了几个重要的测试场景,包括以下内容:

  • Contentionless latency – the median and tail response latencies seen with only 1 client sending a single message at a time using StreamingCall.
  • QPS – the messages/second rate when there are 2 clients and a total of 64 channels, each of which has 100 outstanding messages at a time sent using StreamingCall.
  • Scalability (for selected languages) – the number of messages/second per server core.
  • 无竞争延迟 - 使用StreamingCall,只有一个客户端发送一条消息时的中位数和尾部响应延迟。
  • QPS(每秒处理消息数) - 使用StreamingCall,当存在2个客户端和总共64个通道时,每个通道每次发送100个未完成的消息时的消息/秒率。
  • 可扩展性(对于选择的语言)- 每个服务器核心的每秒处理消息数。
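The latency and QPS figures above can be illustrated with a short sketch. This is not the real gRPC benchmark driver — just a self-contained example (function name and numbers are made up) of how median/tail latency and messages-per-second fall out of raw per-message timings:

```python
import statistics

def summarize(latencies_ms, duration_s):
    """Compute median latency, nearest-rank p99 tail latency, and QPS
    from per-message latencies recorded over duration_s seconds."""
    ordered = sorted(latencies_ms)
    median = statistics.median(ordered)
    p99_index = min(len(ordered) - 1, int(len(ordered) * 0.99))
    p99 = ordered[p99_index]
    qps = len(ordered) / duration_s  # messages per second
    return median, p99, qps

# Four messages observed over 2 seconds (made-up numbers).
median, p99, qps = summarize([1.0, 1.2, 1.1, 9.5], duration_s=2.0)
```

Note how a single slow message dominates the p99 tail while barely moving the median — which is why the dashboards report both.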

Most performance testing is using secure communication and protobufs. Some C++ tests additionally use insecure communication and the generic (non-protobuf) API to display peak performance. Additional scenarios may be added in the future.

大多数性能测试使用安全通信和protobuf。一些C++测试还使用不安全的通信和通用(非protobuf)API以显示峰值性能。未来可能会添加其他场景。

Testing infrastructure 测试基础架构

All performance benchmarks are run in our dedicated GKE cluster, where each benchmark worker (a client or a server) gets scheduled to different GKE node (and each GKE node is a separate GCE VM) in one of our worker pools. The source code for the benchmarking framework we use is publicly available in the test-infra github repository.

所有性能基准测试都在我们专用的GKE集群中运行,其中每个基准测试工作器(客户端或服务器)都会被调度到我们某个工作池中的不同GKE节点上(每个GKE节点都是一个单独的GCE VM)。我们使用的基准测试框架的源代码可在test-infra GitHub存储库中公开获取。

Most test instances are 8-core systems, and these are used for both latency and QPS measurement. For C++ and Java, we additionally support QPS testing on 32-core systems. All QPS tests use 2 identical client machines for each server, to make sure that QPS measurement is not client-limited.

大多数测试实例都是8核系统,用于延迟和QPS测量。对于C++和Java,我们还支持在32核系统上进行QPS测试。所有QPS测试使用2台相同的客户机进行每个服务器的测试,以确保QPS测量不受客户端限制。

5.3 - 压缩

Compression 压缩

How to compress the data sent over the wire while using gRPC.

如何在使用gRPC时压缩通过网络发送的数据。

Overview 概述

Compression is used to reduce the amount of bandwidth used when communicating between peers and can be enabled or disabled based on call or message level for all languages. For some languages, it is also possible to control compression settings at the channel level. Different languages also support different compression algorithms, including a customized compressor.

压缩用于在对等方之间通信时减少带宽使用量,并且可以基于调用或消息级别在所有语言中启用或禁用。对于某些语言,还可以在通道级别上控制压缩设置。不同的语言还支持不同的压缩算法,包括自定义的压缩器。

Compression Method Asymmetry Between Peers 对等方之间的压缩方法不对称性

gRPC allows asymmetrically compressed communication, whereby a response may be compressed differently than the request, or not compressed at all. A gRPC peer may choose to respond using a different compression method to that of the request, including not performing any compression, regardless of channel and RPC settings (for example, if compression would result in small or negative gains).

gRPC允许不对称压缩通信,其中响应可以与请求以不同的方式进行压缩,或者根本不进行压缩。不论通道和RPC设置如何,gRPC对等方可以选择使用与请求不同的压缩方法来响应,包括不执行任何压缩(例如,如果压缩会导致较小或负面的收益)。

If a client message is compressed by an algorithm that is not supported by a server, the message will result in an UNIMPLEMENTED error status on the server. The server will include a grpc-accept-encoding header to the response which specifies the algorithms that the server accepts.

如果客户端消息使用服务器不支持的算法进行压缩,该消息将在服务器上产生UNIMPLEMENTED错误状态。服务器会在响应中包含一个grpc-accept-encoding头,指定服务器接受的算法。

If the client message is compressed using one of the algorithms from the grpc-accept-encoding header and an UNIMPLEMENTED error status is returned from the server, the cause of the error won’t be related to compression.

如果客户端消息使用grpc-accept-encoding头中的算法之一进行压缩,并且从服务器返回UNIMPLEMENTED错误状态,则错误的原因与压缩无关。

Note that a peer may choose to not disclose all the encodings it supports. However, if it receives a message compressed in an undisclosed but supported encoding, it will include said encoding in the response’s grpc-accept-encoding header.

注意,对等方可能选择不披露其支持的所有编码。但是,如果它接收到使用未披露但受支持的编码进行压缩的消息,则会在响应的grpc-accept-encoding头中包含该编码。

For every message a server is requested to compress using an algorithm it knows the client doesn’t support (as indicated by the last grpc-accept-encoding header received from the client), it will send the message uncompressed.

对于服务器被请求使用其知道客户端不支持的算法进行压缩的每个消息,服务器将以未压缩的形式发送该消息,这是根据从客户端接收到的最后一个grpc-accept-encoding头指示的。
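The negotiation rules in the paragraphs above can be condensed into a small decision function. This is only an illustration of the described behavior — names are hypothetical, not the actual gRPC implementation:

```python
SERVER_SUPPORTED = {"identity", "gzip"}

def respond(request_encoding, client_accept_encoding):
    """Apply the grpc-accept-encoding rules described above.
    Returns (status, response_encoding)."""
    if request_encoding not in SERVER_SUPPORTED:
        # Unsupported request compression: fail with UNIMPLEMENTED and
        # advertise (via grpc-accept-encoding) what the server does accept.
        return "UNIMPLEMENTED", "identity"
    if "gzip" in client_accept_encoding:
        return "OK", "gzip"
    # Asked to compress with an algorithm the client did not advertise:
    # send the response uncompressed instead.
    return "OK", "identity"
```

For example, `respond("br", {"gzip"})` models a client compressing with an algorithm the server does not support.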

Specific Disabling of Compression 明确禁用压缩

If the user requests to disable compression, the next message will be sent uncompressed. This is instrumental in preventing BEAST and CRIME attacks. This applies to both the unary and streaming cases.

如果用户请求禁用压缩,下一条消息将以未压缩的形式发送。这对于防止BEASTCRIME攻击至关重要。这适用于一元和流式的情况。

Language guides and examples 语言指南和示例

语言示例文档
C++C++ 示例C++ 文档
GoGo 示例Go 文档
JavaJava 示例Java 文档
PythonPython 示例Python 文档

Additional Resources 其他资源

5.4 - 自定义负载均衡策略

Custom Load Balancing Policies 自定义负载均衡策略

https://grpc.io/docs/guides/custom-load-balancing/

Explains how custom load balancing policies can help optimize load balancing under unique circumstances.

解释了如何使用自定义负载均衡策略在特定情况下优化负载均衡。

Overview 概述

One of the key features of gRPC is load balancing, which allows requests from clients to be distributed across multiple servers. This helps prevent any one server from becoming overloaded and allows the system to scale up by adding more servers.

gRPC的关键功能之一是负载均衡,它允许来自客户端的请求分布到多个服务器上。这有助于防止任何一个服务器过载,并允许通过添加更多服务器来扩展系统。

A gRPC load balancing policy is given a list of server IP addresses by the name resolver. The policy is responsible for maintaining connections (subchannels) to the servers and picking a connection to use when an RPC is sent.

gRPC负载均衡策略由名称解析器提供一个服务器IP地址列表。该策略负责维护与服务器的连接(子通道),并在发送RPC时选择要使用的连接。

Implementing Your Own Policy 实现自定义策略

By default the pick_first policy will be used. This policy actually does no load balancing but just tries each address it gets from the name resolver and uses the first one it can connect to. By updating the gRPC service config you can also switch to using round_robin that connects to every address it gets and rotates through the connected backends for each RPC. There are also some other load balancing policies available, but the exact set varies by language. If the built-in policies do not meet your needs you can also implement your own custom policy.

默认情况下,将使用pick_first策略。该策略实际上不进行负载均衡,只是尝试连接名称解析器获取的每个地址,并使用其中第一个可连接的地址。通过更新gRPC服务配置,还可以切换到使用round_robin策略,该策略连接获取到的每个地址,并为每个RPC在已连接的后端之间进行轮询。还提供了其他一些负载均衡策略,但具体的可用策略因语言而异。如果内置的策略无法满足您的需求,您还可以实现自定义策略。

This involves implementing a load balancer interface in the language you are using. At a high level, you will have to:

这涉及在您使用的语言中实现一个负载均衡器接口。在高层上,您需要:

  • Register your implementation in the load balancer registry so that it can be referred to from the service config
  • Parse the JSON configuration object of your implementation. This allows your load balancer to be configured in the service config with any arbitrary JSON you choose to support
  • Manage what backends to maintain a connection with
  • Implement a picker that will choose which backend to connect to when an RPC is made. Note that this needs to be a fast operation as it is on the RPC call path
  • To enable your load balancer, configure it in your service config
  • 在负载均衡器注册表中注册您的实现,以便可以从服务配置中引用它
  • 解析您的实现的JSON配置对象。这允许您的负载均衡器在服务配置中以您选择支持的任意JSON进行配置
  • 管理要与之保持连接的后端
  • 实现一个picker,在进行RPC调用时选择要连接的后端。请注意,这必须是一个快速的操作,因为它在RPC调用路径上进行
  • 要启用您的负载均衡器,请在服务配置中进行配置
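The picker step above can be sketched as a toy round-robin picker. The class and its interface are hypothetical — a real implementation must follow the load-balancer API of the language you use:

```python
import itertools

class RoundRobinPicker:
    """Rotates through the connected backends, one pick per RPC."""
    def __init__(self, subchannels):
        self._cycle = itertools.cycle(subchannels)

    def pick(self):
        # Must be a fast operation: it runs on the RPC call path.
        return next(self._cycle)

picker = RoundRobinPicker(["backend-1", "backend-2", "backend-3"])
picks = [picker.pick() for _ in range(4)]  # fourth pick wraps around
```

A custom policy would replace the simple rotation with its own selection logic (e.g. weighted by backend metrics).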

The exact steps vary by language, see the language support section for some concrete examples in your language.

具体的步骤因语言而异,请参阅语言支持部分,了解您所使用的语言中的一些具体示例。


Backend Metrics 后端指标

What if your load balancing policy needs to know what is going on with the backend servers in real-time? For this you can rely on backend metrics. You can have metrics provided to you either in-band, in the backend RPC responses, or out-of-band as separate RPCs from the backends. Standard metrics like CPU and memory utilization are provided but you can also implement your own, custom metrics.

如果您的负载均衡策略需要实时了解后端服务器的情况怎么办?为此,您可以依赖后端指标。指标既可以带内提供(随后端RPC响应返回),也可以带外提供(作为来自后端的单独RPC)。系统提供了CPU和内存利用率等标准指标,您也可以实现自己的自定义指标。

For more information on this, please see the custom backend metrics guide (TBD)

有关详细信息,请参阅自定义后端指标指南(待定)。

Service Mesh 服务网格

If you have a service mesh setup where a central control plane is coordinating the configuration of your microservices, you cannot configure your custom load balancer directly via the service config. But support is provided to do this with the xDS protocol that your control plane uses to communicate with your gRPC clients. Please refer to your control plane documentation to determine how custom load balancing configuration is supported.

如果您设置了一个服务网格,其中一个中央控制平面协调您的微服务的配置,您不能直接通过服务配置来配置您的自定义负载均衡器。但是,通过xDS协议提供了支持,该协议是您的控制平面用于与gRPC客户端通信的。请参阅您的控制平面文档,了解如何支持自定义负载均衡配置。

For more details, please see gRPC proposal A52.

有关详细信息,请参阅gRPC的提案A52

Language Support 语言支持

| 语言 | 示例 | 注释 |
| --- | --- | --- |
| Java | Java 示例 | |
| Go | | 示例和 xDS 支持即将推出 |
| C++ | | 尚未支持 |

5.5 - 截止时间

Deadlines 截止时间

https://grpc.io/docs/guides/deadlines/

Explains how deadlines can be used to effectively deal with unreliable backends.

解释了如何使用截止时间有效地处理不可靠的后端。

Overview 概述

A deadline is used to specify a point in time past which a client is unwilling to wait for a response from a server. This simple idea is very important in building robust distributed systems. Clients that do not wait around unnecessarily and servers that know when to give up processing requests will improve the resource utilization and latency of your system.

截止时间用于指定一个时间点,超过该时间点后客户端将不再等待服务器的响应。这个简单的概念在构建健壮的分布式系统中非常重要。不做不必要等待的客户端,以及知道何时放弃处理请求的服务器,都会改善系统的资源利用率和延迟。

Note that while some language APIs have the concept of a deadline, others use the idea of a timeout. When an API asks for a deadline, you provide a point in time which the request should not go past. A timeout is the max duration of time that the request can take. For simplicity, we will only refer to deadline in this document.

请注意,尽管某些语言的API中有截止时间的概念,但其他语言使用超时的概念。当API要求截止时间时,您提供了请求不应超过的时间点。超时是请求可以花费的最长时间。为了简单起见,本文档中将仅使用截止时间一词。

Deadlines on the Client 客户端的截止时间

By default, gRPC does not set a deadline which means it is possible for a client to end up waiting for a response effectively forever. To avoid this you should always explicitly set a realistic deadline in your clients. To determine the appropriate deadline you would ideally start with an educated guess based on what you know about your system (network latency, server processing time, etc.), validated by some load testing.

默认情况下,gRPC不设置截止时间,这意味着客户端有可能无限期地等待响应。为了避免这种情况,您应该始终在客户端显式设置一个合理的截止时间。为确定适当的截止时间,您应该根据对系统的了解(网络延迟、服务器处理时间等)进行有根据的猜测,并通过一些负载测试进行验证。

If a server has gone past the deadline when processing a request, the client will give up and fail the RPC with the DEADLINE_EXCEEDED status.

如果服务器在处理请求时超过了截止时间,客户端将放弃并以“DEADLINE_EXCEEDED”状态失败该RPC。

Deadlines on the Server 服务器的截止时间

A server might receive requests from a client with an unrealistically short deadline that would not give the server enough time to ever respond in time. This would result in the server just wasting valuable resources and in the worst case scenario, crash the server. A gRPC server deals with this situation by automatically cancelling a call (CANCELLED status) once a deadline set by the client has passed.

服务器可能会接收到具有不切实际短截止时间的客户端请求,这将不给服务器足够的时间来及时响应。这将导致服务器浪费宝贵的资源,并且在最坏的情况下可能导致服务器崩溃。gRPC服务器通过在客户端设置的截止时间过去后自动取消调用(“CANCELLED”状态)来处理这种情况。

Please note that the server application is responsible for stopping any activity it has spawned to service the request. If your application is running a long-running process you should periodically check if the request that initiated it has been cancelled and if so, stop the processing.

请注意,服务器应用程序负责停止为服务请求而产生的任何活动。如果您的应用程序运行了一个长时间运行的进程,您应该定期检查是否已取消发起该进程的请求,如果是,则停止处理。
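The periodic cancellation check described above might look like the following sketch. The `is_cancelled` callback stands in for the server-side RPC context; the real API differs by language (e.g. `context.is_active()` in Python, `Context.current().isCancelled()` in Java):

```python
def handle_long_request(work_items, is_cancelled):
    """Process items one at a time, giving up as soon as the RPC is
    cancelled (e.g. because the client's deadline has passed)."""
    results = []
    for item in work_items:
        if is_cancelled():
            return results, "CANCELLED"  # stop wasting server resources
        results.append(item * 2)         # placeholder unit of work
    return results, "OK"

# Simulate a cancellation arriving before the third item.
flags = iter([False, False, True])
results, status = handle_long_request([1, 2, 3], lambda: next(flags))
```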

Deadline Propagation 截止时间传播

Your server might need to call another server to produce a response. In these cases where your server also acts as a client you would want to honor the deadline set by the original client. Automatically propagating the deadline from an incoming request to an outgoing one is supported by some gRPC implementations. In some languages this behavior needs to be explicitly enabled (e.g. C++) and in others it is enabled by default (e.g. Java and Go). Using this capability lets you avoid the error-prone approach of manually including the deadline for each outgoing RPC.

您的服务器可能需要调用另一个服务器来生成响应。在这种情况下,您的服务器也充当客户端,此时您会希望遵守原始客户端设置的截止时间。一些gRPC实现支持自动将传入请求的截止时间传播到传出请求。在某些语言中,需要显式启用此行为(例如C++),而在其他语言中则默认启用(例如Java和Go)。使用此功能可以避免手动为每个传出RPC设置截止时间这种容易出错的做法。

Since a deadline is a fixed point in time, propagating it as-is to another server can be problematic as the clocks on the two servers might not be synchronized. To address this, gRPC converts the deadline to a timeout, with the already elapsed time deducted. This shields your system from any clock skew issues.

由于截止时间是一个固定的时间点,直接原样将其传播给另一台服务器可能会有问题,因为两台服务器上的时钟可能不同步。为解决这个问题,gRPC会将截止时间转换为超时时间,并从中扣除已经消耗的时间。这样可以使您的系统免受时钟偏差问题的影响。
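The deadline-to-timeout conversion amounts to a one-line calculation (illustrative sketch, not gRPC internals):

```python
def outgoing_timeout(deadline, now):
    """Turn an absolute deadline into the remaining relative budget for
    the next hop, so clock skew between machines cannot distort it."""
    return max(0.0, deadline - now)

# A request arrived with a 300 ms budget; 120 ms has already elapsed,
# so the outgoing RPC is handed a 180 ms timeout.
timeout = outgoing_timeout(deadline=10.300, now=10.120)
```

The receiving server then reconstructs its own absolute deadline from the relative timeout against its local clock.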


语言支持

| 语言 | 示例 |
| --- | --- |
| Java | Java 示例 |
| Go | Go 示例 |
| C++ | |
| Python | Python 示例 |

其他资源

5.6 - 错误处理

Error handling 错误处理

https://grpc.io/docs/guides/error/

How gRPC deals with errors, and gRPC error codes.

介绍了gRPC如何处理错误以及gRPC的错误代码。

Standard error model 标准错误模型

As you’ll have seen in our concepts document and examples, when a gRPC call completes successfully the server returns an OK status to the client (depending on the language the OK status may or may not be directly used in your code). But what happens if the call isn’t successful?

正如您在我们的概念文档和示例中所看到的,当gRPC调用成功完成时,服务器将向客户端返回一个OK状态(根据语言,OK状态可能会或可能不会直接在您的代码中使用)。但是如果调用不成功会发生什么呢?

If an error occurs, gRPC returns one of its error status codes instead, with an optional string error message that provides further details about what happened. Error information is available to gRPC clients in all supported languages.

如果发生错误,gRPC会返回其中一个错误状态码,并可选地提供一个字符串错误消息,该消息提供了更多关于发生的情况的详细信息。所有支持的语言的gRPC客户端都可以获取错误信息。

Richer error model 更丰富的错误模型

The error model described above is the official gRPC error model, is supported by all gRPC client/server libraries, and is independent of the gRPC data format (whether protocol buffers or something else). You may have noticed that it’s quite limited and doesn’t include the ability to communicate error details.

上述的错误模型是官方的gRPC错误模型,由所有gRPC客户端/服务器库支持,并且与gRPC的数据格式(无论是协议缓冲区还是其他格式)无关。您可能已经注意到它相当有限,并且没有包含传递错误详细信息的能力。

If you’re using protocol buffers as your data format, however, you may wish to consider using the richer error model developed and used by Google as described here. This model enables servers to return and clients to consume additional error details expressed as one or more protobuf messages. It further specifies a standard set of error message types to cover the most common needs (such as invalid parameters, quota violations, and stack traces). The protobuf binary encoding of this extra error information is provided as trailing metadata in the response.

然而,如果您正在使用协议缓冲区作为数据格式,您可能希望考虑使用由Google开发和使用的更丰富的错误模型,详见这里。该模型使得服务器能够返回、客户端能够消费以一个或多个protobuf消息表示的额外错误详细信息。它进一步指定了一组标准的错误消息类型,以涵盖最常见的需求(例如无效参数、配额超限和堆栈跟踪)。这些额外错误信息的protobuf二进制编码作为响应中的尾部元数据提供。

This richer error model is already supported in the C++, Go, Java, Python, and Ruby libraries, and at least the grpc-web and Node.js libraries have open issues requesting it. Other language libraries may add support in the future if there’s demand, so check their github repos if interested. Note however that the grpc-core library written in C will not likely ever support it since it is purposely data format agnostic.

这种更丰富的错误模型已经在C++、Go、Java、Python和Ruby库中得到支持,至少grpc-web和Node.js库存在请求支持的问题。如果有需求,其他语言库可能在将来添加支持,所以如果感兴趣的话可以查看它们的GitHub存储库。但请注意,以C语言编写的grpc-core库很可能永远不会支持它,因为它有意地与数据格式无关。

You could use a similar approach (put error details in trailing response metadata) if you’re not using protocol buffers, but you’d likely need to find or develop library support for accessing this data in order to make practical use of it in your APIs.

如果您不使用协议缓冲区,您可以采用类似的方法(将错误详细信息放在尾部的响应元数据中),但您可能需要找到或开发库来访问这些数据,以便在API中实际使用它。

There are important considerations to be aware of when deciding whether to use such an extended error model, however, including:

然而,在决定是否使用此扩展错误模型时,有一些重要的注意事项需要注意,包括:

  • Library implementations of the extended error model may not be consistent across languages in terms of requirements for and expectations of the error details payload
  • Existing proxies, loggers, and other standard HTTP request processors don’t have visibility into the error details and thus wouldn’t be able to leverage them for monitoring or other purposes
  • Additional error detail in the trailers interferes with head-of-line blocking, and will decrease HTTP/2 header compression efficiency due to more frequent cache misses
  • Larger error detail payloads may run into protocol limits (like max headers size), effectively losing the original error
  • 扩展错误模型的库实现在错误详细信息的要求和期望方面可能在不同的语言之间不一致
  • 现有的代理、日志记录器和其他标准HTTP请求处理器无法获取错误详细信息,因此无法利用它们进行监控或其他目的
  • 尾部中的附加错误详细信息会带来队头阻塞(head-of-line blocking)问题,并且由于更频繁的缓存未命中,会降低HTTP/2首部压缩的效率
  • 较大的错误详细信息负载可能会超过协议限制(如最大头大小),从而丢失原始错误信息

Error status codes 错误状态码

Errors are raised by gRPC under various circumstances, from network failures to unauthenticated connections, each of which is associated with a particular status code. The following error status codes are supported in all gRPC languages.

gRPC在各种情况下引发错误,从网络故障到未经身份验证的连接,每种情况都与特定的状态码相关联。以下错误状态码在所有gRPC语言中都受支持。

General errors 通用错误

| 案例 | 状态码 |
| --- | --- |
| 客户端应用程序取消了请求 | GRPC_STATUS_CANCELLED |
| 截止时间在服务器返回状态之前过期 | GRPC_STATUS_DEADLINE_EXCEEDED |
| 服务器上找不到该方法 | GRPC_STATUS_UNIMPLEMENTED |
| 服务器关闭中 | GRPC_STATUS_UNAVAILABLE |
| 服务器抛出异常(或执行了其他操作而不是返回状态码来终止RPC) | GRPC_STATUS_UNKNOWN |

Network failures 网络故障

| 案例 | 状态码 |
| --- | --- |
| 截止时间到期之前未传输任何数据。也适用于在截止时间到期之前传输了一些数据且未检测到其他故障的情况 | GRPC_STATUS_DEADLINE_EXCEEDED |
| 在连接断开之前传输了一些数据(例如,请求元数据已写入TCP连接) | GRPC_STATUS_UNAVAILABLE |

协议错误

| 案例 | 状态码 |
| --- | --- |
| 无法解压缩,但支持压缩算法 | GRPC_STATUS_INTERNAL |
| 客户端使用的压缩机制不被服务器支持 | GRPC_STATUS_UNIMPLEMENTED |
| 流量控制资源限制已达到 | GRPC_STATUS_RESOURCE_EXHAUSTED |
| 流量控制协议违规 | GRPC_STATUS_INTERNAL |
| 解析返回的状态时出错 | GRPC_STATUS_UNKNOWN |
| 未经身份验证:凭据未能获取元数据 | GRPC_STATUS_UNAUTHENTICATED |
| 在授权元数据中设置了无效的主机 | GRPC_STATUS_UNAUTHENTICATED |
| 解析响应协议缓冲区时出错 | GRPC_STATUS_INTERNAL |
| 解析请求协议缓冲区时出错 | GRPC_STATUS_INTERNAL |

Sample code 示例代码

For sample code illustrating how to handle various gRPC errors, see the grpc-errors repo.

有关如何处理各种gRPC错误的示例代码,请参阅grpc-errors存储库。

5.7 - 流量控制

Flow Control 流量控制

https://grpc.io/docs/guides/flow-control/

Explains what flow control is and how you can manually control it.

解释了什么是流量控制,以及如何手动控制流量。

Overview 概述

Flow control is a mechanism to ensure that a receiver of messages does not get overwhelmed by a fast sender. Flow control prevents data loss, improves performance and increases reliability. It applies to streaming RPCs and is not relevant for unary RPCs. By default, gRPC handles the interactions with flow control for you, though some languages allow you to override the default behavior and take explicit control.

流量控制是一种机制,确保消息的接收方不会被快速发送方压垮。流量控制可以防止数据丢失,提高性能和可靠性。它适用于流式RPC,并且对于一元RPC来说不相关。默认情况下,gRPC会为您处理流量控制的交互,但某些语言允许您覆盖默认行为并显式控制。

gRPC utilizes the underlying transport to detect when it is safe to send more data. As data is read on the receiving side, an acknowledgement is returned to the sender letting it know that the receiver has more capacity.

gRPC利用底层传输来检测何时可以安全地发送更多数据。当接收端读取数据后,会向发送方返回确认,告知发送方:接收方已有更多的容量。

As needed, the gRPC framework will wait before returning from a write call. In gRPC, when a value is written to a stream, that does not mean that it has gone out over the network. Rather, that it has been passed to the framework which will now take care of the nitty gritty details of buffering it and sending it to the OS on its way over the network.

根据需要,gRPC框架将在写入调用返回之前等待。在gRPC中,当将值写入流时,并不意味着它已经通过网络发送出去。相反,它已经传递给框架,框架将负责处理细节,对其进行缓冲并将其发送到操作系统以便通过网络发送。
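The buffer-and-acknowledge behavior above can be modeled as a toy credit-based window — a deliberate simplification of HTTP/2 flow control, not gRPC internals:

```python
from collections import deque

class FlowControlledStream:
    """The sender holds a fixed number of credits; each write consumes
    one, and each read on the receiving side returns one to the sender."""
    def __init__(self, window):
        self.credits = window
        self.buffer = deque()

    def write(self, msg):
        if self.credits == 0:
            return False  # the real framework would wait here, not fail
        self.credits -= 1
        self.buffer.append(msg)
        return True

    def read(self):
        msg = self.buffer.popleft()
        self.credits += 1  # the acknowledgement restores sender capacity
        return msg

stream = FlowControlledStream(window=2)
sent = [stream.write(i) for i in range(3)]  # third write exceeds the window
```

Once the receiver reads a message, a credit returns and the sender may write again — which is exactly why a writer that never gets read can stall.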

Note 注意

The flow is the same for writing from a Server to a Client as when a Client writes to a Server

从服务器向客户端写入的流程与客户端向服务器写入的流程相同。


Warning 警告

There is the potential for a deadlock if both the client and server are doing synchronous reads or using manual flow control and both try to do a lot of writing without doing any reads.

如果客户端和服务器都在进行同步读取或使用手动流量控制,并且都试图在不进行任何读取的情况下进行大量写入,可能会导致死锁。

语言支持

| 语言 | 示例 |
| --- | --- |
| Java | Java 示例 |

5.8 - 保持连接

Keepalive 保持连接

https://grpc.io/docs/guides/keepalive/

How to use HTTP/2 PING-based keepalives in gRPC.

如何在 gRPC 中使用基于 HTTP/2 PING 的保持连接。

Overview 概述

HTTP/2 PING-based keepalives are a way to keep an HTTP/2 connection alive even when there is no data being transferred. This is done by periodically sending a PING frame to the other end of the connection. HTTP/2 keepalives can improve performance and reliability of HTTP/2 connections, but it is important to configure the keepalive interval carefully.

基于 HTTP/2 PING 的保持连接是一种在没有数据传输时保持 HTTP/2 连接活跃的方式。通过定期向连接的另一端发送 PING 帧 来实现。HTTP/2 保持连接可以提高 HTTP/2 连接的性能和可靠性,但需要仔细配置保持连接间隔。

Note 注意

There is a related but separate concern called health checking. Health checking allows a server to signal whether a service is healthy while keepalive is only about the connection.

还有一个相关但独立的问题,称为健康检查。健康检查允许服务器表示一个服务是否健康,而保持连接只涉及连接

Background 背景

TCP keepalive is a well-known method of maintaining connections and detecting broken connections. When TCP keepalive is enabled, either side of the connection can send redundant packets. Once ACKed by the other side, the connection will be considered as good. If no ACK is received after repeated attempts, the connection is deemed broken.

TCP keepalive 是一种常用的维护连接和检测断开连接的方法。启用 TCP keepalive 后,连接的任一方都可以发送冗余数据包。一旦得到另一方的确认(ACK),连接将被视为正常。如果经过多次尝试仍未收到确认,连接将被视为断开。

Unlike TCP keepalive, gRPC uses HTTP/2 which provides a mandatory PING frame which can be used to estimate round-trip time, bandwidth-delay product, or test the connection. The interval and retry in TCP keepalive don’t quite apply to PING because the transport is reliable, so they’re replaced with timeout (equivalent to interval * retry) in gRPC PING-based keepalive implementation.

与 TCP keepalive 不同,gRPC 使用的是 HTTP/2 协议,该协议提供了一个强制性的 PING 帧,可以用于估算往返时间、带宽延迟乘积或测试连接。由于传输是可靠的,TCP keepalive 中的间隔和重试并不适用于 PING,因此在 gRPC 的基于 PING 的保持连接实现中,使用超时(等效于间隔 * 重试次数)来取代。
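The timing relationship above reduces to a tiny calculation (example values, not recommended settings):

```python
def detection_time(keepalive_time_s, keepalive_timeout_s):
    """On an otherwise idle connection: a PING goes out keepalive_time_s
    after the last activity, and the connection is declared broken if no
    ACK arrives within keepalive_timeout_s after that."""
    return keepalive_time_s + keepalive_timeout_s

# PING every 60 s, allow 20 s for the ACK: a dead peer is detected
# about 80 s after the last activity.
worst_case = detection_time(keepalive_time_s=60, keepalive_timeout_s=20)
```

Compare with TCP keepalive, where detection takes roughly interval × retries after the first probe; gRPC collapses that into the single timeout.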

Note 注意

It’s not required for service owners to support keepalive. Client authors must coordinate with service owners for whether a particular client-side setting is acceptable. Service owners decide what they are willing to support, including whether they are willing to receive keepalives at all (If the service does not support keepalive, the first few keepalive pings will be ignored, and the server will eventually send a GOAWAY message with debug data equal to the ASCII code for too_many_pings).

服务所有者并非必须支持保持连接。客户端作者必须与服务所有者协商,确定特定的客户端设置是否可接受。服务所有者决定他们愿意支持的内容,包括是否愿意接收保持连接的请求(如果服务不支持保持连接,则前几个保持连接的 PING 将被忽略,服务器最终将发送带有“too_many_pings” ASCII 编码的调试数据的 GOAWAY 消息)。

How configuring keepalive affects a call 配置保持连接如何影响调用

Keepalive is less likely to be triggered for unary RPCs with quick replies. Keepalive is primarily triggered when there is a long-lived RPC, which will fail if the keepalive check fails and the connection is closed.

对于具有快速响应的一元 RPC,不太可能触发保持连接。保持连接主要在存在长时间运行的 RPC 时触发,如果保持连接检查失败并关闭连接,该 RPC 将失败。

For streaming RPCs, if the connection is closed, any in-progress RPCs will fail. If a call is streaming data, the stream will also be closed and any data that has not yet been sent will be lost.

对于流式 RPC,如果连接关闭,任何正在进行中的 RPC 都将失败。如果调用正在流式传输数据,流也将关闭,并且尚未发送的任何数据将丢失。

Warning 警告

To avoid DDoSing, it’s important to take caution when setting the keepalive configurations. Thus, it is recommended to avoid enabling keepalive without calls and for clients to avoid configuring their keepalive much below one minute.

为了避免 DDoS 攻击,请在设置保持连接配置时要谨慎。因此,建议在没有调用的情况下避免启用保持连接,并且客户端应避免将保持连接配置得太短,不要低于一分钟。

Common situations where keepalives can be useful 保持连接可用的常见情况

gRPC HTTP/2 keepalives can be useful in a variety of situations, including but not limited to:

  • When sending data over a long-lived connection which might be considered idle by proxies or load balancers.
  • When the network is less reliable (for example, mobile applications).
  • When using a connection after a long period of inactivity.

Keepalive configuration specification

| Option | Availability | Description | Client default | Server default |
|---|---|---|---|---|
| KEEPALIVE_TIME | Client and Server | The interval, in milliseconds, between PING frames. | INT_MAX (disabled) | 7200000 (2 hours) |
| KEEPALIVE_TIMEOUT | Client and Server | The timeout, in milliseconds, for a PING frame to be acknowledged. If the sender does not receive an acknowledgment within this time, it will close the connection. | 20000 (20 seconds) | 20000 (20 seconds) |
| KEEPALIVE_WITHOUT_CALLS | Client | Whether the client is allowed to send keepalive pings when there are no outstanding streams. | 0 (false) | N/A |
| PERMIT_KEEPALIVE_WITHOUT_CALLS | Server | Whether the server permits keepalive pings from the client when there are no outstanding streams. | N/A | 0 (false) |
| PERMIT_KEEPALIVE_TIME | Server | The minimum time, in milliseconds, that the server allows between receiving successive PING frames when no data/header frames are being sent. | N/A | 300000 (5 minutes) |
| MAX_CONNECTION_IDLE | Server | The maximum time, in milliseconds, that a channel may have no outstanding RPCs, after which the server will close the connection. | N/A | INT_MAX (infinite) |
| MAX_CONNECTION_AGE | Server | The maximum time, in milliseconds, that a channel may exist. | N/A | INT_MAX (infinite) |
| MAX_CONNECTION_AGE_GRACE | Server | The grace period, in milliseconds, after the channel reaches its maximum age. | N/A | INT_MAX (infinite) |

Note

Some languages may provide additional options; please refer to the language examples and additional resources for more details.
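As an illustration, the options above map onto channel arguments in gRPC Python. The sketch below only builds the option lists (the `grpc.*` key names are the documented C-core channel-arg keys, but the numeric values and the commented address are placeholders, not recommendations):

```python
# Client-side keepalive configuration expressed as gRPC Python channel
# options (values are illustrative).
KEEPALIVE_CHANNEL_OPTIONS = [
    # KEEPALIVE_TIME: send a PING every 20 seconds.
    ("grpc.keepalive_time_ms", 20_000),
    # KEEPALIVE_TIMEOUT: close the connection if a PING is not
    # acknowledged within 10 seconds.
    ("grpc.keepalive_timeout_ms", 10_000),
    # KEEPALIVE_WITHOUT_CALLS: 0 (default) means no pings without
    # outstanding streams; set to 1 only if the server permits it.
    ("grpc.keepalive_permit_without_calls", 0),
]

# Server-side counterparts (PERMIT_KEEPALIVE_TIME, MAX_CONNECTION_IDLE,
# MAX_CONNECTION_AGE); values again illustrative.
SERVER_KEEPALIVE_OPTIONS = [
    ("grpc.http2.min_ping_interval_without_data_ms", 300_000),
    ("grpc.max_connection_idle_ms", 600_000),
    ("grpc.max_connection_age_ms", 3_600_000),
]

# Usage (requires grpcio; the target address is a placeholder):
# import grpc
# channel = grpc.insecure_channel("localhost:50051",
#                                 options=KEEPALIVE_CHANNEL_OPTIONS)
```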

Language guides and examples

| Language | Example | Documentation |
|---|---|---|
| C++ | C++ example | C++ documentation |
| Go | Go example | Go documentation |
| Java | Java example | Java documentation |
| Python | Python example | Python documentation |

Additional resources

5.9 - Performance Best Practices

Performance Best Practices

https://grpc.io/docs/guides/performance/

A user guide of both general and language-specific best practices to improve performance.

General

  • Always re-use stubs and channels when possible.

  • Use keepalive pings to keep HTTP/2 connections alive during periods of inactivity to allow initial RPCs to be made quickly without a delay (i.e. C++ channel arg GRPC_ARG_KEEPALIVE_TIME_MS).

  • Use streaming RPCs when handling a long-lived logical flow of data from the client-to-server, server-to-client, or in both directions. Streams can avoid continuous RPC initiation, which includes connection load balancing at the client-side, starting a new HTTP/2 request at the transport layer, and invoking a user-defined method handler on the server side.

    Streams, however, cannot be load balanced once they have started and can be hard to debug for stream failures. They also might increase performance at a small scale but can reduce scalability due to load balancing and complexity, so they should only be used when they provide substantial performance or simplicity benefit to application logic. Use streams to optimize the application, not gRPC.

    Side note: This does not apply to Python (see Python section for details).

  • (Special topic) Each gRPC channel uses 0 or more HTTP/2 connections and each connection usually has a limit on the number of concurrent streams. When the number of active RPCs on the connection reaches this limit, additional RPCs are queued in the client and must wait for active RPCs to finish before they are sent. Applications with high load or long-lived streaming RPCs might see performance issues because of this queueing. There are two possible solutions:

    1. Create a separate channel for each area of high load in the application.
    2. Use a pool of gRPC channels to distribute RPCs over multiple connections (channels must have different channel args to prevent re-use, so define a use-specific channel arg such as channel number).

    Side note: The gRPC team has plans to add a feature to fix these performance issues (see grpc/grpc#21386 for more info), so any solution involving creating multiple channels is a temporary workaround that should eventually not be needed.
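A minimal sketch of workaround 2 in gRPC Python: give each channel a distinct, application-defined channel arg so the runtime does not de-duplicate them, then pick channels round-robin. The `"grpc.pool_index"` key is a hypothetical, use-specific name chosen only to make the option lists differ:

```python
import itertools

def make_pool_options(pool_size):
    """Build pool_size distinct channel-option lists; the differing
    dummy arg prevents re-use of a single underlying channel."""
    return [
        [("grpc.pool_index", i)]  # hypothetical use-specific channel arg
        for i in range(pool_size)
    ]

def round_robin(pool):
    """Return a zero-argument picker that cycles through the pool."""
    cycle = itertools.cycle(pool)
    return lambda: next(cycle)

# Usage with real channels (requires grpcio; address is a placeholder):
# import grpc
# pool = [grpc.insecure_channel("localhost:50051", options=opts)
#         for opts in make_pool_options(4)]
# pick = round_robin(pool)
# channel_for_this_rpc = pick()
```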

C++

  • Do not use Sync API for performance sensitive servers. If performance and/or resource consumption are not concerns, use the Sync API as it is the simplest to implement for low-QPS services.

  • Favor callback API over other APIs for most RPCs, given that the application can avoid all blocking operations or blocking operations can be moved to a separate thread. The callback API is easier to use than the completion-queue async API but is currently slower for truly high-QPS workloads.

  • If having to use the async completion-queue API, the best scalability trade-off is having numcpu threads. The ideal number of completion queues in relation to the number of threads can change over time (as gRPC C++ evolves), but as of gRPC 1.41 (Sept 2021), using 2 threads per completion queue seems to give the best performance.

  • For the async completion-queue API, make sure to register enough server requests for the desired level of concurrency to avoid the server continuously getting stuck in a slow path that results in essentially serial request processing.

  • (Special topic) Enable write batching in streams if message k + 1 does not rely on responses from message k by passing a WriteOptions argument to Write with buffer_hint set:

    ```cpp
    stream_writer->Write(message, WriteOptions().set_buffer_hint());
    ```

  • (Special topic) gRPC::GenericStub can be useful in certain cases when there is high contention / CPU time spent on proto serialization. This class allows the application to directly send raw gRPC::ByteBuffer as data rather than serializing from some proto. This can also be helpful if the same data is being sent multiple times, with one explicit proto-to-ByteBuffer serialization followed by multiple ByteBuffer sends.

Java

  • Use non-blocking stubs to parallelize RPCs.
  • Provide a custom executor that limits the number of threads, based on your workload (cached (default), fixed, forkjoin, etc.).

Python

  • Streaming RPCs create extra threads for receiving and possibly sending the messages, which makes streaming RPCs much slower than unary RPCs in gRPC Python, unlike the other languages supported by gRPC.
  • Using asyncio could improve performance.
  • Using the future API in the sync stack results in the creation of an extra thread. Avoid the future API if possible.
  • (Experimental) An experimental single-threaded unary-stream implementation is available via the SingleThreadedUnaryStream channel option, which can save up to 7% latency per message.

5.10 - Wait-for-Ready

Wait-for-Ready

https://grpc.io/docs/guides/wait-for-ready/

Explains how to configure RPCs to wait for the server to be ready before sending the request.

Overview

Wait-for-Ready is a feature that can be set on a stub, causing its RPCs to wait for the server to become available before sending the request. This allows for robust batch workflows, since transient server problems won't cause failures. The deadline still applies, so the wait will be interrupted if the deadline passes.

When an RPC is created while the channel has failed to connect to the server, without Wait-for-Ready it will immediately return a failure; with Wait-for-Ready it will simply be queued until the connection becomes ready. The default is without Wait-for-Ready.

For detailed semantics see this.

How to use Wait-for-Ready

You can specify for a stub whether or not it should use Wait-for-Ready, which will automatically be passed along when an RPC is created.

Note

The RPC can still fail for other reasons besides the server not being ready, so error handling is still necessary.
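In gRPC Python, for example, Wait-for-Ready can be passed per call. The sketch below only builds the call keyword arguments; the commented usage requires grpcio, and the stub, method, and address names are hypothetical:

```python
# Per-call settings: queue the RPC until the channel is READY instead
# of failing immediately, but still bound the wait with a deadline.
CALL_KWARGS = {
    "wait_for_ready": True,  # wait for the connection instead of failing fast
    "timeout": 10.0,         # deadline in seconds; interrupts the wait
}

# Usage (requires grpcio and a generated stub; names are placeholders):
# import grpc
# channel = grpc.insecure_channel("localhost:50051")
# stub = EchoStub(channel)                # hypothetical generated stub
# try:
#     reply = stub.Echo(request, **CALL_KWARGS)
# except grpc.RpcError:
#     ...  # the RPC can still fail for other reasons; handle errors here
```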

The following shows the sequence of events that occur, when a client sends a message to a server, based upon channel state and whether or not Wait-for-Ready is set.

[Figure: sequence of events when a client sends a message, by channel state and Wait-for-Ready setting]

The following is a state-based view:

[Figure: state-based view of Wait-for-Ready behavior]

Alternatives

  • Loop (with exponential backoff) until the RPC stops returning transient failures.
    • For efficiency, this could be combined with implementing an onReady handler (in languages that support it).
  • Accept failures that might have been avoided by waiting, because you want to fail fast.
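The first alternative can be sketched as a plain retry loop with exponential backoff. This is pure Python: the `call` parameter stands in for any function that raises on transient failure (such as a stub method raising `grpc.RpcError`), and the constants are illustrative:

```python
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Retry `call` with exponential backoff until it succeeds or
    attempts are exhausted; re-raises the last failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Delay doubles each attempt, capped at max_delay.
            time.sleep(min(max_delay, base_delay * (2 ** attempt)))

# Usage against a gRPC stub (requires grpcio; names are placeholders):
# reply = retry_with_backoff(lambda: stub.Echo(request, timeout=5.0))
```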

Language support

| Language | Example |
|---|---|
| Java | Java example |
| Go | Go example |
| Python | Python example |