Use sendLock to guard the entire stream allocation + write-to-wire operation, and streamLock to guard only access to the underlying stream map. This ensures the following:

- We uphold the constraint that new stream IDs on the wire are always increasing, because whoever holds sendLock is guaranteed to get the next stream ID and to be the next to write to the wire.
- Locks are always released in LIFO order, which prevents deadlocks. Taking sendLock before releasing streamLock would mean that if a goroutine blocks writing to the pipe, it can leave another goroutine stuck trying to take sendLock while still holding streamLock. The receiver goroutine would then no longer be able to read responses from the pipe, since it needs to take streamLock when processing a response, ultimately leading to a complete deadlock of the client.

It is reasonable for a server to block writes to the pipe if the client is not reading responses fast enough, so we can't expect writes to never block.

I have repro'd the hang with a simple ttrpc client and server. The client spins up 100 goroutines that spam the server with requests constantly; after a few seconds of running I can see it hang. I set the buffer size for the pipe to 0 to repro more easily, but it would still be possible to hit with a larger buffer size (it may just take a higher volume of requests or larger payloads). I also validated that I no longer see the hang with this fix by leaving the test client/server running for a few minutes. Obviously not 100% conclusive, but before the fix I could get a hang within several seconds of running.

Signed-off-by: Kevin Parsons <kevpar@microsoft.com>
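The following Go sketch illustrates the locking order described above. It is not the actual ttrpc client code; aside from sendLock and streamLock, the type and field names are illustrative placeholders.

```go
// Package lockorder is an illustrative sketch of the locking scheme described
// above; it is not the real ttrpc client implementation.
package lockorder

import (
	"net"
	"sync"
)

// stream stands in for the client's per-call state.
type stream struct {
	id uint32
}

type client struct {
	conn net.Conn

	// sendLock guards stream ID allocation plus the write to the wire.
	sendLock sync.Mutex

	// streamLock guards only the streams map.
	streamLock sync.Mutex
	streams    map[uint32]*stream
	nextID     uint32
}

// createStreamAndSend holds sendLock across the whole allocate+write sequence,
// takes streamLock only around the map update, and releases the locks in LIFO
// order (streamLock first, sendLock last).
func (c *client) createStreamAndSend(payload []byte) (*stream, error) {
	c.sendLock.Lock()
	defer c.sendLock.Unlock() // released last

	// Register the new stream. streamLock is released before the write below,
	// so the receiver goroutine can always take it to dispatch responses even
	// if this sender blocks on the pipe.
	c.streamLock.Lock()
	c.nextID++ // monotonically increasing stream IDs (illustrative)
	id := c.nextID
	s := &stream{id: id}
	c.streams[id] = s
	c.streamLock.Unlock()

	// Holding sendLock across allocation and the write guarantees that stream
	// IDs appear on the wire in increasing order, even if this write blocks.
	if _, err := c.conn.Write(payload); err != nil {
		return nil, err
	}
	return s, nil
}
```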
ttrpc
GRPC for low-memory environments.
The existing grpc-go project requires a lot of memory overhead for importing packages and at runtime. While this is great for many services with low density requirements, this can be a problem when running a large number of services on a single machine or on a machine with a small amount of memory.
Using the same GRPC definitions, this project reduces the binary size and protocol overhead required. We do this by eliding the net/http, net/http2 and grpc packages used by grpc, replacing them with a lightweight framing protocol. The result is smaller binaries that use less resident memory with the same ease of use as GRPC.
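To make "lightweight framing protocol" concrete, here is a rough sketch of a length-prefixed framing layer in Go. It only illustrates the general idea; it is not the actual ttrpc wire format, which is specified in PROTOCOL.md.

```go
// Package framing is an illustrative sketch of a length-prefixed framing
// layer. It is NOT the ttrpc wire format; see PROTOCOL.md for that.
package framing

import (
	"encoding/binary"
	"io"
)

// writeFrame sends a 4-byte big-endian length header followed by the payload.
func writeFrame(w io.Writer, payload []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

// readFrame reads one frame written by writeFrame.
func readFrame(r io.Reader) ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	payload := make([]byte, binary.BigEndian.Uint32(hdr[:]))
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	return payload, nil
}
```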
Please note that while this project supports generating either end of the protocol, the generated service definitions will be incompatible with regular GRPC services, as they do not speak the same protocol.
Protocol
See the protocol specification in PROTOCOL.md.
Usage
Create a gogo vanity binary (see cmd/protoc-gen-gogottrpc/main.go for an example with the ttrpc plugin enabled).

It's recommended to use protobuild to build the protobufs for this project, but this will work with protoc directly, if required.
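For orientation, here is a minimal sketch of standing up a ttrpc server over a unix socket. The socket path is hypothetical, and service registration (normally done via the registration helpers in the generated code) is elided, so treat this as a starting point rather than a complete example.

```go
package main

import (
	"context"
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Create the ttrpc server. Service implementations produced by the
	// protoc-gen-gogottrpc plugin would be registered on it here (elided).
	s, err := ttrpc.NewServer()
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// Hypothetical unix socket path for the example.
	l, err := net.Listen("unix", "/tmp/ttrpc-example.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()

	// Serve blocks until the listener is closed or the server shuts down.
	if err := s.Serve(context.Background(), l); err != nil {
		log.Fatal(err)
	}
}
```

On the client side, ttrpc.NewClient wraps an existing net.Conn, and the generated code is expected to provide a typed client on top of it.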
Differences from GRPC
- The protocol stack has been replaced with a lighter protocol that doesn't require http, http2 and tls.
- The client and server interfaces are identical, whereas GRPC has separate client and server interfaces.
- The Go stdlib context package is used instead.
Status
TODO:
- Add testing under concurrent load to ensure
- Verify connection error handling
Project details
ttrpc is a containerd sub-project, licensed under the Apache 2.0 license. As a containerd sub-project, you will find the:

- Project governance,
- Maintainers,
- and Contributing guidelines

information in our containerd/project repository.