Fix a race where an asynchronous server.Serve() invoked in a
goroutine races with an almost immediate server.Shutdown().
If Shutdown() finishes its locked closing of listeners before
Serve() gets around to adding the new one, Serve() will sit stuck
forever in l.Accept(), unless the caller closes the listener
in addition to calling Shutdown().
This is probably almost impossible to trigger in real life,
but some of the unit tests, which run the server and client
in the same process, occasionally do trigger this. Then, if
the test tries to verify a final ErrServerClosed error from
Serve() after Shutdown(), it gets stuck forever.
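For illustration, a minimal sketch of the pattern that exposes the race, written against the exported ttrpc Server API (NewServer, Serve, Shutdown, ErrServerClosed) with an arbitrary example socket path; with the fix, Serve() returns ErrServerClosed instead of hanging:

```go
package main

import (
	"context"
	"errors"
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	ctx := context.Background()
	l, err := net.Listen("unix", "/tmp/ttrpc-race-example.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv, err := ttrpc.NewServer()
	if err != nil {
		log.Fatal(err)
	}

	// Serve() in a goroutine racing with an almost immediate Shutdown().
	errCh := make(chan error, 1)
	go func() { errCh <- srv.Serve(ctx, l) }()
	if err := srv.Shutdown(ctx); err != nil {
		log.Fatal(err)
	}
	// With the race fixed, this receive completes with ErrServerClosed
	// rather than blocking forever in l.Accept().
	if err := <-errCh; !errors.Is(err, ttrpc.ErrServerClosed) {
		log.Fatalf("expected ErrServerClosed, got %v", err)
	}
}
```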
Signed-off-by: Krisztian Litkey <krisztian.litkey@intel.com>
Unwrap io.EOF and io.ErrUnexpectedEOF in the case of a message read
error, which would otherwise lead to leaked server connections.
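A minimal sketch of the unwrapping check (the helper name is illustrative):

```go
package ttrpc

import (
	"errors"
	"io"
)

// isEOF reports whether err is, or wraps, io.EOF or io.ErrUnexpectedEOF,
// so a closed client connection is treated as a normal disconnect and
// the server-side connection is cleaned up rather than leaked.
func isEOF(err error) bool {
	return errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF)
}
```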
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
Simplify the close-idle-connections logic in server shutdown to be more
intuitive. Modify the add-connection logic to check whether the server
has been shut down before adding any new connections. Modify the test
to make all calls before server shutdown.
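A rough sketch of the add-connection check, assuming a done channel closed once shutdown starts (names are illustrative, not the exact ttrpc internals):

```go
package ttrpc

import (
	"errors"
	"net"
	"sync"
)

var errServerClosed = errors.New("ttrpc: server closed")

type server struct {
	mu    sync.Mutex
	done  chan struct{} // closed once Shutdown() begins
	conns map[net.Conn]struct{}
}

// addConnection refuses new connections after shutdown has started, so
// Shutdown() never has to chase connections registered behind its back.
func (s *server) addConnection(c net.Conn) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	select {
	case <-s.done:
		return errServerClosed
	default:
	}
	s.conns[c] = struct{}{}
	return nil
}
```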
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
The ttrpc server sometimes receives `ECONNRESET` rather than `EOF` when
the client has exited, for example when reading from a closed
connection. Handle it properly to avoid goroutine and connection leaks.
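A sketch of the disconnect check on a Unix platform (the helper name is illustrative):

```go
package ttrpc

import (
	"errors"
	"io"
	"syscall"
)

// isClientDisconnect treats ECONNRESET like EOF: both mean the client is
// gone, so the per-connection goroutine should exit and release the
// connection instead of leaking it.
func isClientDisconnect(err error) bool {
	return errors.Is(err, io.EOF) || errors.Is(err, syscall.ECONNRESET)
}
```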
Change-Id: If32711cfc1347dd2da27ca846dd13c3f5af351bb
Signed-off-by: liyuxuan.darfux <liyuxuan.darfux@bytedance.com>
Within a connection, streams were only ever added to the stream map,
never deleted. Delete the stream from the map when replying with the
last data, and make the delete operation safe for concurrent use.
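A minimal sketch of the concurrency-safe delete (types and names are illustrative):

```go
package ttrpc

import "sync"

type stream struct{} // stand-in for the real stream type

type conn struct {
	mu      sync.Mutex
	streams map[uint32]*stream
}

// deleteStream removes a finished stream so the map no longer grows for
// the lifetime of the connection; the mutex makes it safe to call
// concurrently with stream creation.
func (c *conn) deleteStream(id uint32) {
	c.mu.Lock()
	delete(c.streams, id)
	c.mu.Unlock()
}
```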
Signed-off-by: wllenyj <wllenyj@linux.alibaba.com>
Implementation of the 1.2 protocol with support for streaming. Provides
the client and server interfaces for implementing services with
streaming.
Unary behavior is mostly unchanged and avoids extra stream tracking just
for unary calls. Streaming calls are tracked to route data to the
appropriate stream as it is received.
Stricter stream ID handling, disallowing unexpected re-use of stream
IDs.
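As an illustration of the stricter ID handling, a sketch that rejects re-used stream IDs by requiring them to strictly increase per connection (not the actual implementation):

```go
package ttrpc

import "fmt"

// streamTracker records the highest stream ID seen on a connection and
// rejects any attempt to re-use an ID.
type streamTracker struct {
	lastStreamID uint32
}

func (t *streamTracker) checkStreamID(id uint32) error {
	if id <= t.lastStreamID {
		return fmt.Errorf("stream id %d has already been used", id)
	}
	t.lastStreamID = id
	return nil
}
```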
Signed-off-by: Derek McGowan <derek@mcg.dev>
This copies the codes and status packages from grpc, as they are the
only references to the grpc project from ttrpc. This will help ensure
that API-breaking changes in grpc do not affect ttrpc.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Return if client is gone, log all other errors
Co-authored-by: Giuseppe Capizzi <gcapizzi@pivotal.io>
Signed-off-by: Georgi Sabev <georgethebeatle@gmail.com>
Currently the EOF that the server gets when a client connection is
closed is being ignored, which means that the goroutine that is
processing the client connection will never exit. If it never exits,
it will never close the underlying unix socket, and this leads to a
file descriptor leak.
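The intended shape of the per-connection loop, as a sketch (function names are illustrative):

```go
package ttrpc

import (
	"errors"
	"io"
	"log"
	"net"
)

// handleConn exits when the client closes its end (the read returns
// io.EOF), so the deferred Close releases the underlying unix socket
// file descriptor.
func handleConn(conn net.Conn, readMessage func(net.Conn) ([]byte, error), process func([]byte)) {
	defer conn.Close()
	for {
		msg, err := readMessage(conn)
		if err != nil {
			if !errors.Is(err, io.EOF) {
				log.Printf("ttrpc: connection error: %v", err)
			}
			return // exiting the goroutine lets the deferred Close run
		}
		process(msg)
	}
}
```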
Signed-off-by: Georgi Sabev <georgethebeatle@gmail.com>
Adds a new field to the `Request` type which specifies a timeout (in
nanoseconds) for the request. This is propagated on method dispatch as a
context timeout.
There was some discussion here on supporting a broader "metadata" field
(similar to grpc) that could be used for other things, but we ended up
with a dedicated field because it is lighter weight and we expect it to
be used pretty heavily as is. Metadata may be added in the future, but
it is not necessary for timeouts.
We also discussed using a deadline vs. a timeout in the request and decided
to go with a timeout in order to deal with potential clock skew between
the client and server. This also has the side-effect of eliminating the
protocol/wire overhead from the request timeout.
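A sketch of how the nanosecond timeout can be applied on dispatch (the field and helper names are illustrative):

```go
package ttrpc

import (
	"context"
	"time"
)

// withRequestTimeout converts the request's timeout-in-nanoseconds field
// into a context timeout before the method handler is invoked.
func withRequestTimeout(ctx context.Context, timeoutNano int64) (context.Context, context.CancelFunc) {
	if timeoutNano == 0 {
		return ctx, func() {} // no timeout requested
	}
	return context.WithTimeout(ctx, time.Duration(timeoutNano))
}
```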
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
To gracefully handle scenarios where the connection is closed or the
client is closed, we now set the final error to `ErrClosed`. Callers
can use `errors.Cause` to detect this condition.
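For example, a caller can detect the condition like this (assuming the github.com/pkg/errors Cause helper):

```go
package example

import (
	"github.com/containerd/ttrpc"
	"github.com/pkg/errors"
)

// clientClosed reports whether a call failed because the client or its
// connection was already closed.
func clientClosed(err error) bool {
	return errors.Cause(err) == ttrpc.ErrClosed
}
```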
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Because of issues with glibc, using the `os/user` package can cause
problems when calling `user.Current()`. Neither the Go maintainers nor
the glibc developers could be bothered to fix it, so we have to work
around it by calling the uid and gid functions directly. This is
probably better because we don't actually use much of the data provided
in the `user.User` struct.
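The workaround amounts to this (a sketch; the real code may resolve these at a different point):

```go
package example

import "os"

// Resolve uid/gid once, without going through the cgo-backed os/user
// lookups that trip over the glibc issue.
var (
	uid = os.Getuid()
	gid = os.Getgid()
)
```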
This required some refactoring to have better control over when the uid
and gid are resolved. Rather than checking the current user on every
connection, we now resolve it once at initialization. To test that this
provided an improvement in performance, a benchmark was added.
Unfortunately, this exposed a regression in the performance of unix
sockets in Go when `(*UnixConn).File` is called. The underlying culprit
of this performance regression is still at large.
The following open issues describe the underlying problem in more
detail:
https://github.com/golang/go/issues/13470
https://sourceware.org/bugzilla/show_bug.cgi?id=19341
In better news, I now have an entire herd of shaved yaks.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Because ttrpc can be used with abstract sockets, it is critical to
ensure that only certain users can connect to the unix socket. This is
of particular interest in the primary use case of containerd, where a
shim may run as root and any user can connect.
With this, we get a few nice features. The first is the concept of a
`Handshaker` that allows one to intercept each connection and replace it
with one of their own. This enables credential checks and other
measures, such as TLS. The second is that servers now support configuration. This
allows one to inject a handshaker for each connection. Other options
will be added in the future.
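An illustrative sketch of the handshaker idea; the interface shape and names here are assumptions for the example, not necessarily the exact ttrpc API:

```go
package example

import (
	"context"
	"net"
)

// Handshaker intercepts each accepted connection before it is served.
// It may replace the connection (for example, wrapping it in TLS) or
// reject it after a credential check.
type Handshaker interface {
	Handshake(ctx context.Context, conn net.Conn) (net.Conn, error)
}

// noopHandshaker admits every connection unchanged; a real handshaker
// would check SO_PEERCRED on the unix socket or perform a TLS handshake.
type noopHandshaker struct{}

func (noopHandshaker) Handshake(_ context.Context, conn net.Conn) (net.Conn, error) {
	return conn, nil
}
```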
Signed-off-by: Stephen J Day <stephen.day@docker.com>
The separate request and response flows opened up a nasty race condition
where waiters could find themselves either blocked or receiving errant
errors. The result was low performance and inadvertent busy waits. This
refactors the client to funnel every request through a single channel
into the main client loop, eliminating the race.
The reason for the original design was to allow a sender to control
request and response individually to make unit testing easier. The unit
test has now been refactored to use a channel to ensure that requests
are serviced on graceful shutdown.
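A highly simplified sketch of the single-entry-point design, with illustrative types:

```go
package example

// call pairs an encoded request with the channel its waiter blocks on.
type call struct {
	req  []byte
	resp chan callResponse // buffered with capacity 1
}

type callResponse struct {
	payload []byte
	err     error
}

// run is the main client loop: every request enters through the one
// calls channel, so no other goroutine touches connection state and the
// request/response race disappears.
func run(calls <-chan *call, roundTrip func([]byte) ([]byte, error)) {
	for c := range calls {
		payload, err := roundTrip(c.req)
		c.resp <- callResponse{payload: payload, err: err}
	}
}
```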
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This change increases the maximum message size to 4MB to be in line
with the grpc default. The buffer management approach has been changed
to use a pool to minimize allocations and keep memory usage low.
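A sketch of the pooling approach (constants and helpers are illustrative):

```go
package example

import "sync"

const messageLengthMax = 4 << 20 // 4MB, matching the grpc default

// buffers reuses message buffers across requests to keep allocations
// and steady-state memory usage low.
var buffers = sync.Pool{
	New: func() interface{} {
		b := make([]byte, messageLengthMax)
		return &b
	},
}

func getBuffer() *[]byte  { return buffers.Get().(*[]byte) }
func putBuffer(b *[]byte) { buffers.Put(b) }
```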
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This applies logic to correctly Close a server, as well as implements
graceful shutdown. This ensures that inflight requests are not
interrupted and works similarly to the functionality in `net/http`.
This required a fair bit of refactoring around how the connection is
managed. The connection now has an explicit wrapper object, ensuring
that shutdown happens in a coordinated fashion, whether a
forceful close or a graceful shutdown is called.
In addition to the above, hardening around the accept loop has been
added. We now correctly exit on non-temporary errors and debounce the
accept call when encountering repeated errors. This should address some
issues where `SIGTERM` was not honored when dropping into the accept
spin.
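A sketch of the hardened accept loop, following the net/http pattern (names are illustrative):

```go
package example

import (
	"net"
	"time"
)

// acceptLoop exits on non-temporary errors and backs off on repeated
// temporary ones, so a stuck accept spin no longer swallows signals
// like SIGTERM.
func acceptLoop(l net.Listener, handle func(net.Conn)) error {
	var backoff time.Duration
	for {
		conn, err := l.Accept()
		if err != nil {
			if ne, ok := err.(net.Error); ok && ne.Temporary() {
				// Debounce: capped exponential backoff, then retry.
				if backoff == 0 {
					backoff = 5 * time.Millisecond
				} else {
					backoff *= 2
				}
				if backoff > time.Second {
					backoff = time.Second
				}
				time.Sleep(backoff)
				continue
			}
			return err // non-temporary error: exit the loop
		}
		backoff = 0
		go handle(conn)
	}
}
```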
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Following the convention of http2, we now use odd stream ids for client
initiated streams. This makes it easier to tell who initiates the
stream. We enforce the convention on the server-side.
This allows us to upgrade the protocol in the future to have server
initiated streams.
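The server-side check is essentially this (a sketch):

```go
package example

import "fmt"

// Client-initiated streams must use odd IDs, leaving even IDs available
// for server-initiated streams in a future protocol revision.
func validateClientStreamID(id uint32) error {
	if id%2 != 1 {
		return fmt.Errorf("stream id %d is not odd; client-initiated streams must use odd ids", id)
	}
	return nil
}
```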
Signed-off-by: Stephen J Day <stephen.day@docker.com>
With this changeset, ttrpc can now handle multiple outstanding requests
and responses on the same connection without blocking. On the
server-side, we dispatch a goroutine per outstanding request. On the
client side, a management goroutine dispatches responses to blocked
waiters.
The protocol has been changed to support this behavior by including a
"stream id" that can used to identify which request a response belongs
to on the client-side of the connection. With these changes, we should
also be able to support streams in the future.
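A sketch of the client-side routing by stream id (illustrative, not the real client internals):

```go
package example

import "sync"

// waiters routes each incoming response to the goroutine blocked on the
// matching stream ID, allowing many outstanding requests per connection.
type waiters struct {
	mu      sync.Mutex
	pending map[uint32]chan []byte // stream ID -> waiter
}

func newWaiters() *waiters {
	return &waiters{pending: make(map[uint32]chan []byte)}
}

// add registers a waiter for a stream ID and returns the channel the
// caller blocks on.
func (w *waiters) add(streamID uint32) <-chan []byte {
	ch := make(chan []byte, 1)
	w.mu.Lock()
	w.pending[streamID] = ch
	w.mu.Unlock()
	return ch
}

// dispatch is called by the management goroutine for every response
// read off the wire.
func (w *waiters) dispatch(streamID uint32, payload []byte) {
	w.mu.Lock()
	ch, ok := w.pending[streamID]
	delete(w.pending, streamID)
	w.mu.Unlock()
	if ok {
		ch <- payload
	}
}
```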
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Rather than employ the typeurl package, we now generate code to
correctly allocate the incoming types from the caller. As a side-effect
of this activity, the services definitions have been split out into a
separate type that handles the full resolution and dispatch of the
method, including correctly mapping the RPC status.
This work is a precursor to a larger protocol change that will allow us
to handle multiple, concurrent requests.
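A hedged sketch of the dispatch shape the generated code targets: each method receives an unmarshal callback so the generated code allocates the concrete request type itself instead of going through typeurl (names are illustrative):

```go
package example

import "context"

// Method is what generated code registers for each RPC: it allocates the
// concrete request type, unmarshals into it, and invokes the handler.
type Method func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error)

// ServiceDesc groups a service's methods for dispatch by name.
type ServiceDesc struct {
	Methods map[string]Method
}

// Example of what generated registration might look like for a
// hypothetical Ping method.
type PingRequest struct{}
type PingResponse struct{ OK bool }

func pingServiceDesc(handler func(context.Context, *PingRequest) (*PingResponse, error)) ServiceDesc {
	return ServiceDesc{
		Methods: map[string]Method{
			"Ping": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
				var req PingRequest
				if err := unmarshal(&req); err != nil {
					return nil, err
				}
				return handler(ctx, &req)
			},
		},
	}
}
```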
Signed-off-by: Stephen J Day <stephen.day@docker.com>
With this change, we define a simple server and client framework to
start generating code against. We define a simple handler system that
registers handlers back into the server definition.
From here, we can start generating code against the handlers.
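As a sketch, the registration shape might look like this (names are illustrative):

```go
package example

import "context"

// Handler is the simplest possible method handler for the sketch.
type Handler func(ctx context.Context, payload []byte) ([]byte, error)

// Server collects handlers registered back into it by service code.
type Server struct {
	handlers map[string]Handler
}

// Register wires a named method to its handler; generated code calls
// this for every method it emits.
func (s *Server) Register(name string, h Handler) {
	if s.handlers == nil {
		s.handlers = map[string]Handler{}
	}
	s.handlers[name] = h
}
```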
Signed-off-by: Stephen J Day <stephen.day@docker.com>