diff --git a/hack/update-vendor.sh b/hack/update-vendor.sh
index 2dd6dcd97..e5c381e08 100755
--- a/hack/update-vendor.sh
+++ b/hack/update-vendor.sh
@@ -23,8 +23,6 @@ set -o pipefail
# TODO(random-liu): Remove this after #106 is resolved.
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"/..
cd ${ROOT}
-echo "Replace invalid imports..."
-find vendor/ -name *.go | xargs sed -i 's/"github.com\/Sirupsen\/logrus"/"github.com\/sirupsen\/logrus"/g'
echo "Sort vendor.conf..."
sort vendor.conf -o vendor.conf
diff --git a/vendor.conf b/vendor.conf
index 37ec7557e..50d247dc7 100644
--- a/vendor.conf
+++ b/vendor.conf
@@ -6,7 +6,7 @@ github.com/containerd/fifo fbfb6a11ec671efbe94ad1c12c2e98773f19e1e6
github.com/containernetworking/cni v0.4.0
github.com/davecgh/go-spew v1.1.0
github.com/docker/distribution b38e5838b7b2f2ad48e06ec4b500011976080621
-github.com/docker/docker v1.13.1
+github.com/docker/docker cc4da8112814cdbb00dbf23370f9ed764383de1f
github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
github.com/docker/spdystream 449fdfce4d962303d702fec724ef0ad181c92528
github.com/emicklei/go-restful ff4f55a206334ef123e4f79bbf348980da81ca46
@@ -23,7 +23,7 @@ github.com/go-openapi/spec 6aced65f8501fe1217321abf0749d354824ba2ff
github.com/go-openapi/swag 1d0bd113de87027671077d3c71eb3ac5d7dbba72
github.com/jpillora/backoff 06c7a16c845dc8e0bf575fafeeca0f5462f5eb4d
github.com/juju/ratelimit 5b9ff866471762aa2ab2dced63c9fb6f53921342
-github.com/kubernetes-incubator/cri-o v0.3
+github.com/kubernetes-incubator/cri-o 63a218a45844fd912f482dc85f9cc149e68e0e57
github.com/mailru/easyjson d5b7844b561a7bc640052f1b935f7b800330d7e0
github.com/Microsoft/go-winio v0.4.4
github.com/opencontainers/go-digest 21dfd564fd89c944783d00d069f33e3e7123c448
diff --git a/vendor/github.com/docker/docker/LICENSE b/vendor/github.com/docker/docker/LICENSE
index 8f3fee627..9c8e20ab8 100644
--- a/vendor/github.com/docker/docker/LICENSE
+++ b/vendor/github.com/docker/docker/LICENSE
@@ -176,7 +176,7 @@
END OF TERMS AND CONDITIONS
- Copyright 2013-2016 Docker, Inc.
+ Copyright 2013-2017 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/docker/docker/NOTICE b/vendor/github.com/docker/docker/NOTICE
index 8a37c1c7b..0c74e15b0 100644
--- a/vendor/github.com/docker/docker/NOTICE
+++ b/vendor/github.com/docker/docker/NOTICE
@@ -1,5 +1,5 @@
Docker
-Copyright 2012-2016 Docker, Inc.
+Copyright 2012-2017 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
diff --git a/vendor/github.com/docker/docker/README.md b/vendor/github.com/docker/docker/README.md
index 0b33bdca0..533d7717d 100644
--- a/vendor/github.com/docker/docker/README.md
+++ b/vendor/github.com/docker/docker/README.md
@@ -1,270 +1,80 @@
-Docker: the container engine [](https://github.com/docker/docker/releases/latest)
-============================
+### Docker users, see [Moby and Docker](https://mobyproject.org/#moby-and-docker) to clarify the relationship between the projects
-Docker is an open source project to pack, ship and run any application
-as a lightweight container.
+### Docker maintainers and contributors, see [Transitioning to Moby](#transitioning-to-moby) for more details
-Docker containers are both *hardware-agnostic* and *platform-agnostic*.
-This means they can run anywhere, from your laptop to the largest
-cloud compute instance and everything in between - and they don't require
-you to use a particular language, framework or packaging system. That
-makes them great building blocks for deploying and scaling web apps,
-databases, and backend services without depending on a particular stack
-or provider.
+The Moby Project
+================
-Docker began as an open-source implementation of the deployment engine which
-powered [dotCloud](http://web.archive.org/web/20130530031104/https://www.dotcloud.com/),
-a popular Platform-as-a-Service. It benefits directly from the experience
-accumulated over several years of large-scale operation and support of hundreds
-of thousands of applications and databases.
+
-
+Moby is an open-source project created by Docker to advance the software containerization movement.
+It provides a “Lego set” of dozens of components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.
-## Security Disclosure
+# Moby
-Security is very important to us. If you have any issue regarding security,
-please disclose the information responsibly by sending an email to
-security@docker.com and not by creating a GitHub issue.
+## Overview
-## Better than VMs
+At the core of Moby is a framework to assemble specialized container systems.
+It provides:
-A common method for distributing applications and sandboxing their
-execution is to use virtual machines, or VMs. Typical VM formats are
-VMware's vmdk, Oracle VirtualBox's vdi, and Amazon EC2's ami. In theory
-these formats should allow every developer to automatically package
-their application into a "machine" for easy distribution and deployment.
-In practice, that almost never happens, for a few reasons:
+- A library of containerized components for all vital aspects of a container system: OS, container runtime, orchestration, infrastructure management, networking, storage, security, build, image distribution, etc.
+- Tools to assemble the components into runnable artifacts for a variety of platforms and architectures: bare metal (both x86 and Arm); executables for Linux, Mac and Windows; VM images for popular cloud and virtualization providers.
+- A set of reference assemblies which can be used as-is, modified, or used as inspiration to create your own.
- * *Size*: VMs are very large which makes them impractical to store
- and transfer.
- * *Performance*: running VMs consumes significant CPU and memory,
- which makes them impractical in many scenarios, for example local
- development of multi-tier applications, and large-scale deployment
- of cpu and memory-intensive applications on large numbers of
- machines.
- * *Portability*: competing VM environments don't play well with each
- other. Although conversion tools do exist, they are limited and
- add even more overhead.
- * *Hardware-centric*: VMs were designed with machine operators in
- mind, not software developers. As a result, they offer very
- limited tooling for what developers need most: building, testing
- and running their software. For example, VMs offer no facilities
- for application versioning, monitoring, configuration, logging or
- service discovery.
+All Moby components are containers, so creating new components is as easy as building a new OCI-compatible container.
-By contrast, Docker relies on a different sandboxing method known as
-*containerization*. Unlike traditional virtualization, containerization
-takes place at the kernel level. Most modern operating system kernels
-now support the primitives necessary for containerization, including
-Linux with [openvz](https://openvz.org),
-[vserver](http://linux-vserver.org) and more recently
-[lxc](https://linuxcontainers.org/), Solaris with
-[zones](https://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc),
-and FreeBSD with
-[Jails](https://www.freebsd.org/doc/handbook/jails.html).
+## Principles
-Docker builds on top of these low-level primitives to offer developers a
-portable format and runtime environment that solves all four problems.
-Docker containers are small (and their transfer can be optimized with
-layers), they have basically zero memory and cpu overhead, they are
-completely portable, and are designed from the ground up with an
-application-centric design.
+Moby is an open project guided by strong principles, but modular, flexible and without too strong an opinion on user experience, so it is open to the community to help set its direction.
+The guiding principles are:
-Perhaps best of all, because Docker operates at the OS level, it can still be
-run inside a VM!
+- Batteries included but swappable: Moby includes enough components to build a fully featured container system, but its modular architecture ensures that most of the components can be swapped out for different implementations.
+- Usable security: Moby will provide secure defaults without compromising usability.
+- Container centric: Moby is built with containers, for running containers.
-## Plays well with others
+With Moby, you should be able to describe all the components of your distributed application, from the high-level configuration files down to the kernel you would like to use, and build and deploy it easily.
-Docker does not require you to buy into a particular programming
-language, framework, packaging system, or configuration language.
+Moby uses [containerd](https://github.com/containerd/containerd) as the default container runtime.
-Is your application a Unix process? Does it use files, tcp connections,
-environment variables, standard Unix streams and command-line arguments
-as inputs and outputs? Then Docker can run it.
+## Audience
-Can your application's build be expressed as a sequence of such
-commands? Then Docker can build it.
+Moby is recommended for anyone who wants to assemble a container-based system. This includes:
-## Escape dependency hell
+- Hackers who want to customize or patch their Docker build
+- System engineers or integrators building a container system
+- Infrastructure providers looking to adapt existing container systems to their environment
+- Container enthusiasts who want to experiment with the latest container tech
+- Open-source developers looking to test their project in a variety of different systems
+- Anyone curious about Docker internals and how it’s built
-A common problem for developers is the difficulty of managing all
-their application's dependencies in a simple and automated way.
+Moby is NOT recommended for:
-This is usually difficult for several reasons:
+- Application developers looking for an easy way to run their applications in containers. We recommend Docker CE instead.
+- Enterprise IT and development teams looking for a ready-to-use, commercially supported container platform. We recommend Docker EE instead.
+- Anyone curious about containers and looking for an easy way to learn. We recommend the [docker.com](https://www.docker.com/) website instead.
- * *Cross-platform dependencies*. Modern applications often depend on
- a combination of system libraries and binaries, language-specific
- packages, framework-specific modules, internal components
- developed for another project, etc. These dependencies live in
- different "worlds" and require different tools - these tools
- typically don't work well with each other, requiring awkward
- custom integrations.
+# Transitioning to Moby
- * *Conflicting dependencies*. Different applications may depend on
- different versions of the same dependency. Packaging tools handle
- these situations with various degrees of ease - but they all
- handle them in different and incompatible ways, which again forces
- the developer to do extra work.
+Docker is transitioning all of its open source collaborations to the Moby project going forward.
+During the transition, all open source activity should continue as usual.
- * *Custom dependencies*. A developer may need to prepare a custom
- version of their application's dependency. Some packaging systems
- can handle custom versions of a dependency, others can't - and all
- of them handle it differently.
+We are proposing the following list of changes:
+- splitting up the engine into more open components
+- removing the docker UI, SDK, etc. to keep them in the Docker org
+- clarifying that the project is not limited to the engine, but extends to the assembly of all the individual components of the Docker platform
+- open-sourcing new tools & components which we currently use to assemble the Docker product, but which could benefit the community
+- defining an open, community-centric governance inspired by the Fedora project (a very successful example of balancing the needs of the community with the constraints of the primary corporate sponsor)
-Docker solves the problem of dependency hell by giving the developer a simple
-way to express *all* their application's dependencies in one place, while
-streamlining the process of assembling them. If this makes you think of
-[XKCD 927](https://xkcd.com/927/), don't worry. Docker doesn't
-*replace* your favorite packaging systems. It simply orchestrates
-their use in a simple and repeatable way. How does it do that? With
-layers.
+-----
-Docker defines a build as running a sequence of Unix commands, one
-after the other, in the same container. Build commands modify the
-contents of the container (usually by installing new files on the
-filesystem), the next command modifies it some more, etc. Since each
-build command inherits the result of the previous commands, the
-*order* in which the commands are executed expresses *dependencies*.
-
-Here's a typical Docker build process:
-
-```bash
-FROM ubuntu:12.04
-RUN apt-get update && apt-get install -y python python-pip curl
-RUN curl -sSL https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
-RUN cd helloflask-master && pip install -r requirements.txt
-```
-
-Note that Docker doesn't care *how* dependencies are built - as long
-as they can be built by running a Unix command in a container.
-
-
-Getting started
-===============
-
-Docker can be installed either on your computer for building applications or
-on servers for running them. To get started, [check out the installation
-instructions in the
-documentation](https://docs.docker.com/engine/installation/).
-
-Usage examples
-==============
-
-Docker can be used to run short-lived commands, long-running daemons
-(app servers, databases, etc.), interactive shell sessions, etc.
-
-You can find a [list of real-world
-examples](https://docs.docker.com/engine/examples/) in the
-documentation.
-
-Under the hood
---------------
-
-Under the hood, Docker is built on the following components:
-
-* The
- [cgroups](https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt)
- and
- [namespaces](http://man7.org/linux/man-pages/man7/namespaces.7.html)
- capabilities of the Linux kernel
-* The [Go](https://golang.org) programming language
-* The [Docker Image Specification](https://github.com/docker/docker/blob/master/image/spec/v1.md)
-* The [Libcontainer Specification](https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md)
-
-Contributing to Docker [](https://godoc.org/github.com/docker/docker)
-======================
-
-| **Master** (Linux) | **Experimental** (Linux) | **Windows** | **FreeBSD** |
-|------------------|----------------------|---------|---------|
-| [Build Status](https://jenkins.dockerproject.org/view/Docker/job/Docker%20Master/) | [Build Status](https://jenkins.dockerproject.org/view/Docker/job/Docker%20Master%20%28experimental%29/) | [Build Status](http://jenkins.dockerproject.org/job/Docker%20Master%20(windows)/) | [Build Status](http://jenkins.dockerproject.org/job/Docker%20Master%20(freebsd)/) |
-
-Want to hack on Docker? Awesome! We have [instructions to help you get
-started contributing code or documentation](https://docs.docker.com/opensource/project/who-written-for/).
-
-These instructions are probably not perfect, please let us know if anything
-feels wrong or incomplete. Better yet, submit a PR and improve them yourself.
-
-Getting the development builds
-==============================
-
-Want to run Docker from a master build? You can download
-master builds at [master.dockerproject.org](https://master.dockerproject.org).
-They are updated with each commit merged into the master branch.
-
-Don't know how to use that super cool new feature in the master build? Check
-out the master docs at
-[docs.master.dockerproject.org](http://docs.master.dockerproject.org).
-
-How the project is run
-======================
-
-Docker is a very, very active project. If you want to learn more about how it is run,
-or want to get more involved, the best place to start is [the project directory](https://github.com/docker/docker/tree/master/project).
-
-We are always open to suggestions on process improvements, and are always looking for more maintainers.
-
-### Talking to other Docker users and contributors
-
-* **Internet Relay Chat (IRC):** IRC is a direct line to our most knowledgeable Docker users; we have both the #docker and #docker-dev group on irc.freenode.net. IRC is a rich chat protocol but it can overwhelm new users; you can search our chat archives. Read our IRC quickstart guide for an easy way to get started.
-* **Docker Community Forums:** The Docker Engine group is for users of the Docker Engine project.
-* **Google Groups:** The docker-dev group is for contributors and other people contributing to the Docker project. You can join this group without a Google account by sending an email to docker-dev+subscribe@googlegroups.com. You'll receive a join-request message; simply reply to the message to confirm your subscription.
-* **Twitter:** You can follow Docker's Twitter feed to get updates on our products. You can also tweet us questions or just share blogs or stories.
-* **Stack Overflow:** Stack Overflow has over 7000 Docker questions listed. We regularly monitor Docker questions and so do many other knowledgeable Docker users.
-
-### Legal
+Legal
+=====
*Brought to you courtesy of our legal counsel. For more context,
-please see the [NOTICE](https://github.com/docker/docker/blob/master/NOTICE) document in this repo.*
+please see the [NOTICE](https://github.com/moby/moby/blob/master/NOTICE) document in this repo.*
-Use and transfer of Docker may be subject to certain restrictions by the
+Use and transfer of Moby may be subject to certain restrictions by the
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
@@ -275,30 +85,6 @@ For more information, please see https://www.bis.doc.gov
Licensing
=========
-Docker is licensed under the Apache License, Version 2.0. See
-[LICENSE](https://github.com/docker/docker/blob/master/LICENSE) for the full
+Moby is licensed under the Apache License, Version 2.0. See
+[LICENSE](https://github.com/moby/moby/blob/master/LICENSE) for the full
license text.
-
-Other Docker Related Projects
-=============================
-There are a number of projects under development that are based on Docker's
-core technology. These projects expand the tooling built around the
-Docker platform to broaden its application and utility.
-
-* [Docker Registry](https://github.com/docker/distribution): Registry
-server for Docker (hosting/delivery of repositories and images)
-* [Docker Machine](https://github.com/docker/machine): Machine management
-for a container-centric world
-* [Docker Swarm](https://github.com/docker/swarm): A Docker-native clustering
-system
-* [Docker Compose](https://github.com/docker/compose) (formerly Fig):
-Define and run multi-container apps
-* [Kitematic](https://github.com/docker/kitematic): The easiest way to use
-Docker on Mac and Windows
-
-If you know of another project underway that should be listed here, please help
-us keep this list up-to-date by submitting a PR.
-
-Awesome-Docker
-==============
-You can find more projects, tools and articles related to Docker on the [awesome-docker list](https://github.com/veggiemonk/awesome-docker). Add your project there.
diff --git a/vendor/github.com/docker/docker/hack/README.md b/vendor/github.com/docker/docker/hack/README.md
new file mode 100644
index 000000000..802395d53
--- /dev/null
+++ b/vendor/github.com/docker/docker/hack/README.md
@@ -0,0 +1,60 @@
+## About
+
+This directory contains a collection of scripts used to build and manage this
+repository. If there are any issues regarding the intention of a particular
+script (or even part of a certain script), please reach out to us.
+It may help us either refine our current scripts, or add on new ones
+that are appropriate for a given use case.
+
+## DinD (dind.sh)
+
+DinD is a wrapper script which allows Docker to be run inside a Docker
+container. DinD requires the container to
+be run with privileged mode enabled.
+
+## Generate Authors (generate-authors.sh)
+
+Generates AUTHORS, a file with all the names and corresponding emails of
+individual contributors. AUTHORS can be found in the home directory of
+this repository.
+
+## Make
+
+There are two make files, each with a different extension. Neither is supposed
+to be called directly; only invoke `make`. Both scripts run inside a Docker
+container.
+
+### make.ps1
+
+- The Windows native build script that uses PowerShell semantics; it is more limited
+than `hack\make.sh`, since it does not provide support for the full set of
+operations provided by the Linux counterpart, `make.sh`. However, `make.ps1`
+does provide support for local Windows development and Windows-to-Windows CI.
+More information can be found within `make.ps1` by the author, @jhowardmsft
+
+### make.sh
+
+- Referenced via `make test` when running tests on a local machine,
+or directly referenced when running tests inside a Docker development container.
+- When running on a local machine, run `make test` to execute all tests found in
+`test`, `test-unit`, `test-integration-cli`, and `test-docker-py` on
+your local machine. The default timeout is set in `make.sh` to 60 minutes
+(`${TIMEOUT:=60m}`), since it currently takes up to an hour to run
+all of the tests.
+- When running inside a Docker development container, `hack/make.sh` does
+not have a single target that runs all the tests. You need to provide a
+single command line with multiple targets that performs the same thing.
+An example referenced from [Run targets inside a development container](https://docs.docker.com/opensource/project/test-and-docs/#run-targets-inside-a-development-container): `root@5f8630b873fe:/go/src/github.com/moby/moby# hack/make.sh dynbinary binary cross test-unit test-integration-cli test-docker-py`
+- For more information related to testing outside the scope of this README,
+refer to
+[Run tests and test documentation](https://docs.docker.com/opensource/project/test-and-docs/)
+
+## Release (release.sh)
+
+Releases any bundles built by `make` to a public AWS S3 bucket.
+For information regarding configuration, please view `release.sh`.
+
+## Vendor (vendor.sh)
+
+A shell script that is a wrapper around Vndr. For information on how to use
+this, please refer to [vndr's README](https://github.com/LK4D4/vndr/blob/master/README.md)
diff --git a/vendor/github.com/docker/docker/hack/integration-cli-on-swarm/README.md b/vendor/github.com/docker/docker/hack/integration-cli-on-swarm/README.md
new file mode 100644
index 000000000..1cea52526
--- /dev/null
+++ b/vendor/github.com/docker/docker/hack/integration-cli-on-swarm/README.md
@@ -0,0 +1,69 @@
+# Integration Testing on Swarm
+
+IT on Swarm allows you to execute integration tests in parallel across a Docker Swarm cluster.
+
+## Architecture
+
+### Master service
+
+ - Works as a funker caller
+ - Calls a worker funker (`-worker-service`) with a chunk of `-check.f` filter strings (passed as a file via `-input` flag, typically `/mnt/input`)
+
+### Worker service
+
+ - Works as a funker callee
+ - Executes an equivalent of `TESTFLAGS=-check.f TestFoo|TestBar|TestBaz ... make test-integration-cli` using the bind-mounted API socket (`docker.sock`)
+
+### Client
+
+ - Controls master and workers via `docker stack`
+ - No need to have a local daemon
+
+Typically, the master and workers are supposed to be running in a cloud environment,
+while the client is supposed to be running on a laptop, e.g. Docker for Mac/Windows.
+
+## Requirement
+
+ - Docker daemon 1.13 or later
+ - Private registry for distributed execution with multiple nodes
+
+## Usage
+
+### Step 1: Prepare images
+
+ $ make build-integration-cli-on-swarm
+
+The following environment variables are known to work in this step:
+
+ - `BUILDFLAGS`
+ - `DOCKER_INCREMENTAL_BINARY`
+
+Note: during the transition into the Moby Project, you might need to create a symbolic link from `$GOPATH/src/github.com/docker/docker` to `$GOPATH/src/github.com/moby/moby`.
+
+### Step 2: Execute tests
+
+ $ ./hack/integration-cli-on-swarm/integration-cli-on-swarm -replicas 40 -push-worker-image YOUR_REGISTRY.EXAMPLE.COM/integration-cli-worker:latest
+
+The following environment variables are known to work in this step:
+
+ - `DOCKER_GRAPHDRIVER`
+ - `DOCKER_EXPERIMENTAL`
+
+#### Flags
+
+Basic flags:
+
+ - `-replicas N`: the number of worker service replicas, i.e. the degree of parallelism.
+ - `-chunks N`: the number of chunks. By default, `chunks` == `replicas`.
+ - `-push-worker-image REGISTRY/IMAGE:TAG`: push the worker image to the registry. Note that if you have only a single node, and hence do not need a private registry, you do not need to specify `-push-worker-image`.
+
+Experimental flags for mitigating makespan nonuniformity:
+
+ - `-shuffle`: Shuffle the test filter strings
+
+Flags for debugging IT on Swarm itself:
+
+ - `-rand-seed N`: the random seed. This flag is useful for deterministic replaying. By default (0), the timestamp is used.
+ - `-filters-file FILE`: the file that contains the `-check.f` strings. By default, the file is automatically generated.
+ - `-dry-run`: skip the actual workload
+ - `-keep-executor`: do not auto-remove executor containers, which is useful for running privileged programs on Swarm
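The way the master divides work among workers can be pictured with a small sketch. The snippet below is illustrative only (it is not the actual implementation and the helper name is made up): it shows how a list of test names might be split into `-chunks` chunks, optionally shuffled with `-rand-seed`, before each chunk is dispatched to a worker replica as a `-check.f` filter.

```go
package main

import (
	"fmt"
	"math/rand"
)

// chunkTests splits test names into n chunks, optionally shuffling them first
// with a fixed seed so a run can be replayed deterministically.
func chunkTests(tests []string, n int, shuffle bool, seed int64) [][]string {
	if shuffle {
		rand.New(rand.NewSource(seed)).Shuffle(len(tests), func(i, j int) {
			tests[i], tests[j] = tests[j], tests[i]
		})
	}
	chunks := make([][]string, n)
	for i, t := range tests {
		chunks[i%n] = append(chunks[i%n], t)
	}
	return chunks
}

func main() {
	tests := []string{"TestFoo", "TestBar", "TestBaz", "TestQux"}
	for i, c := range chunkTests(tests, 2, true, 42) {
		fmt.Printf("chunk %d -> -check.f %v\n", i, c)
	}
}
```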
diff --git a/vendor/github.com/docker/docker/hack/integration-cli-on-swarm/agent/vendor.conf b/vendor/github.com/docker/docker/hack/integration-cli-on-swarm/agent/vendor.conf
new file mode 100644
index 000000000..efd6d6d04
--- /dev/null
+++ b/vendor/github.com/docker/docker/hack/integration-cli-on-swarm/agent/vendor.conf
@@ -0,0 +1,2 @@
+# dependencies specific to worker (i.e. github.com/docker/docker/...) are not vendored here
+github.com/bfirsh/funker-go eaa0a2e06f30e72c9a0b7f858951e581e26ef773
diff --git a/vendor/github.com/docker/docker/pkg/mount/flags_freebsd.go b/vendor/github.com/docker/docker/pkg/mount/flags_freebsd.go
index f166cb2f7..5f76f331b 100644
--- a/vendor/github.com/docker/docker/pkg/mount/flags_freebsd.go
+++ b/vendor/github.com/docker/docker/pkg/mount/flags_freebsd.go
@@ -45,4 +45,5 @@ const (
RELATIME = 0
REMOUNT = 0
STRICTATIME = 0
+ mntDetach = 0
)
diff --git a/vendor/github.com/docker/docker/pkg/mount/flags_linux.go b/vendor/github.com/docker/docker/pkg/mount/flags_linux.go
index dc696dce9..0425d0dd6 100644
--- a/vendor/github.com/docker/docker/pkg/mount/flags_linux.go
+++ b/vendor/github.com/docker/docker/pkg/mount/flags_linux.go
@@ -1,85 +1,87 @@
package mount
import (
- "syscall"
+ "golang.org/x/sys/unix"
)
const (
// RDONLY will mount the file system read-only.
- RDONLY = syscall.MS_RDONLY
+ RDONLY = unix.MS_RDONLY
// NOSUID will not allow set-user-identifier or set-group-identifier bits to
// take effect.
- NOSUID = syscall.MS_NOSUID
+ NOSUID = unix.MS_NOSUID
// NODEV will not interpret character or block special devices on the file
// system.
- NODEV = syscall.MS_NODEV
+ NODEV = unix.MS_NODEV
// NOEXEC will not allow execution of any binaries on the mounted file system.
- NOEXEC = syscall.MS_NOEXEC
+ NOEXEC = unix.MS_NOEXEC
// SYNCHRONOUS will allow I/O to the file system to be done synchronously.
- SYNCHRONOUS = syscall.MS_SYNCHRONOUS
+ SYNCHRONOUS = unix.MS_SYNCHRONOUS
// DIRSYNC will force all directory updates within the file system to be done
// synchronously. This affects the following system calls: create, link,
// unlink, symlink, mkdir, rmdir, mknod and rename.
- DIRSYNC = syscall.MS_DIRSYNC
+ DIRSYNC = unix.MS_DIRSYNC
// REMOUNT will attempt to remount an already-mounted file system. This is
// commonly used to change the mount flags for a file system, especially to
// make a readonly file system writeable. It does not change device or mount
// point.
- REMOUNT = syscall.MS_REMOUNT
+ REMOUNT = unix.MS_REMOUNT
// MANDLOCK will force mandatory locks on a filesystem.
- MANDLOCK = syscall.MS_MANDLOCK
+ MANDLOCK = unix.MS_MANDLOCK
// NOATIME will not update the file access time when reading from a file.
- NOATIME = syscall.MS_NOATIME
+ NOATIME = unix.MS_NOATIME
// NODIRATIME will not update the directory access time.
- NODIRATIME = syscall.MS_NODIRATIME
+ NODIRATIME = unix.MS_NODIRATIME
// BIND remounts a subtree somewhere else.
- BIND = syscall.MS_BIND
+ BIND = unix.MS_BIND
// RBIND remounts a subtree and all possible submounts somewhere else.
- RBIND = syscall.MS_BIND | syscall.MS_REC
+ RBIND = unix.MS_BIND | unix.MS_REC
// UNBINDABLE creates a mount which cannot be cloned through a bind operation.
- UNBINDABLE = syscall.MS_UNBINDABLE
+ UNBINDABLE = unix.MS_UNBINDABLE
// RUNBINDABLE marks the entire mount tree as UNBINDABLE.
- RUNBINDABLE = syscall.MS_UNBINDABLE | syscall.MS_REC
+ RUNBINDABLE = unix.MS_UNBINDABLE | unix.MS_REC
// PRIVATE creates a mount which carries no propagation abilities.
- PRIVATE = syscall.MS_PRIVATE
+ PRIVATE = unix.MS_PRIVATE
// RPRIVATE marks the entire mount tree as PRIVATE.
- RPRIVATE = syscall.MS_PRIVATE | syscall.MS_REC
+ RPRIVATE = unix.MS_PRIVATE | unix.MS_REC
// SLAVE creates a mount which receives propagation from its master, but not
// vice versa.
- SLAVE = syscall.MS_SLAVE
+ SLAVE = unix.MS_SLAVE
// RSLAVE marks the entire mount tree as SLAVE.
- RSLAVE = syscall.MS_SLAVE | syscall.MS_REC
+ RSLAVE = unix.MS_SLAVE | unix.MS_REC
// SHARED creates a mount which provides the ability to create mirrors of
// that mount such that mounts and unmounts within any of the mirrors
// propagate to the other mirrors.
- SHARED = syscall.MS_SHARED
+ SHARED = unix.MS_SHARED
// RSHARED marks the entire mount tree as SHARED.
- RSHARED = syscall.MS_SHARED | syscall.MS_REC
+ RSHARED = unix.MS_SHARED | unix.MS_REC
// RELATIME updates inode access times relative to modify or change time.
- RELATIME = syscall.MS_RELATIME
+ RELATIME = unix.MS_RELATIME
// STRICTATIME allows to explicitly request full atime updates. This makes
// it possible for the kernel to default to relatime or noatime but still
// allow userspace to override it.
- STRICTATIME = syscall.MS_STRICTATIME
+ STRICTATIME = unix.MS_STRICTATIME
+
+ mntDetach = unix.MNT_DETACH
)
diff --git a/vendor/github.com/docker/docker/pkg/mount/flags_unsupported.go b/vendor/github.com/docker/docker/pkg/mount/flags_unsupported.go
index 5564f7b3c..9ed741e3f 100644
--- a/vendor/github.com/docker/docker/pkg/mount/flags_unsupported.go
+++ b/vendor/github.com/docker/docker/pkg/mount/flags_unsupported.go
@@ -27,4 +27,5 @@ const (
STRICTATIME = 0
SYNCHRONOUS = 0
RDONLY = 0
+ mntDetach = 0
)
diff --git a/vendor/github.com/docker/docker/pkg/mount/mount.go b/vendor/github.com/docker/docker/pkg/mount/mount.go
index 66ac4bf47..c9fdfd694 100644
--- a/vendor/github.com/docker/docker/pkg/mount/mount.go
+++ b/vendor/github.com/docker/docker/pkg/mount/mount.go
@@ -1,7 +1,8 @@
package mount
import (
- "time"
+ "sort"
+ "strings"
)
// GetMounts retrieves a list of mounts for the current running process.
@@ -46,29 +47,40 @@ func Mount(device, target, mType, options string) error {
// flags.go for supported option flags.
func ForceMount(device, target, mType, options string) error {
flag, data := parseOptions(options)
- if err := mount(device, target, mType, uintptr(flag), data); err != nil {
- return err
- }
- return nil
+ return mount(device, target, mType, uintptr(flag), data)
}
-// Unmount will unmount the target filesystem, so long as it is mounted.
+// Unmount lazily unmounts a filesystem on supported platforms, otherwise
+// does a normal unmount.
func Unmount(target string) error {
if mounted, err := Mounted(target); err != nil || !mounted {
return err
}
- return ForceUnmount(target)
+ return unmount(target, mntDetach)
}
-// ForceUnmount will force an unmount of the target filesystem, regardless if
-// it is mounted or not.
-func ForceUnmount(target string) (err error) {
- // Simple retry logic for unmount
- for i := 0; i < 10; i++ {
- if err = unmount(target, 0); err == nil {
- return nil
- }
- time.Sleep(100 * time.Millisecond)
+// RecursiveUnmount unmounts the target and all mounts underneath, starting with
+// the deepest mount first.
+func RecursiveUnmount(target string) error {
+ mounts, err := GetMounts()
+ if err != nil {
+ return err
}
- return
+
+ // Make the deepest mount be first
+ sort.Sort(sort.Reverse(byMountpoint(mounts)))
+
+ for i, m := range mounts {
+ if !strings.HasPrefix(m.Mountpoint, target) {
+ continue
+ }
+ if err := Unmount(m.Mountpoint); err != nil && i == len(mounts)-1 {
+ if mounted, err := Mounted(m.Mountpoint); err != nil || mounted {
+ return err
+ }
+ // Ignore errors for submounts and continue trying to unmount others
+ // The final unmount should fail if there are any submounts remaining
+ }
+ }
+ return nil
}
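A minimal caller sketch for the reworked unmount helpers (the paths below are hypothetical): `Unmount` now detaches lazily where the platform supports it, and `RecursiveUnmount` tears down a whole subtree deepest-first.

```go
package main

import (
	"log"

	"github.com/docker/docker/pkg/mount"
)

func main() {
	// Unmount performs a lazy unmount (MNT_DETACH on Linux), so it succeeds
	// even if the mount point is still busy; the kernel finishes the detach
	// once the last reference goes away.
	if err := mount.Unmount("/var/lib/example/scratch"); err != nil {
		log.Fatalf("unmount: %v", err)
	}

	// RecursiveUnmount sorts the mount table deepest-first and unmounts
	// everything under the target, tolerating errors on intermediate submounts.
	if err := mount.RecursiveUnmount("/var/lib/example"); err != nil {
		log.Fatalf("recursive unmount: %v", err)
	}
}
```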
diff --git a/vendor/github.com/docker/docker/pkg/mount/mounter_freebsd.go b/vendor/github.com/docker/docker/pkg/mount/mounter_freebsd.go
index bb870e6f5..814896cc9 100644
--- a/vendor/github.com/docker/docker/pkg/mount/mounter_freebsd.go
+++ b/vendor/github.com/docker/docker/pkg/mount/mounter_freebsd.go
@@ -13,8 +13,9 @@ import "C"
import (
"fmt"
"strings"
- "syscall"
"unsafe"
+
+ "golang.org/x/sys/unix"
)
func allocateIOVecs(options []string) []C.struct_iovec {
@@ -55,5 +56,5 @@ func mount(device, target, mType string, flag uintptr, data string) error {
}
func unmount(target string, flag int) error {
- return syscall.Unmount(target, flag)
+ return unix.Unmount(target, flag)
}
diff --git a/vendor/github.com/docker/docker/pkg/mount/mounter_linux.go b/vendor/github.com/docker/docker/pkg/mount/mounter_linux.go
index dd4280c77..39c36d472 100644
--- a/vendor/github.com/docker/docker/pkg/mount/mounter_linux.go
+++ b/vendor/github.com/docker/docker/pkg/mount/mounter_linux.go
@@ -1,21 +1,57 @@
package mount
import (
- "syscall"
+ "golang.org/x/sys/unix"
)
-func mount(device, target, mType string, flag uintptr, data string) error {
- if err := syscall.Mount(device, target, mType, flag, data); err != nil {
- return err
+const (
+ // ptypes is the set of propagation types.
+ ptypes = unix.MS_SHARED | unix.MS_PRIVATE | unix.MS_SLAVE | unix.MS_UNBINDABLE
+
+ // pflags is the full set of valid flags for a change propagation call.
+ pflags = ptypes | unix.MS_REC | unix.MS_SILENT
+
+ // broflags is the combination of bind and read only
+ broflags = unix.MS_BIND | unix.MS_RDONLY
+)
+
+// isremount returns true if either device name or flags identify a remount request, false otherwise.
+func isremount(device string, flags uintptr) bool {
+ switch {
+ // We treat device "" and "none" as a remount request to provide compatibility with
+ // requests that don't explicitly set MS_REMOUNT such as those manipulating bind mounts.
+ case flags&unix.MS_REMOUNT != 0, device == "", device == "none":
+ return true
+ default:
+ return false
+ }
+}
+
+func mount(device, target, mType string, flags uintptr, data string) error {
+ oflags := flags &^ ptypes
+ if !isremount(device, flags) || data != "" {
+ // Initial call applying all non-propagation flags for mount
+ // or remount with changed data
+ if err := unix.Mount(device, target, mType, oflags, data); err != nil {
+ return err
+ }
}
- // If we have a bind mount or remount, remount...
- if flag&syscall.MS_BIND == syscall.MS_BIND && flag&syscall.MS_RDONLY == syscall.MS_RDONLY {
- return syscall.Mount(device, target, mType, flag|syscall.MS_REMOUNT, data)
+ if flags&ptypes != 0 {
+ // Change the propagation type.
+ if err := unix.Mount("", target, "", flags&pflags, ""); err != nil {
+ return err
+ }
}
+
+ if oflags&broflags == broflags {
+ // Remount the bind to apply read only.
+ return unix.Mount("", target, "", oflags|unix.MS_REMOUNT, "")
+ }
+
return nil
}
func unmount(target string, flag int) error {
- return syscall.Unmount(target, flag)
+ return unix.Unmount(target, flag)
}
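To see how the rewritten `mount()` is exercised, here is a sketch of a read-only bind mount through the package-level `Mount` helper. The paths are hypothetical, and it assumes the option parser in `flags.go` maps `bind` and `ro` to `MS_BIND` and `MS_RDONLY` as before; the read-only bit then lands in `broflags`, which triggers the extra `MS_REMOUNT` pass above.

```go
package main

import (
	"log"

	"github.com/docker/docker/pkg/mount"
)

func main() {
	// Bind /srv/data onto /mnt/ro-data and make the bind read-only. The first
	// unix.Mount call creates the bind; the follow-up remount applies MS_RDONLY.
	if err := mount.Mount("/srv/data", "/mnt/ro-data", "none", "bind,ro"); err != nil {
		log.Fatalf("bind mount: %v", err)
	}
}
```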
diff --git a/vendor/github.com/docker/docker/pkg/mount/mountinfo.go b/vendor/github.com/docker/docker/pkg/mount/mountinfo.go
index e3fc3535e..ff4cc1d86 100644
--- a/vendor/github.com/docker/docker/pkg/mount/mountinfo.go
+++ b/vendor/github.com/docker/docker/pkg/mount/mountinfo.go
@@ -38,3 +38,17 @@ type Info struct {
// VfsOpts represents per super block options.
VfsOpts string
}
+
+type byMountpoint []*Info
+
+func (by byMountpoint) Len() int {
+ return len(by)
+}
+
+func (by byMountpoint) Less(i, j int) bool {
+ return by[i].Mountpoint < by[j].Mountpoint
+}
+
+func (by byMountpoint) Swap(i, j int) {
+ by[i], by[j] = by[j], by[i]
+}
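The ordering trick that `RecursiveUnmount` relies on is easier to see in isolation: sorting mountpoint paths lexicographically and reversing puts children before their parents. A standalone sketch, using plain strings instead of `*Info` values:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	mountpoints := []string{"/a", "/a/b", "/a/b/c", "/d"}
	// Ascending lexicographic order, then reversed: deepest paths come first.
	sort.Sort(sort.Reverse(sort.StringSlice(mountpoints)))
	fmt.Println(mountpoints) // [/d /a/b/c /a/b /a]
}
```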
diff --git a/vendor/github.com/docker/docker/pkg/random/random.go b/vendor/github.com/docker/docker/pkg/random/random.go
deleted file mode 100644
index 70de4d130..000000000
--- a/vendor/github.com/docker/docker/pkg/random/random.go
+++ /dev/null
@@ -1,71 +0,0 @@
-package random
-
-import (
- cryptorand "crypto/rand"
- "io"
- "math"
- "math/big"
- "math/rand"
- "sync"
- "time"
-)
-
-// Rand is a global *rand.Rand instance, which initialized with NewSource() source.
-var Rand = rand.New(NewSource())
-
-// Reader is a global, shared instance of a pseudorandom bytes generator.
-// It doesn't consume entropy.
-var Reader io.Reader = &reader{rnd: Rand}
-
-// copypaste from standard math/rand
-type lockedSource struct {
- lk sync.Mutex
- src rand.Source
-}
-
-func (r *lockedSource) Int63() (n int64) {
- r.lk.Lock()
- n = r.src.Int63()
- r.lk.Unlock()
- return
-}
-
-func (r *lockedSource) Seed(seed int64) {
- r.lk.Lock()
- r.src.Seed(seed)
- r.lk.Unlock()
-}
-
-// NewSource returns math/rand.Source safe for concurrent use and initialized
-// with current unix-nano timestamp
-func NewSource() rand.Source {
- var seed int64
- if cryptoseed, err := cryptorand.Int(cryptorand.Reader, big.NewInt(math.MaxInt64)); err != nil {
- // This should not happen, but worst-case fallback to time-based seed.
- seed = time.Now().UnixNano()
- } else {
- seed = cryptoseed.Int64()
- }
- return &lockedSource{
- src: rand.NewSource(seed),
- }
-}
-
-type reader struct {
- rnd *rand.Rand
-}
-
-func (r *reader) Read(b []byte) (int, error) {
- i := 0
- for {
- val := r.rnd.Int63()
- for val > 0 {
- b[i] = byte(val)
- i++
- if i == len(b) {
- return i, nil
- }
- val >>= 8
- }
- }
-}
diff --git a/vendor/github.com/docker/docker/pkg/signal/signal_linux.go b/vendor/github.com/docker/docker/pkg/signal/signal_linux.go
index d418cbe9e..3594796ca 100644
--- a/vendor/github.com/docker/docker/pkg/signal/signal_linux.go
+++ b/vendor/github.com/docker/docker/pkg/signal/signal_linux.go
@@ -2,6 +2,8 @@ package signal
import (
"syscall"
+
+ "golang.org/x/sys/unix"
)
const (
@@ -11,41 +13,41 @@ const (
// SignalMap is a map of Linux signals.
var SignalMap = map[string]syscall.Signal{
- "ABRT": syscall.SIGABRT,
- "ALRM": syscall.SIGALRM,
- "BUS": syscall.SIGBUS,
- "CHLD": syscall.SIGCHLD,
- "CLD": syscall.SIGCLD,
- "CONT": syscall.SIGCONT,
- "FPE": syscall.SIGFPE,
- "HUP": syscall.SIGHUP,
- "ILL": syscall.SIGILL,
- "INT": syscall.SIGINT,
- "IO": syscall.SIGIO,
- "IOT": syscall.SIGIOT,
- "KILL": syscall.SIGKILL,
- "PIPE": syscall.SIGPIPE,
- "POLL": syscall.SIGPOLL,
- "PROF": syscall.SIGPROF,
- "PWR": syscall.SIGPWR,
- "QUIT": syscall.SIGQUIT,
- "SEGV": syscall.SIGSEGV,
- "STKFLT": syscall.SIGSTKFLT,
- "STOP": syscall.SIGSTOP,
- "SYS": syscall.SIGSYS,
- "TERM": syscall.SIGTERM,
- "TRAP": syscall.SIGTRAP,
- "TSTP": syscall.SIGTSTP,
- "TTIN": syscall.SIGTTIN,
- "TTOU": syscall.SIGTTOU,
- "UNUSED": syscall.SIGUNUSED,
- "URG": syscall.SIGURG,
- "USR1": syscall.SIGUSR1,
- "USR2": syscall.SIGUSR2,
- "VTALRM": syscall.SIGVTALRM,
- "WINCH": syscall.SIGWINCH,
- "XCPU": syscall.SIGXCPU,
- "XFSZ": syscall.SIGXFSZ,
+ "ABRT": unix.SIGABRT,
+ "ALRM": unix.SIGALRM,
+ "BUS": unix.SIGBUS,
+ "CHLD": unix.SIGCHLD,
+ "CLD": unix.SIGCLD,
+ "CONT": unix.SIGCONT,
+ "FPE": unix.SIGFPE,
+ "HUP": unix.SIGHUP,
+ "ILL": unix.SIGILL,
+ "INT": unix.SIGINT,
+ "IO": unix.SIGIO,
+ "IOT": unix.SIGIOT,
+ "KILL": unix.SIGKILL,
+ "PIPE": unix.SIGPIPE,
+ "POLL": unix.SIGPOLL,
+ "PROF": unix.SIGPROF,
+ "PWR": unix.SIGPWR,
+ "QUIT": unix.SIGQUIT,
+ "SEGV": unix.SIGSEGV,
+ "STKFLT": unix.SIGSTKFLT,
+ "STOP": unix.SIGSTOP,
+ "SYS": unix.SIGSYS,
+ "TERM": unix.SIGTERM,
+ "TRAP": unix.SIGTRAP,
+ "TSTP": unix.SIGTSTP,
+ "TTIN": unix.SIGTTIN,
+ "TTOU": unix.SIGTTOU,
+ "UNUSED": unix.SIGUNUSED,
+ "URG": unix.SIGURG,
+ "USR1": unix.SIGUSR1,
+ "USR2": unix.SIGUSR2,
+ "VTALRM": unix.SIGVTALRM,
+ "WINCH": unix.SIGWINCH,
+ "XCPU": unix.SIGXCPU,
+ "XFSZ": unix.SIGXFSZ,
"RTMIN": sigrtmin,
"RTMIN+1": sigrtmin + 1,
"RTMIN+2": sigrtmin + 2,
diff --git a/vendor/github.com/docker/docker/pkg/signal/trap.go b/vendor/github.com/docker/docker/pkg/signal/trap.go
index 548a5480e..2884dfee3 100644
--- a/vendor/github.com/docker/docker/pkg/signal/trap.go
+++ b/vendor/github.com/docker/docker/pkg/signal/trap.go
@@ -11,7 +11,6 @@ import (
"syscall"
"time"
- "github.com/sirupsen/logrus"
"github.com/pkg/errors"
)
@@ -27,7 +26,9 @@ import (
// the docker daemon is not restarted and also running under systemd.
// Fixes https://github.com/docker/docker/issues/19728
//
-func Trap(cleanup func()) {
+func Trap(cleanup func(), logger interface {
+ Info(args ...interface{})
+}) {
c := make(chan os.Signal, 1)
// we will handle INT, TERM, QUIT, SIGPIPE here
signals := []os.Signal{os.Interrupt, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGPIPE}
@@ -40,7 +41,7 @@ func Trap(cleanup func()) {
}
go func(sig os.Signal) {
- logrus.Infof("Processing signal '%v'", sig)
+ logger.Info(fmt.Sprintf("Processing signal '%v'", sig))
switch sig {
case os.Interrupt, syscall.SIGTERM:
if atomic.LoadUint32(&interruptCount) < 3 {
@@ -54,11 +55,11 @@ func Trap(cleanup func()) {
}
} else {
// 3 SIGTERM/INT signals received; force exit without cleanup
- logrus.Info("Forcing docker daemon shutdown without cleanup; 3 interrupts received")
+ logger.Info("Forcing docker daemon shutdown without cleanup; 3 interrupts received")
}
case syscall.SIGQUIT:
DumpStacks("")
- logrus.Info("Forcing docker daemon shutdown without cleanup on SIGQUIT")
+ logger.Info("Forcing docker daemon shutdown without cleanup on SIGQUIT")
}
//for the SIGINT/TERM, and SIGQUIT non-clean shutdown case, exit with 128 + signal #
os.Exit(128 + int(sig.(syscall.Signal)))
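With the logrus dependency removed from `trap.go`, the caller now supplies the logger. A minimal caller sketch (not part of this diff): any value with an `Info(args ...interface{})` method satisfies the new parameter, so a `*logrus.Logger` can be passed in directly.

```go
package main

import (
	"github.com/docker/docker/pkg/signal"
	"github.com/sirupsen/logrus"
)

func main() {
	// Trap installs handlers for INT, TERM, QUIT and SIGPIPE; the cleanup
	// callback runs before the process exits on the first INT/TERM.
	signal.Trap(func() {
		logrus.Info("shutting down, releasing resources")
	}, logrus.StandardLogger())

	// ... the daemon's main loop would run here ...
}
```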
diff --git a/vendor/github.com/docker/docker/pkg/stringid/stringid.go b/vendor/github.com/docker/docker/pkg/stringid/stringid.go
index fa35d8bad..a0c7c42a0 100644
--- a/vendor/github.com/docker/docker/pkg/stringid/stringid.go
+++ b/vendor/github.com/docker/docker/pkg/stringid/stringid.go
@@ -2,19 +2,25 @@
package stringid
import (
- "crypto/rand"
+ cryptorand "crypto/rand"
"encoding/hex"
+ "fmt"
"io"
+ "math"
+ "math/big"
+ "math/rand"
"regexp"
"strconv"
"strings"
-
- "github.com/docker/docker/pkg/random"
+ "time"
)
const shortLen = 12
-var validShortID = regexp.MustCompile("^[a-z0-9]{12}$")
+var (
+ validShortID = regexp.MustCompile("^[a-f0-9]{12}$")
+ validHex = regexp.MustCompile(`^[a-f0-9]{64}$`)
+)
// IsShortID determines if an arbitrary string *looks like* a short ID.
func IsShortID(id string) bool {
@@ -35,12 +41,8 @@ func TruncateID(id string) string {
return id
}
-func generateID(crypto bool) string {
+func generateID(r io.Reader) string {
b := make([]byte, 32)
- r := random.Reader
- if crypto {
- r = rand.Reader
- }
for {
if _, err := io.ReadFull(r, b); err != nil {
panic(err) // This shouldn't happen
@@ -58,12 +60,40 @@ func generateID(crypto bool) string {
// GenerateRandomID returns a unique id.
func GenerateRandomID() string {
- return generateID(true)
+ return generateID(cryptorand.Reader)
}
// GenerateNonCryptoID generates unique id without using cryptographically
// secure sources of random.
// It helps you to save entropy.
func GenerateNonCryptoID() string {
- return generateID(false)
+ return generateID(readerFunc(rand.Read))
+}
+
+// ValidateID checks whether an ID string is a valid image ID.
+func ValidateID(id string) error {
+ if ok := validHex.MatchString(id); !ok {
+ return fmt.Errorf("image ID %q is invalid", id)
+ }
+ return nil
+}
+
+func init() {
+ // safely set the seed globally so we generate random ids. Tries to use a
+ // crypto seed before falling back to time.
+ var seed int64
+ if cryptoseed, err := cryptorand.Int(cryptorand.Reader, big.NewInt(math.MaxInt64)); err != nil {
+ // This should not happen, but worst-case fallback to time-based seed.
+ seed = time.Now().UnixNano()
+ } else {
+ seed = cryptoseed.Int64()
+ }
+
+ rand.Seed(seed)
+}
+
+type readerFunc func(p []byte) (int, error)
+
+func (fn readerFunc) Read(p []byte) (int, error) {
+ return fn(p)
}
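A short usage sketch for the reworked helpers above: full IDs come from `crypto/rand`, truncation keeps the first 12 characters, and validation uses the tightened hex patterns.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/stringid"
)

func main() {
	id := stringid.GenerateRandomID() // 64 hex characters from crypto/rand
	short := stringid.TruncateID(id)  // first 12 characters

	fmt.Println(id, short)
	fmt.Println(stringid.IsShortID(short)) // true: matches ^[a-f0-9]{12}$

	if err := stringid.ValidateID(id); err != nil {
		fmt.Println("invalid image ID:", err)
	}
}
```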
diff --git a/vendor/github.com/docker/docker/vendor.conf b/vendor/github.com/docker/docker/vendor.conf
index bb7718bc4..39b4a2951 100644
--- a/vendor/github.com/docker/docker/vendor.conf
+++ b/vendor/github.com/docker/docker/vendor.conf
@@ -1,80 +1,85 @@
# the following lines are in sorted order, FYI
-github.com/Azure/go-ansiterm 388960b655244e76e24c75f48631564eaefade62
-github.com/Microsoft/hcsshim v0.5.9
-github.com/Microsoft/go-winio v0.3.8
-github.com/Sirupsen/logrus v0.11.0
-github.com/davecgh/go-spew 6d212800a42e8ab5c146b8ace3490ee17e5225f9
+github.com/Azure/go-ansiterm 19f72df4d05d31cbe1c56bfc8045c96babff6c7e
+github.com/Microsoft/hcsshim v0.6.1
+github.com/Microsoft/go-winio v0.4.2
+github.com/moby/buildkit da2b9dc7dab99e824b2b1067ad7d0523e32dd2d9 https://github.com/dmcgowan/buildkit.git
+github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a
github.com/go-check/check 4ed411733c5785b40214c70bce814c3a3a689609 https://github.com/cpuguy83/check.git
github.com/gorilla/context v1.1
github.com/gorilla/mux v1.1
+github.com/jhowardmsft/opengcs b9d0120d36f26e981a50bf18bac1bb3f0c2b8fef https://github.com/dmcgowan/opengcs.git
github.com/kr/pty 5cf931ef8f
-github.com/mattn/go-shellwords v1.0.0
-github.com/mattn/go-sqlite3 v1.1.0
+github.com/mattn/go-shellwords v1.0.3
+github.com/sirupsen/logrus v1.0.1
github.com/tchap/go-patricia v2.2.6
github.com/vdemeester/shakers 24d7f1d6a71aa5d9cbe7390e4afb66b7eef9e1b3
-# forked golang.org/x/net package includes a patch for lazy loading trace templates
-golang.org/x/net 2beffdc2e92c8a3027590f898fe88f69af48a3f8 https://github.com/tonistiigi/net.git
-golang.org/x/sys 8f0908ab3b2457e2e15403d3697c9ef5cb4b57a9
-github.com/docker/go-units 8a7beacffa3009a9ac66bad506b18ffdd110cf97
-github.com/docker/go-connections ecb4cb2dd420ada7df7f2593d6c25441f65f69f2
+golang.org/x/net 7dcfb8076726a3fdd9353b6b8a1f1b6be6811bd6
+golang.org/x/sys 739734461d1c916b6c72a63d7efda2b27edb369f
+github.com/docker/go-units 9e638d38cf6977a37a8ea0078f3ee75a7cdb2dd1
+github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
+golang.org/x/text f72d8390a633d5dfb0cc84043294db9f6c935756
+github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
+github.com/pmezard/go-difflib v1.0.0
github.com/RackSec/srslog 456df3a81436d29ba874f3590eeeee25d666f8a5
github.com/imdario/mergo 0.2.1
+golang.org/x/sync de49d9dcd27d4f764488181bea099dfe6179bcf0
#get libnetwork packages
-github.com/docker/libnetwork 45b40861e677e37cf27bc184eca5af92f8cdd32d
-github.com/docker/go-events 18b43f1bc85d9cdd42c05a6cd2d444c7a200a894
+github.com/docker/libnetwork 248fd5ea6a67f8810da322e6e7441e8de96a9045 https://github.com/dmcgowan/libnetwork.git
+github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80
github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec
github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b
-github.com/hashicorp/memberlist 88ac4de0d1a0ca6def284b571342db3b777a4c37
+github.com/hashicorp/memberlist v0.1.0
+github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372
+github.com/hashicorp/go-sockaddr acd314c5781ea706c710d9ea70069fd2e110d61d
github.com/hashicorp/go-multierror fcdddc395df1ddf4247c69bd436e84cfa0733f7e
github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870
github.com/docker/libkv 1d8431073ae03cdaedb198a89722f3aab6d418ef
github.com/vishvananda/netns 604eaf189ee867d8c147fafc28def2394e878d25
-github.com/vishvananda/netlink 482f7a52b758233521878cb6c5904b6bd63f3457
+github.com/vishvananda/netlink bd6d5de5ccef2d66b0a26177928d0d8895d7f969
github.com/BurntSushi/toml f706d00e3de6abe700c994cdd545a1a4915af060
github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374
github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d
-github.com/coreos/etcd 3a49cbb769ebd8d1dd25abb1e83386e9883a5707
+github.com/coreos/etcd v3.2.1
+github.com/coreos/go-semver v0.2.0
github.com/ugorji/go f1f1a805ed361a0e078bb537e4ea78cd37dcf065
github.com/hashicorp/consul v0.5.2
github.com/boltdb/bolt fff57c100f4dea1905678da7e90d92429dff2904
github.com/miekg/dns 75e6e86cc601825c5dbcd4e0c209eab180997cd7
# get graph and distribution packages
-github.com/docker/distribution 28602af35aceda2f8d571bad7ca37a54cf0250bc
+github.com/docker/distribution edc3ab29cdff8694dd6feb85cfeb4b5f1b38ed9c
github.com/vbatts/tar-split v0.10.1
+github.com/opencontainers/go-digest a6d0ee40d4207ea02364bd3b9e8e77b9159ba1eb
# get go-zfs packages
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
-# get desired notary commit, might also need to be updated in Dockerfile
-github.com/docker/notary v0.4.2
-
-google.golang.org/grpc v1.0.2
-github.com/miekg/pkcs11 df8ae6ca730422dba20c768ff38ef7d79077a59f
-github.com/docker/go v1.5.1-1-1-gbaf439e
-github.com/agl/ed25519 d2b94fd789ea21d12fac1a4443dd3a3f79cda72c
+google.golang.org/grpc v1.3.0
# When updating, also update RUNC_COMMIT in hack/dockerfile/binaries-commits accordingly
-github.com/opencontainers/runc 9df8b306d01f59d3a8029be411de015b7304dd8f https://github.com/docker/runc.git # libcontainer
-github.com/opencontainers/runtime-spec 1c7c27d043c2a5e513a44084d2b10d77d1402b8c # specs
+github.com/opencontainers/runc e9325d442f5979c4f79bfa9e09bdf7abb74ba03b https://github.com/dmcgowan/runc.git
+github.com/opencontainers/image-spec 372ad780f63454fbbbbcc7cf80e5b90245c13e13
+github.com/opencontainers/runtime-spec d42f1eb741e6361e858d83fc75aa6893b66292c4 # specs
+
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
+
# libcontainer deps (see src/github.com/opencontainers/runc/Godeps/Godeps.json)
github.com/coreos/go-systemd v4
github.com/godbus/dbus v4.0.0
github.com/syndtr/gocapability 2c00daeb6c3b45114c80ac44119e7b8801fdd852
-github.com/golang/protobuf 1f49d83d9aa00e6ce4fc8258c71cc7786aec968a
+github.com/golang/protobuf 7a211bcf3bce0e3f1d74f9894916e6f116ae83b4
# gelf logging driver deps
-github.com/Graylog2/go-gelf aab2f594e4585d43468ac57287b0dece9d806883
+github.com/Graylog2/go-gelf 7029da823dad4ef3a876df61065156acb703b2ea
github.com/fluent/fluent-logger-golang v1.2.1
# fluent-logger-golang deps
-github.com/philhofer/fwd 899e4efba8eaa1fea74175308f3fae18ff3319fa
+github.com/philhofer/fwd 98c11a7a6ec829d672b03833c3d69a7fae1ca972
github.com/tinylib/msgp 75ee40d2601edf122ef667e2a07d600d4c44490c
# fsnotify
@@ -86,30 +91,29 @@ github.com/go-ini/ini 060d7da055ba6ec5ea7a31f116332fe5efa04ce0
github.com/jmespath/go-jmespath 0b12d6b521d83fc7f755e7cfc1b1fbdd35a01a74
# logentries
-github.com/bsphere/le_go d3308aafe090956bc89a65f0769f58251a1b4f03
+github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf
# gcplogs deps
-golang.org/x/oauth2 2baa8a1b9338cf13d9eeb27696d761155fa480be
-google.golang.org/api dc6d2353af16e2a2b0ff6986af051d473a4ed468
-google.golang.org/cloud dae7e3d993bc3812a2185af60552bb6b847e52a0
-
-# native credentials
-github.com/docker/docker-credential-helpers f72c04f1d8e71959a6d103f808c50ccbad79b9fd
+golang.org/x/oauth2 96382aa079b72d8c014eb0c50f6c223d1e6a2de0
+google.golang.org/api 3cc2e591b550923a2c5f0ab5a803feda924d5823
+cloud.google.com/go 9d965e63e8cceb1b5d7977a202f0fcb8866d6525
+github.com/googleapis/gax-go da06d194a00e19ce00d9011a13931c3f6f6887c7
+google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
# containerd
-github.com/docker/containerd aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
+github.com/containerd/containerd fc10004571bb9b26695ccbf2dd4a83213f60b93e https://github.com/dmcgowan/containerd.git
github.com/tonistiigi/fifo 1405643975692217d6720f8b54aeee1bf2cd5cf4
+github.com/stevvooe/continuity cd7a8e21e2b6f84799f5dd4b65faf49c8d3ee02d
+github.com/tonistiigi/fsutil 0ac4c11b053b9c5c7c47558f81f96c7100ce50fb
# cluster
-github.com/docker/swarmkit 1c7f003d75f091d5f7051ed982594420e4515f77
-github.com/golang/mock bd3c8e81be01eef76d4b503f5e687d2d1354d2d9
-github.com/gogo/protobuf v0.3
+github.com/docker/swarmkit 8bdecc57887ffc598b63d6433f58e0d2852112c3 https://github.com/dmcgowan/swarmkit.git
+github.com/gogo/protobuf v0.4
github.com/cloudflare/cfssl 7fb22c8cba7ecaf98e4082d22d65800cf45e042a
github.com/google/certificate-transparency d90e65c3a07988180c5b1ece71791c0b6506826e
golang.org/x/crypto 3fbbcd23f1cb824e69491a5930cfeff09b12f4d2
golang.org/x/time a4bde12657593d5e90d0533a3e4fd95e635124cb
-github.com/mreiferson/go-httpclient 63fe23f7434723dc904c901043af07931f293c47
-github.com/hashicorp/go-memdb 608dda3b1410a73eaf3ac8b517c9ae7ebab6aa87
+github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad
github.com/hashicorp/go-immutable-radix 8e8ed81f8f0bf1bdd829593fdd5c29922c1ea990
github.com/hashicorp/golang-lru a0d98a5f288019575c6d1f4bb1573fef2d1fcdc4
github.com/coreos/pkg fa29b1d70f0beaddd4c7021607cc3c3be8ce94b8
@@ -119,22 +123,25 @@ github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common ebdfc6da46522d58825777cf1f90490a5b1ef1d8
github.com/prometheus/procfs abf152e5f3e97f2fafac028d2cc06c1feb87ffa5
-bitbucket.org/ww/goautoneg 75cd24fc2f2c2a2088577d12123ddee5f54e0675
-github.com/matttproud/golang_protobuf_extensions fc2b8d3a73c4867e51861bbdd5ae3c1f0869dd6a
+github.com/matttproud/golang_protobuf_extensions v1.0.0
github.com/pkg/errors 839d9e913e063e28dfd0e6c7b7512793e0a48be9
+github.com/grpc-ecosystem/go-grpc-prometheus 6b7015e65d366bf3f19b2b2a000a831940f0f7e0
# cli
-github.com/spf13/cobra v1.5 https://github.com/dnephin/cobra.git
-github.com/spf13/pflag dabebe21bf790f782ea4c7bbd2efc430de182afd
+github.com/spf13/cobra v1.5.1 https://github.com/dnephin/cobra.git
+github.com/spf13/pflag 9ff6c6923cfffbcd502984b8e0c80539a94968b7
github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
-github.com/flynn-archive/go-shlex 3f9db97f856818214da2e1057f8ad84803971cff
+github.com/Nvveen/Gotty a8b993ba6abdb0e0c12b0125c603323a71c7790c https://github.com/ijc25/Gotty
# metrics
-github.com/docker/go-metrics 86138d05f285fd9737a99bee2d9be30866b59d72
+github.com/docker/go-metrics d466d4f6fd960e01820085bd7e1a24426ee7ef18
-# composefile
-github.com/mitchellh/mapstructure f3009df150dadf309fdee4a54ed65c124afad715
-github.com/xeipuuv/gojsonpointer e0fe6f68307607d540ed8eac07a342c33fa1b54a
-github.com/xeipuuv/gojsonreference e02fc20de94c78484cd5ffb007f8af96be030a45
-github.com/xeipuuv/gojsonschema 93e72a773fade158921402d6a24c819b48aba29d
-gopkg.in/yaml.v2 a83829b6f1293c91addabc89d0571c246397bbf4
+github.com/opencontainers/selinux v1.0.0-rc1
+
+# archive/tar
+# mkdir -p ./vendor/archive
+# git clone git://github.com/tonistiigi/go-1.git ./go
+# git --git-dir ./go/.git --work-tree ./go checkout revert-prefix-ignore
+# cp -a go/src/archive/tar ./vendor/archive/tar
+# rm -rf ./go
+# vndr
diff --git a/vendor/github.com/kubernetes-incubator/cri-o/README.md b/vendor/github.com/kubernetes-incubator/cri-o/README.md
index f498cef0f..a90fef5dc 100644
--- a/vendor/github.com/kubernetes-incubator/cri-o/README.md
+++ b/vendor/github.com/kubernetes-incubator/cri-o/README.md
@@ -4,7 +4,7 @@
[](https://travis-ci.org/kubernetes-incubator/cri-o)
[](https://goreportcard.com/report/github.com/kubernetes-incubator/cri-o)
-### Status: pre-alpha
+### Status: alpha
## What is the scope of this project?
@@ -36,11 +36,38 @@ The plan is to use OCI projects and best of breed libraries for different aspect
It is currently in active development in the Kubernetes community through the [design proposal](https://github.com/kubernetes/kubernetes/pull/26788). Questions and issues should be raised in the Kubernetes [sig-node Slack channel](https://kubernetes.slack.com/archives/sig-node).
+## Commands
+| Command | Description |
+| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+| [crio(8)](/docs/crio.8.md) | Enable OCI Kubernetes Container Runtime daemon |
+| [kpod(1)](/docs/kpod.1.md) | Simple management tool for pods and images |
+| [kpod-history(1)](/docs/kpod-history.1.md)            | Shows the history of an image |
+| [kpod-images(1)](/docs/kpod-images.1.md) | List images in local storage |
+| [kpod-inspect(1)](/docs/kpod-inspect.1.md) | Display the configuration of a container or image |
+| [kpod-load(1)](/docs/kpod-load.1.md) | Load an image from docker archive or oci |
+| [kpod-pull(1)](/docs/kpod-pull.1.md) | Pull an image from a registry |
+| [kpod-push(1)](/docs/kpod-push.1.md) | Push an image to a specified destination |
+| [kpod-rmi(1)](/docs/kpod-rmi.1.md) | Removes one or more images |
+| [kpod-save(1)](/docs/kpod-save.1.md) | Saves an image to an archive |
+| [kpod-tag(1)](/docs/kpod-tag.1.md) | Add an additional name to a local image |
+| [kpod-version(1)](/docs/kpod-version.1.md) | Display the Kpod Version Information |
+
+## Configuration
+| File | Description |
+| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+| [crio.conf(5)](/docs/crio.conf.5.md)                                 | CRI-O Configuration file |
+
+## Communication
+
+For async communication and long-running discussions please use issues and pull requests on the GitHub repo. This will be the best place to discuss design and implementation.
+
+For sync communication we have an IRC channel, #cri-o on chat.freenode.net, that everyone is welcome to join to chat about development.
+
## Getting started
### Prerequisites
-`runc` version 1.0.0.rc1 or greater is expected to be installed on the system. It is picked up as the default runtime by ocid.
+The latest version of `runc` is expected to be installed on the system. It is picked up as the default runtime by crio.
### Build Dependencies
@@ -60,6 +87,7 @@ yum install -y \
libgpg-error-devel \
libseccomp-devel \
libselinux-devel \
+ ostree-devel \
pkgconfig \
runc
```
@@ -81,7 +109,9 @@ apt install -y \
runc
```
-If using an older release or a long-term support release, be careful to double-check that the version of `runc` is new enough, or else build your own.
+Debian, Ubuntu, and related distributions will also need a copy of the development libraries for `ostree`, either in the form of the `libostree-dev` package from the [flatpak](https://launchpad.net/~alexlarsson/+archive/ubuntu/flatpak) PPA, or built [from source](https://github.com/ostreedev/ostree) (more on that [here](https://ostree.readthedocs.io/en/latest/#building)).
+
+If using an older release or a long-term support release, be careful to double-check that the version of `runc` is new enough (running `runc --version` should produce `spec: 1.0.0`), or else build your own.
**Optional**
@@ -170,16 +200,18 @@ your system.
You can run a local version of kubernetes with cri-o using `local-up-cluster.sh`:
1. Clone the [kubernetes repository](https://github.com/kubernetes/kubernetes)
-1. Start the cri-o daemon (`ocid`)
-1. From the kubernetes project directory, run: `CONTAINER_RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT='/var/run/ocid.sock --runtime-request-timeout=15m' ./hack/local-up-cluster.sh`
+1. Start the cri-o daemon (`crio`)
+1. From the kubernetes project directory, run: `CONTAINER_RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT='/var/run/crio.sock --runtime-request-timeout=15m' ./hack/local-up-cluster.sh`
To run a full cluster, see [the instructions](kubernetes.md).
### Current Roadmap
-1. Basic pod/container lifecycle, basic image pull (already works)
-1. Support for tty handling and state management
-1. Basic integration with kubelet once client side changes are ready
-1. Support for log management, networking integration using CNI, pluggable image/storage management
-1. Support for exec/attach
-1. Target fully automated kubernetes testing without failures
+1. Basic pod/container lifecycle, basic image pull (done)
+1. Support for tty handling and state management (done)
+1. Basic integration with kubelet once client side changes are ready (done)
+1. Support for log management, networking integration using CNI, pluggable image/storage management (done)
+1. Support for exec/attach (done)
+1. Target fully automated kubernetes testing without failures [e2e status](https://github.com/kubernetes-incubator/cri-o/issues/533)
+1. Release 1.0
+1. Track upstream k8s releases
diff --git a/vendor/github.com/kubernetes-incubator/cri-o/conmon/conmon.c b/vendor/github.com/kubernetes-incubator/cri-o/conmon/conmon.c
index 933ac4a28..2009bf490 100644
--- a/vendor/github.com/kubernetes-incubator/cri-o/conmon/conmon.c
+++ b/vendor/github.com/kubernetes-incubator/cri-o/conmon/conmon.c
@@ -7,16 +7,22 @@
#include
#include
#include
-#include
#include
#include
#include
#include
+#include
#include
+#include
+#include
+#include
+#include
+#include
#include
#include
#include
+#include
#include "cmsg.h"
@@ -37,11 +43,13 @@
#define nwarn(fmt, ...) \
do { \
fprintf(stderr, "[conmon:w]: " fmt "\n", ##__VA_ARGS__); \
+ syslog(LOG_INFO, "conmon : " fmt " \n", ##__VA_ARGS__); \
} while (0)
#define ninfo(fmt, ...) \
do { \
fprintf(stderr, "[conmon:i]: " fmt "\n", ##__VA_ARGS__); \
+ syslog(LOG_INFO, "conmon : " fmt " \n", ##__VA_ARGS__); \
} while (0)
#define _cleanup_(x) __attribute__((cleanup(x)))
@@ -58,45 +66,155 @@ static inline void closep(int *fd)
*fd = -1;
}
+static inline void fclosep(FILE **fp) {
+ if (*fp)
+ fclose(*fp);
+ *fp = NULL;
+}
+
static inline void gstring_free_cleanup(GString **string)
{
if (*string)
g_string_free(*string, TRUE);
}
+static inline void strv_cleanup(char ***strv)
+{
+ if (strv)
+ g_strfreev (*strv);
+}
+
#define _cleanup_free_ _cleanup_(freep)
#define _cleanup_close_ _cleanup_(closep)
+#define _cleanup_fclose_ _cleanup_(fclosep)
#define _cleanup_gstring_ _cleanup_(gstring_free_cleanup)
+#define _cleanup_strv_ _cleanup_(strv_cleanup)
-#define BUF_SIZE 256
+#define BUF_SIZE 8192
#define CMD_SIZE 1024
#define MAX_EVENTS 10
-static bool terminal = false;
-static char *cid = NULL;
-static char *runtime_path = NULL;
-static char *bundle_path = NULL;
-static char *pid_file = NULL;
-static bool systemd_cgroup = false;
-static bool exec = false;
-static char *log_path = NULL;
-static GOptionEntry entries[] =
+static bool opt_terminal = false;
+static bool opt_stdin = false;
+static char *opt_cid = NULL;
+static char *opt_cuuid = NULL;
+static char *opt_runtime_path = NULL;
+static char *opt_bundle_path = NULL;
+static char *opt_pid_file = NULL;
+static bool opt_systemd_cgroup = false;
+static char *opt_exec_process_spec = NULL;
+static bool opt_exec = false;
+static char *opt_log_path = NULL;
+static int opt_timeout = 0;
+static GOptionEntry opt_entries[] =
{
- { "terminal", 't', 0, G_OPTION_ARG_NONE, &terminal, "Terminal", NULL },
- { "cid", 'c', 0, G_OPTION_ARG_STRING, &cid, "Container ID", NULL },
- { "runtime", 'r', 0, G_OPTION_ARG_STRING, &runtime_path, "Runtime path", NULL },
- { "bundle", 'b', 0, G_OPTION_ARG_STRING, &bundle_path, "Bundle path", NULL },
- { "pidfile", 'p', 0, G_OPTION_ARG_STRING, &pid_file, "PID file", NULL },
- { "systemd-cgroup", 's', 0, G_OPTION_ARG_NONE, &systemd_cgroup, "Enable systemd cgroup manager", NULL },
- { "exec", 'e', 0, G_OPTION_ARG_NONE, &exec, "Exec a command in a running container", NULL },
- { "log-path", 'l', 0, G_OPTION_ARG_STRING, &log_path, "Log file path", NULL },
+ { "terminal", 't', 0, G_OPTION_ARG_NONE, &opt_terminal, "Terminal", NULL },
+ { "stdin", 'i', 0, G_OPTION_ARG_NONE, &opt_stdin, "Stdin", NULL },
+ { "cid", 'c', 0, G_OPTION_ARG_STRING, &opt_cid, "Container ID", NULL },
+ { "cuuid", 'u', 0, G_OPTION_ARG_STRING, &opt_cuuid, "Container UUID", NULL },
+ { "runtime", 'r', 0, G_OPTION_ARG_STRING, &opt_runtime_path, "Runtime path", NULL },
+ { "bundle", 'b', 0, G_OPTION_ARG_STRING, &opt_bundle_path, "Bundle path", NULL },
+ { "pidfile", 'p', 0, G_OPTION_ARG_STRING, &opt_pid_file, "PID file", NULL },
+ { "systemd-cgroup", 's', 0, G_OPTION_ARG_NONE, &opt_systemd_cgroup, "Enable systemd cgroup manager", NULL },
+ { "exec", 'e', 0, G_OPTION_ARG_NONE, &opt_exec, "Exec a command in a running container", NULL },
+ { "exec-process-spec", 0, 0, G_OPTION_ARG_STRING, &opt_exec_process_spec, "Path to the process spec for exec", NULL },
+ { "log-path", 'l', 0, G_OPTION_ARG_STRING, &opt_log_path, "Log file path", NULL },
+ { "timeout", 'T', 0, G_OPTION_ARG_INT, &opt_timeout, "Timeout in seconds", NULL },
{ NULL }
};
-/* strlen("1997-03-25T13:20:42.999999999+01:00") + 1 */
-#define TSBUFLEN 36
+/* strlen("1997-03-25T13:20:42.999999999+01:00 stdout ") + 1 */
+#define TSBUFLEN 44
-int set_k8s_timestamp(char *buf, ssize_t buflen)
+#define CGROUP_ROOT "/sys/fs/cgroup"
+
+static ssize_t write_all(int fd, const void *buf, size_t count)
+{
+ size_t remaining = count;
+ const char *p = buf;
+ ssize_t res;
+
+ while (remaining > 0) {
+ do {
+ res = write(fd, p, remaining);
+ } while (res == -1 && errno == EINTR);
+
+ if (res <= 0)
+ return -1;
+
+ remaining -= res;
+ p += res;
+ }
+
+ return count;
+}
+
+#define WRITEV_BUFFER_N_IOV 128
+
+typedef struct {
+ int iovcnt;
+ struct iovec iov[WRITEV_BUFFER_N_IOV];
+} writev_buffer_t;
+
+static ssize_t writev_buffer_flush (int fd, writev_buffer_t *buf)
+{
+ size_t count = 0;
+ ssize_t res;
+ struct iovec *iov;
+ int iovcnt;
+
+ iovcnt = buf->iovcnt;
+ iov = buf->iov;
+
+ while (iovcnt > 0) {
+ do {
+ res = writev(fd, iov, iovcnt);
+ } while (res == -1 && errno == EINTR);
+
+ if (res <= 0)
+ return -1;
+
+ count += res;
+
+ while (res > 0) {
+ size_t from_this = MIN((size_t)res, iov->iov_len);
+ iov->iov_len -= from_this;
+ res -= from_this;
+
+ if (iov->iov_len == 0) {
+ iov++;
+ iovcnt--;
+ }
+ }
+ }
+
+ buf->iovcnt = 0;
+
+ return count;
+}
+
+ssize_t writev_buffer_append_segment(int fd, writev_buffer_t *buf, const void *data, ssize_t len)
+{
+ if (data == NULL)
+ return 1;
+
+ if (len < 0)
+ len = strlen ((char *)data);
+
+ if (buf->iovcnt == WRITEV_BUFFER_N_IOV &&
+ writev_buffer_flush (fd, buf) < 0)
+ return -1;
+
+ if (len > 0) {
+ buf->iov[buf->iovcnt].iov_base = (void *)data;
+ buf->iov[buf->iovcnt].iov_len = (size_t)len;
+ buf->iovcnt++;
+ }
+
+ return 1;
+}
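
The helpers above replace the old per-segment `dprintf`/`write` calls with a small iovec queue: segments accumulate in `writev_buffer_t` (up to 128 entries) and go out in a single `writev(2)` when the array fills or the buffer is flushed. A minimal standalone sketch of the same batching pattern (illustration only, not conmon code):

```c
/* Illustration only: batch two log segments into one writev(2) call,
 * mirroring the writev_buffer pattern used above. */
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	const char *ts  = "2017-08-01T12:00:00.000000000+00:00 stdout ";
	const char *msg = "hello from the container\n";

	struct iovec iov[2];
	iov[0].iov_base = (void *)ts;
	iov[0].iov_len  = strlen(ts);
	iov[1].iov_base = (void *)msg;
	iov[1].iov_len  = strlen(msg);

	/* One syscall writes both segments; conmon queues up to
	 * WRITEV_BUFFER_N_IOV of these before flushing. */
	if (writev(STDOUT_FILENO, iov, 2) < 0)
		perror("writev");
	return 0;
}
```
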
+
+int set_k8s_timestamp(char *buf, ssize_t buflen, const char *pipename)
{
struct tm *tm;
struct timespec ts;
@@ -122,17 +240,18 @@ int set_k8s_timestamp(char *buf, ssize_t buflen)
off = -off;
}
- len = snprintf(buf, buflen, "%d-%02d-%02dT%02d:%02d:%02d.%09ld%c%02d:%02d",
+ len = snprintf(buf, buflen, "%d-%02d-%02dT%02d:%02d:%02d.%09ld%c%02d:%02d %s ",
tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
tm->tm_hour, tm->tm_min, tm->tm_sec, ts.tv_nsec,
- off_sign, off / 3600, off % 3600);
+ off_sign, off / 3600, off % 3600, pipename);
if (len < buflen)
err = 0;
return err;
}
-/* stdpipe_t represents one of the std pipes (or NONE). */
+/* stdpipe_t represents one of the std pipes (or NONE).
+ * Sync with const in container_attach.go */
typedef enum {
NO_PIPE,
STDIN_PIPE, /* unused */
@@ -164,13 +283,14 @@ int write_k8s_log(int fd, stdpipe_t pipe, const char *buf, ssize_t buflen)
{
char tsbuf[TSBUFLEN];
static stdpipe_t trailing_line = NO_PIPE;
+ writev_buffer_t bufv = {0};
/*
* Use the same timestamp for every line of the log in this buffer.
* There is no practical difference in the output since write(2) is
* fast.
*/
- if (set_k8s_timestamp(tsbuf, TSBUFLEN))
+ if (set_k8s_timestamp(tsbuf, sizeof tsbuf, stdpipe_name(pipe)))
/* TODO: We should handle failures much more cleanly than this. */
return -1;
@@ -197,18 +317,16 @@ int write_k8s_log(int fd, stdpipe_t pipe, const char *buf, ssize_t buflen)
* wasn't one output) but without modifying the file in a
* non-append-only way there's not much we can do.
*/
- char *leading = "";
- if (trailing_line != NO_PIPE)
- leading = "\n";
-
- if (dprintf(fd, "%s%s %s ", leading, tsbuf, stdpipe_name(pipe)) < 0) {
+ if ((trailing_line != NO_PIPE &&
+ writev_buffer_append_segment(fd, &bufv, "\n", -1) < 0) ||
+ writev_buffer_append_segment(fd, &bufv, tsbuf, -1) < 0) {
nwarn("failed to write (timestamp, stream) to log");
goto next;
}
}
/* Output the actual contents. */
- if (write(fd, buf, line_len) < 0) {
+ if (writev_buffer_append_segment(fd, &bufv, buf, line_len) < 0) {
nwarn("failed to write buffer to log");
goto next;
}
@@ -222,84 +340,798 @@ next:
buflen -= line_len;
}
+ if (writev_buffer_flush (fd, &bufv) < 0) {
+ nwarn("failed to flush buffer to log");
+ }
+
return 0;
}
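
With the pipe name folded into `set_k8s_timestamp()`, every log line now carries the RFC 3339 timestamp and the stream name ahead of the payload, matching the `<timestamp> <stream> <log>` layout consumed on the Kubernetes side. A rough sketch of how such a prefix is produced (fixed +00:00 offset for brevity; conmon computes the real local offset):

```c
/* Sketch: build a "<timestamp> <stream> " prefix like set_k8s_timestamp(),
 * with a fixed +00:00 offset for brevity. */
#include <stdio.h>
#include <time.h>

int main(void)
{
	char buf[64];
	struct timespec ts;
	struct tm tm;

	clock_gettime(CLOCK_REALTIME, &ts);
	gmtime_r(&ts.tv_sec, &tm);

	snprintf(buf, sizeof(buf), "%d-%02d-%02dT%02d:%02d:%02d.%09ld+00:00 %s ",
	         tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
	         tm.tm_hour, tm.tm_min, tm.tm_sec, ts.tv_nsec, "stdout");

	/* A full log line is this prefix followed by the payload. */
	printf("%scontainer output here\n", buf);
	return 0;
}
```
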
+/*
+ * Returns the path for specified controller name for a pid.
+ * Returns NULL on error.
+ */
+static char *process_cgroup_subsystem_path(int pid, const char *subsystem) {
+ _cleanup_free_ char *cgroups_file_path = g_strdup_printf("/proc/%d/cgroup", pid);
+ _cleanup_fclose_ FILE *fp = NULL;
+ fp = fopen(cgroups_file_path, "re");
+ if (fp == NULL) {
+ nwarn("Failed to open cgroups file: %s", cgroups_file_path);
+ return NULL;
+ }
+
+ _cleanup_free_ char *line = NULL;
+ ssize_t read;
+ size_t len = 0;
+ char *ptr, *path;
+ char *subsystem_path = NULL;
+ int i;
+ while ((read = getline(&line, &len, fp)) != -1) {
+ _cleanup_strv_ char **subsystems = NULL;
+ ptr = strchr(line, ':');
+ if (ptr == NULL) {
+ nwarn("Error parsing cgroup, ':' not found: %s", line);
+ return NULL;
+ }
+ ptr++;
+ path = strchr(ptr, ':');
+ if (path == NULL) {
+ nwarn("Error parsing cgroup, second ':' not found: %s", line);
+ return NULL;
+ }
+ *path = 0;
+ path++;
+ subsystems = g_strsplit (ptr, ",", -1);
+ for (i = 0; subsystems[i] != NULL; i++) {
+ if (strcmp (subsystems[i], subsystem) == 0) {
+ char *subpath = strchr(subsystems[i], '=');
+ if (subpath == NULL) {
+ subpath = ptr;
+ } else {
+ *subpath = 0;
+ }
+
+ subsystem_path = g_strdup_printf("%s/%s%s", CGROUP_ROOT, subpath, path);
+ subsystem_path[strlen(subsystem_path) - 1] = '\0';
+ return subsystem_path;
+ }
+ }
+ }
+
+ return NULL;
+}
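
`process_cgroup_subsystem_path()` walks `/proc/<pid>/cgroup`, whose cgroup-v1 lines have the shape `hierarchy-ID:controller-list:cgroup-path`, and maps the matching controller onto `/sys/fs/cgroup/<controller><path>`. A stripped-down sketch of that parsing on a hard-coded sample line:

```c
/* Sketch: turn one cgroup-v1 line from /proc/<pid>/cgroup into
 * "/sys/fs/cgroup/<controller><path>". The sample line is made up. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[] = "4:memory:/kubepods/pod1234/abcd";   /* assumed sample */
	const char *wanted = "memory";

	char *controllers = strchr(line, ':');
	char *path = controllers ? strchr(controllers + 1, ':') : NULL;
	if (controllers == NULL || path == NULL)
		return 1;
	*path++ = '\0';      /* terminate the controller list */
	controllers++;       /* skip the hierarchy ID */

	/* The controller list may be comma separated, e.g. "cpu,cpuacct". */
	for (char *c = strtok(controllers, ","); c != NULL; c = strtok(NULL, ","))
		if (strcmp(c, wanted) == 0)
			printf("/sys/fs/cgroup/%s%s\n", wanted, path);
	return 0;
}
```
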
+
+static char *escape_json_string(const char *str)
+{
+ GString *escaped;
+ const char *p;
+
+ p = str;
+ escaped = g_string_sized_new(strlen(str));
+
+ while (*p != 0) {
+ char c = *p++;
+ if (c == '\\' || c == '"') {
+ g_string_append_c(escaped, '\\');
+ g_string_append_c(escaped, c);
+ } else if (c == '\n') {
+ g_string_append_printf (escaped, "\\n");
+ } else if (c == '\t') {
+ g_string_append_printf (escaped, "\\t");
+ } else if ((c > 0 && c < 0x1f) || c == 0x7f) {
+ g_string_append_printf (escaped, "\\u00%02x", (guint)c);
+ } else {
+ g_string_append_c (escaped, c);
+ }
+ }
+
+ return g_string_free (escaped, FALSE);
+}
+
+static int get_pipe_fd_from_env(const char *envname)
+{
+ char *pipe_str, *endptr;
+ int pipe_fd;
+
+ pipe_str = getenv(envname);
+ if (pipe_str == NULL)
+ return -1;
+
+ errno = 0;
+ pipe_fd = strtol(pipe_str, &endptr, 10);
+ if (errno != 0 || *endptr != '\0')
+ pexit("unable to parse %s", envname);
+ if (fcntl(pipe_fd, F_SETFD, FD_CLOEXEC) == -1)
+ pexit("unable to make %s CLOEXEC", envname);
+
+ return pipe_fd;
+}
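
conmon now receives its sync and start pipes as inherited file descriptors whose numbers arrive in the `_OCI_SYNCPIPE` and `_OCI_STARTPIPE` environment variables. A hedged sketch of what the parent side of that handshake could look like (the child binary name is purely illustrative):

```c
/* Illustrative parent: hand a pipe fd to a child through _OCI_SYNCPIPE.
 * "conmon-like-child" is a made-up binary name. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	int sync_pipe[2];
	if (pipe(sync_pipe) < 0) {
		perror("pipe");
		return 1;
	}

	pid_t pid = fork();
	if (pid == 0) {
		/* Child: export the write end as a decimal fd number, then exec. */
		char fdstr[16];
		snprintf(fdstr, sizeof(fdstr), "%d", sync_pipe[1]);
		setenv("_OCI_SYNCPIPE", fdstr, 1);
		execlp("./conmon-like-child", "conmon-like-child", (char *)NULL);
		_exit(127);
	}

	/* Parent: read the child's JSON status message, e.g. {"pid": 1234}. */
	close(sync_pipe[1]);
	char buf[256];
	ssize_t n = read(sync_pipe[0], buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("child reported: %s", buf);
	}
	wait(NULL);
	return 0;
}
```
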
+
+static void add_argv(GPtrArray *argv_array, ...) G_GNUC_NULL_TERMINATED;
+
+static void add_argv(GPtrArray *argv_array, ...)
+{
+ va_list args;
+ char *arg;
+
+ va_start (args, argv_array);
+ while ((arg = va_arg (args, char *)))
+ g_ptr_array_add (argv_array, arg);
+ va_end (args);
+}
+
+static void end_argv(GPtrArray *argv_array)
+{
+ g_ptr_array_add(argv_array, NULL);
+}
+
+/* Global state */
+
+static int runtime_status = -1;
+static int container_status = -1;
+
+static int masterfd_stdin = -1;
+static int masterfd_stdout = -1;
+static int masterfd_stderr = -1;
+
+/* Used for attach */
+static int conn_sock = -1;
+static int conn_sock_readable;
+static int conn_sock_writable;
+
+static int log_fd = -1;
+static int oom_event_fd = -1;
+static int attach_socket_fd = -1;
+static int console_socket_fd = -1;
+static int terminal_ctrl_fd = -1;
+
+static bool timed_out = FALSE;
+
+static GMainLoop *main_loop = NULL;
+
+static void conn_sock_shutdown(int how)
+{
+ if (conn_sock == -1)
+ return;
+ shutdown(conn_sock, how);
+ if (how & SHUT_RD)
+ conn_sock_readable = false;
+ if (how & SHUT_WR)
+ conn_sock_writable = false;
+ if (!conn_sock_writable && !conn_sock_readable) {
+ close(conn_sock);
+ conn_sock = -1;
+ }
+}
+
+static gboolean stdio_cb(int fd, GIOCondition condition, gpointer user_data);
+
+static gboolean tty_hup_timeout_scheduled = false;
+
+static gboolean tty_hup_timeout_cb (G_GNUC_UNUSED gpointer user_data)
+{
+ tty_hup_timeout_scheduled = false;
+ g_unix_fd_add (masterfd_stdout, G_IO_IN, stdio_cb, GINT_TO_POINTER(STDOUT_PIPE));
+ return G_SOURCE_REMOVE;
+}
+
+static bool read_stdio(int fd, stdpipe_t pipe, bool *eof)
+{
+ #define STDIO_BUF_SIZE 8192 /* Sync with redirectResponseToOutputStreams() */
+ /* We use one extra byte at the start, which we don't read into, instead
+ we use that for marking the pipe when we write to the attached socket */
+ char real_buf[STDIO_BUF_SIZE + 1];
+ char *buf = real_buf + 1;
+ ssize_t num_read = 0;
+
+ if (eof)
+ *eof = false;
+
+ num_read = read(fd, buf, STDIO_BUF_SIZE);
+ if (num_read == 0) {
+ if (eof)
+ *eof = true;
+ return false;
+ } else if (num_read < 0) {
+ nwarn("stdio_input read failed %s", strerror(errno));
+ return false;
+ } else {
+ if (write_k8s_log(log_fd, pipe, buf, num_read) < 0) {
+ nwarn("write_k8s_log failed");
+ return G_SOURCE_CONTINUE;
+ }
+
+ real_buf[0] = pipe;
+ if (conn_sock_writable && write_all(conn_sock, real_buf, num_read+1) < 0) {
+ nwarn("Failed to write to socket");
+ conn_sock_shutdown(SHUT_WR);
+ }
+ return true;
+ }
+}
+
+static void on_sigchld(G_GNUC_UNUSED int signal)
+{
+ raise (SIGUSR1);
+}
+
+static void check_child_processes(GHashTable *pid_to_handler)
+{
+ void (*cb) (GPid, int, gpointer);
+
+ for (;;) {
+ int status;
+ pid_t pid = waitpid(-1, &status, WNOHANG);
+
+ if (pid < 0 && errno == EINTR)
+ continue;
+ if (pid < 0 && errno == ECHILD) {
+ g_main_loop_quit (main_loop);
+ return;
+ }
+ if (pid < 0)
+ pexit("Failed to read child process status");
+
+ if (pid == 0)
+ return;
+
+ /* If we got here, pid > 0, so we have a valid pid to check. */
+ cb = g_hash_table_lookup(pid_to_handler, &pid);
+ if (cb)
+ cb(pid, status, 0);
+ }
+}
+
+static gboolean on_sigusr1_cb(gpointer user_data)
+{
+ GHashTable *pid_to_handler = (GHashTable *) user_data;
+ check_child_processes (pid_to_handler);
+ return G_SOURCE_CONTINUE;
+}
+
+static gboolean stdio_cb(int fd, GIOCondition condition, gpointer user_data)
+{
+ stdpipe_t pipe = GPOINTER_TO_INT(user_data);
+ bool read_eof = false;
+ bool has_input = (condition & G_IO_IN) != 0;
+ bool has_hup = (condition & G_IO_HUP) != 0;
+
+ /* When we get here, condition can be G_IO_IN and/or G_IO_HUP.
+ IN means there is some data to read.
+ HUP means the other side closed the fd. In the case of a pipe
+ this is final, and we will never get more data. However, in the
+ terminal case this just means that nobody has the terminal
+ open at this point, and this can change whenever someone
+ opens the tty */
+
+ /* Read any data before handling hup */
+ if (has_input) {
+ read_stdio(fd, pipe, &read_eof);
+ }
+
+ if (has_hup && opt_terminal && pipe == STDOUT_PIPE) {
+ /* We got a HUP from the terminal master. This means there
+ are no open slave ptys at the moment, and we will get a lot
+ of wakeups until we have one, so switch to polling
+ mode. */
+
+ /* If we read some data this cycle, wait one more, maybe there
+ is more in the buffer before we handle the hup */
+ if (has_input && !read_eof) {
+ return G_SOURCE_CONTINUE;
+ }
+
+ if (!tty_hup_timeout_scheduled) {
+ g_timeout_add (100, tty_hup_timeout_cb, NULL);
+ }
+ tty_hup_timeout_scheduled = true;
+ return G_SOURCE_REMOVE;
+ }
+
+ if (read_eof || (has_hup && !has_input)) {
+ /* End of input */
+ if (pipe == STDOUT_PIPE)
+ masterfd_stdout = -1;
+ if (pipe == STDERR_PIPE)
+ masterfd_stderr = -1;
+
+ close (fd);
+ return G_SOURCE_REMOVE;
+ }
+
+ return G_SOURCE_CONTINUE;
+}
+
+static gboolean timeout_cb (G_GNUC_UNUSED gpointer user_data)
+{
+ timed_out = TRUE;
+ ninfo ("Timed out, killing main loop");
+ g_main_loop_quit (main_loop);
+ return G_SOURCE_REMOVE;
+}
+
+static gboolean oom_cb(int fd, GIOCondition condition, G_GNUC_UNUSED gpointer user_data)
+{
+ uint64_t oom_event;
+ ssize_t num_read = 0;
+
+ if ((condition & G_IO_IN) != 0) {
+ num_read = read(fd, &oom_event, sizeof(uint64_t));
+ if (num_read < 0) {
+ nwarn("Failed to read oom event from eventfd");
+ return G_SOURCE_CONTINUE;
+ }
+
+ if (num_read > 0) {
+ if (num_read != sizeof(uint64_t))
+ nwarn("Failed to read full oom event from eventfd");
+ ninfo("OOM received");
+ if (open("oom", O_CREAT, 0666) < 0) {
+ nwarn("Failed to write oom file");
+ }
+ return G_SOURCE_CONTINUE;
+ }
+ }
+
+ /* End of input */
+ close (fd);
+ oom_event_fd = -1;
+ return G_SOURCE_REMOVE;
+}
+
+static gboolean conn_sock_cb(int fd, GIOCondition condition, G_GNUC_UNUSED gpointer user_data)
+{
+ #define CONN_SOCK_BUF_SIZE 32*1024 /* Match the write size in CopyDetachable */
+ char buf[CONN_SOCK_BUF_SIZE];
+ ssize_t num_read = 0;
+
+ if ((condition & G_IO_IN) != 0) {
+ num_read = read(fd, buf, CONN_SOCK_BUF_SIZE);
+ if (num_read < 0)
+ return G_SOURCE_CONTINUE;
+
+ if (num_read > 0 && masterfd_stdin >= 0) {
+ if (write_all(masterfd_stdin, buf, num_read) < 0) {
+ nwarn("Failed to write to container stdin");
+ }
+ return G_SOURCE_CONTINUE;
+ }
+ }
+
+ /* End of input */
+ conn_sock_shutdown(SHUT_RD);
+ if (masterfd_stdin >= 0 && opt_stdin) {
+ close(masterfd_stdin);
+ masterfd_stdin = -1;
+ }
+ return G_SOURCE_REMOVE;
+}
+
+static gboolean attach_cb(int fd, G_GNUC_UNUSED GIOCondition condition, G_GNUC_UNUSED gpointer user_data)
+{
+ conn_sock = accept(fd, NULL, NULL);
+ if (conn_sock == -1) {
+ if (errno != EWOULDBLOCK)
+ nwarn("Failed to accept client connection on attach socket");
+ } else {
+ conn_sock_readable = true;
+ conn_sock_writable = true;
+ g_unix_fd_add (conn_sock, G_IO_IN|G_IO_HUP|G_IO_ERR, conn_sock_cb, GINT_TO_POINTER(STDOUT_PIPE));
+ ninfo("Accepted connection %d", conn_sock);
+ }
+
+ return G_SOURCE_CONTINUE;
+}
+
+static gboolean ctrl_cb(int fd, G_GNUC_UNUSED GIOCondition condition, G_GNUC_UNUSED gpointer user_data)
+{
+ #define CTLBUFSZ 200
+ static char ctlbuf[CTLBUFSZ];
+ static int readsz = CTLBUFSZ - 1;
+ static char *readptr = ctlbuf;
+ ssize_t num_read = 0;
+ int ctl_msg_type = -1;
+ int height = -1;
+ int width = -1;
+ struct winsize ws;
+ int ret;
+
+ num_read = read(fd, readptr, readsz);
+ if (num_read <= 0) {
+ nwarn("Failed to read from control fd");
+ return G_SOURCE_CONTINUE;
+ }
+
+ readptr[num_read] = '\0';
+ ninfo("Got ctl message: %s\n", ctlbuf);
+
+ char *beg = ctlbuf;
+ char *newline = strchrnul(beg, '\n');
+ /* Process each message which ends with a line */
+ while (*newline != '\0') {
+ ret = sscanf(ctlbuf, "%d %d %d\n", &ctl_msg_type, &height, &width);
+ if (ret != 3) {
+ nwarn("Failed to sscanf message");
+ return G_SOURCE_CONTINUE;
+ }
+ ninfo("Message type: %d, Height: %d, Width: %d", ctl_msg_type, height, width);
+ ret = ioctl(masterfd_stdout, TIOCGWINSZ, &ws);
+ ninfo("Existing size: %d %d", ws.ws_row, ws.ws_col);
+ ws.ws_row = height;
+ ws.ws_col = width;
+ ret = ioctl(masterfd_stdout, TIOCSWINSZ, &ws);
+ if (ret == -1) {
+ nwarn("Failed to set process pty terminal size");
+ }
+ beg = newline + 1;
+ newline = strchrnul(beg, '\n');
+ }
+ if (num_read == (CTLBUFSZ - 1) && beg == ctlbuf) {
+ /*
+ * We did not find a newline in the entire buffer.
+ * This shouldn't happen as our buffer is larger than
+ * the message that we expect to receive.
+ */
+ nwarn("Could not find newline in entire buffer\n");
+ } else if (*beg == '\0') {
+ /* We exhausted all messages that were complete */
+ readptr = ctlbuf;
+ readsz = CTLBUFSZ - 1;
+ } else {
+ /*
+ * We copy remaining data to beginning of buffer
+ * and advance readptr after that.
+ */
+ int cp_rem = 0;
+ do {
+ ctlbuf[cp_rem++] = *beg++;
+ } while (*beg != '\0');
+ readptr = ctlbuf + cp_rem;
+ readsz = CTLBUFSZ - 1 - cp_rem;
+ }
+
+ return G_SOURCE_CONTINUE;
+}
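
`ctrl_cb()` reads newline-terminated control messages of the form `<msg-type> <height> <width>` from the ctl FIFO and applies them with TIOCSWINSZ. A sketch of a client sending one resize message (both the FIFO path and the msg-type value are assumptions for illustration):

```c
/* Illustration: send one resize message into the ctl FIFO read by ctrl_cb().
 * Both the path and the msg-type value (1) are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *ctl_path = "/path/to/bundle/ctl";   /* <bundle_path>/ctl */
	int fd = open(ctl_path, O_WRONLY);
	if (fd < 0) {
		perror("open ctl fifo");
		return 1;
	}

	char msg[64];
	int len = snprintf(msg, sizeof(msg), "%d %d %d\n", 1, 40, 120);
	if (write(fd, msg, len) != (ssize_t)len)
		perror("write resize message");

	close(fd);
	return 0;
}
```
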
+
+static gboolean terminal_accept_cb(int fd, G_GNUC_UNUSED GIOCondition condition, G_GNUC_UNUSED gpointer user_data)
+{
+ const char *csname = user_data;
+ struct file_t console;
+ int connfd = -1;
+ struct termios tset;
+
+ ninfo("about to accept from console_socket_fd: %d", fd);
+ connfd = accept4(fd, NULL, NULL, SOCK_CLOEXEC);
+ if (connfd < 0) {
+ nwarn("Failed to accept console-socket connection");
+ return G_SOURCE_CONTINUE;
+ }
+
+ /* Not accepting anything else. */
+ close(fd);
+ unlink(csname);
+
+ /* We exit if this fails. */
+ ninfo("about to recvfd from connfd: %d", connfd);
+ console = recvfd(connfd);
+
+ ninfo("console = {.name = '%s'; .fd = %d}", console.name, console.fd);
+ free(console.name);
+
+ /* We change the terminal settings to match kube settings */
+ if (tcgetattr(console.fd, &tset) == -1)
+ pexit("Failed to get console terminal settings");
+
+ tset.c_oflag |= ONLCR;
+
+ if (tcsetattr(console.fd, TCSANOW, &tset) == -1)
+ pexit("Failed to set console terminal settings");
+
+ /* We only have a single fd for both pipes, so we just treat it as
+ * stdout. stderr is ignored. */
+ masterfd_stdin = console.fd;
+ masterfd_stdout = console.fd;
+
+ /* Clean up everything */
+ close(connfd);
+
+ return G_SOURCE_CONTINUE;
+}
+
+static void
+runtime_exit_cb (G_GNUC_UNUSED GPid pid, int status, G_GNUC_UNUSED gpointer user_data)
+{
+ runtime_status = status;
+ g_main_loop_quit (main_loop);
+}
+
+static void
+container_exit_cb (G_GNUC_UNUSED GPid pid, int status, G_GNUC_UNUSED gpointer user_data)
+{
+ ninfo("container %d exited with status %d\n", pid, status);
+ container_status = status;
+ g_main_loop_quit (main_loop);
+}
+
+static void write_sync_fd(int sync_pipe_fd, int res, const char *message)
+{
+ _cleanup_free_ char *escaped_message = NULL;
+ _cleanup_free_ char *json = NULL;
+ const char *res_key;
+ ssize_t len;
+
+ if (sync_pipe_fd == -1)
+ return;
+
+ if (opt_exec)
+ res_key = "exit_code";
+ else
+ res_key = "pid";
+
+ if (message) {
+ escaped_message = escape_json_string(message);
+ json = g_strdup_printf ("{\"%s\": %d, \"message\": \"%s\"}\n", res_key, res, escaped_message);
+ } else {
+ json = g_strdup_printf ("{\"%s\": %d}\n", res_key, res);
+ }
+
+ len = strlen(json);
+ if (write_all(sync_pipe_fd, json, len) != len) {
+ pexit("Unable to send container stderr message to parent");
+ }
+}
+
+static char *setup_console_socket(void)
+{
+ struct sockaddr_un addr = {0};
+ _cleanup_free_ const char *tmpdir = g_get_tmp_dir();
+ _cleanup_free_ char *csname = g_build_filename(tmpdir, "conmon-term.XXXXXX", NULL);
+ /*
+ * Generate a temporary name. Is this unsafe? Probably, but we can
+ * replace it with a rename(2) setup if necessary.
+ */
+
+ int unusedfd = g_mkstemp(csname);
+ if (unusedfd < 0)
+ pexit("Failed to generate random path for console-socket");
+ close(unusedfd);
+
+ addr.sun_family = AF_UNIX;
+ strncpy(addr.sun_path, csname, sizeof(addr.sun_path)-1);
+
+ ninfo("addr{sun_family=AF_UNIX, sun_path=%s}", addr.sun_path);
+
+ /* Bind to the console socket path. */
+ console_socket_fd = socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0);
+ if (console_socket_fd < 0)
+ pexit("Failed to create console-socket");
+ if (fchmod(console_socket_fd, 0700))
+ pexit("Failed to change console-socket permissions");
+ /* XXX: This should be handled with a rename(2). */
+ if (unlink(csname) < 0)
+ pexit("Failed to unlink temporary random path");
+ if (bind(console_socket_fd, (struct sockaddr *) &addr, sizeof(addr)) < 0)
+ pexit("Failed to bind to console-socket");
+ if (listen(console_socket_fd, 128) < 0)
+ pexit("Failed to listen on console-socket");
+
+ return g_strdup(csname);
+}
+
+static char *setup_attach_socket(void)
+{
+ _cleanup_free_ char *attach_sock_path = NULL;
+ char *attach_symlink_dir_path;
+ struct sockaddr_un attach_addr = {0};
+ attach_addr.sun_family = AF_UNIX;
+
+ /*
+ * Create a symlink so we don't exceed unix domain socket
+ * path length limit.
+ */
+ attach_symlink_dir_path = g_build_filename("/var/run/crio", opt_cuuid, NULL);
+ if (unlink(attach_symlink_dir_path) == -1 && errno != ENOENT)
+ pexit("Failed to remove existing symlink for attach socket directory");
+
+ if (symlink(opt_bundle_path, attach_symlink_dir_path) == -1)
+ pexit("Failed to create symlink for attach socket");
+
+ attach_sock_path = g_build_filename("/var/run/crio", opt_cuuid, "attach", NULL);
+ ninfo("attach sock path: %s", attach_sock_path);
+
+ strncpy(attach_addr.sun_path, attach_sock_path, sizeof(attach_addr.sun_path) - 1);
+ ninfo("addr{sun_family=AF_UNIX, sun_path=%s}", attach_addr.sun_path);
+
+ /*
+ * We make the socket non-blocking to avoid a race where a client aborts its connection
+ * before the server gets a chance to call accept. In that scenario, the server's
+ * accept would block until a new client connection comes in.
+ */
+ attach_socket_fd = socket(AF_UNIX, SOCK_SEQPACKET|SOCK_NONBLOCK|SOCK_CLOEXEC, 0);
+ if (attach_socket_fd == -1)
+ pexit("Failed to create attach socket");
+
+ if (fchmod(attach_socket_fd, 0700))
+ pexit("Failed to change attach socket permissions");
+
+ if (bind(attach_socket_fd, (struct sockaddr *)&attach_addr, sizeof(struct sockaddr_un)) == -1)
+ pexit("Failed to bind attach socket: %s", attach_sock_path);
+
+ if (listen(attach_socket_fd, 10) == -1)
+ pexit("Failed to listen on attach socket: %s", attach_sock_path);
+
+ g_unix_fd_add (attach_socket_fd, G_IO_IN, attach_cb, NULL);
+
+ return attach_symlink_dir_path;
+}
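
The symlink indirection exists because AF_UNIX socket paths are limited to the size of `sun_path` (typically 108 bytes on Linux), which long bundle paths can easily exceed; binding through a short `/var/run/crio/<uuid>` link keeps the address within that limit. A quick check of the limit:

```c
/* The sun_path limit that motivates the /var/run/crio/<uuid> symlink. */
#include <stdio.h>
#include <sys/un.h>

int main(void)
{
	printf("max AF_UNIX socket path: %zu bytes\n",
	       sizeof(((struct sockaddr_un *)0)->sun_path));
	return 0;
}
```
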
+
+static void setup_terminal_control_fifo()
+{
+ _cleanup_free_ char *ctl_fifo_path = g_build_filename(opt_bundle_path, "ctl", NULL);
+ ninfo("ctl fifo path: %s", ctl_fifo_path);
+
+ /* Setup fifo for reading in terminal resize and other stdio control messages */
+
+ if (mkfifo(ctl_fifo_path, 0666) == -1)
+ pexit("Failed to mkfifo at %s", ctl_fifo_path);
+
+ terminal_ctrl_fd = open(ctl_fifo_path, O_RDONLY|O_NONBLOCK|O_CLOEXEC);
+ if (terminal_ctrl_fd == -1)
+ pexit("Failed to open control fifo");
+
+ /*
+ * Open a dummy writer to prevent getting a flood of POLLHUPs when
+ * the last writer closes.
+ */
+ int dummyfd = open(ctl_fifo_path, O_WRONLY|O_CLOEXEC);
+ if (dummyfd == -1)
+ pexit("Failed to open dummy writer for fifo");
+
+ g_unix_fd_add (terminal_ctrl_fd, G_IO_IN, ctrl_cb, NULL);
+
+ ninfo("terminal_ctrl_fd: %d", terminal_ctrl_fd);
+}
+
+static void setup_oom_handling(int container_pid)
+{
+ /* Setup OOM notification for container process */
+ _cleanup_free_ char *memory_cgroup_path = process_cgroup_subsystem_path(container_pid, "memory");
+ _cleanup_close_ int cfd = -1;
+ int ofd = -1; /* Not closed */
+ if (!memory_cgroup_path) {
+ nexit("Failed to get memory cgroup path");
+ }
+
+ _cleanup_free_ char *memory_cgroup_file_path = g_build_filename(memory_cgroup_path, "cgroup.event_control", NULL);
+
+ if ((cfd = open(memory_cgroup_file_path, O_WRONLY | O_CLOEXEC)) == -1) {
+ nwarn("Failed to open %s", memory_cgroup_file_path);
+ return;
+ }
+
+ _cleanup_free_ char *memory_cgroup_file_oom_path = g_build_filename(memory_cgroup_path, "memory.oom_control", NULL);
+ if ((ofd = open(memory_cgroup_file_oom_path, O_RDONLY | O_CLOEXEC)) == -1)
+ pexit("Failed to open %s", memory_cgroup_file_oom_path);
+
+ if ((oom_event_fd = eventfd(0, EFD_CLOEXEC)) == -1)
+ pexit("Failed to create eventfd");
+
+ _cleanup_free_ char *data = g_strdup_printf("%d %d", oom_event_fd, ofd);
+ if (write_all(cfd, data, strlen(data)) < 0)
+ pexit("Failed to write to cgroup.event_control");
+
+ g_unix_fd_add (oom_event_fd, G_IO_IN, oom_cb, NULL);
+}
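
OOM notification here relies on the cgroup-v1 `cgroup.event_control` interface: writing `"<eventfd> <fd of memory.oom_control>"` into it makes the kernel bump an 8-byte counter on the eventfd whenever the group hits an OOM. A standalone sketch of that registration (the cgroup path is an assumption):

```c
/* Standalone sketch of the cgroup-v1 OOM notification interface used above:
 * register an eventfd with cgroup.event_control, then block until the kernel
 * posts an 8-byte counter on it. The cgroup path is illustrative. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void)
{
	const char *cg = "/sys/fs/cgroup/memory/mygroup";   /* assumed cgroup */
	char path[256];
	int efd = eventfd(0, EFD_CLOEXEC);

	snprintf(path, sizeof(path), "%s/memory.oom_control", cg);
	int ofd = open(path, O_RDONLY | O_CLOEXEC);

	snprintf(path, sizeof(path), "%s/cgroup.event_control", cg);
	int cfd = open(path, O_WRONLY | O_CLOEXEC);
	if (efd < 0 || ofd < 0 || cfd < 0) {
		perror("setup");
		return 1;
	}

	/* "<eventfd> <oom_control fd>" tells the kernel which fd to signal. */
	char reg[64];
	int len = snprintf(reg, sizeof(reg), "%d %d", efd, ofd);
	if (write(cfd, reg, len) != len) {
		perror("register oom event");
		return 1;
	}

	uint64_t count;
	if (read(efd, &count, sizeof(count)) == sizeof(count))
		printf("OOM events: %llu\n", (unsigned long long)count);
	return 0;
}
```
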
+
int main(int argc, char *argv[])
{
- int ret, runtime_status;
+ int ret;
char cwd[PATH_MAX];
- char default_pid_file[PATH_MAX];
+ _cleanup_free_ char *default_pid_file = NULL;
+ _cleanup_free_ char *csname = NULL;
GError *err = NULL;
- _cleanup_free_ char *contents;
- int cpid = -1;
- int status;
- pid_t pid, create_pid;
- _cleanup_close_ int logfd = -1;
- _cleanup_close_ int masterfd_stdout = -1;
- _cleanup_close_ int masterfd_stderr = -1;
- _cleanup_close_ int epfd = -1;
- _cleanup_close_ int csfd = -1;
+ _cleanup_free_ char *contents = NULL;
+ int container_pid = -1;
+ pid_t main_pid, create_pid;
/* Used for !terminal cases. */
+ int slavefd_stdin = -1;
int slavefd_stdout = -1;
int slavefd_stderr = -1;
- char csname[PATH_MAX] = "/tmp/conmon-term.XXXXXXXX";
char buf[BUF_SIZE];
int num_read;
- struct epoll_event ev;
- struct epoll_event evlist[MAX_EVENTS];
int sync_pipe_fd = -1;
- char *sync_pipe, *endptr;
- int len;
- int num_stdio_fds = 0;
+ int start_pipe_fd = -1;
GError *error = NULL;
GOptionContext *context;
- _cleanup_gstring_ GString *cmd = NULL;
+ GPtrArray *runtime_argv = NULL;
+ _cleanup_close_ int dev_null_r = -1;
+ _cleanup_close_ int dev_null_w = -1;
+ int fds[2];
+
+ main_loop = g_main_loop_new (NULL, FALSE);
/* Command line parameters */
context = g_option_context_new("- conmon utility");
- g_option_context_add_main_entries(context, entries, "conmon");
+ g_option_context_add_main_entries(context, opt_entries, "conmon");
if (!g_option_context_parse(context, &argc, &argv, &error)) {
g_print("option parsing failed: %s\n", error->message);
exit(1);
}
- if (cid == NULL)
+ if (opt_cid == NULL)
nexit("Container ID not provided. Use --cid");
- if (runtime_path == NULL)
+ if (!opt_exec && opt_cuuid == NULL)
+ nexit("Container UUID not provided. Use --cuuid");
+
+ if (opt_runtime_path == NULL)
nexit("Runtime path not provided. Use --runtime");
- if (bundle_path == NULL && !exec) {
+ if (opt_bundle_path == NULL && !opt_exec) {
if (getcwd(cwd, sizeof(cwd)) == NULL) {
nexit("Failed to get working directory");
}
- bundle_path = cwd;
+ opt_bundle_path = cwd;
}
- if (pid_file == NULL) {
- if (snprintf(default_pid_file, sizeof(default_pid_file),
- "%s/pidfile-%s", cwd, cid) < 0) {
- nexit("Failed to generate the pidfile path");
- }
- pid_file = default_pid_file;
+ dev_null_r = open("/dev/null", O_RDONLY | O_CLOEXEC);
+ if (dev_null_r < 0)
+ pexit("Failed to open /dev/null");
+
+ dev_null_w = open("/dev/null", O_WRONLY | O_CLOEXEC);
+ if (dev_null_w < 0)
+ pexit("Failed to open /dev/null");
+
+ if (opt_exec && opt_exec_process_spec == NULL) {
+ nexit("Exec process spec path not provided. Use --exec-process-spec");
}
- if (log_path == NULL)
+ if (opt_pid_file == NULL) {
+ default_pid_file = g_strdup_printf ("%s/pidfile-%s", cwd, opt_cid);
+ opt_pid_file = default_pid_file;
+ }
+
+ if (opt_log_path == NULL)
nexit("Log file path not provided. Use --log-path");
- /* Environment variables */
- sync_pipe = getenv("_OCI_SYNCPIPE");
- if (sync_pipe) {
- errno = 0;
- sync_pipe_fd = strtol(sync_pipe, &endptr, 10);
- if (errno != 0 || *endptr != '\0')
- pexit("unable to parse _OCI_SYNCPIPE");
+ start_pipe_fd = get_pipe_fd_from_env("_OCI_STARTPIPE");
+ if (start_pipe_fd >= 0) {
+ /* Block for an initial write to the start pipe before
+ spawning any children or exiting, to ensure the
+ parent can put us in the right cgroup. */
+ read(start_pipe_fd, buf, BUF_SIZE);
+ close(start_pipe_fd);
}
+ /* In the create-container case we double-fork in
+ order to disconnect from the parent, as we want to
+ continue in a daemon-like way */
+ main_pid = fork();
+ if (main_pid < 0) {
+ pexit("Failed to fork the create command");
+ } else if (main_pid != 0) {
+ exit(0);
+ }
+
+ /* Disconnect stdio from parent. We need to do this, because
+ the parent is waiting for the stdout to end when the intermediate
+ child dies */
+ if (dup2(dev_null_r, STDIN_FILENO) < 0)
+ pexit("Failed to dup over stdin");
+ if (dup2(dev_null_w, STDOUT_FILENO) < 0)
+ pexit("Failed to dup over stdout");
+ if (dup2(dev_null_w, STDERR_FILENO) < 0)
+ pexit("Failed to dup over stderr");
+
+ /* Create a new session group */
+ setsid();
+
+ /* Environment variables */
+ sync_pipe_fd = get_pipe_fd_from_env("_OCI_SYNCPIPE");
+
/* Open the log path file. */
- logfd = open(log_path, O_WRONLY | O_APPEND | O_CREAT);
- if (logfd < 0)
+ log_fd = open(opt_log_path, O_WRONLY | O_APPEND | O_CREAT | O_CLOEXEC, 0600);
+ if (log_fd < 0)
pexit("Failed to open log file");
/*
@@ -311,37 +1143,9 @@ int main(int argc, char *argv[])
pexit("Failed to set as subreaper");
}
- if (terminal) {
- struct sockaddr_un addr = {0};
-
- /*
- * Generate a temporary name. Is this unsafe? Probably, but we can
- * replace it with a rename(2) setup if necessary.
- */
-
- int unusedfd = g_mkstemp(csname);
- if (unusedfd < 0)
- pexit("Failed to generate random path for console-socket");
- close(unusedfd);
-
- addr.sun_family = AF_UNIX;
- strncpy(addr.sun_path, csname, sizeof(addr.sun_path)-1);
-
- ninfo("addr{sun_family=AF_UNIX, sun_path=%s}", addr.sun_path);
-
- /* Bind to the console socket path. */
- csfd = socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0);
- if (csfd < 0)
- pexit("Failed to create console-socket");
- /* XXX: This should be handled with a rename(2). */
- if (unlink(csname) < 0)
- pexit("Failed to unlink temporary ranom path");
- if (bind(csfd, (struct sockaddr *) &addr, sizeof(addr)) < 0)
- pexit("Failed to bind to console-socket");
- if (listen(csfd, 128) < 0)
- pexit("Failed to listen on console-socket");
+ if (opt_terminal) {
+ csname = setup_console_socket();
} else {
- int fds[2];
/*
* Create a "fake" master fd so that we can use the same epoll code in
@@ -352,50 +1156,71 @@ int main(int argc, char *argv[])
* used anything else (and it wouldn't be a good idea to create a new
* pty pair in the host).
*/
- if (pipe(fds) < 0)
+
+ if (opt_stdin) {
+ if (pipe2(fds, O_CLOEXEC) < 0)
+ pexit("Failed to create !terminal stdin pipe");
+
+ masterfd_stdin = fds[1];
+ slavefd_stdin = fds[0];
+ }
+
+ if (pipe2(fds, O_CLOEXEC) < 0)
pexit("Failed to create !terminal stdout pipe");
masterfd_stdout = fds[0];
slavefd_stdout = fds[1];
-
- if (pipe(fds) < 0)
- pexit("Failed to create !terminal stderr pipe");
-
- masterfd_stderr = fds[0];
- slavefd_stderr = fds[1];
}
- cmd = g_string_new(runtime_path);
+ /* We always create a stderr pipe, because that way we can capture
+ runc stderr messages before the tty is created */
+ if (pipe2(fds, O_CLOEXEC) < 0)
+ pexit("Failed to create stderr pipe");
+
+ masterfd_stderr = fds[0];
+ slavefd_stderr = fds[1];
+
+ runtime_argv = g_ptr_array_new();
+ add_argv(runtime_argv,
+ opt_runtime_path,
+ NULL);
/* Generate the cmdline. */
- if (!exec && systemd_cgroup)
- g_string_append_printf(cmd, " --systemd-cgroup");
+ if (!opt_exec && opt_systemd_cgroup)
+ add_argv(runtime_argv,
+ "--systemd-cgroup",
+ NULL);
- if (exec)
- g_string_append_printf(cmd, " exec -d --pid-file %s", pid_file);
- else
- g_string_append_printf(cmd, " create --bundle %s --pid-file %s", bundle_path, pid_file);
+ if (opt_exec) {
+ add_argv(runtime_argv,
+ "exec", "-d",
+ "--pid-file", opt_pid_file,
+ NULL);
+ } else {
+ add_argv(runtime_argv,
+ "create",
+ "--bundle", opt_bundle_path,
+ "--pid-file", opt_pid_file,
+ NULL);
+ }
- if (terminal)
- g_string_append_printf(cmd, " --console-socket %s", csname);
-
- /* Container name comes last. */
- g_string_append_printf(cmd, " %s", cid);
+ if (csname != NULL) {
+ add_argv(runtime_argv,
+ "--console-socket", csname,
+ NULL);
+ }
/* Set the exec arguments. */
- if (exec) {
- /*
- * FIXME: This code is broken if argv[1] contains spaces or other
- * similar characters that shells don't like. It's a bit silly
- * that we're doing things inside a shell at all -- this should
- * all be done in arrays.
- */
-
- int i;
- for (i = 1; i < argc; i++)
- g_string_append_printf(cmd, " %s", argv[i]);
+ if (opt_exec) {
+ add_argv(runtime_argv,
+ "--process", opt_exec_process_spec,
+ NULL);
}
+ /* Container name comes last. */
+ add_argv(runtime_argv, opt_cid, NULL);
+ end_argv(runtime_argv);
+
/*
* We have to fork here because the current runC API dups the stdio of the
* calling process over the container's fds. This is actually *very bad*
@@ -409,201 +1234,166 @@ int main(int argc, char *argv[])
if (create_pid < 0) {
pexit("Failed to fork the create command");
} else if (!create_pid) {
- char *argv[] = {"sh", "-c", cmd->str, NULL};
+ /* FIXME: This results in us not outputting runc error messages to crio's log. */
+ if (slavefd_stdin < 0)
+ slavefd_stdin = dev_null_r;
+ if (dup2(slavefd_stdin, STDIN_FILENO) < 0)
+ pexit("Failed to dup over stdout");
- /* We only need to touch the stdio if we have terminal=false. */
- /* FIXME: This results in us not outputting runc error messages to ocid's log. */
- if (slavefd_stdout >= 0) {
- if (dup2(slavefd_stdout, STDOUT_FILENO) < 0)
- pexit("Failed to dup over stdout");
- }
- if (slavefd_stderr >= 0) {
- if (dup2(slavefd_stderr, STDERR_FILENO) < 0)
- pexit("Failed to dup over stderr");
- }
+ if (slavefd_stdout < 0)
+ slavefd_stdout = dev_null_w;
+ if (dup2(slavefd_stdout, STDOUT_FILENO) < 0)
+ pexit("Failed to dup over stdout");
- /* Exec into the process. TODO: Don't use the shell. */
- execv("/bin/sh", argv);
+ if (slavefd_stderr < 0)
+ slavefd_stderr = slavefd_stdout;
+ if (dup2(slavefd_stderr, STDERR_FILENO) < 0)
+ pexit("Failed to dup over stderr");
+
+ execv(g_ptr_array_index(runtime_argv,0), (char **)runtime_argv->pdata);
exit(127);
}
+ g_ptr_array_free (runtime_argv, TRUE);
+
/* The runtime has that fd now. We don't need to touch it anymore. */
+ close(slavefd_stdin);
close(slavefd_stdout);
close(slavefd_stderr);
- /* Get the console fd. */
+ /* Map pid to its handler. */
+ GHashTable *pid_to_handler = g_hash_table_new (g_int_hash, g_int_equal);
+ g_hash_table_insert (pid_to_handler, &create_pid, runtime_exit_cb);
+
/*
- * FIXME: If runc fails to start a container, we won't bail because we're
- * busy waiting for requests. The solution probably involves
- * epoll(2) and a signalfd(2). This causes a lot of issues.
+ * Glib does not support SIGCHLD so use SIGUSR1 with the same semantic. We will
+ * catch SIGCHLD and raise(SIGUSR1) in the signal handler.
*/
- if (terminal) {
- struct file_t console;
- int connfd = -1;
+ g_unix_signal_add (SIGUSR1, on_sigusr1_cb, pid_to_handler);
- ninfo("about to accept from csfd: %d", csfd);
- connfd = accept4(csfd, NULL, NULL, SOCK_CLOEXEC);
- if (connfd < 0)
- pexit("Failed to accept console-socket connection");
-
- /* Not accepting anything else. */
- close(csfd);
- unlink(csname);
-
- /* We exit if this fails. */
- ninfo("about to recvfd from connfd: %d", connfd);
- console = recvfd(connfd);
-
- ninfo("console = {.name = '%s'; .fd = %d}", console.name, console.fd);
- free(console.name);
-
- /* We only have a single fd for both pipes, so we just treat it as
- * stdout. stderr is ignored. */
- masterfd_stdout = console.fd;
- masterfd_stderr = -1;
-
- /* Clean up everything */
- close(connfd);
- }
+ if (signal(SIGCHLD, on_sigchld) == SIG_ERR)
+ pexit("Failed to set handler for SIGCHLD");
ninfo("about to waitpid: %d", create_pid);
+ if (csname != NULL) {
+ guint terminal_watch = g_unix_fd_add (console_socket_fd, G_IO_IN, terminal_accept_cb, csname);
+ /* Process any SIGCHLD we may have missed before the signal handler was in place. */
+ check_child_processes (pid_to_handler);
+ g_main_loop_run (main_loop);
+ g_source_remove (terminal_watch);
+ } else {
+ int ret;
+ /* Wait for our create child to exit with the return code. */
+ do
+ ret = waitpid(create_pid, &runtime_status, 0);
+ while (ret < 0 && errno == EINTR);
+ if (ret < 0) {
+ int old_errno = errno;
+ kill(create_pid, SIGKILL);
+ errno = old_errno;
+ pexit("Failed to wait for `runtime %s`", opt_exec ? "exec" : "create");
+ }
- /* Wait for our create child to exit with the return code. */
- if (waitpid(create_pid, &runtime_status, 0) < 0) {
- int old_errno = errno;
- kill(create_pid, SIGKILL);
- errno = old_errno;
- pexit("Failed to wait for `runtime %s`", exec ? "exec" : "create");
}
- if (!WIFEXITED(runtime_status) || WEXITSTATUS(runtime_status) != 0)
+
+ if (!WIFEXITED(runtime_status) || WEXITSTATUS(runtime_status) != 0) {
+ if (sync_pipe_fd > 0) {
+ /*
+ * Read from container stderr for any error and send it to the parent.
+ * We send -1 as the pid to signal to the parent that creating the container has failed.
+ */
+ num_read = read(masterfd_stderr, buf, BUF_SIZE);
+ if (num_read > 0) {
+ buf[num_read] = '\0';
+ write_sync_fd(sync_pipe_fd, -1, buf);
+ }
+ }
nexit("Failed to create container: exit status %d", WEXITSTATUS(runtime_status));
+ }
+
+ if (opt_terminal && masterfd_stdout == -1)
+ nexit("Runtime did not set up terminal");
/* Read the pid so we can wait for the process to exit */
- g_file_get_contents(pid_file, &contents, NULL, &err);
+ g_file_get_contents(opt_pid_file, &contents, NULL, &err);
if (err) {
nwarn("Failed to read pidfile: %s", err->message);
g_error_free(err);
exit(1);
}
- cpid = atoi(contents);
- ninfo("container PID: %d", cpid);
+ container_pid = atoi(contents);
+ ninfo("container PID: %d", container_pid);
+
+ g_hash_table_insert (pid_to_handler, &container_pid, container_exit_cb);
+
+ /* Setup endpoint for attach */
+ _cleanup_free_ char *attach_symlink_dir_path = NULL;
+ if (!opt_exec) {
+ attach_symlink_dir_path = setup_attach_socket();
+ }
+
+ if (!opt_exec) {
+ setup_terminal_control_fifo();
+ }
/* Send the container pid back to parent */
- if (sync_pipe_fd > 0 && !exec) {
- len = snprintf(buf, BUF_SIZE, "{\"pid\": %d}\n", cpid);
- if (len < 0 || write(sync_pipe_fd, buf, len) != len) {
- pexit("unable to send container pid to parent");
- }
+ if (!opt_exec) {
+ write_sync_fd(sync_pipe_fd, container_pid, NULL);
}
- /* Create epoll_ctl so that we can handle read/write events. */
- /*
- * TODO: Switch to libuv so that we can also implement exec as well as
- * attach and other important things. Using epoll directly is just
- * really nasty.
- */
- epfd = epoll_create(5);
- if (epfd < 0)
- pexit("epoll_create");
- ev.events = EPOLLIN;
+ setup_oom_handling(container_pid);
+
if (masterfd_stdout >= 0) {
- ev.data.fd = masterfd_stdout;
- if (epoll_ctl(epfd, EPOLL_CTL_ADD, ev.data.fd, &ev) < 0)
- pexit("Failed to add console masterfd_stdout to epoll");
- num_stdio_fds++;
+ g_unix_fd_add (masterfd_stdout, G_IO_IN, stdio_cb, GINT_TO_POINTER(STDOUT_PIPE));
}
if (masterfd_stderr >= 0) {
- ev.data.fd = masterfd_stderr;
- if (epoll_ctl(epfd, EPOLL_CTL_ADD, ev.data.fd, &ev) < 0)
- pexit("Failed to add console masterfd_stderr to epoll");
- num_stdio_fds++;
+ g_unix_fd_add (masterfd_stderr, G_IO_IN, stdio_cb, GINT_TO_POINTER(STDERR_PIPE));
}
- /* Log all of the container's output. */
- while (num_stdio_fds > 0) {
- int ready = epoll_wait(epfd, evlist, MAX_EVENTS, -1);
- if (ready < 0)
- continue;
-
- for (int i = 0; i < ready; i++) {
- if (evlist[i].events & EPOLLIN) {
- int masterfd = evlist[i].data.fd;
- stdpipe_t pipe;
- if (masterfd == masterfd_stdout)
- pipe = STDOUT_PIPE;
- else if (masterfd == masterfd_stderr)
- pipe = STDERR_PIPE;
- else {
- nwarn("unknown pipe fd");
- goto out;
- }
-
- num_read = read(masterfd, buf, BUF_SIZE);
- if (num_read <= 0)
- goto out;
-
- if (write_k8s_log(logfd, pipe, buf, num_read) < 0) {
- nwarn("write_k8s_log failed");
- goto out;
- }
- } else if (evlist[i].events & (EPOLLHUP | EPOLLERR)) {
- printf("closing fd %d\n", evlist[i].data.fd);
- if (close(evlist[i].data.fd) < 0)
- pexit("close");
- num_stdio_fds--;
- }
- }
+ if (opt_timeout > 0) {
+ g_timeout_add_seconds (opt_timeout, timeout_cb, NULL);
}
-out:
- /* Wait for the container process and record its exit code */
- while ((pid = waitpid(-1, &status, 0)) > 0) {
- int exit_status = WEXITSTATUS(status);
+ check_child_processes(pid_to_handler);
- printf("PID %d exited with status %d\n", pid, exit_status);
- if (pid == cpid) {
- if (!exec) {
- _cleanup_free_ char *status_str = NULL;
- ret = asprintf(&status_str, "%d", exit_status);
- if (ret < 0) {
- pexit("Failed to allocate memory for status");
- }
- g_file_set_contents("exit", status_str,
- strlen(status_str), &err);
- if (err) {
- fprintf(stderr,
- "Failed to write %s to exit file: %s\n",
- status_str, err->message);
- g_error_free(err);
- exit(1);
- }
- } else {
- /* Send the command exec exit code back to the parent */
- if (sync_pipe_fd > 0) {
- len = snprintf(buf, BUF_SIZE, "{\"exit_code\": %d}\n", exit_status);
- if (len < 0 || write(sync_pipe_fd, buf, len) != len) {
- pexit("unable to send exit status");
- exit(1);
- }
- }
- }
- break;
- }
+ g_main_loop_run (main_loop);
+
+ /* Drain stdout and stderr */
+ if (masterfd_stdout != -1) {
+ g_unix_set_fd_nonblocking(masterfd_stdout, TRUE, NULL);
+ while (read_stdio(masterfd_stdout, STDOUT_PIPE, NULL))
+ ;
+ }
+ if (masterfd_stderr != -1) {
+ g_unix_set_fd_nonblocking(masterfd_stderr, TRUE, NULL);
+ while (read_stdio(masterfd_stderr, STDERR_PIPE, NULL))
+ ;
}
- if (exec && pid < 0 && errno == ECHILD && sync_pipe_fd > 0) {
- /*
- * waitpid failed and set errno to ECHILD:
- * The runtime exec call did not create any child
- * process and we can send the system() exit code
- * to the parent.
- */
- len = snprintf(buf, BUF_SIZE, "{\"exit_code\": %d}\n", WEXITSTATUS(runtime_status));
- if (len < 0 || write(sync_pipe_fd, buf, len) != len) {
- pexit("unable to send exit status");
- exit(1);
- }
+ int exit_status = -1;
+ const char *exit_message = NULL;
+
+ if (timed_out) {
+ kill(container_pid, SIGKILL);
+ exit_message = "command timed out";
+ } else {
+ exit_status = WEXITSTATUS(container_status);
+ }
+
+ if (!opt_exec) {
+ _cleanup_free_ char *status_str = g_strdup_printf("%d", exit_status);
+ if (!g_file_set_contents("exit", status_str, -1, &err))
+ nexit("Failed to write %s to exit file: %s\n",
+ status_str, err->message);
+ } else {
+ /* Send the command exec exit code back to the parent */
+ write_sync_fd(sync_pipe_fd, exit_status, exit_message);
+ }
+
+ if (attach_symlink_dir_path != NULL &&
+ unlink(attach_symlink_dir_path) == -1 && errno != ENOENT) {
+ pexit("Failed to remove symlink for attach socket directory");
}
return EXIT_SUCCESS;
diff --git a/vendor/github.com/kubernetes-incubator/cri-o/pkg/ocicni/ocicni.go b/vendor/github.com/kubernetes-incubator/cri-o/pkg/ocicni/ocicni.go
index 432e4cb6c..0204cc2f4 100644
--- a/vendor/github.com/kubernetes-incubator/cri-o/pkg/ocicni/ocicni.go
+++ b/vendor/github.com/kubernetes-incubator/cri-o/pkg/ocicni/ocicni.go
@@ -7,10 +7,10 @@ import (
"sort"
"sync"
- "github.com/sirupsen/logrus"
"github.com/containernetworking/cni/libcni"
cnitypes "github.com/containernetworking/cni/pkg/types"
"github.com/fsnotify/fsnotify"
+ "github.com/sirupsen/logrus"
)
type cniNetworkPlugin struct {
@@ -48,7 +48,8 @@ func (plugin *cniNetworkPlugin) monitorNetDir() {
select {
case event := <-watcher.Events:
logrus.Debugf("CNI monitoring event %v", event)
- if event.Op&fsnotify.Create != fsnotify.Create {
+ if event.Op&fsnotify.Create != fsnotify.Create &&
+ event.Op&fsnotify.Write != fsnotify.Write {
continue
}
diff --git a/vendor/github.com/kubernetes-incubator/cri-o/vendor.conf b/vendor/github.com/kubernetes-incubator/cri-o/vendor.conf
new file mode 100644
index 000000000..813f08484
--- /dev/null
+++ b/vendor/github.com/kubernetes-incubator/cri-o/vendor.conf
@@ -0,0 +1,73 @@
+k8s.io/kubernetes v1.6.5 https://github.com/kubernetes/kubernetes
+# https://github.com/kubernetes/client-go#compatibility-matrix
+k8s.io/client-go v3.0.0-beta.0 https://github.com/kubernetes/client-go
+k8s.io/apimachinery release-1.6 https://github.com/kubernetes/apimachinery
+k8s.io/apiserver release-1.6 https://github.com/kubernetes/apiserver
+#
+github.com/sirupsen/logrus v1.0.0
+github.com/containers/image 74e359348c7ce9e0caf4fa75aa8de3809cf41c46
+github.com/ostreedev/ostree-go master
+github.com/containers/storage f8cff0727cf0802f0752ca58d2c05ec5270a47d5
+github.com/containernetworking/cni v0.4.0
+google.golang.org/grpc v1.0.1-GA https://github.com/grpc/grpc-go
+github.com/opencontainers/selinux v1.0.0-rc1
+github.com/opencontainers/go-digest v1.0.0-rc0
+github.com/opencontainers/runtime-tools 6bcd3b417fd6962ea04dafdbc2c07444e750572d
+github.com/opencontainers/runc 45bde006ca8c90e089894508708bcf0e2cdf9e13
+github.com/opencontainers/image-spec v1.0.0
+github.com/opencontainers/runtime-spec v1.0.0
+github.com/juju/ratelimit acf38b000a03e4ab89e40f20f1e548f4e6ac7f72
+github.com/tchap/go-patricia v2.2.6
+gopkg.in/cheggaaa/pb.v1 v1.0.7
+gopkg.in/inf.v0 v0.9.0
+gopkg.in/yaml.v2 v2
+github.com/docker/docker d4f6db83c21cfc6af54fffb1f13e8acb7199f96a
+github.com/docker/spdystream ed496381df8283605c435b86d4fdd6f4f20b8c6e
+github.com/docker/distribution 7a8efe719e55bbfaff7bc5718cdf0ed51ca821df
+github.com/docker/go-units v0.3.1
+github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
+github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
+github.com/mistifyio/go-zfs v2.1.1
+github.com/ghodss/yaml 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
+github.com/imdario/mergo 0.2.2
+github.com/gorilla/mux v1.3.0
+github.com/gorilla/context v1.1
+github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
+github.com/mattn/go-runewidth v0.0.1
+github.com/seccomp/libseccomp-golang v0.9.0
+github.com/syndtr/gocapability e7cb7fa329f456b3855136a2642b197bad7366ba
+github.com/blang/semver v3.5.0
+github.com/BurntSushi/toml v0.2.0
+github.com/mitchellh/go-wordwrap ad45545899c7b13c020ea92b2072220eefad42b8
+github.com/golang/glog 23def4e6c14b4da8ac2ed8007337bc5eb5007998
+github.com/davecgh/go-spew v1.1.0
+github.com/go-openapi/spec 02fb9cd3430ed0581e0ceb4804d5d4b3cc702694
+github.com/go-openapi/jsonpointer 779f45308c19820f1a69e9a4cd965f496e0da10f
+github.com/go-openapi/jsonreference 36d33bfe519efae5632669801b180bf1a245da3b
+github.com/go-openapi/swag d5f8ebc3b1c55a4cf6489eeae7354f338cfe299e
+github.com/google/gofuzz 44d81051d367757e1c7c6a5a86423ece9afcf63c
+github.com/mailru/easyjson 99e922cf9de1bc0ab38310c277cff32c2147e747
+github.com/PuerkitoBio/purell v1.1.0
+github.com/PuerkitoBio/urlesc 5bd2802263f21d8788851d5305584c82a5c75d7e
+github.com/ugorji/go d23841a297e5489e787e72fceffabf9d2994b52a
+github.com/spf13/pflag 9ff6c6923cfffbcd502984b8e0c80539a94968b7
+golang.org/x/crypto 3fbbcd23f1cb824e69491a5930cfeff09b12f4d2
+golang.org/x/net c427ad74c6d7a814201695e9ffde0c5d400a7674
+golang.org/x/sys 4cd6d1a821c7175768725b55ca82f14683a29ea4
+golang.org/x/text f72d8390a633d5dfb0cc84043294db9f6c935756
+github.com/kr/pty v1.0.0
+github.com/gogo/protobuf v0.3
+github.com/golang/protobuf 8ee79997227bf9b34611aee7946ae64735e6fd93
+github.com/coreos/go-systemd v14
+github.com/coreos/pkg v3
+github.com/golang/groupcache b710c8433bd175204919eb38776e944233235d03
+github.com/fsnotify/fsnotify 7d7316ed6e1ed2de075aab8dfc76de5d158d66e1
+github.com/emicklei/go-restful 09691a3b6378b740595c1002f40c34dd5f218a22
+github.com/Azure/go-ansiterm 19f72df4d05d31cbe1c56bfc8045c96babff6c7e
+github.com/Microsoft/go-winio 78439966b38d69bf38227fbf57ac8a6fee70f69a
+github.com/Microsoft/hcsshim 43f9725307998e09f2e3816c2c0c36dc98f0c982
+github.com/pkg/errors v0.8.0
+github.com/godbus/dbus v4.0.0
+github.com/urfave/cli v1.19.1
+github.com/vbatts/tar-split v0.10.1
+github.com/renstrom/dedent v1.0.0