Update vendors.

Signed-off-by: Lantao Liu <lantaol@google.com>
Author: Lantao Liu
Date: 2018-02-03 01:04:54 +00:00
parent 809a99a39e
commit fa91b905e9
694 changed files with 226298 additions and 0 deletions

202
vendor/github.com/docker/distribution/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

131
vendor/github.com/docker/distribution/README.md generated vendored Normal file

@@ -0,0 +1,131 @@
# Distribution
The Docker toolset to pack, ship, store, and deliver content.
This repository's main product is the Docker Registry 2.0 implementation
for storing and distributing Docker images. It supersedes the
[docker/docker-registry](https://github.com/docker/docker-registry)
project with a new API design, focused around security and performance.
<img src="https://www.docker.com/sites/default/files/oyster-registry-3.png" width=200px/>
[![Circle CI](https://circleci.com/gh/docker/distribution/tree/master.svg?style=svg)](https://circleci.com/gh/docker/distribution/tree/master)
[![GoDoc](https://godoc.org/github.com/docker/distribution?status.svg)](https://godoc.org/github.com/docker/distribution)
This repository contains the following components:
|**Component** |Description |
|--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **registry** | An implementation of the [Docker Registry HTTP API V2](docs/spec/api.md) for use with docker 1.6+. |
| **libraries** | A rich set of libraries for interacting with distribution components. Please see [godoc](https://godoc.org/github.com/docker/distribution) for details. **Note**: These libraries are **unstable**. |
| **specifications** | _Distribution_ related specifications are available in [docs/spec](docs/spec) |
| **documentation** | Docker's full documentation set is available at [docs.docker.com](https://docs.docker.com). This repository [contains the subset](docs/) related just to the registry. |
### How does this integrate with Docker engine?
This project should provide an implementation of a V2 API for use in the [Docker
core project](https://github.com/docker/docker). The API should be embeddable
and simplify the process of securely pulling and pushing content from `docker`
daemons.
### What are the long term goals of the Distribution project?
The _Distribution_ project has the further long term goal of providing a
secure tool chain for distributing content. The specifications, APIs and tools
should be as useful with Docker as they are without.
Our goal is to design a professional grade and extensible content distribution
system that allows users to:
* Enjoy an efficient, secure, and reliable way to store, manage, package, and
exchange content
* Hack/roll their own on top of healthy open-source components
* Implement their own home-made solutions through good specs and solid
extension mechanisms.
## More about Registry 2.0
The new registry implementation provides the following benefits:
- faster push and pull
- new, more efficient implementation
- simplified deployment
- pluggable storage backend
- webhook notifications
For information on upcoming functionality, please see [ROADMAP.md](ROADMAP.md).
### Who needs to deploy a registry?
By default, Docker users pull images from Docker's public registry instance.
[Installing Docker](https://docs.docker.com/engine/installation/) gives users this
ability. Users can also push images to a repository on Docker's public registry,
if they have a [Docker Hub](https://hub.docker.com/) account.
For some users and even companies, this default behavior is sufficient. For
others, it is not.
For example, users with their own software products may want to maintain a
registry for private, company images. Also, you may wish to deploy your own
image repository for images used in testing or continuous integration. For these
use cases and others, [deploying your own registry instance](https://github.com/docker/docker.github.io/blob/master/registry/deploying.md)
may be the better choice.
### Migration to Registry 2.0
For those who have previously deployed their own registry based on the Registry
1.0 implementation and wish to deploy a Registry 2.0 while retaining images,
data migration is required. A tool to assist with migration efforts has been
created. For more information see [docker/migrator](https://github.com/docker/migrator).
## Contribute
Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute
issues, fixes, and patches to this project. If you are contributing code, see
the instructions for [building a development environment](BUILDING.md).
## Support
If any issues are encountered while using the _Distribution_ project, several
avenues are available for support:
<table>
<tr>
<th align="left">
IRC
</th>
<td>
#docker-distribution on FreeNode
</td>
</tr>
<tr>
<th align="left">
Issue Tracker
</th>
<td>
github.com/docker/distribution/issues
</td>
</tr>
<tr>
<th align="left">
Google Groups
</th>
<td>
https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution
</td>
</tr>
<tr>
<th align="left">
Mailing List
</th>
<td>
docker@dockerproject.org
</td>
</tr>
</table>
## License
This project is distributed under [Apache License, Version 2.0](LICENSE).

247
vendor/github.com/docker/distribution/digestset/set.go generated vendored Normal file

@@ -0,0 +1,247 @@
package digestset
import (
"errors"
"sort"
"strings"
"sync"
digest "github.com/opencontainers/go-digest"
)
var (
// ErrDigestNotFound is used when a matching digest
// could not be found in a set.
ErrDigestNotFound = errors.New("digest not found")
// ErrDigestAmbiguous is used when multiple digests
// are found in a set. None of the matching digests
// should be considered valid matches.
ErrDigestAmbiguous = errors.New("ambiguous digest string")
)
// Set is used to hold a unique set of digests which
// may be easily referenced by a string representation of
// the digest as well as a short representation.
// The uniqueness of the short representation is based on other
// digests in the set. If digests are omitted from this set,
// collisions in a larger set may not be detected, therefore it
// is important to always do short representation lookups on
// the complete set of digests. To mitigate collisions, an
// appropriately long short code should be used.
type Set struct {
mutex sync.RWMutex
entries digestEntries
}
// NewSet creates an empty set of digests
// which may have digests added.
func NewSet() *Set {
return &Set{
entries: digestEntries{},
}
}
// checkShortMatch checks whether two digests match as either whole
// values or short values. This function does not test equality,
// rather whether the second value could match against the first
// value.
func checkShortMatch(alg digest.Algorithm, hex, shortAlg, shortHex string) bool {
if len(hex) == len(shortHex) {
if hex != shortHex {
return false
}
if len(shortAlg) > 0 && string(alg) != shortAlg {
return false
}
} else if !strings.HasPrefix(hex, shortHex) {
return false
} else if len(shortAlg) > 0 && string(alg) != shortAlg {
return false
}
return true
}
// Lookup looks for a digest matching the given string representation.
// If no digests could be found ErrDigestNotFound will be returned
// with an empty digest value. If multiple matches are found
// ErrDigestAmbiguous will be returned with an empty digest value.
func (dst *Set) Lookup(d string) (digest.Digest, error) {
dst.mutex.RLock()
defer dst.mutex.RUnlock()
if len(dst.entries) == 0 {
return "", ErrDigestNotFound
}
var (
searchFunc func(int) bool
alg digest.Algorithm
hex string
)
dgst, err := digest.Parse(d)
if err == digest.ErrDigestInvalidFormat {
hex = d
searchFunc = func(i int) bool {
return dst.entries[i].val >= d
}
} else {
hex = dgst.Hex()
alg = dgst.Algorithm()
searchFunc = func(i int) bool {
if dst.entries[i].val == hex {
return dst.entries[i].alg >= alg
}
return dst.entries[i].val >= hex
}
}
idx := sort.Search(len(dst.entries), searchFunc)
if idx == len(dst.entries) || !checkShortMatch(dst.entries[idx].alg, dst.entries[idx].val, string(alg), hex) {
return "", ErrDigestNotFound
}
if dst.entries[idx].alg == alg && dst.entries[idx].val == hex {
return dst.entries[idx].digest, nil
}
if idx+1 < len(dst.entries) && checkShortMatch(dst.entries[idx+1].alg, dst.entries[idx+1].val, string(alg), hex) {
return "", ErrDigestAmbiguous
}
return dst.entries[idx].digest, nil
}
// Add adds the given digest to the set. An error will be returned
// if the given digest is invalid. If the digest already exists in the
// set, this operation will be a no-op.
func (dst *Set) Add(d digest.Digest) error {
if err := d.Validate(); err != nil {
return err
}
dst.mutex.Lock()
defer dst.mutex.Unlock()
entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
searchFunc := func(i int) bool {
if dst.entries[i].val == entry.val {
return dst.entries[i].alg >= entry.alg
}
return dst.entries[i].val >= entry.val
}
idx := sort.Search(len(dst.entries), searchFunc)
if idx == len(dst.entries) {
dst.entries = append(dst.entries, entry)
return nil
} else if dst.entries[idx].digest == d {
return nil
}
entries := append(dst.entries, nil)
copy(entries[idx+1:], entries[idx:len(entries)-1])
entries[idx] = entry
dst.entries = entries
return nil
}
// Remove removes the given digest from the set. An error will be
// returned if the given digest is invalid. If the digest does
// not exist in the set, this operation will be a no-op.
func (dst *Set) Remove(d digest.Digest) error {
if err := d.Validate(); err != nil {
return err
}
dst.mutex.Lock()
defer dst.mutex.Unlock()
entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
searchFunc := func(i int) bool {
if dst.entries[i].val == entry.val {
return dst.entries[i].alg >= entry.alg
}
return dst.entries[i].val >= entry.val
}
idx := sort.Search(len(dst.entries), searchFunc)
// Not found if idx is after or value at idx is not digest
if idx == len(dst.entries) || dst.entries[idx].digest != d {
return nil
}
entries := dst.entries
copy(entries[idx:], entries[idx+1:])
entries = entries[:len(entries)-1]
dst.entries = entries
return nil
}
// All returns all the digests in the set
func (dst *Set) All() []digest.Digest {
dst.mutex.RLock()
defer dst.mutex.RUnlock()
retValues := make([]digest.Digest, len(dst.entries))
for i := range dst.entries {
retValues[i] = dst.entries[i].digest
}
return retValues
}
// ShortCodeTable returns a map of Digest to unique short codes. The
// length argument is the minimum length; the maximum length may be the
// entire digest value if uniqueness cannot be achieved without the
// full value. This function will attempt to make short codes as short
// as possible to be unique.
func ShortCodeTable(dst *Set, length int) map[digest.Digest]string {
dst.mutex.RLock()
defer dst.mutex.RUnlock()
m := make(map[digest.Digest]string, len(dst.entries))
l := length
resetIdx := 0
for i := 0; i < len(dst.entries); i++ {
var short string
extended := true
for extended {
extended = false
if len(dst.entries[i].val) <= l {
short = dst.entries[i].digest.String()
} else {
short = dst.entries[i].val[:l]
for j := i + 1; j < len(dst.entries); j++ {
if checkShortMatch(dst.entries[j].alg, dst.entries[j].val, "", short) {
if j > resetIdx {
resetIdx = j
}
extended = true
} else {
break
}
}
if extended {
l++
}
}
}
m[dst.entries[i].digest] = short
if i >= resetIdx {
l = length
}
}
return m
}
type digestEntry struct {
alg digest.Algorithm
val string
digest digest.Digest
}
type digestEntries []*digestEntry
func (d digestEntries) Len() int {
return len(d)
}
func (d digestEntries) Less(i, j int) bool {
if d[i].val != d[j].val {
return d[i].val < d[j].val
}
return d[i].alg < d[j].alg
}
func (d digestEntries) Swap(i, j int) {
d[i], d[j] = d[j], d[i]
}
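
The digestset API above can be exercised roughly as follows. This is an illustrative sketch, not part of the vendored file; it assumes the package is imported from its upstream path alongside opencontainers/go-digest, and the digest values are arbitrary examples.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/digestset"
	digest "github.com/opencontainers/go-digest"
)

func main() {
	set := digestset.NewSet()

	// Add validates each digest and keeps the internal slice sorted.
	d1 := digest.FromString("layer-1")
	d2 := digest.FromString("layer-2")
	for _, d := range []digest.Digest{d1, d2} {
		if err := set.Add(d); err != nil {
			panic(err)
		}
	}

	// Lookup resolves a full digest string or an unambiguous hex prefix.
	found, err := set.Lookup(d1.Hex()[:12])
	fmt.Println(found, err)

	// ShortCodeTable reports a unique prefix of at least 8 characters
	// for every digest currently in the set.
	for dgst, short := range digestset.ShortCodeTable(set, 8) {
		fmt.Println(short, "=>", dgst)
	}
}
```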


@@ -0,0 +1,42 @@
package reference
import "path"
// IsNameOnly returns true if reference only contains a repo name.
func IsNameOnly(ref Named) bool {
if _, ok := ref.(NamedTagged); ok {
return false
}
if _, ok := ref.(Canonical); ok {
return false
}
return true
}
// FamiliarName returns the familiar name string
// for the given named, familiarizing if needed.
func FamiliarName(ref Named) string {
if nn, ok := ref.(normalizedNamed); ok {
return nn.Familiar().Name()
}
return ref.Name()
}
// FamiliarString returns the familiar string representation
// for the given reference, familiarizing if needed.
func FamiliarString(ref Reference) string {
if nn, ok := ref.(normalizedNamed); ok {
return nn.Familiar().String()
}
return ref.String()
}
// FamiliarMatch reports whether ref matches the specified pattern.
// See https://godoc.org/path#Match for supported patterns.
func FamiliarMatch(pattern string, ref Reference) (bool, error) {
matched, err := path.Match(pattern, FamiliarString(ref))
if namedRef, isNamed := ref.(Named); isNamed && !matched {
matched, _ = path.Match(pattern, FamiliarName(namedRef))
}
return matched, err
}
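
As a rough usage sketch (not part of the vendored file), the helpers above convert a fully qualified reference back to the short form shown in the Docker UI; ParseNormalizedNamed from the same package is assumed for constructing the reference, and the expected outputs in the comments reflect the normalization rules documented in this package.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	ref, err := reference.ParseNormalizedNamed("redis:latest")
	if err != nil {
		panic(err)
	}

	fmt.Println(ref.String())                  // docker.io/library/redis:latest
	fmt.Println(reference.FamiliarName(ref))   // redis
	fmt.Println(reference.FamiliarString(ref)) // redis:latest
	fmt.Println(reference.IsNameOnly(ref))     // false, the reference carries a tag

	// FamiliarMatch applies path.Match patterns to the familiar form.
	ok, _ := reference.FamiliarMatch("redis*", ref)
	fmt.Println(ok) // true
}
```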


@@ -0,0 +1,170 @@
package reference
import (
"errors"
"fmt"
"strings"
"github.com/docker/distribution/digestset"
"github.com/opencontainers/go-digest"
)
var (
legacyDefaultDomain = "index.docker.io"
defaultDomain = "docker.io"
officialRepoName = "library"
defaultTag = "latest"
)
// normalizedNamed represents a name which has been
// normalized and has a familiar form. A familiar name
// is what is used in Docker UI. An example normalized
// name is "docker.io/library/ubuntu" and corresponding
// familiar name of "ubuntu".
type normalizedNamed interface {
Named
Familiar() Named
}
// ParseNormalizedNamed parses a string into a named reference
// transforming a familiar name from Docker UI to a fully
// qualified reference. If the value may be an identifier
// use ParseAnyReference.
func ParseNormalizedNamed(s string) (Named, error) {
if ok := anchoredIdentifierRegexp.MatchString(s); ok {
return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-byte hexadecimal strings", s)
}
domain, remainder := splitDockerDomain(s)
var remoteName string
if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
remoteName = remainder[:tagSep]
} else {
remoteName = remainder
}
if strings.ToLower(remoteName) != remoteName {
return nil, errors.New("invalid reference format: repository name must be lowercase")
}
ref, err := Parse(domain + "/" + remainder)
if err != nil {
return nil, err
}
named, isNamed := ref.(Named)
if !isNamed {
return nil, fmt.Errorf("reference %s has no name", ref.String())
}
return named, nil
}
// splitDockerDomain splits a repository name into domain and remote-name strings.
// If no valid domain is found, the default domain is used. The repository name
// must already be validated.
func splitDockerDomain(name string) (domain, remainder string) {
i := strings.IndexRune(name, '/')
if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
domain, remainder = defaultDomain, name
} else {
domain, remainder = name[:i], name[i+1:]
}
if domain == legacyDefaultDomain {
domain = defaultDomain
}
if domain == defaultDomain && !strings.ContainsRune(remainder, '/') {
remainder = officialRepoName + "/" + remainder
}
return
}
// familiarizeName returns a shortened version of the name familiar
// to the Docker UI. Familiar names have the default domain
// "docker.io" and "library/" repository prefix removed.
// For example, "docker.io/library/redis" will have the familiar
// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
// Returns a familiarized name-only reference.
func familiarizeName(named namedRepository) repository {
repo := repository{
domain: named.Domain(),
path: named.Path(),
}
if repo.domain == defaultDomain {
repo.domain = ""
// Handle official repositories which have the pattern "library/<official repo name>"
if split := strings.Split(repo.path, "/"); len(split) == 2 && split[0] == officialRepoName {
repo.path = split[1]
}
}
return repo
}
func (r reference) Familiar() Named {
return reference{
namedRepository: familiarizeName(r.namedRepository),
tag: r.tag,
digest: r.digest,
}
}
func (r repository) Familiar() Named {
return familiarizeName(r)
}
func (t taggedReference) Familiar() Named {
return taggedReference{
namedRepository: familiarizeName(t.namedRepository),
tag: t.tag,
}
}
func (c canonicalReference) Familiar() Named {
return canonicalReference{
namedRepository: familiarizeName(c.namedRepository),
digest: c.digest,
}
}
// TagNameOnly adds the default tag "latest" to a reference if it only has
// a repo name.
func TagNameOnly(ref Named) Named {
if IsNameOnly(ref) {
namedTagged, err := WithTag(ref, defaultTag)
if err != nil {
// Default tag must be valid, to create a NamedTagged
// type with non-validated input the WithTag function
// should be used instead
panic(err)
}
return namedTagged
}
return ref
}
// ParseAnyReference parses a reference string as a possible identifier,
// full digest, or familiar name.
func ParseAnyReference(ref string) (Reference, error) {
if ok := anchoredIdentifierRegexp.MatchString(ref); ok {
return digestReference("sha256:" + ref), nil
}
if dgst, err := digest.Parse(ref); err == nil {
return digestReference(dgst), nil
}
return ParseNormalizedNamed(ref)
}
// ParseAnyReferenceWithSet parses a reference string as a possible short
// identifier to be matched in a digest set, a full digest, or familiar name.
func ParseAnyReferenceWithSet(ref string, ds *digestset.Set) (Reference, error) {
if ok := anchoredShortIdentifierRegexp.MatchString(ref); ok {
dgst, err := ds.Lookup(ref)
if err == nil {
return digestReference(dgst), nil
}
} else {
if dgst, err := digest.Parse(ref); err == nil {
return digestReference(dgst), nil
}
}
return ParseNormalizedNamed(ref)
}
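
A minimal sketch (not part of the vendored file) of the normalization behaviour defined above: familiar names gain the docker.io domain and library/ prefix, TagNameOnly appends the default tag, and ParseAnyReference treats a bare 64-character hex string as a sha256 identifier. The example strings are arbitrary.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	for _, s := range []string{"ubuntu", "dmcgowan/myapp", "localhost:5000/foo/bar"} {
		named, err := reference.ParseNormalizedNamed(s)
		if err != nil {
			panic(err)
		}
		// TagNameOnly only adds ":latest" when no tag or digest is present.
		fmt.Println(s, "=>", reference.TagNameOnly(named).String())
	}
	// ubuntu => docker.io/library/ubuntu:latest
	// dmcgowan/myapp => docker.io/dmcgowan/myapp:latest
	// localhost:5000/foo/bar => localhost:5000/foo/bar:latest

	// A bare 64-character hex string is parsed as an identifier, not a repo name.
	id := "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
	ref, err := reference.ParseAnyReference(id)
	fmt.Println(ref, err) // sha256:0123456789abcdef... <nil>
}
```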


@@ -0,0 +1,433 @@
// Package reference provides a general type to represent any way of referencing images within the registry.
// Its main purpose is to abstract tags and digests (content-addressable hash).
//
// Grammar
//
// reference := name [ ":" tag ] [ "@" digest ]
// name := [domain '/'] path-component ['/' path-component]*
// domain := domain-component ['.' domain-component]* [':' port-number]
// domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
// port-number := /[0-9]+/
// path-component := alpha-numeric [separator alpha-numeric]*
// alpha-numeric := /[a-z0-9]+/
// separator := /[_.]|__|[-]*/
//
// tag := /[\w][\w.-]{0,127}/
//
// digest := digest-algorithm ":" digest-hex
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]
// digest-algorithm-separator := /[+.-_]/
// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
//
// identifier := /[a-f0-9]{64}/
// short-identifier := /[a-f0-9]{6,64}/
package reference
import (
"errors"
"fmt"
"strings"
"github.com/opencontainers/go-digest"
)
const (
// NameTotalLengthMax is the maximum total number of characters in a repository name.
NameTotalLengthMax = 255
)
var (
// ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.
ErrReferenceInvalidFormat = errors.New("invalid reference format")
// ErrTagInvalidFormat represents an error while trying to parse a string as a tag.
ErrTagInvalidFormat = errors.New("invalid tag format")
// ErrDigestInvalidFormat represents an error while trying to parse a string as a digest.
ErrDigestInvalidFormat = errors.New("invalid digest format")
// ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.
ErrNameContainsUppercase = errors.New("repository name must be lowercase")
// ErrNameEmpty is returned for empty, invalid repository names.
ErrNameEmpty = errors.New("repository name must have at least one component")
// ErrNameTooLong is returned when a repository name is longer than NameTotalLengthMax.
ErrNameTooLong = fmt.Errorf("repository name must not be more than %v characters", NameTotalLengthMax)
// ErrNameNotCanonical is returned when a name is not canonical.
ErrNameNotCanonical = errors.New("repository name must be canonical")
)
// Reference is an opaque object reference identifier that may include
// modifiers such as a hostname, name, tag, and digest.
type Reference interface {
// String returns the full reference
String() string
}
// Field provides a wrapper type for resolving correct reference types when
// working with encoding.
type Field struct {
reference Reference
}
// AsField wraps a reference in a Field for encoding.
func AsField(reference Reference) Field {
return Field{reference}
}
// Reference unwraps the reference type from the field to
// return the Reference object. This object should be
// of the appropriate type to further check for different
// reference types.
func (f Field) Reference() Reference {
return f.reference
}
// MarshalText serializes the field to byte text which
// is the string of the reference.
func (f Field) MarshalText() (p []byte, err error) {
return []byte(f.reference.String()), nil
}
// UnmarshalText parses text bytes by invoking the
// reference parser to ensure the appropriately
// typed reference object is wrapped by field.
func (f *Field) UnmarshalText(p []byte) error {
r, err := Parse(string(p))
if err != nil {
return err
}
f.reference = r
return nil
}
// Named is an object with a full name
type Named interface {
Reference
Name() string
}
// Tagged is an object which has a tag
type Tagged interface {
Reference
Tag() string
}
// NamedTagged is an object including a name and tag.
type NamedTagged interface {
Named
Tag() string
}
// Digested is an object which has a digest
// by which it can be referenced
type Digested interface {
Reference
Digest() digest.Digest
}
// Canonical reference is an object with a fully unique
// name including a name with domain and digest
type Canonical interface {
Named
Digest() digest.Digest
}
// namedRepository is a reference to a repository with a name.
// A namedRepository has both domain and path components.
type namedRepository interface {
Named
Domain() string
Path() string
}
// Domain returns the domain part of the Named reference
func Domain(named Named) string {
if r, ok := named.(namedRepository); ok {
return r.Domain()
}
domain, _ := splitDomain(named.Name())
return domain
}
// Path returns the name without the domain part of the Named reference
func Path(named Named) (name string) {
if r, ok := named.(namedRepository); ok {
return r.Path()
}
_, path := splitDomain(named.Name())
return path
}
func splitDomain(name string) (string, string) {
match := anchoredNameRegexp.FindStringSubmatch(name)
if len(match) != 3 {
return "", name
}
return match[1], match[2]
}
// SplitHostname splits a named reference into a
// hostname and name string. If no valid hostname is
// found, the hostname is empty and the full value
// is returned as name
// DEPRECATED: Use Domain or Path
func SplitHostname(named Named) (string, string) {
if r, ok := named.(namedRepository); ok {
return r.Domain(), r.Path()
}
return splitDomain(named.Name())
}
// Parse parses s and returns a syntactically valid Reference.
// If an error was encountered it is returned, along with a nil Reference.
// NOTE: Parse will not handle short digests.
func Parse(s string) (Reference, error) {
matches := ReferenceRegexp.FindStringSubmatch(s)
if matches == nil {
if s == "" {
return nil, ErrNameEmpty
}
if ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {
return nil, ErrNameContainsUppercase
}
return nil, ErrReferenceInvalidFormat
}
if len(matches[1]) > NameTotalLengthMax {
return nil, ErrNameTooLong
}
var repo repository
nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
if nameMatch != nil && len(nameMatch) == 3 {
repo.domain = nameMatch[1]
repo.path = nameMatch[2]
} else {
repo.domain = ""
repo.path = matches[1]
}
ref := reference{
namedRepository: repo,
tag: matches[2],
}
if matches[3] != "" {
var err error
ref.digest, err = digest.Parse(matches[3])
if err != nil {
return nil, err
}
}
r := getBestReferenceType(ref)
if r == nil {
return nil, ErrNameEmpty
}
return r, nil
}
// ParseNamed parses s and returns a syntactically valid reference implementing
// the Named interface. The reference must have a name and be in the canonical
// form, otherwise an error is returned.
// If an error was encountered it is returned, along with a nil Reference.
// NOTE: ParseNamed will not handle short digests.
func ParseNamed(s string) (Named, error) {
named, err := ParseNormalizedNamed(s)
if err != nil {
return nil, err
}
if named.String() != s {
return nil, ErrNameNotCanonical
}
return named, nil
}
// WithName returns a named object representing the given string. If the input
// is invalid ErrReferenceInvalidFormat will be returned.
func WithName(name string) (Named, error) {
if len(name) > NameTotalLengthMax {
return nil, ErrNameTooLong
}
match := anchoredNameRegexp.FindStringSubmatch(name)
if match == nil || len(match) != 3 {
return nil, ErrReferenceInvalidFormat
}
return repository{
domain: match[1],
path: match[2],
}, nil
}
// WithTag combines the name from "name" and the tag from "tag" to form a
// reference incorporating both the name and the tag.
func WithTag(name Named, tag string) (NamedTagged, error) {
if !anchoredTagRegexp.MatchString(tag) {
return nil, ErrTagInvalidFormat
}
var repo repository
if r, ok := name.(namedRepository); ok {
repo.domain = r.Domain()
repo.path = r.Path()
} else {
repo.path = name.Name()
}
if canonical, ok := name.(Canonical); ok {
return reference{
namedRepository: repo,
tag: tag,
digest: canonical.Digest(),
}, nil
}
return taggedReference{
namedRepository: repo,
tag: tag,
}, nil
}
// WithDigest combines the name from "name" and the digest from "digest" to form
// a reference incorporating both the name and the digest.
func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
if !anchoredDigestRegexp.MatchString(digest.String()) {
return nil, ErrDigestInvalidFormat
}
var repo repository
if r, ok := name.(namedRepository); ok {
repo.domain = r.Domain()
repo.path = r.Path()
} else {
repo.path = name.Name()
}
if tagged, ok := name.(Tagged); ok {
return reference{
namedRepository: repo,
tag: tagged.Tag(),
digest: digest,
}, nil
}
return canonicalReference{
namedRepository: repo,
digest: digest,
}, nil
}
// TrimNamed removes any tag or digest from the named reference.
func TrimNamed(ref Named) Named {
domain, path := SplitHostname(ref)
return repository{
domain: domain,
path: path,
}
}
func getBestReferenceType(ref reference) Reference {
if ref.Name() == "" {
// Allow digest only references
if ref.digest != "" {
return digestReference(ref.digest)
}
return nil
}
if ref.tag == "" {
if ref.digest != "" {
return canonicalReference{
namedRepository: ref.namedRepository,
digest: ref.digest,
}
}
return ref.namedRepository
}
if ref.digest == "" {
return taggedReference{
namedRepository: ref.namedRepository,
tag: ref.tag,
}
}
return ref
}
type reference struct {
namedRepository
tag string
digest digest.Digest
}
func (r reference) String() string {
return r.Name() + ":" + r.tag + "@" + r.digest.String()
}
func (r reference) Tag() string {
return r.tag
}
func (r reference) Digest() digest.Digest {
return r.digest
}
type repository struct {
domain string
path string
}
func (r repository) String() string {
return r.Name()
}
func (r repository) Name() string {
if r.domain == "" {
return r.path
}
return r.domain + "/" + r.path
}
func (r repository) Domain() string {
return r.domain
}
func (r repository) Path() string {
return r.path
}
type digestReference digest.Digest
func (d digestReference) String() string {
return digest.Digest(d).String()
}
func (d digestReference) Digest() digest.Digest {
return digest.Digest(d)
}
type taggedReference struct {
namedRepository
tag string
}
func (t taggedReference) String() string {
return t.Name() + ":" + t.tag
}
func (t taggedReference) Tag() string {
return t.tag
}
type canonicalReference struct {
namedRepository
digest digest.Digest
}
func (c canonicalReference) String() string {
return c.Name() + "@" + c.digest.String()
}
func (c canonicalReference) Digest() digest.Digest {
return c.digest
}
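
To illustrate how the types above compose, here is an informal sketch (not part of the vendored file): Parse returns the most specific reference type, Domain and Path split the name, and WithDigest builds a Canonical reference from a trimmed name. The digest below is derived from an arbitrary example string.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
	digest "github.com/opencontainers/go-digest"
)

func main() {
	// Parse expects the fully qualified form; it performs no normalization.
	ref, err := reference.Parse("docker.io/library/busybox:1.29")
	if err != nil {
		panic(err)
	}
	tagged := ref.(reference.NamedTagged)
	fmt.Println(reference.Domain(tagged), reference.Path(tagged), tagged.Tag())
	// docker.io library/busybox 1.29

	// Attaching a digest to the trimmed name yields a Canonical reference.
	dgst := digest.FromString("example manifest bytes")
	canonical, err := reference.WithDigest(reference.TrimNamed(tagged), dgst)
	if err != nil {
		panic(err)
	}
	fmt.Println(canonical.String()) // docker.io/library/busybox@sha256:<hex>
}
```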


@@ -0,0 +1,143 @@
package reference
import "regexp"
var (
// alphaNumericRegexp defines the alpha numeric atom, typically a
// component of names. This only allows lower case characters and digits.
alphaNumericRegexp = match(`[a-z0-9]+`)
// separatorRegexp defines the separators allowed to be embedded in name
// components. This allows one period, one or two underscores, and multiple
// dashes.
separatorRegexp = match(`(?:[._]|__|[-]*)`)
// nameComponentRegexp restricts registry path component names to start
// with at least one letter or number, with following parts able to be
// separated by one period, one or two underscores, and multiple dashes.
nameComponentRegexp = expression(
alphaNumericRegexp,
optional(repeated(separatorRegexp, alphaNumericRegexp)))
// domainComponentRegexp restricts the registry domain component of a
// repository name to start with a component as defined by DomainRegexp
// and followed by an optional port.
domainComponentRegexp = match(`(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`)
// DomainRegexp defines the structure of potential domain components
// that may be part of image names. This is purposely a subset of what is
// allowed by DNS to ensure backwards compatibility with Docker image
// names.
DomainRegexp = expression(
domainComponentRegexp,
optional(repeated(literal(`.`), domainComponentRegexp)),
optional(literal(`:`), match(`[0-9]+`)))
// TagRegexp matches valid tag names. From docker/docker:graph/tags.go.
TagRegexp = match(`[\w][\w.-]{0,127}`)
// anchoredTagRegexp matches valid tag names, anchored at the start and
// end of the matched string.
anchoredTagRegexp = anchored(TagRegexp)
// DigestRegexp matches valid digests.
DigestRegexp = match(`[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`)
// anchoredDigestRegexp matches valid digests, anchored at the start and
// end of the matched string.
anchoredDigestRegexp = anchored(DigestRegexp)
// NameRegexp is the format for the name component of references: an
// optional domain followed by one or more name components separated by
// forward slashes. It has no capturing groups; see anchoredNameRegexp.
NameRegexp = expression(
optional(DomainRegexp, literal(`/`)),
nameComponentRegexp,
optional(repeated(literal(`/`), nameComponentRegexp)))
// anchoredNameRegexp is used to parse a name value, capturing the
// domain and trailing components.
anchoredNameRegexp = anchored(
optional(capture(DomainRegexp), literal(`/`)),
capture(nameComponentRegexp,
optional(repeated(literal(`/`), nameComponentRegexp))))
// ReferenceRegexp is the full supported format of a reference. The regexp
// is anchored and has capturing groups for name, tag, and digest
// components.
ReferenceRegexp = anchored(capture(NameRegexp),
optional(literal(":"), capture(TagRegexp)),
optional(literal("@"), capture(DigestRegexp)))
// IdentifierRegexp is the format for string identifier used as a
// content addressable identifier using sha256. These identifiers
// are like digests without the algorithm, since sha256 is used.
IdentifierRegexp = match(`([a-f0-9]{64})`)
// ShortIdentifierRegexp is the format used to represent a prefix
// of an identifier. A prefix may be used to match a sha256 identifier
// within a list of trusted identifiers.
ShortIdentifierRegexp = match(`([a-f0-9]{6,64})`)
// anchoredIdentifierRegexp is used to check or match an
// identifier value, anchored at start and end of string.
anchoredIdentifierRegexp = anchored(IdentifierRegexp)
// anchoredShortIdentifierRegexp is used to check if a value
// is a possible identifier prefix, anchored at start and end
// of string.
anchoredShortIdentifierRegexp = anchored(ShortIdentifierRegexp)
)
// match compiles the string to a regular expression.
var match = regexp.MustCompile
// literal compiles s into a literal regular expression, escaping any regexp
// reserved characters.
func literal(s string) *regexp.Regexp {
re := match(regexp.QuoteMeta(s))
if _, complete := re.LiteralPrefix(); !complete {
panic("must be a literal")
}
return re
}
// expression defines a full expression, where each regular expression must
// follow the previous.
func expression(res ...*regexp.Regexp) *regexp.Regexp {
var s string
for _, re := range res {
s += re.String()
}
return match(s)
}
// optional wraps the expression in a non-capturing group and makes the
// production optional.
func optional(res ...*regexp.Regexp) *regexp.Regexp {
return match(group(expression(res...)).String() + `?`)
}
// repeated wraps the regexp in a non-capturing group to get one or more
// matches.
func repeated(res ...*regexp.Regexp) *regexp.Regexp {
return match(group(expression(res...)).String() + `+`)
}
// group wraps the regexp in a non-capturing group.
func group(res ...*regexp.Regexp) *regexp.Regexp {
return match(`(?:` + expression(res...).String() + `)`)
}
// capture wraps the expression in a capturing group.
func capture(res ...*regexp.Regexp) *regexp.Regexp {
return match(`(` + expression(res...).String() + `)`)
}
// anchored anchors the regular expression by adding start and end delimiters.
func anchored(res ...*regexp.Regexp) *regexp.Regexp {
return match(`^` + expression(res...).String() + `$`)
}
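
A quick sketch (not part of the vendored file) of how the exported expressions above behave: ReferenceRegexp captures name, tag, and digest in that order, and the unanchored building blocks can be reused directly. The reference string is an arbitrary example; the 64-character hex value is only a placeholder that satisfies the digest length requirement.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	s := "registry.example.com:5000/team/app:v1.2.3@sha256:" +
		"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

	m := reference.ReferenceRegexp.FindStringSubmatch(s)
	if m == nil {
		panic("no match")
	}
	fmt.Println("name:  ", m[1]) // registry.example.com:5000/team/app
	fmt.Println("tag:   ", m[2]) // v1.2.3
	fmt.Println("digest:", m[3]) // sha256:e3b0c442...

	// The exported regexps are unanchored; the package anchors them internally
	// (anchoredTagRegexp and friends) when validating whole strings.
	fmt.Println(reference.TagRegexp.MatchString("v1.2.3")) // true
}
```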

41
vendor/github.com/docker/distribution/vendor.conf generated vendored Normal file

@@ -0,0 +1,41 @@
github.com/Azure/azure-sdk-for-go c6f0533defaaaa26ea4dff3c9774e36033088112
github.com/Sirupsen/logrus d26492970760ca5d33129d2d799e34be5c4782eb
github.com/aws/aws-sdk-go c6fc52983ea2375810aa38ddb5370e9cdf611716
github.com/bshuster-repo/logrus-logstash-hook 5f729f2fb50a301153cae84ff5c58981d51c095a
github.com/bugsnag/bugsnag-go b1d153021fcd90ca3f080db36bec96dc690fb274
github.com/bugsnag/osext 0dd3f918b21bec95ace9dc86c7e70266cfc5c702
github.com/bugsnag/panicwrap e2c28503fcd0675329da73bf48b33404db873782
github.com/denverdino/aliyungo afedced274aa9a7fcdd47ac97018f0f8db4e5de2
github.com/docker/goamz f0a21f5b2e12f83a505ecf79b633bb2035cf6f85
github.com/docker/libtrust fa567046d9b14f6aa788882a950d69651d230b21
github.com/garyburd/redigo 535138d7bcd717d6531c701ef5933d98b1866257
github.com/go-ini/ini 2ba15ac2dc9cdf88c110ec2dc0ced7fa45f5678c
github.com/golang/protobuf/proto 8d92cf5fc15a4382f8964b08e1f42a75c0591aa3
github.com/gorilla/context 14f550f51af52180c2eefed15e5fd18d63c0a64a
github.com/gorilla/handlers 60c7bfde3e33c201519a200a4507a158cc03a17b
github.com/gorilla/mux e444e69cbd2e2e3e0749a2f3c717cec491552bbf
github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
github.com/jmespath/go-jmespath bd40a432e4c76585ef6b72d3fd96fb9b6dc7b68d
github.com/miekg/dns 271c58e0c14f552178ea321a545ff9af38930f39
github.com/mitchellh/mapstructure 482a9fd5fa83e8c4e7817413b80f3eb8feec03ef
github.com/ncw/swift b964f2ca856aac39885e258ad25aec08d5f64ee6
github.com/spf13/cobra 312092086bed4968099259622145a0c9ae280064
github.com/spf13/pflag 5644820622454e71517561946e3d94b9f9db6842
github.com/stevvooe/resumable 51ad44105773cafcbe91927f70ac68e1bf78f8b4
github.com/xenolf/lego/acme a9d8cec0e6563575e5868a005359ac97911b5985
github.com/yvasiyarov/go-metrics 57bccd1ccd43f94bb17fdd8bf3007059b802f85e
github.com/yvasiyarov/gorelic a9bba5b9ab508a086f9a12b8c51fab68478e2128
github.com/yvasiyarov/newrelic_platform_go b21fdbd4370f3717f3bbd2bf41c223bc273068e6
golang.org/x/crypto c10c31b5e94b6f7a0283272dc2bb27163dcea24b
golang.org/x/net 4876518f9e71663000c348837735820161a42df7
golang.org/x/oauth2 045497edb6234273d67dbc25da3f2ddbc4c4cacf
golang.org/x/time/rate a4bde12657593d5e90d0533a3e4fd95e635124cb
google.golang.org/api 9bf6e6e569ff057f75d9604a46c52928f17d2b54
google.golang.org/appengine 12d5545dc1cfa6047a286d5e853841b6471f4c19
google.golang.org/cloud 975617b05ea8a58727e6c1a06b6161ff4185a9f2
google.golang.org/grpc d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994
gopkg.in/check.v1 64131543e7896d5bcc6bd5a76287eb75ea96c673
gopkg.in/square/go-jose.v1 40d457b439244b546f023d056628e5184136899b
gopkg.in/yaml.v2 bef53efd0c76e49e6de55ead051f886bea7e9420
rsc.io/letsencrypt e770c10b0f1a64775ae91d240407ce00d1a5bdeb https://github.com/dmcgowan/letsencrypt.git
github.com/opencontainers/go-digest a6d0ee40d4207ea02364bd3b9e8e77b9159ba1eb

191
vendor/github.com/docker/docker/LICENSE generated vendored Normal file

@@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2013-2017 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

19
vendor/github.com/docker/docker/NOTICE generated vendored Normal file

@@ -0,0 +1,19 @@
Docker
Copyright 2012-2017 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:
Use and transfer of Docker may be subject to certain restrictions by the
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.
For more information, please see https://www.bis.doc.gov
See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.

57
vendor/github.com/docker/docker/README.md generated vendored Normal file
View File

@@ -0,0 +1,57 @@
The Moby Project
================
![Moby Project logo](docs/static_files/moby-project-logo.png "The Moby Project")
Moby is an open-source project created by Docker to enable and accelerate software containerization.
It provides a "Lego set" of toolkit components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts and professionals to experiment and exchange ideas.
Components include container build tools, a container registry, orchestration tools, a runtime and more, and these can be used as building blocks in conjunction with other tools and projects.
## Principles
Moby is an open project guided by strong principles, aiming to be modular, flexible and without too strong an opinion on user experience.
It is open to the community to help set its direction.
- Modular: the project includes lots of components that have well-defined functions and APIs that work together.
- Batteries included but swappable: Moby includes enough components to build a fully featured container system, but its modular architecture ensures that most of the components can be swapped for different implementations.
- Usable security: Moby provides secure defaults without compromising usability.
- Developer focused: The APIs are intended to be functional and useful to build powerful tools.
They are not necessarily intended as end user tools but as components aimed at developers.
Documentation and UX are aimed at developers, not end users.
## Audience
The Moby Project is intended for engineers, integrators and enthusiasts looking to modify, hack, fix, experiment, invent and build systems based on containers.
It is not for people looking for a commercially supported system, but for people who want to work and learn with open source code.
## Relationship with Docker
The components and tools in the Moby Project are initially the open source components that Docker and the community have built for the Docker Project.
New projects can be added if they fit with the community goals. Docker is committed to using Moby as the upstream for the Docker Product.
However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.
The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful.
The releases are supported by the maintainers, community and users, on a best efforts basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.
-----
Legal
=====
*Brought to you courtesy of our legal counsel. For more context,
please see the [NOTICE](https://github.com/moby/moby/blob/master/NOTICE) document in this repo.*
Use and transfer of Moby may be subject to certain restrictions by the
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.
For more information, please see https://www.bis.doc.gov
Licensing
=========
Moby is licensed under the Apache License, Version 2.0. See
[LICENSE](https://github.com/moby/moby/blob/master/LICENSE) for the full
license text.

11
vendor/github.com/docker/docker/pkg/README.md generated vendored Normal file
View File

@@ -0,0 +1,11 @@
pkg/ is a collection of utility packages used by the Docker project without being specific to its internals.
Utility packages are kept separate from the docker core codebase to keep it as small and concise as possible.
If some utilities grow larger and their APIs stabilize, they may be moved to their own repository under the
Docker organization, to facilitate re-use by other projects. However, that is not the priority.
The directory `pkg` is named after the same directory in the camlistore project. Since Brad is a core
Go maintainer, we thought it made sense to copy his methods for organizing Go code :) Thanks Brad!
Because utility packages are small and neatly separated from the rest of the codebase, they are a good
place to start for aspiring maintainers and contributors. Get in touch if you want to help maintain them!

51
vendor/github.com/docker/docker/pkg/ioutils/buffer.go generated vendored Normal file
View File

@@ -0,0 +1,51 @@
package ioutils
import (
"errors"
"io"
)
var errBufferFull = errors.New("buffer is full")
type fixedBuffer struct {
buf []byte
pos int
lastRead int
}
func (b *fixedBuffer) Write(p []byte) (int, error) {
n := copy(b.buf[b.pos:cap(b.buf)], p)
b.pos += n
if n < len(p) {
if b.pos == cap(b.buf) {
return n, errBufferFull
}
return n, io.ErrShortWrite
}
return n, nil
}
func (b *fixedBuffer) Read(p []byte) (int, error) {
n := copy(p, b.buf[b.lastRead:b.pos])
b.lastRead += n
return n, nil
}
func (b *fixedBuffer) Len() int {
return b.pos - b.lastRead
}
func (b *fixedBuffer) Cap() int {
return cap(b.buf)
}
func (b *fixedBuffer) Reset() {
b.pos = 0
b.lastRead = 0
b.buf = b.buf[:0]
}
func (b *fixedBuffer) String() string {
return string(b.buf[b.lastRead:b.pos])
}

View File

@@ -0,0 +1,186 @@
package ioutils
import (
"errors"
"io"
"sync"
)
// maxCap is the highest capacity to use in byte slices that buffer data.
const maxCap = 1e6
// minCap is the lowest capacity to use in byte slices that buffer data
const minCap = 64
// blockThreshold is the minimum number of bytes in the buffer which will cause
// a write to BytesPipe to block when allocating a new slice.
const blockThreshold = 1e6
var (
// ErrClosed is returned when Write is called on a closed BytesPipe.
ErrClosed = errors.New("write to closed BytesPipe")
bufPools = make(map[int]*sync.Pool)
bufPoolsLock sync.Mutex
)
// BytesPipe is an io.ReadWriteCloser that works like a pipe backed by a queue
// of byte slices. All written data can be read at most once. BytesPipe
// allocates and releases byte slices to match current demand, so the buffer
// does not remain oversized after peak loads.
type BytesPipe struct {
mu sync.Mutex
wait *sync.Cond
buf []*fixedBuffer
bufLen int
closeErr error // error to return from next Read. set to nil if not closed.
}
// NewBytesPipe creates a new BytesPipe, initialized with a single empty
// buffer whose capacity is minCap (64 bytes).
func NewBytesPipe() *BytesPipe {
bp := &BytesPipe{}
bp.buf = append(bp.buf, getBuffer(minCap))
bp.wait = sync.NewCond(&bp.mu)
return bp
}
// Write writes p to BytesPipe.
// It may allocate new []byte slices in the process of writing.
func (bp *BytesPipe) Write(p []byte) (int, error) {
bp.mu.Lock()
written := 0
loop0:
for {
if bp.closeErr != nil {
bp.mu.Unlock()
return written, ErrClosed
}
if len(bp.buf) == 0 {
bp.buf = append(bp.buf, getBuffer(64))
}
// get the last buffer
b := bp.buf[len(bp.buf)-1]
n, err := b.Write(p)
written += n
bp.bufLen += n
// errBufferFull is an error we expect to get if the buffer is full
if err != nil && err != errBufferFull {
bp.wait.Broadcast()
bp.mu.Unlock()
return written, err
}
// if there was enough room to write all then break
if len(p) == n {
break
}
// more data: write to the next slice
p = p[n:]
// make sure the buffer doesn't grow too big from this write
for bp.bufLen >= blockThreshold {
bp.wait.Wait()
if bp.closeErr != nil {
continue loop0
}
}
// add new byte slice to the buffers slice and continue writing
nextCap := b.Cap() * 2
if nextCap > maxCap {
nextCap = maxCap
}
bp.buf = append(bp.buf, getBuffer(nextCap))
}
bp.wait.Broadcast()
bp.mu.Unlock()
return written, nil
}
// CloseWithError causes further reads from a BytesPipe to return immediately.
func (bp *BytesPipe) CloseWithError(err error) error {
bp.mu.Lock()
if err != nil {
bp.closeErr = err
} else {
bp.closeErr = io.EOF
}
bp.wait.Broadcast()
bp.mu.Unlock()
return nil
}
// Close causes further reads from a BytesPipe to return immediately.
func (bp *BytesPipe) Close() error {
return bp.CloseWithError(nil)
}
// Read reads bytes from BytesPipe.
// Data can be read only once.
func (bp *BytesPipe) Read(p []byte) (n int, err error) {
bp.mu.Lock()
if bp.bufLen == 0 {
if bp.closeErr != nil {
bp.mu.Unlock()
return 0, bp.closeErr
}
bp.wait.Wait()
if bp.bufLen == 0 && bp.closeErr != nil {
err := bp.closeErr
bp.mu.Unlock()
return 0, err
}
}
for bp.bufLen > 0 {
b := bp.buf[0]
read, _ := b.Read(p) // ignore error since fixedBuffer doesn't really return an error
n += read
bp.bufLen -= read
if b.Len() == 0 {
// it's empty so return it to the pool and move to the next one
returnBuffer(b)
bp.buf[0] = nil
bp.buf = bp.buf[1:]
}
if len(p) == read {
break
}
p = p[read:]
}
bp.wait.Broadcast()
bp.mu.Unlock()
return
}
func returnBuffer(b *fixedBuffer) {
b.Reset()
bufPoolsLock.Lock()
pool := bufPools[b.Cap()]
bufPoolsLock.Unlock()
if pool != nil {
pool.Put(b)
}
}
func getBuffer(size int) *fixedBuffer {
bufPoolsLock.Lock()
pool, ok := bufPools[size]
if !ok {
pool = &sync.Pool{New: func() interface{} { return &fixedBuffer{buf: make([]byte, 0, size)} }}
bufPools[size] = pool
}
bufPoolsLock.Unlock()
return pool.Get().(*fixedBuffer)
}
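As a rough usage sketch (not part of the vendored file): BytesPipe acts as an in-memory pipe where writes are buffered in pooled slices until read. The import path matches the vendored package above; the payload is illustrative.
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/ioutils"
)

func main() {
	bp := ioutils.NewBytesPipe()

	// Data is buffered in pooled fixed-size slices until it is read.
	if _, err := bp.Write([]byte("hello, pipe")); err != nil {
		panic(err)
	}
	// Close makes reads return io.EOF once the buffered data is drained.
	bp.Close()

	buf := make([]byte, 64)
	n, err := bp.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(buf[:n])) // "hello, pipe"
}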

View File

@@ -0,0 +1,162 @@
package ioutils
import (
"io"
"io/ioutil"
"os"
"path/filepath"
)
// NewAtomicFileWriter returns a WriteCloser so that writing to it writes to a
// temporary file, and closing it atomically renames the temporary file to the
// destination path. Writing and closing concurrently is not allowed.
func NewAtomicFileWriter(filename string, perm os.FileMode) (io.WriteCloser, error) {
f, err := ioutil.TempFile(filepath.Dir(filename), ".tmp-"+filepath.Base(filename))
if err != nil {
return nil, err
}
abspath, err := filepath.Abs(filename)
if err != nil {
return nil, err
}
return &atomicFileWriter{
f: f,
fn: abspath,
perm: perm,
}, nil
}
// AtomicWriteFile atomically writes data to a file named by filename.
func AtomicWriteFile(filename string, data []byte, perm os.FileMode) error {
f, err := NewAtomicFileWriter(filename, perm)
if err != nil {
return err
}
n, err := f.Write(data)
if err == nil && n < len(data) {
err = io.ErrShortWrite
f.(*atomicFileWriter).writeErr = err
}
if err1 := f.Close(); err == nil {
err = err1
}
return err
}
type atomicFileWriter struct {
f *os.File
fn string
writeErr error
perm os.FileMode
}
func (w *atomicFileWriter) Write(dt []byte) (int, error) {
n, err := w.f.Write(dt)
if err != nil {
w.writeErr = err
}
return n, err
}
func (w *atomicFileWriter) Close() (retErr error) {
defer func() {
if retErr != nil || w.writeErr != nil {
os.Remove(w.f.Name())
}
}()
if err := w.f.Sync(); err != nil {
w.f.Close()
return err
}
if err := w.f.Close(); err != nil {
return err
}
if err := os.Chmod(w.f.Name(), w.perm); err != nil {
return err
}
if w.writeErr == nil {
return os.Rename(w.f.Name(), w.fn)
}
return nil
}
// AtomicWriteSet is used to atomically write a set
// of files and ensure they are visible at the same time.
// Must be committed to a new directory.
type AtomicWriteSet struct {
root string
}
// NewAtomicWriteSet creates a new atomic write set to
// atomically create a set of files. The given directory
// is used as the base directory for storing files before
// commit. If no temporary directory is given the system
// default is used.
func NewAtomicWriteSet(tmpDir string) (*AtomicWriteSet, error) {
td, err := ioutil.TempDir(tmpDir, "write-set-")
if err != nil {
return nil, err
}
return &AtomicWriteSet{
root: td,
}, nil
}
// WriteFile writes a file to the set, guaranteeing the file
// has been synced.
func (ws *AtomicWriteSet) WriteFile(filename string, data []byte, perm os.FileMode) error {
f, err := ws.FileWriter(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
if err != nil {
return err
}
n, err := f.Write(data)
if err == nil && n < len(data) {
err = io.ErrShortWrite
}
if err1 := f.Close(); err == nil {
err = err1
}
return err
}
type syncFileCloser struct {
*os.File
}
func (w syncFileCloser) Close() error {
err := w.File.Sync()
if err1 := w.File.Close(); err == nil {
err = err1
}
return err
}
// FileWriter opens a file writer inside the set. The file
// should be synced and closed before calling commit.
func (ws *AtomicWriteSet) FileWriter(name string, flag int, perm os.FileMode) (io.WriteCloser, error) {
f, err := os.OpenFile(filepath.Join(ws.root, name), flag, perm)
if err != nil {
return nil, err
}
return syncFileCloser{f}, nil
}
// Cancel cancels the set and removes all temporary data
// created in the set.
func (ws *AtomicWriteSet) Cancel() error {
return os.RemoveAll(ws.root)
}
// Commit moves all created files to the target directory. The
// target directory must not exist and the parent of the target
// directory must exist.
func (ws *AtomicWriteSet) Commit(target string) error {
return os.Rename(ws.root, target)
}
// String returns the location the set is writing to.
func (ws *AtomicWriteSet) String() string {
return ws.root
}
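A minimal sketch of how the atomic helpers above can be used; the file paths are only illustrative.
package main

import (
	"log"

	"github.com/docker/docker/pkg/ioutils"
)

func main() {
	// Write a single file atomically: data lands in a temp file that is
	// fsynced and renamed over the destination on Close.
	if err := ioutils.AtomicWriteFile("/tmp/example.conf", []byte("key=value\n"), 0644); err != nil {
		log.Fatal(err)
	}

	// Write several files that become visible together when Commit renames
	// the staging directory to the target (which must not exist yet).
	ws, err := ioutils.NewAtomicWriteSet("")
	if err != nil {
		log.Fatal(err)
	}
	if err := ws.WriteFile("a.txt", []byte("a"), 0644); err != nil {
		ws.Cancel()
		log.Fatal(err)
	}
	if err := ws.Commit("/tmp/example-set"); err != nil {
		log.Fatal(err)
	}
}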

154
vendor/github.com/docker/docker/pkg/ioutils/readers.go generated vendored Normal file
View File

@@ -0,0 +1,154 @@
package ioutils
import (
"crypto/sha256"
"encoding/hex"
"io"
"golang.org/x/net/context"
)
type readCloserWrapper struct {
io.Reader
closer func() error
}
func (r *readCloserWrapper) Close() error {
return r.closer()
}
// NewReadCloserWrapper returns a new io.ReadCloser.
func NewReadCloserWrapper(r io.Reader, closer func() error) io.ReadCloser {
return &readCloserWrapper{
Reader: r,
closer: closer,
}
}
type readerErrWrapper struct {
reader io.Reader
closer func()
}
func (r *readerErrWrapper) Read(p []byte) (int, error) {
n, err := r.reader.Read(p)
if err != nil {
r.closer()
}
return n, err
}
// NewReaderErrWrapper returns a new io.Reader.
func NewReaderErrWrapper(r io.Reader, closer func()) io.Reader {
return &readerErrWrapper{
reader: r,
closer: closer,
}
}
// HashData returns the sha256 sum of src.
func HashData(src io.Reader) (string, error) {
h := sha256.New()
if _, err := io.Copy(h, src); err != nil {
return "", err
}
return "sha256:" + hex.EncodeToString(h.Sum(nil)), nil
}
// OnEOFReader wraps an io.ReadCloser and a function;
// the function runs once when the reader reaches EOF or is closed.
type OnEOFReader struct {
Rc io.ReadCloser
Fn func()
}
func (r *OnEOFReader) Read(p []byte) (n int, err error) {
n, err = r.Rc.Read(p)
if err == io.EOF {
r.runFunc()
}
return
}
// Close closes the underlying ReadCloser and runs the function.
func (r *OnEOFReader) Close() error {
err := r.Rc.Close()
r.runFunc()
return err
}
func (r *OnEOFReader) runFunc() {
if fn := r.Fn; fn != nil {
fn()
r.Fn = nil
}
}
// cancelReadCloser wraps an io.ReadCloser with a context for cancelling read
// operations.
type cancelReadCloser struct {
cancel func()
pR *io.PipeReader // Stream to read from
pW *io.PipeWriter
}
// NewCancelReadCloser creates a wrapper that closes the ReadCloser when the
// context is cancelled. The returned io.ReadCloser must be closed when it is
// no longer needed.
func NewCancelReadCloser(ctx context.Context, in io.ReadCloser) io.ReadCloser {
pR, pW := io.Pipe()
// Create a context used to signal when the pipe is closed
doneCtx, cancel := context.WithCancel(context.Background())
p := &cancelReadCloser{
cancel: cancel,
pR: pR,
pW: pW,
}
go func() {
_, err := io.Copy(pW, in)
select {
case <-ctx.Done():
// If the context was closed, p.closeWithError
// was already called. Calling it again would
// change the error that Read returns.
default:
p.closeWithError(err)
}
in.Close()
}()
go func() {
for {
select {
case <-ctx.Done():
p.closeWithError(ctx.Err())
case <-doneCtx.Done():
return
}
}
}()
return p
}
// Read wraps the Read method of the pipe that provides data from the wrapped
// ReadCloser.
func (p *cancelReadCloser) Read(buf []byte) (n int, err error) {
return p.pR.Read(buf)
}
// closeWithError closes the wrapper and its underlying reader. It will
// cause future calls to Read to return err.
func (p *cancelReadCloser) closeWithError(err error) {
p.pW.CloseWithError(err)
p.cancel()
}
// Close closes the wrapper and its underlying reader. It will cause
// future calls to Read to return io.EOF.
func (p *cancelReadCloser) Close() error {
p.closeWithError(io.EOF)
return nil
}
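A brief sketch of the reader helpers above; the file path is illustrative.
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/docker/docker/pkg/ioutils"
)

func main() {
	// HashData digests everything read from the reader.
	sum, err := ioutils.HashData(strings.NewReader("some payload"))
	if err != nil {
		panic(err)
	}
	fmt.Println(sum) // prints "sha256:<hex digest>"

	// NewReadCloserWrapper attaches a custom close function to a plain reader.
	f, err := os.Open("/etc/hostname")
	if err != nil {
		panic(err)
	}
	rc := ioutils.NewReadCloserWrapper(f, func() error {
		fmt.Println("closing wrapped reader")
		return f.Close()
	})
	defer rc.Close()
}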

View File

@@ -0,0 +1,10 @@
// +build !windows
package ioutils
import "io/ioutil"
// TempDir on Unix systems is equivalent to ioutil.TempDir.
func TempDir(dir, prefix string) (string, error) {
return ioutil.TempDir(dir, prefix)
}

View File

@@ -0,0 +1,18 @@
// +build windows
package ioutils
import (
"io/ioutil"
"github.com/docker/docker/pkg/longpath"
)
// TempDir is the equivalent of ioutil.TempDir, except that the result is in Windows longpath format.
func TempDir(dir, prefix string) (string, error) {
tempDir, err := ioutil.TempDir(dir, prefix)
if err != nil {
return "", err
}
return longpath.AddPrefix(tempDir), nil
}

View File

@@ -0,0 +1,92 @@
package ioutils
import (
"io"
"sync"
)
// WriteFlusher wraps the Write and Flush operations, ensuring that every write
// is followed by a flush. In addition, the Close method can be called to intercept
// Write/Flush calls if the target's lifecycle has already ended.
type WriteFlusher struct {
w io.Writer
flusher flusher
flushed chan struct{}
flushedOnce sync.Once
closed chan struct{}
closeLock sync.Mutex
}
type flusher interface {
Flush()
}
var errWriteFlusherClosed = io.EOF
func (wf *WriteFlusher) Write(b []byte) (n int, err error) {
select {
case <-wf.closed:
return 0, errWriteFlusherClosed
default:
}
n, err = wf.w.Write(b)
wf.Flush() // every write is a flush.
return n, err
}
// Flush the stream immediately.
func (wf *WriteFlusher) Flush() {
select {
case <-wf.closed:
return
default:
}
wf.flushedOnce.Do(func() {
close(wf.flushed)
})
wf.flusher.Flush()
}
// Flushed returns whether the stream has been flushed at least once:
// it returns true if a flush has occurred and false otherwise.
func (wf *WriteFlusher) Flushed() bool {
// BUG(stevvooe): Remove this method. Its use is inherently racy. Seems to
// be used to detect whether or not a response code has been issued.
// Another hook should be used instead.
var flushed bool
select {
case <-wf.flushed:
flushed = true
default:
}
return flushed
}
// Close closes the write flusher, disallowing any further writes to the
// target. After the flusher is closed, all calls to write or flush will
// result in an error.
func (wf *WriteFlusher) Close() error {
wf.closeLock.Lock()
defer wf.closeLock.Unlock()
select {
case <-wf.closed:
return errWriteFlusherClosed
default:
close(wf.closed)
}
return nil
}
// NewWriteFlusher returns a new WriteFlusher.
func NewWriteFlusher(w io.Writer) *WriteFlusher {
var fl flusher
if f, ok := w.(flusher); ok {
fl = f
} else {
fl = &NopFlusher{}
}
return &WriteFlusher{w: w, flusher: fl, closed: make(chan struct{}), flushed: make(chan struct{})}
}
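A sketch of using WriteFlusher to stream progress over HTTP, assuming the handler's ResponseWriter implements http.Flusher (otherwise a NopFlusher is silently used); the route and port are illustrative.
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/docker/docker/pkg/ioutils"
)

// progressHandler streams lines to the client, flushing after every write.
func progressHandler(w http.ResponseWriter, r *http.Request) {
	wf := ioutils.NewWriteFlusher(w)
	defer wf.Close()

	for i := 0; i < 3; i++ {
		fmt.Fprintf(wf, "step %d done\n", i)
	}
}

func main() {
	http.HandleFunc("/progress", progressHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}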

66
vendor/github.com/docker/docker/pkg/ioutils/writers.go generated vendored Normal file
View File

@@ -0,0 +1,66 @@
package ioutils
import "io"
// NopWriter represents a type whose Write operation is a no-op.
type NopWriter struct{}
func (*NopWriter) Write(buf []byte) (int, error) {
return len(buf), nil
}
type nopWriteCloser struct {
io.Writer
}
func (w *nopWriteCloser) Close() error { return nil }
// NopWriteCloser returns a nopWriteCloser.
func NopWriteCloser(w io.Writer) io.WriteCloser {
return &nopWriteCloser{w}
}
// NopFlusher represents a type whose Flush operation is a no-op.
type NopFlusher struct{}
// Flush is a nop operation.
func (f *NopFlusher) Flush() {}
type writeCloserWrapper struct {
io.Writer
closer func() error
}
func (r *writeCloserWrapper) Close() error {
return r.closer()
}
// NewWriteCloserWrapper returns a new io.WriteCloser.
func NewWriteCloserWrapper(r io.Writer, closer func() error) io.WriteCloser {
return &writeCloserWrapper{
Writer: r,
closer: closer,
}
}
// WriteCounter wraps a concrete io.Writer and holds a count of the number
// of bytes written to the writer during a "session".
// This can be convenient when the number of bytes written is otherwise
// masked (e.g., json.Encoder.Encode()).
type WriteCounter struct {
Count int64
Writer io.Writer
}
// NewWriteCounter returns a new WriteCounter.
func NewWriteCounter(w io.Writer) *WriteCounter {
return &WriteCounter{
Writer: w,
}
}
func (wc *WriteCounter) Write(p []byte) (count int, err error) {
count, err = wc.Writer.Write(p)
wc.Count += int64(count)
return
}
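A short sketch of WriteCounter, which is handy when the byte count is hidden by the caller, as with json.Encoder.Encode.
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"

	"github.com/docker/docker/pkg/ioutils"
)

func main() {
	// json.Encoder.Encode only returns an error, so count the bytes here.
	wc := ioutils.NewWriteCounter(ioutil.Discard)
	enc := json.NewEncoder(wc)
	if err := enc.Encode(map[string]string{"status": "ok"}); err != nil {
		panic(err)
	}
	fmt.Println(wc.Count, "bytes written")
}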

View File

@@ -0,0 +1,26 @@
// Package longpath introduces some constants and helper functions for handling long paths
// in Windows, which are expected to be prepended with `\\?\` and followed by either
// a drive letter, a UNC server\share, or a volume identifier.
package longpath
import (
"strings"
)
// Prefix is the longpath prefix for Windows file paths.
const Prefix = `\\?\`
// AddPrefix will add the Windows long path prefix to the path provided if
// it does not already have it.
func AddPrefix(path string) string {
if !strings.HasPrefix(path, Prefix) {
if strings.HasPrefix(path, `\\`) {
// This is a UNC path, so we need to add 'UNC' to the path as well.
path = Prefix + `UNC` + path[1:]
} else {
path = Prefix + path
}
}
return path
}
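For illustration, a small sketch of AddPrefix on a drive-letter path and on a UNC path; the paths are made up.
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/longpath"
)

func main() {
	// Drive-letter paths simply gain the \\?\ prefix.
	fmt.Println(longpath.AddPrefix(`C:\ProgramData\docker`)) // \\?\C:\ProgramData\docker
	// UNC paths additionally gain the UNC marker.
	fmt.Println(longpath.AddPrefix(`\\server\share\layers`)) // \\?\UNC\server\share\layers
}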

149
vendor/github.com/docker/docker/pkg/mount/flags.go generated vendored Normal file
View File

@@ -0,0 +1,149 @@
package mount
import (
"fmt"
"strings"
)
var flags = map[string]struct {
clear bool
flag int
}{
"defaults": {false, 0},
"ro": {false, RDONLY},
"rw": {true, RDONLY},
"suid": {true, NOSUID},
"nosuid": {false, NOSUID},
"dev": {true, NODEV},
"nodev": {false, NODEV},
"exec": {true, NOEXEC},
"noexec": {false, NOEXEC},
"sync": {false, SYNCHRONOUS},
"async": {true, SYNCHRONOUS},
"dirsync": {false, DIRSYNC},
"remount": {false, REMOUNT},
"mand": {false, MANDLOCK},
"nomand": {true, MANDLOCK},
"atime": {true, NOATIME},
"noatime": {false, NOATIME},
"diratime": {true, NODIRATIME},
"nodiratime": {false, NODIRATIME},
"bind": {false, BIND},
"rbind": {false, RBIND},
"unbindable": {false, UNBINDABLE},
"runbindable": {false, RUNBINDABLE},
"private": {false, PRIVATE},
"rprivate": {false, RPRIVATE},
"shared": {false, SHARED},
"rshared": {false, RSHARED},
"slave": {false, SLAVE},
"rslave": {false, RSLAVE},
"relatime": {false, RELATIME},
"norelatime": {true, RELATIME},
"strictatime": {false, STRICTATIME},
"nostrictatime": {true, STRICTATIME},
}
var validFlags = map[string]bool{
"": true,
"size": true,
"mode": true,
"uid": true,
"gid": true,
"nr_inodes": true,
"nr_blocks": true,
"mpol": true,
}
var propagationFlags = map[string]bool{
"bind": true,
"rbind": true,
"unbindable": true,
"runbindable": true,
"private": true,
"rprivate": true,
"shared": true,
"rshared": true,
"slave": true,
"rslave": true,
}
// MergeTmpfsOptions merges mount options to make sure there are no duplicates.
func MergeTmpfsOptions(options []string) ([]string, error) {
// We use collisions maps to remove duplicates.
// For flag, the key is the flag value (the key for propagation flag is -1)
// For data=value, the key is the data
flagCollisions := map[int]bool{}
dataCollisions := map[string]bool{}
var newOptions []string
// We process in reverse order
for i := len(options) - 1; i >= 0; i-- {
option := options[i]
if option == "defaults" {
continue
}
if f, ok := flags[option]; ok && f.flag != 0 {
// There is only one propagation mode
key := f.flag
if propagationFlags[option] {
key = -1
}
// Check to see if there is collision for flag
if !flagCollisions[key] {
// We prepend the option and add to collision map
newOptions = append([]string{option}, newOptions...)
flagCollisions[key] = true
}
continue
}
opt := strings.SplitN(option, "=", 2)
if len(opt) != 2 || !validFlags[opt[0]] {
return nil, fmt.Errorf("Invalid tmpfs option %q", opt)
}
if !dataCollisions[opt[0]] {
// We prepend the option and add to collision map
newOptions = append([]string{option}, newOptions...)
dataCollisions[opt[0]] = true
}
}
return newOptions, nil
}
// Parse fstab type mount options into mount() flags
// and device specific data
func parseOptions(options string) (int, string) {
var (
flag int
data []string
)
for _, o := range strings.Split(options, ",") {
// If the option does not exist in the flags table or the flag
// is not supported on the platform,
// then it is a data value for a specific fs type
if f, exists := flags[o]; exists && f.flag != 0 {
if f.clear {
flag &= ^f.flag
} else {
flag |= f.flag
}
} else {
data = append(data, o)
}
}
return flag, strings.Join(data, ",")
}
// ParseTmpfsOptions parses fstab-type mount options into flags and data
func ParseTmpfsOptions(options string) (int, string, error) {
flags, data := parseOptions(options)
for _, o := range strings.Split(data, ",") {
opt := strings.SplitN(o, "=", 2)
if !validFlags[opt[0]] {
return 0, "", fmt.Errorf("Invalid tmpfs option %q", opt)
}
}
return flags, data, nil
}
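A sketch of the tmpfs option helpers above; the expected results in the comments assume Linux flag values.
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/mount"
)

func main() {
	// Duplicate flags and data keys collapse to the last occurrence.
	merged, err := mount.MergeTmpfsOptions([]string{"noexec", "exec", "size=64m", "size=128m"})
	if err != nil {
		panic(err)
	}
	fmt.Println(merged) // [exec size=128m] on Linux

	// ParseTmpfsOptions splits an fstab-style string into mount(2) flags and data.
	mflags, data, err := mount.ParseTmpfsOptions("nosuid,nodev,size=128m,mode=755")
	if err != nil {
		panic(err)
	}
	fmt.Printf("flags=%#x data=%q\n", mflags, data)
}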

View File

@@ -0,0 +1,49 @@
// +build freebsd,cgo
package mount
/*
#include <sys/mount.h>
*/
import "C"
const (
// RDONLY will mount the filesystem as read-only.
RDONLY = C.MNT_RDONLY
// NOSUID will not allow set-user-identifier or set-group-identifier bits to
// take effect.
NOSUID = C.MNT_NOSUID
// NOEXEC will not allow execution of any binaries on the mounted file system.
NOEXEC = C.MNT_NOEXEC
// SYNCHRONOUS will allow any I/O to the file system to be done synchronously.
SYNCHRONOUS = C.MNT_SYNCHRONOUS
// NOATIME will not update the file access time when reading from a file.
NOATIME = C.MNT_NOATIME
)
// These flags are unsupported.
const (
BIND = 0
DIRSYNC = 0
MANDLOCK = 0
NODEV = 0
NODIRATIME = 0
UNBINDABLE = 0
RUNBINDABLE = 0
PRIVATE = 0
RPRIVATE = 0
SHARED = 0
RSHARED = 0
SLAVE = 0
RSLAVE = 0
RBIND = 0
RELATIVE = 0
RELATIME = 0
REMOUNT = 0
STRICTATIME = 0
mntDetach = 0
)

View File

@@ -0,0 +1,87 @@
package mount
import (
"golang.org/x/sys/unix"
)
const (
// RDONLY will mount the file system read-only.
RDONLY = unix.MS_RDONLY
// NOSUID will not allow set-user-identifier or set-group-identifier bits to
// take effect.
NOSUID = unix.MS_NOSUID
// NODEV will not interpret character or block special devices on the file
// system.
NODEV = unix.MS_NODEV
// NOEXEC will not allow execution of any binaries on the mounted file system.
NOEXEC = unix.MS_NOEXEC
// SYNCHRONOUS will allow I/O to the file system to be done synchronously.
SYNCHRONOUS = unix.MS_SYNCHRONOUS
// DIRSYNC will force all directory updates within the file system to be done
// synchronously. This affects the following system calls: create, link,
// unlink, symlink, mkdir, rmdir, mknod and rename.
DIRSYNC = unix.MS_DIRSYNC
// REMOUNT will attempt to remount an already-mounted file system. This is
// commonly used to change the mount flags for a file system, especially to
// make a readonly file system writeable. It does not change device or mount
// point.
REMOUNT = unix.MS_REMOUNT
// MANDLOCK will force mandatory locks on a filesystem.
MANDLOCK = unix.MS_MANDLOCK
// NOATIME will not update the file access time when reading from a file.
NOATIME = unix.MS_NOATIME
// NODIRATIME will not update the directory access time.
NODIRATIME = unix.MS_NODIRATIME
// BIND remounts a subtree somewhere else.
BIND = unix.MS_BIND
// RBIND remounts a subtree and all possible submounts somewhere else.
RBIND = unix.MS_BIND | unix.MS_REC
// UNBINDABLE creates a mount which cannot be cloned through a bind operation.
UNBINDABLE = unix.MS_UNBINDABLE
// RUNBINDABLE marks the entire mount tree as UNBINDABLE.
RUNBINDABLE = unix.MS_UNBINDABLE | unix.MS_REC
// PRIVATE creates a mount which carries no propagation abilities.
PRIVATE = unix.MS_PRIVATE
// RPRIVATE marks the entire mount tree as PRIVATE.
RPRIVATE = unix.MS_PRIVATE | unix.MS_REC
// SLAVE creates a mount which receives propagation from its master, but not
// vice versa.
SLAVE = unix.MS_SLAVE
// RSLAVE marks the entire mount tree as SLAVE.
RSLAVE = unix.MS_SLAVE | unix.MS_REC
// SHARED creates a mount which provides the ability to create mirrors of
// that mount such that mounts and unmounts within any of the mirrors
// propagate to the other mirrors.
SHARED = unix.MS_SHARED
// RSHARED marks the entire mount tree as SHARED.
RSHARED = unix.MS_SHARED | unix.MS_REC
// RELATIME updates inode access times relative to modify or change time.
RELATIME = unix.MS_RELATIME
// STRICTATIME allows explicitly requesting full atime updates. This makes
// it possible for the kernel to default to relatime or noatime but still
// allow userspace to override it.
STRICTATIME = unix.MS_STRICTATIME
mntDetach = unix.MNT_DETACH
)

View File

@@ -0,0 +1,31 @@
// +build !linux,!freebsd freebsd,!cgo solaris,!cgo
package mount
// These flags are unsupported.
const (
BIND = 0
DIRSYNC = 0
MANDLOCK = 0
NOATIME = 0
NODEV = 0
NODIRATIME = 0
NOEXEC = 0
NOSUID = 0
UNBINDABLE = 0
RUNBINDABLE = 0
PRIVATE = 0
RPRIVATE = 0
SHARED = 0
RSHARED = 0
SLAVE = 0
RSLAVE = 0
RBIND = 0
RELATIME = 0
RELATIVE = 0
REMOUNT = 0
STRICTATIME = 0
SYNCHRONOUS = 0
RDONLY = 0
mntDetach = 0
)

86
vendor/github.com/docker/docker/pkg/mount/mount.go generated vendored Normal file
View File

@@ -0,0 +1,86 @@
package mount
import (
"sort"
"strings"
)
// GetMounts retrieves a list of mounts for the current running process.
func GetMounts() ([]*Info, error) {
return parseMountTable()
}
// Mounted determines if a specified mountpoint has been mounted.
// On Linux it looks at /proc/self/mountinfo and on Solaris at mnttab.
func Mounted(mountpoint string) (bool, error) {
entries, err := parseMountTable()
if err != nil {
return false, err
}
// Search the table for the mountpoint
for _, e := range entries {
if e.Mountpoint == mountpoint {
return true, nil
}
}
return false, nil
}
// Mount will mount filesystem according to the specified configuration, on the
// condition that the target path is *not* already mounted. Options must be
// specified like the mount or fstab unix commands: "opt1=val1,opt2=val2". See
// flags.go for supported option flags.
func Mount(device, target, mType, options string) error {
flag, _ := parseOptions(options)
if flag&REMOUNT != REMOUNT {
if mounted, err := Mounted(target); err != nil || mounted {
return err
}
}
return ForceMount(device, target, mType, options)
}
// ForceMount will mount a filesystem according to the specified configuration,
// *regardless* if the target path is not already mounted. Options must be
// specified like the mount or fstab unix commands: "opt1=val1,opt2=val2". See
// flags.go for supported option flags.
func ForceMount(device, target, mType, options string) error {
flag, data := parseOptions(options)
return mount(device, target, mType, uintptr(flag), data)
}
// Unmount lazily unmounts a filesystem on supported platforms, otherwise
// does a normal unmount.
func Unmount(target string) error {
if mounted, err := Mounted(target); err != nil || !mounted {
return err
}
return unmount(target, mntDetach)
}
// RecursiveUnmount unmounts the target and all mounts underneath, starting with
// the deepest mount first.
func RecursiveUnmount(target string) error {
mounts, err := GetMounts()
if err != nil {
return err
}
// Make the deepest mount be first
sort.Sort(sort.Reverse(byMountpoint(mounts)))
for i, m := range mounts {
if !strings.HasPrefix(m.Mountpoint, target) {
continue
}
if err := Unmount(m.Mountpoint); err != nil && i == len(mounts)-1 {
if mounted, err := Mounted(m.Mountpoint); err != nil || mounted {
return err
}
// Ignore errors for submounts and continue trying to unmount others
// The final unmount should fail if there are any submounts remaining
}
}
return nil
}
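A sketch of the Mount/Mounted/Unmount API above; it assumes Linux, root privileges, and an existing target directory.
package main

import (
	"log"

	"github.com/docker/docker/pkg/mount"
)

func main() {
	target := "/mnt/scratch" // assumed to exist; requires root privileges

	// Mount is a no-op if the target is already mounted (unless remounting).
	if err := mount.Mount("tmpfs", target, "tmpfs", "size=64m,noexec"); err != nil {
		log.Fatal(err)
	}

	mounted, err := mount.Mounted(target)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s mounted: %v", target, mounted)

	// Unmount is lazy (MNT_DETACH) on platforms that support it.
	if err := mount.Unmount(target); err != nil {
		log.Fatal(err)
	}
}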

View File

@@ -0,0 +1,60 @@
package mount
/*
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <sys/_iovec.h>
#include <sys/mount.h>
#include <sys/param.h>
*/
import "C"
import (
"fmt"
"strings"
"unsafe"
"golang.org/x/sys/unix"
)
func allocateIOVecs(options []string) []C.struct_iovec {
out := make([]C.struct_iovec, len(options))
for i, option := range options {
out[i].iov_base = unsafe.Pointer(C.CString(option))
out[i].iov_len = C.size_t(len(option) + 1)
}
return out
}
func mount(device, target, mType string, flag uintptr, data string) error {
isNullFS := false
xs := strings.Split(data, ",")
for _, x := range xs {
if x == "bind" {
isNullFS = true
}
}
options := []string{"fspath", target}
if isNullFS {
options = append(options, "fstype", "nullfs", "target", device)
} else {
options = append(options, "fstype", mType, "from", device)
}
rawOptions := allocateIOVecs(options)
for _, rawOption := range rawOptions {
defer C.free(rawOption.iov_base)
}
if errno := C.nmount(&rawOptions[0], C.uint(len(options)), C.int(flag)); errno != 0 {
reason := C.GoString(C.strerror(*C.__error()))
return fmt.Errorf("Failed to call nmount: %s", reason)
}
return nil
}
func unmount(target string, flag int) error {
return unix.Unmount(target, flag)
}

View File

@@ -0,0 +1,57 @@
package mount
import (
"golang.org/x/sys/unix"
)
const (
// ptypes is the set propagation types.
ptypes = unix.MS_SHARED | unix.MS_PRIVATE | unix.MS_SLAVE | unix.MS_UNBINDABLE
// pflags is the full set valid flags for a change propagation call.
pflags = ptypes | unix.MS_REC | unix.MS_SILENT
// broflags is the combination of bind and read only
broflags = unix.MS_BIND | unix.MS_RDONLY
)
// isremount returns true if either device name or flags identify a remount request, false otherwise.
func isremount(device string, flags uintptr) bool {
switch {
// We treat device "" and "none" as a remount request to provide compatibility with
// requests that don't explicitly set MS_REMOUNT such as those manipulating bind mounts.
case flags&unix.MS_REMOUNT != 0, device == "", device == "none":
return true
default:
return false
}
}
func mount(device, target, mType string, flags uintptr, data string) error {
oflags := flags &^ ptypes
if !isremount(device, flags) || data != "" {
// Initial call applying all non-propagation flags for mount
// or remount with changed data
if err := unix.Mount(device, target, mType, oflags, data); err != nil {
return err
}
}
if flags&ptypes != 0 {
// Change the propagation type.
if err := unix.Mount("", target, "", flags&pflags, ""); err != nil {
return err
}
}
if oflags&broflags == broflags {
// Remount the bind to apply read only.
return unix.Mount("", target, "", oflags|unix.MS_REMOUNT, "")
}
return nil
}
func unmount(target string, flag int) error {
return unix.Unmount(target, flag)
}

View File

@@ -0,0 +1,34 @@
// +build solaris,cgo
package mount
import (
"unsafe"
"golang.org/x/sys/unix"
)
// #include <stdlib.h>
// #include <stdio.h>
// #include <sys/mount.h>
// int Mount(const char *spec, const char *dir, int mflag,
// char *fstype, char *dataptr, int datalen, char *optptr, int optlen) {
// return mount(spec, dir, mflag, fstype, dataptr, datalen, optptr, optlen);
// }
import "C"
func mount(device, target, mType string, flag uintptr, data string) error {
spec := C.CString(device)
dir := C.CString(target)
fstype := C.CString(mType)
_, err := C.Mount(spec, dir, C.int(flag), fstype, nil, 0, nil, 0)
C.free(unsafe.Pointer(spec))
C.free(unsafe.Pointer(dir))
C.free(unsafe.Pointer(fstype))
return err
}
func unmount(target string, flag int) error {
err := unix.Unmount(target, flag)
return err
}

View File

@@ -0,0 +1,11 @@
// +build !linux,!freebsd,!solaris freebsd,!cgo solaris,!cgo
package mount
func mount(device, target, mType string, flag uintptr, data string) error {
panic("Not implemented")
}
func unmount(target string, flag int) error {
panic("Not implemented")
}

54
vendor/github.com/docker/docker/pkg/mount/mountinfo.go generated vendored Normal file
View File

@@ -0,0 +1,54 @@
package mount
// Info reveals information about a particular mounted filesystem. This
// struct is populated from the content in the /proc/<pid>/mountinfo file.
type Info struct {
// ID is a unique identifier of the mount (may be reused after umount).
ID int
// Parent indicates the ID of the mount parent (or of self for the top of the
// mount tree).
Parent int
// Major indicates one half of the device ID which identifies the device class.
Major int
// Minor indicates one half of the device ID which identifies a specific
// instance of device.
Minor int
// Root of the mount within the filesystem.
Root string
// Mountpoint indicates the mount point relative to the process's root.
Mountpoint string
// Opts represents mount-specific options.
Opts string
// Optional represents optional fields.
Optional string
// Fstype indicates the type of filesystem, such as EXT3.
Fstype string
// Source indicates filesystem specific information or "none".
Source string
// VfsOpts represents per super block options.
VfsOpts string
}
type byMountpoint []*Info
func (by byMountpoint) Len() int {
return len(by)
}
func (by byMountpoint) Less(i, j int) bool {
return by[i].Mountpoint < by[j].Mountpoint
}
func (by byMountpoint) Swap(i, j int) {
by[i], by[j] = by[j], by[i]
}

View File

@@ -0,0 +1,41 @@
package mount
/*
#include <sys/param.h>
#include <sys/ucred.h>
#include <sys/mount.h>
*/
import "C"
import (
"fmt"
"reflect"
"unsafe"
)
// parseMountTable returns information about the mounted filesystems for the
// current process, using getmntinfo.
func parseMountTable() ([]*Info, error) {
var rawEntries *C.struct_statfs
count := int(C.getmntinfo(&rawEntries, C.MNT_WAIT))
if count == 0 {
return nil, fmt.Errorf("Failed to call getmntinfo")
}
var entries []C.struct_statfs
header := (*reflect.SliceHeader)(unsafe.Pointer(&entries))
header.Cap = count
header.Len = count
header.Data = uintptr(unsafe.Pointer(rawEntries))
var out []*Info
for _, entry := range entries {
var mountinfo Info
mountinfo.Mountpoint = C.GoString(&entry.f_mntonname[0])
mountinfo.Source = C.GoString(&entry.f_mntfromname[0])
mountinfo.Fstype = C.GoString(&entry.f_fstypename[0])
out = append(out, &mountinfo)
}
return out, nil
}

View File

@@ -0,0 +1,95 @@
// +build linux
package mount
import (
"bufio"
"fmt"
"io"
"os"
"strings"
)
const (
/* 36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue
(1)(2)(3) (4) (5) (6) (7) (8) (9) (10) (11)
(1) mount ID: unique identifier of the mount (may be reused after umount)
(2) parent ID: ID of parent (or of self for the top of the mount tree)
(3) major:minor: value of st_dev for files on filesystem
(4) root: root of the mount within the filesystem
(5) mount point: mount point relative to the process's root
(6) mount options: per mount options
(7) optional fields: zero or more fields of the form "tag[:value]"
(8) separator: marks the end of the optional fields
(9) filesystem type: name of filesystem of the form "type[.subtype]"
(10) mount source: filesystem specific information or "none"
(11) super options: per super block options*/
mountinfoFormat = "%d %d %d:%d %s %s %s %s"
)
// Parse /proc/self/mountinfo because comparing Dev and ino does not work from
// bind mounts
func parseMountTable() ([]*Info, error) {
f, err := os.Open("/proc/self/mountinfo")
if err != nil {
return nil, err
}
defer f.Close()
return parseInfoFile(f)
}
func parseInfoFile(r io.Reader) ([]*Info, error) {
var (
s = bufio.NewScanner(r)
out = []*Info{}
)
for s.Scan() {
if err := s.Err(); err != nil {
return nil, err
}
var (
p = &Info{}
text = s.Text()
optionalFields string
)
if _, err := fmt.Sscanf(text, mountinfoFormat,
&p.ID, &p.Parent, &p.Major, &p.Minor,
&p.Root, &p.Mountpoint, &p.Opts, &optionalFields); err != nil {
return nil, fmt.Errorf("Scanning '%s' failed: %s", text, err)
}
// Safe as mountinfo encodes mountpoints with spaces as \040.
index := strings.Index(text, " - ")
postSeparatorFields := strings.Fields(text[index+3:])
if len(postSeparatorFields) < 3 {
return nil, fmt.Errorf("Error found less than 3 fields post '-' in %q", text)
}
if optionalFields != "-" {
p.Optional = optionalFields
}
p.Fstype = postSeparatorFields[0]
p.Source = postSeparatorFields[1]
p.VfsOpts = strings.Join(postSeparatorFields[2:], " ")
out = append(out, p)
}
return out, nil
}
// PidMountInfo collects the mounts for a specific process ID. If the process
// ID is unknown, it is better to use `GetMounts` which will inspect
// "/proc/self/mountinfo" instead.
func PidMountInfo(pid int) ([]*Info, error) {
f, err := os.Open(fmt.Sprintf("/proc/%d/mountinfo", pid))
if err != nil {
return nil, err
}
defer f.Close()
return parseInfoFile(f)
}
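A sketch that lists mounts via the mountinfo parser above; it assumes Linux and a readable /proc.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/docker/docker/pkg/mount"
)

func main() {
	// GetMounts parses /proc/self/mountinfo on Linux.
	infos, err := mount.GetMounts()
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range infos {
		fmt.Printf("%-10s %-30s %s\n", m.Fstype, m.Mountpoint, m.Opts)
	}

	// PidMountInfo reads the mount table of a specific process instead.
	if _, err := mount.PidMountInfo(os.Getpid()); err != nil {
		log.Fatal(err)
	}
}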

View File

@@ -0,0 +1,44 @@
// +build solaris,cgo
package mount
/*
#include <stdio.h>
#include <stdlib.h>
#include <sys/mnttab.h>
*/
import "C"
import (
"fmt"
"unsafe"
)
func parseMountTable() ([]*Info, error) {
path := C.CString(C.MNTTAB)
defer C.free(unsafe.Pointer(path))
mode := C.CString("r")
defer C.free(unsafe.Pointer(mode))
mnttab := C.fopen(path, mode)
if mnttab == nil {
return nil, fmt.Errorf("Failed to open %s", C.MNTTAB)
}
var out []*Info
var mp C.struct_mnttab
ret := C.getmntent(mnttab, &mp)
for ret == 0 {
var mountinfo Info
mountinfo.Mountpoint = C.GoString(mp.mnt_mountp)
mountinfo.Source = C.GoString(mp.mnt_special)
mountinfo.Fstype = C.GoString(mp.mnt_fstype)
mountinfo.Opts = C.GoString(mp.mnt_mntopts)
out = append(out, &mountinfo)
ret = C.getmntent(mnttab, &mp)
}
C.fclose(mnttab)
return out, nil
}

View File

@@ -0,0 +1,12 @@
// +build !windows,!linux,!freebsd,!solaris freebsd,!cgo solaris,!cgo
package mount
import (
"fmt"
"runtime"
)
func parseMountTable() ([]*Info, error) {
return nil, fmt.Errorf("mount.parseMountTable is not implemented on %s/%s", runtime.GOOS, runtime.GOARCH)
}

View File

@@ -0,0 +1,6 @@
package mount
func parseMountTable() ([]*Info, error) {
// Do NOT return an error!
return nil, nil
}

View File

@@ -0,0 +1,69 @@
// +build linux
package mount
// MakeShared ensures a mounted filesystem has the SHARED mount option enabled.
// See the supported options in flags.go for further reference.
func MakeShared(mountPoint string) error {
return ensureMountedAs(mountPoint, "shared")
}
// MakeRShared ensures a mounted filesystem has the RSHARED mount option enabled.
// See the supported options in flags.go for further reference.
func MakeRShared(mountPoint string) error {
return ensureMountedAs(mountPoint, "rshared")
}
// MakePrivate ensures a mounted filesystem has the PRIVATE mount option enabled.
// See the supported options in flags.go for further reference.
func MakePrivate(mountPoint string) error {
return ensureMountedAs(mountPoint, "private")
}
// MakeRPrivate ensures a mounted filesystem has the RPRIVATE mount option
// enabled. See the supported options in flags.go for further reference.
func MakeRPrivate(mountPoint string) error {
return ensureMountedAs(mountPoint, "rprivate")
}
// MakeSlave ensures a mounted filesystem has the SLAVE mount option enabled.
// See the supported options in flags.go for further reference.
func MakeSlave(mountPoint string) error {
return ensureMountedAs(mountPoint, "slave")
}
// MakeRSlave ensures a mounted filesystem has the RSLAVE mount option enabled.
// See the supported options in flags.go for further reference.
func MakeRSlave(mountPoint string) error {
return ensureMountedAs(mountPoint, "rslave")
}
// MakeUnbindable ensures a mounted filesystem has the UNBINDABLE mount option
// enabled. See the supported options in flags.go for further reference.
func MakeUnbindable(mountPoint string) error {
return ensureMountedAs(mountPoint, "unbindable")
}
// MakeRUnbindable ensures a mounted filesystem has the RUNBINDABLE mount
// option enabled. See the supported options in flags.go for further reference.
func MakeRUnbindable(mountPoint string) error {
return ensureMountedAs(mountPoint, "runbindable")
}
func ensureMountedAs(mountPoint, options string) error {
mounted, err := Mounted(mountPoint)
if err != nil {
return err
}
if !mounted {
if err := Mount(mountPoint, mountPoint, "none", "bind,rw"); err != nil {
return err
}
}
if _, err = Mounted(mountPoint); err != nil {
return err
}
return ForceMount("", mountPoint, "none", options)
}
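A sketch of applying propagation flags with the helpers above; the directories are illustrative and root privileges on Linux are assumed.
package main

import (
	"log"

	"github.com/docker/docker/pkg/mount"
)

func main() {
	// Ensure a directory is a mount point with rshared propagation, so mounts
	// created underneath it are visible to other mount namespaces.
	if err := mount.MakeRShared("/var/lib/docker"); err != nil {
		log.Fatal(err)
	}
	// Conversely, keep another subtree private.
	if err := mount.MakeRPrivate("/mnt/private-area"); err != nil {
		log.Fatal(err)
	}
	log.Println("propagation flags applied")
}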

View File

@@ -0,0 +1,58 @@
// +build solaris
package mount
// MakeShared ensures a mounted filesystem has the SHARED mount option enabled.
// See the supported options in flags.go for further reference.
func MakeShared(mountPoint string) error {
return ensureMountedAs(mountPoint, "shared")
}
// MakeRShared ensures a mounted filesystem has the RSHARED mount option enabled.
// See the supported options in flags.go for further reference.
func MakeRShared(mountPoint string) error {
return ensureMountedAs(mountPoint, "rshared")
}
// MakePrivate ensures a mounted filesystem has the PRIVATE mount option enabled.
// See the supported options in flags.go for further reference.
func MakePrivate(mountPoint string) error {
return ensureMountedAs(mountPoint, "private")
}
// MakeRPrivate ensures a mounted filesystem has the RPRIVATE mount option
// enabled. See the supported options in flags.go for further reference.
func MakeRPrivate(mountPoint string) error {
return ensureMountedAs(mountPoint, "rprivate")
}
// MakeSlave ensures a mounted filesystem has the SLAVE mount option enabled.
// See the supported options in flags.go for further reference.
func MakeSlave(mountPoint string) error {
return ensureMountedAs(mountPoint, "slave")
}
// MakeRSlave ensures a mounted filesystem has the RSLAVE mount option enabled.
// See the supported options in flags.go for further reference.
func MakeRSlave(mountPoint string) error {
return ensureMountedAs(mountPoint, "rslave")
}
// MakeUnbindable ensures a mounted filesystem has the UNBINDABLE mount option
// enabled. See the supported options in flags.go for further reference.
func MakeUnbindable(mountPoint string) error {
return ensureMountedAs(mountPoint, "unbindable")
}
// MakeRUnbindable ensures a mounted filesystem has the RUNBINDABLE mount
// option enabled. See the supported options in flags.go for further reference.
func MakeRUnbindable(mountPoint string) error {
return ensureMountedAs(mountPoint, "runbindable")
}
func ensureMountedAs(mountPoint, options string) error {
// TODO: Solaris does not support bind mounts.
// Evaluate lofs and also look at the relevant
// mount flags to be supported.
return nil
}

1
vendor/github.com/docker/docker/pkg/signal/README.md generated vendored Normal file
View File

@@ -0,0 +1 @@
This package provides helper functions for dealing with signals across various operating systems

54
vendor/github.com/docker/docker/pkg/signal/signal.go generated vendored Normal file
View File

@@ -0,0 +1,54 @@
// Package signal provides helper functions for dealing with signals across
// various operating systems.
package signal
import (
"fmt"
"os"
"os/signal"
"strconv"
"strings"
"syscall"
)
// CatchAll catches all signals and relays them to the specified channel.
func CatchAll(sigc chan os.Signal) {
handledSigs := []os.Signal{}
for _, s := range SignalMap {
handledSigs = append(handledSigs, s)
}
signal.Notify(sigc, handledSigs...)
}
// StopCatch stops catching the signals and closes the specified channel.
func StopCatch(sigc chan os.Signal) {
signal.Stop(sigc)
close(sigc)
}
// ParseSignal translates a string to a valid syscall signal.
// It returns an error if the signal map doesn't include the given signal.
func ParseSignal(rawSignal string) (syscall.Signal, error) {
s, err := strconv.Atoi(rawSignal)
if err == nil {
if s == 0 {
return -1, fmt.Errorf("Invalid signal: %s", rawSignal)
}
return syscall.Signal(s), nil
}
signal, ok := SignalMap[strings.TrimPrefix(strings.ToUpper(rawSignal), "SIG")]
if !ok {
return -1, fmt.Errorf("Invalid signal: %s", rawSignal)
}
return signal, nil
}
// ValidSignalForPlatform returns true if a signal is valid on the platform
func ValidSignalForPlatform(sig syscall.Signal) bool {
for _, v := range SignalMap {
if v == sig {
return true
}
}
return false
}
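A sketch of ParseSignal and CatchAll from the package above; the raw signal strings are illustrative.
package main

import (
	"fmt"
	"os"

	"github.com/docker/docker/pkg/signal"
)

func main() {
	// Both numeric strings and names (with or without the SIG prefix) parse.
	for _, raw := range []string{"9", "TERM", "SIGHUP"} {
		sig, err := signal.ParseSignal(raw)
		if err != nil {
			fmt.Println("invalid:", raw)
			continue
		}
		fmt.Printf("%s -> %d (valid here: %v)\n", raw, sig, signal.ValidSignalForPlatform(sig))
	}

	// Relay every signal in SignalMap to a channel until StopCatch is called.
	sigc := make(chan os.Signal, 128)
	signal.CatchAll(sigc)
	defer signal.StopCatch(sigc)
}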

View File

@@ -0,0 +1,41 @@
package signal
import (
"syscall"
)
// SignalMap is a map of Darwin signals.
var SignalMap = map[string]syscall.Signal{
"ABRT": syscall.SIGABRT,
"ALRM": syscall.SIGALRM,
"BUS": syscall.SIGBUS,
"CHLD": syscall.SIGCHLD,
"CONT": syscall.SIGCONT,
"EMT": syscall.SIGEMT,
"FPE": syscall.SIGFPE,
"HUP": syscall.SIGHUP,
"ILL": syscall.SIGILL,
"INFO": syscall.SIGINFO,
"INT": syscall.SIGINT,
"IO": syscall.SIGIO,
"IOT": syscall.SIGIOT,
"KILL": syscall.SIGKILL,
"PIPE": syscall.SIGPIPE,
"PROF": syscall.SIGPROF,
"QUIT": syscall.SIGQUIT,
"SEGV": syscall.SIGSEGV,
"STOP": syscall.SIGSTOP,
"SYS": syscall.SIGSYS,
"TERM": syscall.SIGTERM,
"TRAP": syscall.SIGTRAP,
"TSTP": syscall.SIGTSTP,
"TTIN": syscall.SIGTTIN,
"TTOU": syscall.SIGTTOU,
"URG": syscall.SIGURG,
"USR1": syscall.SIGUSR1,
"USR2": syscall.SIGUSR2,
"VTALRM": syscall.SIGVTALRM,
"WINCH": syscall.SIGWINCH,
"XCPU": syscall.SIGXCPU,
"XFSZ": syscall.SIGXFSZ,
}

View File

@@ -0,0 +1,43 @@
package signal
import (
"syscall"
)
// SignalMap is a map of FreeBSD signals.
var SignalMap = map[string]syscall.Signal{
"ABRT": syscall.SIGABRT,
"ALRM": syscall.SIGALRM,
"BUS": syscall.SIGBUS,
"CHLD": syscall.SIGCHLD,
"CONT": syscall.SIGCONT,
"EMT": syscall.SIGEMT,
"FPE": syscall.SIGFPE,
"HUP": syscall.SIGHUP,
"ILL": syscall.SIGILL,
"INFO": syscall.SIGINFO,
"INT": syscall.SIGINT,
"IO": syscall.SIGIO,
"IOT": syscall.SIGIOT,
"KILL": syscall.SIGKILL,
"LWP": syscall.SIGLWP,
"PIPE": syscall.SIGPIPE,
"PROF": syscall.SIGPROF,
"QUIT": syscall.SIGQUIT,
"SEGV": syscall.SIGSEGV,
"STOP": syscall.SIGSTOP,
"SYS": syscall.SIGSYS,
"TERM": syscall.SIGTERM,
"THR": syscall.SIGTHR,
"TRAP": syscall.SIGTRAP,
"TSTP": syscall.SIGTSTP,
"TTIN": syscall.SIGTTIN,
"TTOU": syscall.SIGTTOU,
"URG": syscall.SIGURG,
"USR1": syscall.SIGUSR1,
"USR2": syscall.SIGUSR2,
"VTALRM": syscall.SIGVTALRM,
"WINCH": syscall.SIGWINCH,
"XCPU": syscall.SIGXCPU,
"XFSZ": syscall.SIGXFSZ,
}

View File

@@ -0,0 +1,81 @@
package signal
import (
"syscall"
"golang.org/x/sys/unix"
)
const (
sigrtmin = 34
sigrtmax = 64
)
// SignalMap is a map of Linux signals.
var SignalMap = map[string]syscall.Signal{
"ABRT": unix.SIGABRT,
"ALRM": unix.SIGALRM,
"BUS": unix.SIGBUS,
"CHLD": unix.SIGCHLD,
"CLD": unix.SIGCLD,
"CONT": unix.SIGCONT,
"FPE": unix.SIGFPE,
"HUP": unix.SIGHUP,
"ILL": unix.SIGILL,
"INT": unix.SIGINT,
"IO": unix.SIGIO,
"IOT": unix.SIGIOT,
"KILL": unix.SIGKILL,
"PIPE": unix.SIGPIPE,
"POLL": unix.SIGPOLL,
"PROF": unix.SIGPROF,
"PWR": unix.SIGPWR,
"QUIT": unix.SIGQUIT,
"SEGV": unix.SIGSEGV,
"STKFLT": unix.SIGSTKFLT,
"STOP": unix.SIGSTOP,
"SYS": unix.SIGSYS,
"TERM": unix.SIGTERM,
"TRAP": unix.SIGTRAP,
"TSTP": unix.SIGTSTP,
"TTIN": unix.SIGTTIN,
"TTOU": unix.SIGTTOU,
"URG": unix.SIGURG,
"USR1": unix.SIGUSR1,
"USR2": unix.SIGUSR2,
"VTALRM": unix.SIGVTALRM,
"WINCH": unix.SIGWINCH,
"XCPU": unix.SIGXCPU,
"XFSZ": unix.SIGXFSZ,
"RTMIN": sigrtmin,
"RTMIN+1": sigrtmin + 1,
"RTMIN+2": sigrtmin + 2,
"RTMIN+3": sigrtmin + 3,
"RTMIN+4": sigrtmin + 4,
"RTMIN+5": sigrtmin + 5,
"RTMIN+6": sigrtmin + 6,
"RTMIN+7": sigrtmin + 7,
"RTMIN+8": sigrtmin + 8,
"RTMIN+9": sigrtmin + 9,
"RTMIN+10": sigrtmin + 10,
"RTMIN+11": sigrtmin + 11,
"RTMIN+12": sigrtmin + 12,
"RTMIN+13": sigrtmin + 13,
"RTMIN+14": sigrtmin + 14,
"RTMIN+15": sigrtmin + 15,
"RTMAX-14": sigrtmax - 14,
"RTMAX-13": sigrtmax - 13,
"RTMAX-12": sigrtmax - 12,
"RTMAX-11": sigrtmax - 11,
"RTMAX-10": sigrtmax - 10,
"RTMAX-9": sigrtmax - 9,
"RTMAX-8": sigrtmax - 8,
"RTMAX-7": sigrtmax - 7,
"RTMAX-6": sigrtmax - 6,
"RTMAX-5": sigrtmax - 5,
"RTMAX-4": sigrtmax - 4,
"RTMAX-3": sigrtmax - 3,
"RTMAX-2": sigrtmax - 2,
"RTMAX-1": sigrtmax - 1,
"RTMAX": sigrtmax,
}

View File

@@ -0,0 +1,42 @@
package signal
import (
"syscall"
)
// SignalMap is a map of Solaris signals.
// SIGINFO and SIGTHR not defined for Solaris
var SignalMap = map[string]syscall.Signal{
"ABRT": syscall.SIGABRT,
"ALRM": syscall.SIGALRM,
"BUS": syscall.SIGBUS,
"CHLD": syscall.SIGCHLD,
"CONT": syscall.SIGCONT,
"EMT": syscall.SIGEMT,
"FPE": syscall.SIGFPE,
"HUP": syscall.SIGHUP,
"ILL": syscall.SIGILL,
"INT": syscall.SIGINT,
"IO": syscall.SIGIO,
"IOT": syscall.SIGIOT,
"KILL": syscall.SIGKILL,
"LWP": syscall.SIGLWP,
"PIPE": syscall.SIGPIPE,
"PROF": syscall.SIGPROF,
"QUIT": syscall.SIGQUIT,
"SEGV": syscall.SIGSEGV,
"STOP": syscall.SIGSTOP,
"SYS": syscall.SIGSYS,
"TERM": syscall.SIGTERM,
"TRAP": syscall.SIGTRAP,
"TSTP": syscall.SIGTSTP,
"TTIN": syscall.SIGTTIN,
"TTOU": syscall.SIGTTOU,
"URG": syscall.SIGURG,
"USR1": syscall.SIGUSR1,
"USR2": syscall.SIGUSR2,
"VTALRM": syscall.SIGVTALRM,
"WINCH": syscall.SIGWINCH,
"XCPU": syscall.SIGXCPU,
"XFSZ": syscall.SIGXFSZ,
}

View File

@@ -0,0 +1,21 @@
// +build !windows
package signal
import (
"syscall"
)
// Signals used in cli/command (no windows equivalent, use
// invalid signals so they don't get handled)
const (
// SIGCHLD is a signal sent to a process when a child process terminates, is interrupted, or resumes after being interrupted.
SIGCHLD = syscall.SIGCHLD
// SIGWINCH is a signal sent to a process when its controlling terminal changes its size
SIGWINCH = syscall.SIGWINCH
// SIGPIPE is a signal sent to a process when a pipe is written to before the other end is open for reading
SIGPIPE = syscall.SIGPIPE
// DefaultStopSignal is the syscall signal used to stop a container in unix systems.
DefaultStopSignal = "SIGTERM"
)

View File

@@ -0,0 +1,10 @@
// +build !linux,!darwin,!freebsd,!windows,!solaris
package signal
import (
"syscall"
)
// SignalMap is an empty map of signals for unsupported platforms.
var SignalMap = map[string]syscall.Signal{}

View File

@@ -0,0 +1,28 @@
// +build windows
package signal
import (
"syscall"
)
// Signals used in cli/command (no windows equivalent, use
// invalid signals so they don't get handled)
const (
SIGCHLD = syscall.Signal(0xff)
SIGWINCH = syscall.Signal(0xff)
SIGPIPE = syscall.Signal(0xff)
// DefaultStopSignal is the syscall signal used to stop a container in windows systems.
DefaultStopSignal = "15"
)
// SignalMap is a map of "supported" signals. As per the comment in Go's
// ztypes_windows.go: "More invented values for signals". Windows doesn't
// really support signals in any way, shape or form that Unix does.
//
// We have these so that docker kill can be used to gracefully (TERM) and
// forcibly (KILL) terminate a container on Windows.
var SignalMap = map[string]syscall.Signal{
"KILL": syscall.SIGKILL,
"TERM": syscall.SIGTERM,
}

104
vendor/github.com/docker/docker/pkg/signal/trap.go generated vendored Normal file
View File

@@ -0,0 +1,104 @@
package signal
import (
"fmt"
"os"
gosignal "os/signal"
"path/filepath"
"runtime"
"strings"
"sync/atomic"
"syscall"
"time"
"github.com/pkg/errors"
)
// Trap sets up a simplified signal "trap", appropriate for common
// behavior expected from a vanilla unix command-line tool in general
// (and the Docker engine in particular).
//
// * If SIGINT or SIGTERM are received, `cleanup` is called, then the process is terminated.
// * If SIGINT or SIGTERM are received 3 times before cleanup is complete, then cleanup is
// skipped and the process is terminated immediately (allows force quit of stuck daemon)
// * A SIGQUIT always causes an exit without cleanup, with a goroutine dump preceding exit.
// * Ignore SIGPIPE events. These are generated by systemd when journald is restarted while
// the docker daemon is not restarted and also running under systemd.
// Fixes https://github.com/docker/docker/issues/19728
//
func Trap(cleanup func(), logger interface {
Info(args ...interface{})
}) {
c := make(chan os.Signal, 1)
// we will handle INT, TERM, QUIT, SIGPIPE here
signals := []os.Signal{os.Interrupt, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGPIPE}
gosignal.Notify(c, signals...)
go func() {
interruptCount := uint32(0)
for sig := range c {
if sig == syscall.SIGPIPE {
continue
}
go func(sig os.Signal) {
logger.Info(fmt.Sprintf("Processing signal '%v'", sig))
switch sig {
case os.Interrupt, syscall.SIGTERM:
if atomic.LoadUint32(&interruptCount) < 3 {
// Initiate the cleanup only once
if atomic.AddUint32(&interruptCount, 1) == 1 {
// Call the provided cleanup handler
cleanup()
os.Exit(0)
} else {
return
}
} else {
// 3 SIGTERM/INT signals received; force exit without cleanup
logger.Info("Forcing docker daemon shutdown without cleanup; 3 interrupts received")
}
case syscall.SIGQUIT:
DumpStacks("")
logger.Info("Forcing docker daemon shutdown without cleanup on SIGQUIT")
}
// For the SIGINT/TERM and SIGQUIT non-clean shutdown cases, exit with 128 + signal number
os.Exit(128 + int(sig.(syscall.Signal)))
}(sig)
}
}()
}
const stacksLogNameTemplate = "goroutine-stacks-%s.log"
// DumpStacks appends the runtime stack into file in dir and returns full path
// to that file.
func DumpStacks(dir string) (string, error) {
var (
buf []byte
stackSize int
)
bufferLen := 16384
for stackSize == len(buf) {
buf = make([]byte, bufferLen)
stackSize = runtime.Stack(buf, true)
bufferLen *= 2
}
buf = buf[:stackSize]
var f *os.File
if dir != "" {
path := filepath.Join(dir, fmt.Sprintf(stacksLogNameTemplate, strings.Replace(time.Now().Format(time.RFC3339), ":", "", -1)))
var err error
f, err = os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0666)
if err != nil {
return "", errors.Wrap(err, "failed to open file to write the goroutine stacks")
}
defer f.Close()
defer f.Sync()
} else {
f = os.Stderr
}
if _, err := f.Write(buf); err != nil {
return "", errors.Wrap(err, "failed to write goroutine stacks")
}
return f.Name(), nil
}
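A minimal usage sketch (not part of the vendored file) showing how a daemon might wire Trap to a cleanup function; the stdLogger type and the sleep stand-in are illustrative assumptions, and only signal.Trap comes from the package above.
package main

import (
	"fmt"
	"time"

	"github.com/docker/docker/pkg/signal"
)

// stdLogger satisfies the anonymous interface { Info(args ...interface{}) }
// that Trap expects; any logger with such a method (e.g. logrus) also works.
type stdLogger struct{}

func (stdLogger) Info(args ...interface{}) { fmt.Println(args...) }

func main() {
	signal.Trap(func() {
		// Release resources here; called once on the first SIGINT/SIGTERM.
		fmt.Println("cleaning up before exit")
	}, stdLogger{})

	// Stand-in for the daemon's serve loop.
	time.Sleep(time.Hour)
}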

View File

@@ -0,0 +1 @@
This package provides helper functions for dealing with string identifiers

View File

@@ -0,0 +1,99 @@
// Package stringid provides helper functions for dealing with string identifiers
package stringid
import (
cryptorand "crypto/rand"
"encoding/hex"
"fmt"
"io"
"math"
"math/big"
"math/rand"
"regexp"
"strconv"
"strings"
"time"
)
const shortLen = 12
var (
validShortID = regexp.MustCompile("^[a-f0-9]{12}$")
validHex = regexp.MustCompile(`^[a-f0-9]{64}$`)
)
// IsShortID determines if an arbitrary string *looks like* a short ID.
func IsShortID(id string) bool {
return validShortID.MatchString(id)
}
// TruncateID returns a shorthand version of a string identifier for convenience.
// A collision with other shorthands is very unlikely, but possible.
// In case of a collision a lookup with TruncIndex.Get() will fail, and the caller
// will need to use a longer prefix, or the full-length ID.
func TruncateID(id string) string {
if i := strings.IndexRune(id, ':'); i >= 0 {
id = id[i+1:]
}
if len(id) > shortLen {
id = id[:shortLen]
}
return id
}
func generateID(r io.Reader) string {
b := make([]byte, 32)
for {
if _, err := io.ReadFull(r, b); err != nil {
panic(err) // This shouldn't happen
}
id := hex.EncodeToString(b)
		// if we try to parse the truncated form as an int and we don't have
// an error then the value is all numeric and causes issues when
// used as a hostname. ref #3869
if _, err := strconv.ParseInt(TruncateID(id), 10, 64); err == nil {
continue
}
return id
}
}
// GenerateRandomID returns a unique id.
func GenerateRandomID() string {
return generateID(cryptorand.Reader)
}
// GenerateNonCryptoID generates a unique ID without using cryptographically
// secure sources of randomness.
// It helps you to save entropy.
func GenerateNonCryptoID() string {
return generateID(readerFunc(rand.Read))
}
// ValidateID checks whether an ID string is a valid image ID.
func ValidateID(id string) error {
if ok := validHex.MatchString(id); !ok {
return fmt.Errorf("image ID %q is invalid", id)
}
return nil
}
func init() {
// safely set the seed globally so we generate random ids. Tries to use a
// crypto seed before falling back to time.
var seed int64
if cryptoseed, err := cryptorand.Int(cryptorand.Reader, big.NewInt(math.MaxInt64)); err != nil {
// This should not happen, but worst-case fallback to time-based seed.
seed = time.Now().UnixNano()
} else {
seed = cryptoseed.Int64()
}
rand.Seed(seed)
}
type readerFunc func(p []byte) (int, error)
func (fn readerFunc) Read(p []byte) (int, error) {
return fn(p)
}
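A short, hedged sketch (not part of the vendored file) of how the stringid helpers above are typically combined; the printed short ID is illustrative.
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/stringid"
)

func main() {
	id := stringid.GenerateRandomID() // 64 hex characters, never all-numeric once truncated
	short := stringid.TruncateID(id)  // first 12 characters, convenient for display

	fmt.Println(short, stringid.IsShortID(short)) // e.g. "4e38e38c7e60 true"

	if err := stringid.ValidateID(id); err != nil {
		fmt.Println("unexpected:", err)
	}
}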

View File

@@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2014-2017 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,27 @@
Copyright (c) 2014-2017 The Docker & Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -0,0 +1,6 @@
Package symlink implements EvalSymlinksInScope, which is an extension of filepath.EvalSymlinks,
as well as a Windows long-path aware version of filepath.EvalSymlinks
from the [Go standard library](https://golang.org/pkg/path/filepath).
The code from filepath.EvalSymlinks has been adapted in fs.go.
Please read the LICENSE.BSD file that governs fs.go and LICENSE.APACHE for fs_test.go.

144
vendor/github.com/docker/docker/pkg/symlink/fs.go generated vendored Normal file
View File

@@ -0,0 +1,144 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE.BSD file.
// This code is a modified version of path/filepath/symlink.go from the Go standard library.
package symlink
import (
"bytes"
"errors"
"os"
"path/filepath"
"strings"
"github.com/docker/docker/pkg/system"
)
// FollowSymlinkInScope is a wrapper around evalSymlinksInScope that returns an
// absolute path. This function handles paths in a platform-agnostic manner.
func FollowSymlinkInScope(path, root string) (string, error) {
path, err := filepath.Abs(filepath.FromSlash(path))
if err != nil {
return "", err
}
root, err = filepath.Abs(filepath.FromSlash(root))
if err != nil {
return "", err
}
return evalSymlinksInScope(path, root)
}
// evalSymlinksInScope will evaluate symlinks in `path` within a scope `root` and return
// a result guaranteed to be contained within the scope `root`, at the time of the call.
// Symlinks in `root` are not evaluated and left as-is.
// Errors encountered while attempting to evaluate symlinks in path will be returned.
// Non-existing paths are valid and do not constitute an error.
// `path` has to contain `root` as a prefix, or else an error will be returned.
// Trying to break out from `root` does not constitute an error.
//
// Example:
// If /foo/bar -> /outside,
// FollowSymlinkInScope("/foo/bar", "/foo") == "/foo/outside" instead of "/outside"
//
// IMPORTANT: it is the caller's responsibility to call evalSymlinksInScope *after* relevant symlinks
// are created, and not to subsequently create additional symlinks that could make a
// previously-safe path unsafe. Example: if /foo/bar does not exist, evalSymlinksInScope("/foo/bar", "/foo")
// would return "/foo/bar". If one makes /foo/bar a symlink to /baz subsequently, then "/foo/bar" should
// no longer be considered safely contained in "/foo".
func evalSymlinksInScope(path, root string) (string, error) {
root = filepath.Clean(root)
if path == root {
return path, nil
}
if !strings.HasPrefix(path, root) {
return "", errors.New("evalSymlinksInScope: " + path + " is not in " + root)
}
const maxIter = 255
originalPath := path
// given root of "/a" and path of "/a/b/../../c" we want path to be "/b/../../c"
path = path[len(root):]
if root == string(filepath.Separator) {
path = string(filepath.Separator) + path
}
if !strings.HasPrefix(path, string(filepath.Separator)) {
return "", errors.New("evalSymlinksInScope: " + path + " is not in " + root)
}
path = filepath.Clean(path)
// consume path by taking each frontmost path element,
// expanding it if it's a symlink, and appending it to b
var b bytes.Buffer
// b here will always be considered to be the "current absolute path inside
// root" when we append paths to it, we also append a slash and use
// filepath.Clean after the loop to trim the trailing slash
for n := 0; path != ""; n++ {
if n > maxIter {
return "", errors.New("evalSymlinksInScope: too many links in " + originalPath)
}
// find next path component, p
i := strings.IndexRune(path, filepath.Separator)
var p string
if i == -1 {
p, path = path, ""
} else {
p, path = path[:i], path[i+1:]
}
if p == "" {
continue
}
// this takes a b.String() like "b/../" and a p like "c" and turns it
// into "/b/../c" which then gets filepath.Cleaned into "/c" and then
// root gets prepended and we Clean again (to remove any trailing slash
// if the first Clean gave us just "/")
cleanP := filepath.Clean(string(filepath.Separator) + b.String() + p)
if isDriveOrRoot(cleanP) {
// never Lstat "/" itself, or drive letters on Windows
b.Reset()
continue
}
fullP := filepath.Clean(root + cleanP)
fi, err := os.Lstat(fullP)
if os.IsNotExist(err) {
// if p does not exist, accept it
b.WriteString(p)
b.WriteRune(filepath.Separator)
continue
}
if err != nil {
return "", err
}
if fi.Mode()&os.ModeSymlink == 0 {
b.WriteString(p)
b.WriteRune(filepath.Separator)
continue
}
// it's a symlink, put it at the front of path
dest, err := os.Readlink(fullP)
if err != nil {
return "", err
}
if system.IsAbs(dest) {
b.Reset()
}
path = dest + string(filepath.Separator) + path
}
// see note above on "fullP := ..." for why this is double-cleaned and
// what's happening here
return filepath.Clean(root + filepath.Clean(string(filepath.Separator)+b.String())), nil
}
// EvalSymlinks returns the path name after the evaluation of any symbolic
// links.
// If path is relative the result will be relative to the current directory,
// unless one of the components is an absolute symbolic link.
// This version has been updated to support long paths prepended with `\\?\`.
func EvalSymlinks(path string) (string, error) {
return evalSymlinks(path)
}
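A sketch (not part of the vendored file, assuming a hypothetical /data directory containing a symlink named link) of how FollowSymlinkInScope keeps resolution inside the given root.
package main

import (
	"fmt"
	"log"

	"github.com/docker/docker/pkg/symlink"
)

func main() {
	// If /data/link points at /etc/passwd, the result is rescoped to
	// /data/etc/passwd instead of escaping to /etc/passwd.
	p, err := symlink.FollowSymlinkInScope("/data/link", "/data")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(p)
}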

15
vendor/github.com/docker/docker/pkg/symlink/fs_unix.go generated vendored Normal file
View File

@@ -0,0 +1,15 @@
// +build !windows
package symlink
import (
"path/filepath"
)
func evalSymlinks(path string) (string, error) {
return filepath.EvalSymlinks(path)
}
func isDriveOrRoot(p string) bool {
return p == string(filepath.Separator)
}

View File

@@ -0,0 +1,169 @@
package symlink
import (
"bytes"
"errors"
"os"
"path/filepath"
"strings"
"github.com/docker/docker/pkg/longpath"
"golang.org/x/sys/windows"
)
func toShort(path string) (string, error) {
p, err := windows.UTF16FromString(path)
if err != nil {
return "", err
}
b := p // GetShortPathName says we can reuse buffer
n, err := windows.GetShortPathName(&p[0], &b[0], uint32(len(b)))
if err != nil {
return "", err
}
if n > uint32(len(b)) {
b = make([]uint16, n)
if _, err = windows.GetShortPathName(&p[0], &b[0], uint32(len(b))); err != nil {
return "", err
}
}
return windows.UTF16ToString(b), nil
}
func toLong(path string) (string, error) {
p, err := windows.UTF16FromString(path)
if err != nil {
return "", err
}
b := p // GetLongPathName says we can reuse buffer
n, err := windows.GetLongPathName(&p[0], &b[0], uint32(len(b)))
if err != nil {
return "", err
}
if n > uint32(len(b)) {
b = make([]uint16, n)
n, err = windows.GetLongPathName(&p[0], &b[0], uint32(len(b)))
if err != nil {
return "", err
}
}
b = b[:n]
return windows.UTF16ToString(b), nil
}
func evalSymlinks(path string) (string, error) {
path, err := walkSymlinks(path)
if err != nil {
return "", err
}
p, err := toShort(path)
if err != nil {
return "", err
}
p, err = toLong(p)
if err != nil {
return "", err
}
// windows.GetLongPathName does not change the case of the drive letter,
// but the result of EvalSymlinks must be unique, so we have
// EvalSymlinks(`c:\a`) == EvalSymlinks(`C:\a`).
// Make drive letter upper case.
if len(p) >= 2 && p[1] == ':' && 'a' <= p[0] && p[0] <= 'z' {
p = string(p[0]+'A'-'a') + p[1:]
} else if len(p) >= 6 && p[5] == ':' && 'a' <= p[4] && p[4] <= 'z' {
p = p[:3] + string(p[4]+'A'-'a') + p[5:]
}
return filepath.Clean(p), nil
}
const utf8RuneSelf = 0x80
func walkSymlinks(path string) (string, error) {
const maxIter = 255
originalPath := path
// consume path by taking each frontmost path element,
// expanding it if it's a symlink, and appending it to b
var b bytes.Buffer
for n := 0; path != ""; n++ {
if n > maxIter {
return "", errors.New("EvalSymlinks: too many links in " + originalPath)
}
// A path beginning with `\\?\` represents the root, so automatically
// skip that part and begin processing the next segment.
if strings.HasPrefix(path, longpath.Prefix) {
b.WriteString(longpath.Prefix)
path = path[4:]
continue
}
// find next path component, p
var i = -1
for j, c := range path {
if c < utf8RuneSelf && os.IsPathSeparator(uint8(c)) {
i = j
break
}
}
var p string
if i == -1 {
p, path = path, ""
} else {
p, path = path[:i], path[i+1:]
}
if p == "" {
if b.Len() == 0 {
// must be absolute path
b.WriteRune(filepath.Separator)
}
continue
}
// If this is the first segment after the long path prefix, accept the
// current segment as a volume root or UNC share and move on to the next.
if b.String() == longpath.Prefix {
b.WriteString(p)
b.WriteRune(filepath.Separator)
continue
}
fi, err := os.Lstat(b.String() + p)
if err != nil {
return "", err
}
if fi.Mode()&os.ModeSymlink == 0 {
b.WriteString(p)
if path != "" || (b.Len() == 2 && len(p) == 2 && p[1] == ':') {
b.WriteRune(filepath.Separator)
}
continue
}
// it's a symlink, put it at the front of path
dest, err := os.Readlink(b.String() + p)
if err != nil {
return "", err
}
if filepath.IsAbs(dest) || os.IsPathSeparator(dest[0]) {
b.Reset()
}
path = dest + string(filepath.Separator) + path
}
return filepath.Clean(b.String()), nil
}
func isDriveOrRoot(p string) bool {
if p == string(filepath.Separator) {
return true
}
length := len(p)
if length >= 2 {
if p[length-1] == ':' && (('a' <= p[length-2] && p[length-2] <= 'z') || ('A' <= p[length-2] && p[length-2] <= 'Z')) {
return true
}
}
return false
}

35
vendor/github.com/docker/docker/pkg/system/chtimes.go generated vendored Normal file
View File

@@ -0,0 +1,35 @@
package system
import (
"os"
"time"
)
// Chtimes changes the access time and modified time of a file at the given path
func Chtimes(name string, atime time.Time, mtime time.Time) error {
unixMinTime := time.Unix(0, 0)
unixMaxTime := maxTime
// If the modified time is prior to the Unix Epoch, or after the
// end of Unix Time, os.Chtimes has undefined behavior
// default to Unix Epoch in this case, just in case
if atime.Before(unixMinTime) || atime.After(unixMaxTime) {
atime = unixMinTime
}
if mtime.Before(unixMinTime) || mtime.After(unixMaxTime) {
mtime = unixMinTime
}
if err := os.Chtimes(name, atime, mtime); err != nil {
return err
}
// Take platform specific action for setting create time.
if err := setCTime(name, mtime); err != nil {
return err
}
return nil
}
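A minimal sketch of Chtimes (not part of the vendored file; the file path is a placeholder assumption).
package main

import (
	"log"
	"time"

	"github.com/docker/docker/pkg/system"
)

func main() {
	now := time.Now()
	// Out-of-range times (before the epoch or past the platform maximum)
	// are clamped to the Unix epoch before os.Chtimes is called.
	if err := system.Chtimes("/tmp/example.txt", now, now.Add(-time.Hour)); err != nil {
		log.Fatal(err)
	}
}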

View File

@@ -0,0 +1,14 @@
// +build !windows
package system
import (
"time"
)
// setCTime will set the create time on a file. On Unix, the create
// time is updated as a side effect of setting the modified time, so
// no action is required.
func setCTime(path string, ctime time.Time) error {
return nil
}

View File

@@ -0,0 +1,28 @@
// +build windows
package system
import (
"time"
"golang.org/x/sys/windows"
)
// setCTime will set the create time on a file. On Windows, this requires
// calling SetFileTime and explicitly including the create time.
func setCTime(path string, ctime time.Time) error {
ctimespec := windows.NsecToTimespec(ctime.UnixNano())
pathp, e := windows.UTF16PtrFromString(path)
if e != nil {
return e
}
h, e := windows.CreateFile(pathp,
windows.FILE_WRITE_ATTRIBUTES, windows.FILE_SHARE_WRITE, nil,
windows.OPEN_EXISTING, windows.FILE_FLAG_BACKUP_SEMANTICS, 0)
if e != nil {
return e
}
defer windows.Close(h)
c := windows.NsecToFiletime(windows.TimespecToNsec(ctimespec))
return windows.SetFileTime(h, &c, nil, nil)
}

10
vendor/github.com/docker/docker/pkg/system/errors.go generated vendored Normal file
View File

@@ -0,0 +1,10 @@
package system
import (
"errors"
)
var (
// ErrNotSupportedPlatform means the platform is not supported.
ErrNotSupportedPlatform = errors.New("platform and architecture is not supported")
)

19
vendor/github.com/docker/docker/pkg/system/exitcode.go generated vendored Normal file
View File

@@ -0,0 +1,19 @@
package system
import (
"fmt"
"os/exec"
"syscall"
)
// GetExitCode returns the ExitStatus of the specified error if its type is
// exec.ExitError, returns 0 and an error otherwise.
func GetExitCode(err error) (int, error) {
exitCode := 0
if exiterr, ok := err.(*exec.ExitError); ok {
if procExit, ok := exiterr.Sys().(syscall.WaitStatus); ok {
return procExit.ExitStatus(), nil
}
}
return exitCode, fmt.Errorf("failed to get exit code")
}
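An illustrative sketch of GetExitCode on a Unix shell command (not part of the vendored file; the command itself is an assumption).
package main

import (
	"fmt"
	"os/exec"

	"github.com/docker/docker/pkg/system"
)

func main() {
	err := exec.Command("/bin/sh", "-c", "exit 3").Run()

	// For an *exec.ExitError this yields (3, nil); for other errors it
	// returns 0 and a "failed to get exit code" error.
	code, cerr := system.GetExitCode(err)
	fmt.Println(code, cerr)
}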

67
vendor/github.com/docker/docker/pkg/system/filesys.go generated vendored Normal file
View File

@@ -0,0 +1,67 @@
// +build !windows
package system
import (
"io/ioutil"
"os"
"path/filepath"
)
// MkdirAllWithACL is a wrapper for MkdirAll on unix systems.
func MkdirAllWithACL(path string, perm os.FileMode, sddl string) error {
return MkdirAll(path, perm, sddl)
}
// MkdirAll creates a directory named path along with any necessary parents,
// with permission specified by attribute perm for all directories created.
func MkdirAll(path string, perm os.FileMode, sddl string) error {
return os.MkdirAll(path, perm)
}
// IsAbs is a platform-specific wrapper for filepath.IsAbs.
func IsAbs(path string) bool {
return filepath.IsAbs(path)
}
// The functions below here are wrappers for the equivalents in the os and ioutils packages.
// They are passthrough on Unix platforms, and only relevant on Windows.
// CreateSequential creates the named file with mode 0666 (before umask), truncating
// it if it already exists. If successful, methods on the returned
// File can be used for I/O; the associated file descriptor has mode
// O_RDWR.
// If there is an error, it will be of type *PathError.
func CreateSequential(name string) (*os.File, error) {
return os.Create(name)
}
// OpenSequential opens the named file for reading. If successful, methods on
// the returned file can be used for reading; the associated file
// descriptor has mode O_RDONLY.
// If there is an error, it will be of type *PathError.
func OpenSequential(name string) (*os.File, error) {
return os.Open(name)
}
// OpenFileSequential is the generalized open call; most users will use Open
// or Create instead. It opens the named file with specified flag
// (O_RDONLY etc.) and perm, (0666 etc.) if applicable. If successful,
// methods on the returned File can be used for I/O.
// If there is an error, it will be of type *PathError.
func OpenFileSequential(name string, flag int, perm os.FileMode) (*os.File, error) {
return os.OpenFile(name, flag, perm)
}
// TempFileSequential creates a new temporary file in the directory dir
// with a name beginning with prefix, opens the file for reading
// and writing, and returns the resulting *os.File.
// If dir is the empty string, TempFile uses the default directory
// for temporary files (see os.TempDir).
// Multiple programs calling TempFile simultaneously
// will not choose the same file. The caller can use f.Name()
// to find the pathname of the file. It is the caller's responsibility
// to remove the file when no longer needed.
func TempFileSequential(dir, prefix string) (f *os.File, err error) {
return ioutil.TempFile(dir, prefix)
}
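A sketch of the Unix pass-through wrappers above (not part of the vendored file; the paths are placeholder assumptions, and on Unix the sddl argument is ignored).
package main

import (
	"log"

	"github.com/docker/docker/pkg/system"
)

func main() {
	// Equivalent to os.MkdirAll on Unix; the SDDL string only matters on Windows.
	if err := system.MkdirAll("/tmp/demo/nested", 0755, ""); err != nil {
		log.Fatal(err)
	}

	f, err := system.TempFileSequential("/tmp/demo", "layer-")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	log.Println("created", f.Name())
}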

View File

@@ -0,0 +1,298 @@
// +build windows
package system
import (
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
"sync"
"syscall"
"time"
"unsafe"
winio "github.com/Microsoft/go-winio"
"golang.org/x/sys/windows"
)
const (
// SddlAdministratorsLocalSystem is local administrators plus NT AUTHORITY\System
SddlAdministratorsLocalSystem = "D:P(A;OICI;GA;;;BA)(A;OICI;GA;;;SY)"
// SddlNtvmAdministratorsLocalSystem is NT VIRTUAL MACHINE\Virtual Machines plus local administrators plus NT AUTHORITY\System
SddlNtvmAdministratorsLocalSystem = "D:P(A;OICI;GA;;;S-1-5-83-0)(A;OICI;GA;;;BA)(A;OICI;GA;;;SY)"
)
// MkdirAllWithACL is a wrapper for MkdirAll that creates a directory
// with an appropriate SDDL defined ACL.
func MkdirAllWithACL(path string, perm os.FileMode, sddl string) error {
return mkdirall(path, true, sddl)
}
// MkdirAll implementation that is volume path aware for Windows.
func MkdirAll(path string, _ os.FileMode, sddl string) error {
return mkdirall(path, false, sddl)
}
// mkdirall is a custom version of os.MkdirAll modified for use on Windows
// so that it is both volume path aware, and can create a directory with
// a DACL.
func mkdirall(path string, applyACL bool, sddl string) error {
if re := regexp.MustCompile(`^\\\\\?\\Volume{[a-z0-9-]+}$`); re.MatchString(path) {
return nil
}
// The rest of this method is largely copied from os.MkdirAll and should be kept
// as-is to ensure compatibility.
// Fast path: if we can tell whether path is a directory or file, stop with success or error.
dir, err := os.Stat(path)
if err == nil {
if dir.IsDir() {
return nil
}
return &os.PathError{
Op: "mkdir",
Path: path,
Err: syscall.ENOTDIR,
}
}
// Slow path: make sure parent exists and then call Mkdir for path.
i := len(path)
for i > 0 && os.IsPathSeparator(path[i-1]) { // Skip trailing path separator.
i--
}
j := i
for j > 0 && !os.IsPathSeparator(path[j-1]) { // Scan backward over element.
j--
}
if j > 1 {
// Create parent
err = mkdirall(path[0:j-1], false, sddl)
if err != nil {
return err
}
}
// Parent now exists; invoke os.Mkdir or mkdirWithACL and use its result.
if applyACL {
err = mkdirWithACL(path, sddl)
} else {
err = os.Mkdir(path, 0)
}
if err != nil {
// Handle arguments like "foo/." by
// double-checking that directory doesn't exist.
dir, err1 := os.Lstat(path)
if err1 == nil && dir.IsDir() {
return nil
}
return err
}
return nil
}
// mkdirWithACL creates a new directory. If there is an error, it will be of
// type *PathError.
//
// This is a modified and combined version of os.Mkdir and windows.Mkdir
// in golang to cater for creating a directory with an ACL permitting full
// access, with inheritance, to any subfolder/file for Built-in Administrators
// and Local System.
func mkdirWithACL(name string, sddl string) error {
sa := windows.SecurityAttributes{Length: 0}
sd, err := winio.SddlToSecurityDescriptor(sddl)
if err != nil {
return &os.PathError{Op: "mkdir", Path: name, Err: err}
}
sa.Length = uint32(unsafe.Sizeof(sa))
sa.InheritHandle = 1
sa.SecurityDescriptor = uintptr(unsafe.Pointer(&sd[0]))
namep, err := windows.UTF16PtrFromString(name)
if err != nil {
return &os.PathError{Op: "mkdir", Path: name, Err: err}
}
e := windows.CreateDirectory(namep, &sa)
if e != nil {
return &os.PathError{Op: "mkdir", Path: name, Err: e}
}
return nil
}
// IsAbs is a platform-specific wrapper for filepath.IsAbs. On Windows,
// golang filepath.IsAbs does not consider a path \windows\system32 as absolute
// as it doesn't start with a drive-letter/colon combination. However, in
// docker we need to verify things such as WORKDIR /windows/system32 in
// a Dockerfile (which gets translated to \windows\system32 when being processed
// by the daemon). This SHOULD be treated as absolute from a docker processing
// perspective.
func IsAbs(path string) bool {
if !filepath.IsAbs(path) {
if !strings.HasPrefix(path, string(os.PathSeparator)) {
return false
}
}
return true
}
// The origin of the functions below here are the golang OS and windows packages,
// slightly modified to only cope with files, not directories due to the
// specific use case.
//
// The alteration is to allow a file on Windows to be opened with
// FILE_FLAG_SEQUENTIAL_SCAN (particular for docker load), to avoid eating
// the standby list, particularly when accessing large files such as layer.tar.
// CreateSequential creates the named file with mode 0666 (before umask), truncating
// it if it already exists. If successful, methods on the returned
// File can be used for I/O; the associated file descriptor has mode
// O_RDWR.
// If there is an error, it will be of type *PathError.
func CreateSequential(name string) (*os.File, error) {
return OpenFileSequential(name, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0)
}
// OpenSequential opens the named file for reading. If successful, methods on
// the returned file can be used for reading; the associated file
// descriptor has mode O_RDONLY.
// If there is an error, it will be of type *PathError.
func OpenSequential(name string) (*os.File, error) {
return OpenFileSequential(name, os.O_RDONLY, 0)
}
// OpenFileSequential is the generalized open call; most users will use Open
// or Create instead.
// If there is an error, it will be of type *PathError.
func OpenFileSequential(name string, flag int, _ os.FileMode) (*os.File, error) {
if name == "" {
return nil, &os.PathError{Op: "open", Path: name, Err: syscall.ENOENT}
}
r, errf := windowsOpenFileSequential(name, flag, 0)
if errf == nil {
return r, nil
}
return nil, &os.PathError{Op: "open", Path: name, Err: errf}
}
func windowsOpenFileSequential(name string, flag int, _ os.FileMode) (file *os.File, err error) {
r, e := windowsOpenSequential(name, flag|windows.O_CLOEXEC, 0)
if e != nil {
return nil, e
}
return os.NewFile(uintptr(r), name), nil
}
func makeInheritSa() *windows.SecurityAttributes {
var sa windows.SecurityAttributes
sa.Length = uint32(unsafe.Sizeof(sa))
sa.InheritHandle = 1
return &sa
}
func windowsOpenSequential(path string, mode int, _ uint32) (fd windows.Handle, err error) {
if len(path) == 0 {
return windows.InvalidHandle, windows.ERROR_FILE_NOT_FOUND
}
pathp, err := windows.UTF16PtrFromString(path)
if err != nil {
return windows.InvalidHandle, err
}
var access uint32
switch mode & (windows.O_RDONLY | windows.O_WRONLY | windows.O_RDWR) {
case windows.O_RDONLY:
access = windows.GENERIC_READ
case windows.O_WRONLY:
access = windows.GENERIC_WRITE
case windows.O_RDWR:
access = windows.GENERIC_READ | windows.GENERIC_WRITE
}
if mode&windows.O_CREAT != 0 {
access |= windows.GENERIC_WRITE
}
if mode&windows.O_APPEND != 0 {
access &^= windows.GENERIC_WRITE
access |= windows.FILE_APPEND_DATA
}
sharemode := uint32(windows.FILE_SHARE_READ | windows.FILE_SHARE_WRITE)
var sa *windows.SecurityAttributes
if mode&windows.O_CLOEXEC == 0 {
sa = makeInheritSa()
}
var createmode uint32
switch {
case mode&(windows.O_CREAT|windows.O_EXCL) == (windows.O_CREAT | windows.O_EXCL):
createmode = windows.CREATE_NEW
case mode&(windows.O_CREAT|windows.O_TRUNC) == (windows.O_CREAT | windows.O_TRUNC):
createmode = windows.CREATE_ALWAYS
case mode&windows.O_CREAT == windows.O_CREAT:
createmode = windows.OPEN_ALWAYS
case mode&windows.O_TRUNC == windows.O_TRUNC:
createmode = windows.TRUNCATE_EXISTING
default:
createmode = windows.OPEN_EXISTING
}
// Use FILE_FLAG_SEQUENTIAL_SCAN rather than FILE_ATTRIBUTE_NORMAL as implemented in golang.
//https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx
const fileFlagSequentialScan = 0x08000000 // FILE_FLAG_SEQUENTIAL_SCAN
h, e := windows.CreateFile(pathp, access, sharemode, sa, createmode, fileFlagSequentialScan, 0)
return h, e
}
// Helpers for TempFileSequential
var rand uint32
var randmu sync.Mutex
func reseed() uint32 {
return uint32(time.Now().UnixNano() + int64(os.Getpid()))
}
func nextSuffix() string {
randmu.Lock()
r := rand
if r == 0 {
r = reseed()
}
r = r*1664525 + 1013904223 // constants from Numerical Recipes
rand = r
randmu.Unlock()
return strconv.Itoa(int(1e9 + r%1e9))[1:]
}
// TempFileSequential is a copy of ioutil.TempFile, modified to use sequential
// file access. Below is the original comment from golang:
// TempFile creates a new temporary file in the directory dir
// with a name beginning with prefix, opens the file for reading
// and writing, and returns the resulting *os.File.
// If dir is the empty string, TempFile uses the default directory
// for temporary files (see os.TempDir).
// Multiple programs calling TempFile simultaneously
// will not choose the same file. The caller can use f.Name()
// to find the pathname of the file. It is the caller's responsibility
// to remove the file when no longer needed.
func TempFileSequential(dir, prefix string) (f *os.File, err error) {
if dir == "" {
dir = os.TempDir()
}
nconflict := 0
for i := 0; i < 10000; i++ {
name := filepath.Join(dir, prefix+nextSuffix())
f, err = OpenFileSequential(name, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
if os.IsExist(err) {
if nconflict++; nconflict > 10 {
randmu.Lock()
rand = reseed()
randmu.Unlock()
}
continue
}
break
}
return
}

22
vendor/github.com/docker/docker/pkg/system/init.go generated vendored Normal file
View File

@@ -0,0 +1,22 @@
package system
import (
"syscall"
"time"
"unsafe"
)
// Used by chtimes
var maxTime time.Time
func init() {
// chtimes initialization
if unsafe.Sizeof(syscall.Timespec{}.Nsec) == 8 {
// This is a 64 bit timespec
// os.Chtimes limits time to the following
maxTime = time.Unix(0, 1<<63-1)
} else {
// This is a 32 bit timespec
maxTime = time.Unix(1<<31-1, 0)
}
}

View File

@@ -0,0 +1,7 @@
// +build !windows
package system
// InitLCOW does nothing since LCOW is a windows only feature
func InitLCOW(experimental bool) {
}

View File

@@ -0,0 +1,17 @@
package system
import "os"
// lcowSupported determines if Linux Containers on Windows are supported.
var lcowSupported = false
// InitLCOW sets whether LCOW is supported or not
// TODO @jhowardmsft.
// 1. Replace with RS3 RTM build number.
// 2. Remove the getenv check when image-store is coalesced as shouldn't be needed anymore.
func InitLCOW(experimental bool) {
v := GetOSVersion()
if experimental && v.Build > 16270 && os.Getenv("LCOW_SUPPORTED") != "" {
lcowSupported = true
}
}

58
vendor/github.com/docker/docker/pkg/system/lcow.go generated vendored Normal file
View File

@@ -0,0 +1,58 @@
package system
import (
"fmt"
"runtime"
"strings"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// ValidatePlatform determines if a platform structure is valid.
// TODO This is a temporary function - can be replaced by parsing from
// https://github.com/containerd/containerd/pull/1403/files at a later date.
// @jhowardmsft
func ValidatePlatform(platform *specs.Platform) error {
platform.Architecture = strings.ToLower(platform.Architecture)
platform.OS = strings.ToLower(platform.OS)
// Based on https://github.com/moby/moby/pull/34642#issuecomment-330375350, do
// not support anything except operating system.
if platform.Architecture != "" {
return fmt.Errorf("invalid platform architecture %q", platform.Architecture)
}
if platform.OS != "" {
if !(platform.OS == runtime.GOOS || (LCOWSupported() && platform.OS == "linux")) {
return fmt.Errorf("invalid platform os %q", platform.OS)
}
}
if len(platform.OSFeatures) != 0 {
return fmt.Errorf("invalid platform osfeatures %q", platform.OSFeatures)
}
if platform.OSVersion != "" {
return fmt.Errorf("invalid platform osversion %q", platform.OSVersion)
}
if platform.Variant != "" {
return fmt.Errorf("invalid platform variant %q", platform.Variant)
}
return nil
}
// ParsePlatform parses a platform string in the format os[/arch[/variant]]
// into an OCI image-spec platform structure.
// TODO This is a temporary function - can be replaced by parsing from
// https://github.com/containerd/containerd/pull/1403/files at a later date.
// @jhowardmsft
func ParsePlatform(in string) *specs.Platform {
p := &specs.Platform{}
elements := strings.SplitN(strings.ToLower(in), "/", 3)
if len(elements) == 3 {
p.Variant = elements[2]
}
if len(elements) >= 2 {
p.Architecture = elements[1]
}
if len(elements) >= 1 {
p.OS = elements[0]
}
return p
}
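A hedged sketch of ParsePlatform and ValidatePlatform (not part of the vendored file); validation accepts only the host OS (or "linux" when LCOW is enabled) and rejects any architecture, variant, osversion or osfeatures fields.
package main

import (
	"fmt"
	"runtime"

	"github.com/docker/docker/pkg/system"
)

func main() {
	p := system.ParsePlatform("linux/arm64/v8")
	fmt.Println(p.OS, p.Architecture, p.Variant) // linux arm64 v8

	// A bare OS matching the host passes validation; anything richer fails.
	fmt.Println(system.ValidatePlatform(system.ParsePlatform(runtime.GOOS)))
	fmt.Println(system.ValidatePlatform(p))
}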

View File

@@ -0,0 +1,8 @@
// +build !windows
package system
// LCOWSupported returns true if Linux containers on Windows are supported.
func LCOWSupported() bool {
return false
}

View File

@@ -0,0 +1,6 @@
package system
// LCOWSupported returns true if Linux containers on Windows are supported.
func LCOWSupported() bool {
return lcowSupported
}

View File

@@ -0,0 +1,19 @@
// +build !windows
package system
import (
"syscall"
)
// Lstat takes a path to a file and returns
// a system.StatT type pertaining to that file.
//
// Throws an error if the file does not exist
func Lstat(path string) (*StatT, error) {
s := &syscall.Stat_t{}
if err := syscall.Lstat(path, s); err != nil {
return nil, err
}
return fromStatT(s)
}

View File

@@ -0,0 +1,14 @@
package system
import "os"
// Lstat calls os.Lstat to get a fileinfo interface back.
// This is then copied into our own locally defined structure.
func Lstat(path string) (*StatT, error) {
fi, err := os.Lstat(path)
if err != nil {
return nil, err
}
return fromStatT(&fi)
}

17
vendor/github.com/docker/docker/pkg/system/meminfo.go generated vendored Normal file
View File

@@ -0,0 +1,17 @@
package system
// MemInfo contains memory statistics of the host system.
type MemInfo struct {
// Total usable RAM (i.e. physical RAM minus a few reserved bits and the
// kernel binary code).
MemTotal int64
// Amount of free memory.
MemFree int64
// Total amount of swap space available.
SwapTotal int64
// Amount of swap space that is currently unused.
SwapFree int64
}

View File

@@ -0,0 +1,65 @@
package system
import (
"bufio"
"io"
"os"
"strconv"
"strings"
"github.com/docker/go-units"
)
// ReadMemInfo retrieves memory statistics of the host system and returns a
// MemInfo type.
func ReadMemInfo() (*MemInfo, error) {
file, err := os.Open("/proc/meminfo")
if err != nil {
return nil, err
}
defer file.Close()
return parseMemInfo(file)
}
// parseMemInfo parses the /proc/meminfo file into
// a MemInfo object given an io.Reader to the file.
// Throws error if there are problems reading from the file
func parseMemInfo(reader io.Reader) (*MemInfo, error) {
meminfo := &MemInfo{}
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
// Expected format: ["MemTotal:", "1234", "kB"]
parts := strings.Fields(scanner.Text())
// Sanity checks: Skip malformed entries.
if len(parts) < 3 || parts[2] != "kB" {
continue
}
// Convert to bytes.
size, err := strconv.Atoi(parts[1])
if err != nil {
continue
}
bytes := int64(size) * units.KiB
switch parts[0] {
case "MemTotal:":
meminfo.MemTotal = bytes
case "MemFree:":
meminfo.MemFree = bytes
case "SwapTotal:":
meminfo.SwapTotal = bytes
case "SwapFree:":
meminfo.SwapFree = bytes
}
}
// Handle errors that may have occurred during the reading of the file.
if err := scanner.Err(); err != nil {
return nil, err
}
return meminfo, nil
}
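A minimal sketch (not part of the vendored file) reading /proc/meminfo through the helper above; Linux only.
package main

import (
	"fmt"
	"log"

	"github.com/docker/docker/pkg/system"
)

func main() {
	mi, err := system.ReadMemInfo()
	if err != nil {
		log.Fatal(err)
	}
	// All fields are reported in bytes.
	fmt.Printf("MemTotal=%d MemFree=%d SwapTotal=%d SwapFree=%d\n",
		mi.MemTotal, mi.MemFree, mi.SwapTotal, mi.SwapFree)
}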

View File

@@ -0,0 +1,129 @@
// +build solaris,cgo
package system
import (
"fmt"
"unsafe"
)
// #cgo CFLAGS: -std=c99
// #cgo LDFLAGS: -lkstat
// #include <unistd.h>
// #include <stdlib.h>
// #include <stdio.h>
// #include <kstat.h>
// #include <sys/swap.h>
// #include <sys/param.h>
// struct swaptable *allocSwaptable(int num) {
// struct swaptable *st;
// struct swapent *swapent;
// st = (struct swaptable *)malloc(num * sizeof(swapent_t) + sizeof (int));
// swapent = st->swt_ent;
// for (int i = 0; i < num; i++,swapent++) {
// swapent->ste_path = (char *)malloc(MAXPATHLEN * sizeof (char));
// }
// st->swt_n = num;
// return st;
//}
// void freeSwaptable (struct swaptable *st) {
// struct swapent *swapent = st->swt_ent;
// for (int i = 0; i < st->swt_n; i++,swapent++) {
// free(swapent->ste_path);
// }
// free(st);
// }
// swapent_t getSwapEnt(swapent_t *ent, int i) {
// return ent[i];
// }
// int64_t getPpKernel() {
// int64_t pp_kernel = 0;
// kstat_ctl_t *ksc;
// kstat_t *ks;
// kstat_named_t *knp;
// kid_t kid;
//
// if ((ksc = kstat_open()) == NULL) {
// return -1;
// }
// if ((ks = kstat_lookup(ksc, "unix", 0, "system_pages")) == NULL) {
// return -1;
// }
// if (((kid = kstat_read(ksc, ks, NULL)) == -1) ||
// ((knp = kstat_data_lookup(ks, "pp_kernel")) == NULL)) {
// return -1;
// }
// switch (knp->data_type) {
// case KSTAT_DATA_UINT64:
// pp_kernel = knp->value.ui64;
// break;
// case KSTAT_DATA_UINT32:
// pp_kernel = knp->value.ui32;
// break;
// }
// pp_kernel *= sysconf(_SC_PAGESIZE);
// return (pp_kernel > 0 ? pp_kernel : -1);
// }
import "C"
// Get the system memory info using sysconf same as prtconf
func getTotalMem() int64 {
pagesize := C.sysconf(C._SC_PAGESIZE)
npages := C.sysconf(C._SC_PHYS_PAGES)
return int64(pagesize * npages)
}
func getFreeMem() int64 {
pagesize := C.sysconf(C._SC_PAGESIZE)
npages := C.sysconf(C._SC_AVPHYS_PAGES)
return int64(pagesize * npages)
}
// ReadMemInfo retrieves memory statistics of the host system and returns a
// MemInfo type.
func ReadMemInfo() (*MemInfo, error) {
ppKernel := C.getPpKernel()
MemTotal := getTotalMem()
MemFree := getFreeMem()
SwapTotal, SwapFree, err := getSysSwap()
if ppKernel < 0 || MemTotal < 0 || MemFree < 0 || SwapTotal < 0 ||
SwapFree < 0 {
return nil, fmt.Errorf("error getting system memory info %v\n", err)
}
meminfo := &MemInfo{}
	// Total memory is total physical memory less the memory locked by the kernel
meminfo.MemTotal = MemTotal - int64(ppKernel)
meminfo.MemFree = MemFree
meminfo.SwapTotal = SwapTotal
meminfo.SwapFree = SwapFree
return meminfo, nil
}
func getSysSwap() (int64, int64, error) {
var tSwap int64
var fSwap int64
var diskblksPerPage int64
num, err := C.swapctl(C.SC_GETNSWP, nil)
if err != nil {
return -1, -1, err
}
st := C.allocSwaptable(num)
_, err = C.swapctl(C.SC_LIST, unsafe.Pointer(st))
if err != nil {
C.freeSwaptable(st)
return -1, -1, err
}
diskblksPerPage = int64(C.sysconf(C._SC_PAGESIZE) >> C.DEV_BSHIFT)
for i := 0; i < int(num); i++ {
swapent := C.getSwapEnt(&st.swt_ent[0], C.int(i))
tSwap += int64(swapent.ste_pages) * diskblksPerPage
fSwap += int64(swapent.ste_free) * diskblksPerPage
}
C.freeSwaptable(st)
return tSwap, fSwap, nil
}

View File

@@ -0,0 +1,8 @@
// +build !linux,!windows,!solaris
package system
// ReadMemInfo is not supported on platforms other than linux, windows and solaris.
func ReadMemInfo() (*MemInfo, error) {
return nil, ErrNotSupportedPlatform
}

View File

@@ -0,0 +1,45 @@
package system
import (
"unsafe"
"golang.org/x/sys/windows"
)
var (
modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
procGlobalMemoryStatusEx = modkernel32.NewProc("GlobalMemoryStatusEx")
)
// https://msdn.microsoft.com/en-us/library/windows/desktop/aa366589(v=vs.85).aspx
// https://msdn.microsoft.com/en-us/library/windows/desktop/aa366770(v=vs.85).aspx
type memorystatusex struct {
dwLength uint32
dwMemoryLoad uint32
ullTotalPhys uint64
ullAvailPhys uint64
ullTotalPageFile uint64
ullAvailPageFile uint64
ullTotalVirtual uint64
ullAvailVirtual uint64
ullAvailExtendedVirtual uint64
}
// ReadMemInfo retrieves memory statistics of the host system and returns a
// MemInfo type.
func ReadMemInfo() (*MemInfo, error) {
msi := &memorystatusex{
dwLength: 64,
}
r1, _, _ := procGlobalMemoryStatusEx.Call(uintptr(unsafe.Pointer(msi)))
if r1 == 0 {
return &MemInfo{}, nil
}
return &MemInfo{
MemTotal: int64(msi.ullTotalPhys),
MemFree: int64(msi.ullAvailPhys),
SwapTotal: int64(msi.ullTotalPageFile),
SwapFree: int64(msi.ullAvailPageFile),
}, nil
}

22
vendor/github.com/docker/docker/pkg/system/mknod.go generated vendored Normal file
View File

@@ -0,0 +1,22 @@
// +build !windows
package system
import (
"golang.org/x/sys/unix"
)
// Mknod creates a filesystem node (file, device special file or named pipe) named path
// with attributes specified by mode and dev.
func Mknod(path string, mode uint32, dev int) error {
return unix.Mknod(path, mode, dev)
}
// Mkdev is used to build the value of linux devices (in /dev/) which specifies major
// and minor number of the newly created device special file.
// Linux device nodes are a bit weird due to backwards compat with 16 bit device nodes.
// They are, from low to high: the lower 8 bits of the minor, then 12 bits of the major,
// then the top 12 bits of the minor.
func Mkdev(major int64, minor int64) uint32 {
return uint32(unix.Mkdev(uint32(major), uint32(minor)))
}
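A sketch of Mknod and Mkdev creating a /dev/null-style character device node (not part of the vendored file; device numbers 1:3 and the target path are assumptions, and root or CAP_MKNOD is required).
package main

import (
	"log"

	"golang.org/x/sys/unix"

	"github.com/docker/docker/pkg/system"
)

func main() {
	// Pack major 1, minor 3 into the kernel's device-number layout.
	dev := int(system.Mkdev(1, 3))

	// S_IFCHR marks a character device; 0666 are the permission bits.
	if err := system.Mknod("/tmp/mynull", unix.S_IFCHR|0666, dev); err != nil {
		log.Fatal(err)
	}
}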

View File

@@ -0,0 +1,13 @@
// +build windows
package system
// Mknod is not implemented on Windows.
func Mknod(path string, mode uint32, dev int) error {
return ErrNotSupportedPlatform
}
// Mkdev is not implemented on Windows.
func Mkdev(major int64, minor int64) uint32 {
panic("Mkdev not implemented on Windows.")
}

60
vendor/github.com/docker/docker/pkg/system/path.go generated vendored Normal file
View File

@@ -0,0 +1,60 @@
package system
import (
"fmt"
"path/filepath"
"runtime"
"strings"
"github.com/containerd/continuity/pathdriver"
)
const defaultUnixPathEnv = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
// DefaultPathEnv is the Unix-style list of directories to search for
// executables. Each directory is separated from the next by a colon
// ':' character.
func DefaultPathEnv(os string) string {
if runtime.GOOS == "windows" {
if os != runtime.GOOS {
return defaultUnixPathEnv
}
// Deliberately empty on Windows containers on Windows as the default path will be set by
// the container. Docker has no context of what the default path should be.
return ""
}
return defaultUnixPathEnv
}
// CheckSystemDriveAndRemoveDriveLetter verifies that a path, if it includes a drive letter,
// is the system drive.
// On Linux: this is a no-op.
// On Windows: this verifies and manipulates the path as follows.
// This is used, for example, when validating a user provided path in docker cp.
// If a drive letter is supplied, it must be the system drive. The drive letter
// is always removed. Also, it translates it to OS semantics (IOW / to \). We
// need the path in this syntax so that it can ultimately be concatenated with
// a Windows long-path which doesn't support drive-letters. Examples:
// C: --> Fail
// C:\ --> \
// a --> a
// /a --> \a
// d:\ --> Fail
func CheckSystemDriveAndRemoveDriveLetter(path string, driver pathdriver.PathDriver) (string, error) {
if runtime.GOOS != "windows" || LCOWSupported() {
return path, nil
}
if len(path) == 2 && string(path[1]) == ":" {
return "", fmt.Errorf("No relative path specified in %q", path)
}
if !driver.IsAbs(path) || len(path) < 2 {
return filepath.FromSlash(path), nil
}
if string(path[1]) == ":" && !strings.EqualFold(string(path[0]), "c") {
return "", fmt.Errorf("The specified path is not on the system drive (C:)")
}
return filepath.FromSlash(path[2:]), nil
}
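A sketch of DefaultPathEnv and CheckSystemDriveAndRemoveDriveLetter (not part of the vendored file); pathdriver.LocalPathDriver from containerd/continuity is assumed to be the desired driver here.
package main

import (
	"fmt"
	"log"
	"runtime"

	"github.com/containerd/continuity/pathdriver"
	"github.com/docker/docker/pkg/system"
)

func main() {
	fmt.Println(system.DefaultPathEnv(runtime.GOOS))

	// On Linux this is a no-op; on Windows `c:\foo` becomes `\foo` and a
	// non-system drive such as `d:\foo` is rejected.
	p, err := system.CheckSystemDriveAndRemoveDriveLetter(`c:\foo`, pathdriver.LocalPathDriver)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(p)
}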

View File

@@ -0,0 +1,24 @@
// +build linux freebsd solaris darwin
package system
import (
"syscall"
"golang.org/x/sys/unix"
)
// IsProcessAlive returns true if process with a given pid is running.
func IsProcessAlive(pid int) bool {
err := unix.Kill(pid, syscall.Signal(0))
if err == nil || err == unix.EPERM {
return true
}
return false
}
// KillProcess force-stops a process.
func KillProcess(pid int) {
unix.Kill(pid, unix.SIGKILL)
}

80
vendor/github.com/docker/docker/pkg/system/rm.go generated vendored Normal file
View File

@@ -0,0 +1,80 @@
package system
import (
"os"
"syscall"
"time"
"github.com/docker/docker/pkg/mount"
"github.com/pkg/errors"
)
// EnsureRemoveAll wraps `os.RemoveAll` to check for specific errors that can
// often be remedied.
// Only use `EnsureRemoveAll` if you really want to make every effort to remove
// a directory.
//
// Because of the way `os.Remove` (and by extension `os.RemoveAll`) works, there
// can be a race between reading directory entries and then actually attempting
// to remove everything in the directory.
// These types of errors do not need to be returned since it's ok for the dir to
// be gone; we can just retry the remove operation.
//
// This should not return an `os.ErrNotExist` kind of error under any circumstances.
func EnsureRemoveAll(dir string) error {
notExistErr := make(map[string]bool)
// track retries
exitOnErr := make(map[string]int)
maxRetry := 5
// Attempt to unmount anything beneath this dir first
mount.RecursiveUnmount(dir)
for {
err := os.RemoveAll(dir)
if err == nil {
return err
}
pe, ok := err.(*os.PathError)
if !ok {
return err
}
if os.IsNotExist(err) {
if notExistErr[pe.Path] {
return err
}
notExistErr[pe.Path] = true
			// There is a race where some subdir can be removed after the parent
			// dir entries have been read.
// So the path could be from `os.Remove(subdir)`
// If the reported non-existent path is not the passed in `dir` we
// should just retry, but otherwise return with no error.
if pe.Path == dir {
return nil
}
continue
}
if pe.Err != syscall.EBUSY {
return err
}
if mounted, _ := mount.Mounted(pe.Path); mounted {
if e := mount.Unmount(pe.Path); e != nil {
if mounted, _ := mount.Mounted(pe.Path); mounted {
return errors.Wrapf(e, "error while removing %s", dir)
}
}
}
if exitOnErr[pe.Path] == maxRetry {
return err
}
exitOnErr[pe.Path]++
time.Sleep(100 * time.Millisecond)
}
}
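A minimal sketch of EnsureRemoveAll (not part of the vendored file; the directory path is a placeholder assumption).
package main

import (
	"log"

	"github.com/docker/docker/pkg/system"
)

func main() {
	// Unlike plain os.RemoveAll, this first unmounts anything below the
	// directory and retries a bounded number of times on EBUSY.
	if err := system.EnsureRemoveAll("/var/lib/demo/container-root"); err != nil {
		log.Fatal(err)
	}
}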

View File

@@ -0,0 +1,13 @@
package system
import "syscall"
// fromStatT converts a syscall.Stat_t type to a system.Stat_t type
func fromStatT(s *syscall.Stat_t) (*StatT, error) {
return &StatT{size: s.Size,
mode: uint32(s.Mode),
uid: s.Uid,
gid: s.Gid,
rdev: uint64(s.Rdev),
mtim: s.Mtimespec}, nil
}

View File

@@ -0,0 +1,13 @@
package system
import "syscall"
// fromStatT converts a syscall.Stat_t type to a system.Stat_t type
func fromStatT(s *syscall.Stat_t) (*StatT, error) {
return &StatT{size: s.Size,
mode: uint32(s.Mode),
uid: s.Uid,
gid: s.Gid,
rdev: uint64(s.Rdev),
mtim: s.Mtimespec}, nil
}

View File

@@ -0,0 +1,19 @@
package system
import "syscall"
// fromStatT converts a syscall.Stat_t type to a system.Stat_t type
func fromStatT(s *syscall.Stat_t) (*StatT, error) {
return &StatT{size: s.Size,
mode: s.Mode,
uid: s.Uid,
gid: s.Gid,
rdev: s.Rdev,
mtim: s.Mtim}, nil
}
// FromStatT converts a syscall.Stat_t type to a system.Stat_t type
// This is exposed on Linux as pkg/archive/changes uses it.
func FromStatT(s *syscall.Stat_t) (*StatT, error) {
return fromStatT(s)
}

View File

@@ -0,0 +1,13 @@
package system
import "syscall"
// fromStatT converts a syscall.Stat_t type to a system.Stat_t type
func fromStatT(s *syscall.Stat_t) (*StatT, error) {
return &StatT{size: s.Size,
mode: uint32(s.Mode),
uid: s.Uid,
gid: s.Gid,
rdev: uint64(s.Rdev),
mtim: s.Mtim}, nil
}

View File

@@ -0,0 +1,13 @@
package system
import "syscall"
// fromStatT converts a syscall.Stat_t type to a system.Stat_t type
func fromStatT(s *syscall.Stat_t) (*StatT, error) {
return &StatT{size: s.Size,
mode: uint32(s.Mode),
uid: s.Uid,
gid: s.Gid,
rdev: uint64(s.Rdev),
mtim: s.Mtim}, nil
}

View File

@@ -0,0 +1,60 @@
// +build !windows
package system
import (
"syscall"
)
// StatT type contains status of a file. It contains metadata
// like permission, owner, group, size, etc about a file.
type StatT struct {
mode uint32
uid uint32
gid uint32
rdev uint64
size int64
mtim syscall.Timespec
}
// Mode returns file's permission mode.
func (s StatT) Mode() uint32 {
return s.mode
}
// UID returns file's user id of owner.
func (s StatT) UID() uint32 {
return s.uid
}
// GID returns file's group id of owner.
func (s StatT) GID() uint32 {
return s.gid
}
// Rdev returns file's device ID (if it's special file).
func (s StatT) Rdev() uint64 {
return s.rdev
}
// Size returns file's size.
func (s StatT) Size() int64 {
return s.size
}
// Mtim returns file's last modification time.
func (s StatT) Mtim() syscall.Timespec {
return s.mtim
}
// Stat takes a path to a file and returns
// a system.StatT type pertaining to that file.
//
// Throws an error if the file does not exist
func Stat(path string) (*StatT, error) {
s := &syscall.Stat_t{}
if err := syscall.Stat(path, s); err != nil {
return nil, err
}
return fromStatT(s)
}
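A sketch of Stat on Unix (not part of the vendored file; the inspected path is an assumption).
package main

import (
	"fmt"
	"log"

	"github.com/docker/docker/pkg/system"
)

func main() {
	st, err := system.Stat("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("mode=%o uid=%d gid=%d size=%d\n",
		st.Mode(), st.UID(), st.GID(), st.Size())
}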

View File

@@ -0,0 +1,49 @@
package system
import (
"os"
"time"
)
// StatT type contains status of a file. It contains metadata
// like permission, size, etc about a file.
type StatT struct {
mode os.FileMode
size int64
mtim time.Time
}
// Size returns file's size.
func (s StatT) Size() int64 {
return s.size
}
// Mode returns file's permission mode.
func (s StatT) Mode() os.FileMode {
return os.FileMode(s.mode)
}
// Mtim returns file's last modification time.
func (s StatT) Mtim() time.Time {
return time.Time(s.mtim)
}
// Stat takes a path to a file and returns
// a system.StatT type pertaining to that file.
//
// Returns an error if the file does not exist
func Stat(path string) (*StatT, error) {
fi, err := os.Stat(path)
if err != nil {
return nil, err
}
return fromStatT(&fi)
}
// fromStatT converts a os.FileInfo type to a system.StatT type
func fromStatT(fi *os.FileInfo) (*StatT, error) {
return &StatT{
size: (*fi).Size(),
mode: (*fi).Mode(),
mtim: (*fi).ModTime()}, nil
}

View File

@@ -0,0 +1,17 @@
// +build linux freebsd

package system
import "golang.org/x/sys/unix"
// Unmount is a platform-specific helper function to call
// the unmount syscall.
func Unmount(dest string) error {
return unix.Unmount(dest, 0)
}
// CommandLineToArgv should not be used on Unix.
// It simply returns commandLine as the only element of the returned array.
func CommandLineToArgv(commandLine string) ([]string, error) {
return []string{commandLine}, nil
}

View File

@@ -0,0 +1,122 @@
package system
import (
"unsafe"
"github.com/sirupsen/logrus"
"golang.org/x/sys/windows"
)
var (
ntuserApiset = windows.NewLazyDLL("ext-ms-win-ntuser-window-l1-1-0")
procGetVersionExW = modkernel32.NewProc("GetVersionExW")
procGetProductInfo = modkernel32.NewProc("GetProductInfo")
)
// OSVersion is a wrapper for Windows version information
// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724439(v=vs.85).aspx
type OSVersion struct {
Version uint32
MajorVersion uint8
MinorVersion uint8
Build uint16
}
// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724833(v=vs.85).aspx
type osVersionInfoEx struct {
OSVersionInfoSize uint32
MajorVersion uint32
MinorVersion uint32
BuildNumber uint32
PlatformID uint32
CSDVersion [128]uint16
ServicePackMajor uint16
ServicePackMinor uint16
SuiteMask uint16
ProductType byte
Reserve byte
}
// GetOSVersion gets the operating system version on Windows. Note that
// docker.exe must be manifested to get the correct version information.
func GetOSVersion() OSVersion {
var err error
osv := OSVersion{}
osv.Version, err = windows.GetVersion()
if err != nil {
// GetVersion never fails.
panic(err)
}
osv.MajorVersion = uint8(osv.Version & 0xFF)
osv.MinorVersion = uint8(osv.Version >> 8 & 0xFF)
osv.Build = uint16(osv.Version >> 16)
return osv
}
// IsWindowsClient returns true if the SKU is client
// @engine maintainers - this function should not be removed or modified as it
// is used to enforce licensing restrictions on Windows.
func IsWindowsClient() bool {
osviex := &osVersionInfoEx{OSVersionInfoSize: 284}
r1, _, err := procGetVersionExW.Call(uintptr(unsafe.Pointer(osviex)))
if r1 == 0 {
logrus.Warnf("GetVersionExW failed - assuming server SKU: %v", err)
return false
}
const verNTWorkstation = 0x00000001
return osviex.ProductType == verNTWorkstation
}
// IsIoTCore returns true if the currently running image is based off of
// Windows 10 IoT Core.
// @engine maintainers - this function should not be removed or modified as it
// is used to enforce licensing restrictions on Windows.
func IsIoTCore() bool {
var returnedProductType uint32
r1, _, err := procGetProductInfo.Call(6, 1, 0, 0, uintptr(unsafe.Pointer(&returnedProductType)))
if r1 == 0 {
logrus.Warnf("GetProductInfo failed - assuming this is not IoT: %v", err)
return false
}
const productIoTUAP = 0x0000007B
const productIoTUAPCommercial = 0x00000083
return returnedProductType == productIoTUAP || returnedProductType == productIoTUAPCommercial
}
// Unmount is a platform-specific helper function to call
// the unmount syscall. Not supported on Windows
func Unmount(dest string) error {
return nil
}
// CommandLineToArgv wraps the Windows syscall to turn a commandline into an argument array.
func CommandLineToArgv(commandLine string) ([]string, error) {
var argc int32
argsPtr, err := windows.UTF16PtrFromString(commandLine)
if err != nil {
return nil, err
}
argv, err := windows.CommandLineToArgv(argsPtr, &argc)
if err != nil {
return nil, err
}
defer windows.LocalFree(windows.Handle(uintptr(unsafe.Pointer(argv))))
newArgs := make([]string, argc)
for i, v := range (*argv)[:argc] {
newArgs[i] = string(windows.UTF16ToString((*v)[:]))
}
return newArgs, nil
}
// HasWin32KSupport determines whether containers that depend on win32k can
// run on this machine. Win32k is the driver used to implement windowing.
func HasWin32KSupport() bool {
// For now, check for ntuser API support on the host. In the future, a host
// may support win32k in containers even if the host does not support ntuser
// APIs.
return ntuserApiset.Load() == nil
}
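For reference (not part of this diff), a Windows-only sketch of how these helpers might be exercised; the import path github.com/docker/docker/pkg/system is assumed from the vendor layout.

// +build windows

package main

import (
	"fmt"

	"github.com/docker/docker/pkg/system"
)

func main() {
	osv := system.GetOSVersion()
	fmt.Printf("Windows %d.%d (build %d)\n", osv.MajorVersion, osv.MinorVersion, osv.Build)
	fmt.Println("client SKU:", system.IsWindowsClient())
	fmt.Println("IoT Core:", system.IsIoTCore())
	fmt.Println("win32k support:", system.HasWin32KSupport())

	args, err := system.CommandLineToArgv(`ping -n 1 localhost`)
	if err == nil {
		fmt.Println("argv:", args)
	}
}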

13
vendor/github.com/docker/docker/pkg/system/umask.go generated vendored Normal file
View File

@@ -0,0 +1,13 @@
// +build !windows

package system
import (
"golang.org/x/sys/unix"
)
// Umask sets current process's file mode creation mask to newmask
// and returns oldmask.
func Umask(newmask int) (oldmask int, err error) {
return unix.Umask(newmask), nil
}
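A small illustrative sketch (not part of this diff) of the Unix Umask helper above; the package path is assumed from the vendor layout.

// +build !windows

package main

import (
	"fmt"

	"github.com/docker/docker/pkg/system"
)

func main() {
	// Tighten the process umask, then restore the previous value on exit.
	old, _ := system.Umask(0077)
	defer system.Umask(old)
	fmt.Printf("previous umask: %04o\n", old)
}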

View File

@@ -0,0 +1,9 @@
// +build windows

package system
// Umask is not supported on the windows platform.
func Umask(newmask int) (oldmask int, err error) {
// should not be called on cli code path
return 0, ErrNotSupportedPlatform
}

View File

@@ -0,0 +1,24 @@
package system
import (
"syscall"
"unsafe"
"golang.org/x/sys/unix"
)
// LUtimesNano is used to change access and modification time of the specified path.
// It's used for symbolic link files because unix.UtimesNano doesn't support a NOFOLLOW flag at the moment.
func LUtimesNano(path string, ts []syscall.Timespec) error {
var _path *byte
_path, err := unix.BytePtrFromString(path)
if err != nil {
return err
}
if _, _, err := unix.Syscall(unix.SYS_LUTIMES, uintptr(unsafe.Pointer(_path)), uintptr(unsafe.Pointer(&ts[0])), 0); err != 0 && err != unix.ENOSYS {
return err
}
return nil
}

View File

@@ -0,0 +1,25 @@
package system
import (
"syscall"
"unsafe"
"golang.org/x/sys/unix"
)
// LUtimesNano is used to change access and modification time of the specified path.
// It's used for symbolic link files because unix.UtimesNano doesn't support a NOFOLLOW flag at the moment.
func LUtimesNano(path string, ts []syscall.Timespec) error {
atFdCwd := unix.AT_FDCWD
var _path *byte
_path, err := unix.BytePtrFromString(path)
if err != nil {
return err
}
if _, _, err := unix.Syscall6(unix.SYS_UTIMENSAT, uintptr(atFdCwd), uintptr(unsafe.Pointer(_path)), uintptr(unsafe.Pointer(&ts[0])), unix.AT_SYMLINK_NOFOLLOW, 0, 0); err != 0 && err != unix.ENOSYS {
return err
}
return nil
}
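A minimal Linux sketch (not part of this diff) showing how the two Timespec entries, access time followed by modification time, might be built for LUtimesNano; the package path and the /tmp/example-symlink target are illustrative assumptions.

// +build linux

package main

import (
	"log"
	"syscall"
	"time"

	"github.com/docker/docker/pkg/system"
)

func main() {
	now := syscall.NsecToTimespec(time.Now().UnixNano())
	// utimensat expects two entries: access time, then modification time.
	ts := []syscall.Timespec{now, now}
	// /tmp/example-symlink is a placeholder path for illustration only.
	if err := system.LUtimesNano("/tmp/example-symlink", ts); err != nil {
		log.Fatal(err)
	}
}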

View File

@@ -0,0 +1,10 @@
// +build !linux,!freebsd

package system
import "syscall"
// LUtimesNano is only supported on linux and freebsd.
func LUtimesNano(path string, ts []syscall.Timespec) error {
return ErrNotSupportedPlatform
}

View File

@@ -0,0 +1,29 @@
package system
import "golang.org/x/sys/unix"
// Lgetxattr retrieves the value of the extended attribute identified by attr
// and associated with the given path in the file system.
// It returns a nil slice and a nil error if the xattr is not set.
func Lgetxattr(path string, attr string) ([]byte, error) {
dest := make([]byte, 128)
sz, errno := unix.Lgetxattr(path, attr, dest)
if errno == unix.ENODATA {
return nil, nil
}
if errno == unix.ERANGE {
dest = make([]byte, sz)
sz, errno = unix.Lgetxattr(path, attr, dest)
}
if errno != nil {
return nil, errno
}
return dest[:sz], nil
}
// Lsetxattr sets the value of the extended attribute identified by attr
// and associated with the given path in the file system.
func Lsetxattr(path string, attr string, data []byte, flags int) error {
return unix.Lsetxattr(path, attr, data, flags)
}
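A short Linux-only sketch (not part of this diff) of the xattr helpers above; the /tmp/xattr-demo path and the user.example attribute name are placeholders.

// +build linux

package main

import (
	"fmt"
	"log"

	"github.com/docker/docker/pkg/system"
)

func main() {
	const path = "/tmp/xattr-demo" // placeholder: must already exist on an xattr-capable filesystem
	if err := system.Lsetxattr(path, "user.example", []byte("value"), 0); err != nil {
		log.Fatal(err)
	}
	val, err := system.Lgetxattr(path, "user.example")
	if err != nil {
		log.Fatal(err)
	}
	// A nil slice with a nil error would mean the attribute is not set.
	fmt.Printf("user.example=%q\n", val)
}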

View File

@@ -0,0 +1,13 @@
// +build !linux

package system
// Lgetxattr is not supported on platforms other than linux.
func Lgetxattr(path string, attr string) ([]byte, error) {
return nil, ErrNotSupportedPlatform
}
// Lsetxattr is not supported on platforms other than linux.
func Lsetxattr(path string, attr string, data []byte, flags int) error {
return ErrNotSupportedPlatform
}

View File

@@ -0,0 +1,139 @@
// Package truncindex provides a general 'index tree', used by Docker
// in order to be able to reference containers by only a few unambiguous
// characters of their id.
package truncindex
import (
"errors"
"fmt"
"strings"
"sync"
"github.com/tchap/go-patricia/patricia"
)
var (
// ErrEmptyPrefix is an error returned if the prefix was empty.
ErrEmptyPrefix = errors.New("Prefix can't be empty")
// ErrIllegalChar is returned when a space is in the ID
ErrIllegalChar = errors.New("illegal character: ' '")
// ErrNotExist is returned when an ID or its prefix is not found in the index.
ErrNotExist = errors.New("ID does not exist")
)
// ErrAmbiguousPrefix is returned if the prefix was ambiguous
// (multiple ids for the prefix).
type ErrAmbiguousPrefix struct {
prefix string
}
func (e ErrAmbiguousPrefix) Error() string {
return fmt.Sprintf("Multiple IDs found with provided prefix: %s", e.prefix)
}
// TruncIndex allows the retrieval of string identifiers by any of their unique prefixes.
// This is used to retrieve image and container IDs by more convenient shorthand prefixes.
type TruncIndex struct {
sync.RWMutex
trie *patricia.Trie
ids map[string]struct{}
}
// NewTruncIndex creates a new TruncIndex and initializes with a list of IDs.
func NewTruncIndex(ids []string) (idx *TruncIndex) {
idx = &TruncIndex{
ids: make(map[string]struct{}),
// Change patricia max prefix per node length,
// because our len(ID) is always 64
trie: patricia.NewTrie(patricia.MaxPrefixPerNode(64)),
}
for _, id := range ids {
idx.addID(id)
}
return
}
func (idx *TruncIndex) addID(id string) error {
if strings.Contains(id, " ") {
return ErrIllegalChar
}
if id == "" {
return ErrEmptyPrefix
}
if _, exists := idx.ids[id]; exists {
return fmt.Errorf("id already exists: '%s'", id)
}
idx.ids[id] = struct{}{}
if inserted := idx.trie.Insert(patricia.Prefix(id), struct{}{}); !inserted {
return fmt.Errorf("failed to insert id: %s", id)
}
return nil
}
// Add adds a new ID to the TruncIndex.
func (idx *TruncIndex) Add(id string) error {
idx.Lock()
defer idx.Unlock()
return idx.addID(id)
}
// Delete removes an ID from the TruncIndex. If there are multiple IDs
// with the given prefix, an error is returned.
func (idx *TruncIndex) Delete(id string) error {
idx.Lock()
defer idx.Unlock()
if _, exists := idx.ids[id]; !exists || id == "" {
return fmt.Errorf("no such id: '%s'", id)
}
delete(idx.ids, id)
if deleted := idx.trie.Delete(patricia.Prefix(id)); !deleted {
return fmt.Errorf("no such id: '%s'", id)
}
return nil
}
// Get retrieves an ID from the TruncIndex. If there are multiple IDs
// with the given prefix, an error is returned.
func (idx *TruncIndex) Get(s string) (string, error) {
if s == "" {
return "", ErrEmptyPrefix
}
var (
id string
)
subTreeVisitFunc := func(prefix patricia.Prefix, item patricia.Item) error {
if id != "" {
// if two or more IDs match the prefix, the lookup is ambiguous
id = ""
return ErrAmbiguousPrefix{prefix: string(prefix)}
}
id = string(prefix)
return nil
}
idx.RLock()
defer idx.RUnlock()
if err := idx.trie.VisitSubtree(patricia.Prefix(s), subTreeVisitFunc); err != nil {
return "", err
}
if id != "" {
return id, nil
}
return "", ErrNotExist
}
// Iterate iterates over all stored IDs and passes each of them to the given
// handler. Take care that the handler method does not call any public
// method on truncindex as the internal locking is not reentrant/recursive
// and will result in deadlock.
func (idx *TruncIndex) Iterate(handler func(id string)) {
idx.Lock()
defer idx.Unlock()
idx.trie.Visit(func(prefix patricia.Prefix, item patricia.Item) error {
handler(string(prefix))
return nil
})
}
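To show how the TruncIndex API above is typically used, a small sketch (not part of this diff); the 64-character IDs are made up and the import path github.com/docker/docker/pkg/truncindex is assumed from the upstream layout.

package main

import (
	"fmt"
	"log"

	"github.com/docker/docker/pkg/truncindex"
)

func main() {
	// Two hypothetical 64-character container IDs.
	ids := []string{
		"d3adbeefd3adbeefd3adbeefd3adbeefd3adbeefd3adbeefd3adbeefd3adbeef",
		"cafebabecafebabecafebabecafebabecafebabecafebabecafebabecafebabe",
	}
	idx := truncindex.NewTruncIndex(ids)

	// A unique prefix resolves to the full ID.
	full, err := idx.Get("d3ad")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(full)

	// An unknown (or ambiguous) prefix returns an error instead.
	if _, err := idx.Get("c0ffee"); err != nil {
		fmt.Println("lookup failed:", err)
	}
}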

149
vendor/github.com/docker/docker/vendor.conf generated vendored Normal file
View File

@@ -0,0 +1,149 @@
# the following lines are in sorted order, FYI
github.com/Azure/go-ansiterm d6e3b3328b783f23731bc4d058875b0371ff8109
github.com/Microsoft/hcsshim v0.6.5
github.com/Microsoft/go-winio v0.4.5
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a
github.com/go-check/check 4ed411733c5785b40214c70bce814c3a3a689609 https://github.com/cpuguy83/check.git
github.com/gorilla/context v1.1
github.com/gorilla/mux v1.1
github.com/Microsoft/opengcs v0.3.4
github.com/kr/pty 5cf931ef8f
github.com/mattn/go-shellwords v1.0.3
github.com/sirupsen/logrus v1.0.3
github.com/tchap/go-patricia v2.2.6
github.com/vdemeester/shakers 24d7f1d6a71aa5d9cbe7390e4afb66b7eef9e1b3
golang.org/x/net 7dcfb8076726a3fdd9353b6b8a1f1b6be6811bd6
golang.org/x/sys 8dbc5d05d6edcc104950cc299a1ce6641235bc86
github.com/docker/go-units 9e638d38cf6977a37a8ea0078f3ee75a7cdb2dd1
github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
golang.org/x/text f72d8390a633d5dfb0cc84043294db9f6c935756
github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
github.com/pmezard/go-difflib v1.0.0
github.com/gotestyourself/gotestyourself v1.1.0
github.com/RackSec/srslog 456df3a81436d29ba874f3590eeeee25d666f8a5
github.com/imdario/mergo 0.2.1
golang.org/x/sync de49d9dcd27d4f764488181bea099dfe6179bcf0
github.com/containerd/continuity 22694c680ee48fb8f50015b44618517e2bde77e8
github.com/moby/buildkit aaff9d591ef128560018433fe61beb802e149de8
github.com/tonistiigi/fsutil dea3a0da73aee887fc02142d995be764106ac5e2
#get libnetwork packages
github.com/docker/libnetwork 68f1039f172434709a4550fe92e3e058406c74ce
github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80
github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec
github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b
github.com/hashicorp/memberlist v0.1.0
github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372
github.com/hashicorp/go-sockaddr acd314c5781ea706c710d9ea70069fd2e110d61d
github.com/hashicorp/go-multierror fcdddc395df1ddf4247c69bd436e84cfa0733f7e
github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870
github.com/docker/libkv 1d8431073ae03cdaedb198a89722f3aab6d418ef
github.com/vishvananda/netns 604eaf189ee867d8c147fafc28def2394e878d25
github.com/vishvananda/netlink bd6d5de5ccef2d66b0a26177928d0d8895d7f969
github.com/BurntSushi/toml f706d00e3de6abe700c994cdd545a1a4915af060
github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374
github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d
github.com/coreos/etcd v3.2.1
github.com/coreos/go-semver v0.2.0
github.com/ugorji/go f1f1a805ed361a0e078bb537e4ea78cd37dcf065
github.com/hashicorp/consul v0.5.2
github.com/boltdb/bolt fff57c100f4dea1905678da7e90d92429dff2904
github.com/miekg/dns 75e6e86cc601825c5dbcd4e0c209eab180997cd7
# get graph and distribution packages
github.com/docker/distribution edc3ab29cdff8694dd6feb85cfeb4b5f1b38ed9c
github.com/vbatts/tar-split v0.10.1
github.com/opencontainers/go-digest a6d0ee40d4207ea02364bd3b9e8e77b9159ba1eb
# get go-zfs packages
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
google.golang.org/grpc v1.3.0
# When updating, also update RUNC_COMMIT in hack/dockerfile/binaries-commits accordingly
github.com/opencontainers/runc 0351df1c5a66838d0c392b4ac4cf9450de844e2d
github.com/opencontainers/image-spec 372ad780f63454fbbbbcc7cf80e5b90245c13e13
github.com/opencontainers/runtime-spec v1.0.0
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
# libcontainer deps (see src/github.com/opencontainers/runc/Godeps/Godeps.json)
github.com/coreos/go-systemd v4
github.com/godbus/dbus v4.0.0
github.com/syndtr/gocapability 2c00daeb6c3b45114c80ac44119e7b8801fdd852
github.com/golang/protobuf 7a211bcf3bce0e3f1d74f9894916e6f116ae83b4
# gelf logging driver deps
github.com/Graylog2/go-gelf v2
github.com/fluent/fluent-logger-golang v1.2.1
# fluent-logger-golang deps
github.com/philhofer/fwd 98c11a7a6ec829d672b03833c3d69a7fae1ca972
github.com/tinylib/msgp 75ee40d2601edf122ef667e2a07d600d4c44490c
# fsnotify
github.com/fsnotify/fsnotify v1.4.2
# awslogs deps
github.com/aws/aws-sdk-go v1.4.22
github.com/go-ini/ini 060d7da055ba6ec5ea7a31f116332fe5efa04ce0
github.com/jmespath/go-jmespath 0b12d6b521d83fc7f755e7cfc1b1fbdd35a01a74
# logentries
github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a10afc63c7bcf
# gcplogs deps
golang.org/x/oauth2 96382aa079b72d8c014eb0c50f6c223d1e6a2de0
google.golang.org/api 3cc2e591b550923a2c5f0ab5a803feda924d5823
cloud.google.com/go 9d965e63e8cceb1b5d7977a202f0fcb8866d6525
github.com/googleapis/gax-go da06d194a00e19ce00d9011a13931c3f6f6887c7
google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
# containerd
github.com/containerd/containerd 06b9cb35161009dcb7123345749fef02f7cea8e0
github.com/tonistiigi/fifo 1405643975692217d6720f8b54aeee1bf2cd5cf4
# cluster
github.com/docker/swarmkit 872861d2ae46958af7ead1d5fffb092c73afbaf0
github.com/gogo/protobuf v0.4
github.com/cloudflare/cfssl 7fb22c8cba7ecaf98e4082d22d65800cf45e042a
github.com/google/certificate-transparency d90e65c3a07988180c5b1ece71791c0b6506826e
golang.org/x/crypto 558b6879de74bc843225cde5686419267ff707ca
golang.org/x/time a4bde12657593d5e90d0533a3e4fd95e635124cb
github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad
github.com/hashicorp/go-immutable-radix 8e8ed81f8f0bf1bdd829593fdd5c29922c1ea990
github.com/hashicorp/golang-lru a0d98a5f288019575c6d1f4bb1573fef2d1fcdc4
github.com/coreos/pkg fa29b1d70f0beaddd4c7021607cc3c3be8ce94b8
github.com/pivotal-golang/clock 3fd3c1944c59d9742e1cd333672181cd1a6f9fa0
github.com/prometheus/client_golang 52437c81da6b127a9925d17eb3a382a2e5fd395e
github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common ebdfc6da46522d58825777cf1f90490a5b1ef1d8
github.com/prometheus/procfs abf152e5f3e97f2fafac028d2cc06c1feb87ffa5
github.com/matttproud/golang_protobuf_extensions v1.0.0
github.com/pkg/errors 839d9e913e063e28dfd0e6c7b7512793e0a48be9
github.com/grpc-ecosystem/go-grpc-prometheus 6b7015e65d366bf3f19b2b2a000a831940f0f7e0
# cli
github.com/spf13/cobra v1.5.1 https://github.com/dnephin/cobra.git
github.com/spf13/pflag 9ff6c6923cfffbcd502984b8e0c80539a94968b7
github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
github.com/Nvveen/Gotty a8b993ba6abdb0e0c12b0125c603323a71c7790c https://github.com/ijc25/Gotty
# metrics
github.com/docker/go-metrics d466d4f6fd960e01820085bd7e1a24426ee7ef18
github.com/opencontainers/selinux v1.0.0-rc1
# archive/tar
# mkdir -p ./vendor/archive
# git clone git://github.com/tonistiigi/go-1.git ./go
# git --git-dir ./go/.git --work-tree ./go checkout revert-prefix-ignore
# cp -a go/src/archive/tar ./vendor/archive/tar
# rm -rf ./go
# vndr

191
vendor/github.com/docker/spdystream/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2014-2015 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Some files were not shown because too many files have changed in this diff.