Update kubernetes to v1.11.0-beta.2

Signed-off-by: Lantao Liu <lantaol@google.com>
Lantao Liu 2018-06-07 19:08:21 -07:00
parent dfae95ec9d
commit 2b48f8738f
105 changed files with 8283 additions and 9239 deletions


@@ -49,7 +49,6 @@ github.com/prometheus/common 89604d197083d4781071d3c65855d24ecfb0a563
 github.com/prometheus/procfs cb4147076ac75738c9a7d279075a253c0cc5acbd
 github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
 github.com/sirupsen/logrus v1.0.0
-github.com/spf13/pflag v1.0.0
 github.com/stevvooe/ttrpc d4528379866b0ce7e9d71f3eb96f0582fc374577
 github.com/stretchr/testify v1.1.4
 github.com/syndtr/gocapability db04d3cc01c8b54962a58ec7e491717d06cfcc16
@@ -65,9 +64,9 @@ google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
 google.golang.org/grpc v1.12.0
 gopkg.in/inf.v0 3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4
 gopkg.in/yaml.v2 53feefa2559fb8dfa8d81baad31be332c97d6c77
-k8s.io/api 7e796de92438aede7cb5d6bcf6c10f4fa65db560
-k8s.io/apimachinery fcb9a12f7875d01f8390b28faedc37dcf2e713b9
-k8s.io/apiserver 4a8377c547bbff4576a35b5b5bf4026d9b5aa763
-k8s.io/client-go b9a0cf870f239c4a4ecfd3feb075a50e7cbe1473
-k8s.io/kubernetes v1.10.0
+k8s.io/api aac5d23ea3ca7132d75daeed7516f8205553c832
+k8s.io/apimachinery 521145febf93d5639dce48a49ee8dc080863b034
+k8s.io/apiserver 0553b9748924ffb5737340cb0afa6f0de2b12c42
+k8s.io/client-go 6fd8d2ef33ca9a0d821c9d66c23793cc8603d897
+k8s.io/kubernetes v1.11.0-beta.2
 k8s.io/utils 258e2a2fa64568210fbd6267cf1d8fd87c3cb86e


@@ -1,28 +0,0 @@
Copyright (c) 2012 Alex Ogier. All rights reserved.
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,296 +0,0 @@
[![Build Status](https://travis-ci.org/spf13/pflag.svg?branch=master)](https://travis-ci.org/spf13/pflag)
[![Go Report Card](https://goreportcard.com/badge/github.com/spf13/pflag)](https://goreportcard.com/report/github.com/spf13/pflag)
[![GoDoc](https://godoc.org/github.com/spf13/pflag?status.svg)](https://godoc.org/github.com/spf13/pflag)
## Description
pflag is a drop-in replacement for Go's flag package, implementing
POSIX/GNU-style --flags.
pflag is compatible with the [GNU extensions to the POSIX recommendations
for command-line options][1]. For a more precise description, see the
"Command-line flag syntax" section below.
[1]: http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html
pflag is available under the same style of BSD license as the Go language,
which can be found in the LICENSE file.
## Installation
pflag is available using the standard `go get` command.
Install by running:
go get github.com/spf13/pflag
Run tests by running:
go test github.com/spf13/pflag
## Usage
pflag is a drop-in replacement for Go's native flag package. If you import
pflag under the name "flag" then all code should continue to function
with no changes.
``` go
import flag "github.com/spf13/pflag"
```
There is one exception to this: if you directly instantiate the Flag struct
there is one more field, "Shorthand", that you will need to set.
Most code never instantiates this struct directly, and instead uses
functions such as String(), BoolVar(), and Var(), and is therefore
unaffected.
Define flags using flag.String(), Bool(), Int(), etc.
This declares an integer flag, -flagname, stored in the pointer ip, with type *int.
``` go
var ip *int = flag.Int("flagname", 1234, "help message for flagname")
```
If you like, you can bind the flag to a variable using the Var() functions.
``` go
var flagvar int
func init() {
flag.IntVar(&flagvar, "flagname", 1234, "help message for flagname")
}
```
Or you can create custom flags that satisfy the Value interface (with
pointer receivers) and couple them to flag parsing by
``` go
flag.Var(&flagVal, "name", "help message for flagname")
```
For such flags, the default value is just the initial value of the variable.
After all flags are defined, call
``` go
flag.Parse()
```
to parse the command line into the defined flags.
Flags may then be used directly. If you're using the flags themselves,
they are all pointers; if you bind to variables, they're values.
``` go
fmt.Println("ip has value ", *ip)
fmt.Println("flagvar has value ", flagvar)
```
There are helper functions to get values later if you have the FlagSet but
found it difficult to keep up with all of the flag pointers in your code.
If you have a pflag.FlagSet with a flag called 'flagname' of type int you
can use GetInt() to get the int value. But note that 'flagname' must exist
and it must be an int. GetString("flagname") will fail.
``` go
i, err := flagset.GetInt("flagname")
```
After parsing, the arguments after the flag are available as the
slice flag.Args() or individually as flag.Arg(i).
The arguments are indexed from 0 through flag.NArg()-1.
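For example, here is a minimal sketch of reading the remaining positional arguments after parsing (the flag name and the assumed invocation `prog --flagname=7 a b c` are only illustrative):
``` go
package main

import (
	"fmt"

	flag "github.com/spf13/pflag"
)

func main() {
	n := flag.Int("flagname", 1234, "help message for flagname")
	flag.Parse()

	fmt.Println("flagname has value", *n)
	// Walk the positional (non-flag) arguments left over after parsing.
	for i := 0; i < flag.NArg(); i++ {
		fmt.Println("arg", i, "=", flag.Arg(i))
	}
	fmt.Println("all positional args:", flag.Args())
}
```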
The pflag package also defines some new functions that are not in flag,
which give one-letter shorthands for flags. You can use these by appending
'P' to the name of any function that defines a flag.
``` go
var ip = flag.IntP("flagname", "f", 1234, "help message")
var flagvar bool
func init() {
flag.BoolVarP(&flagvar, "boolname", "b", true, "help message")
}
flag.VarP(&flagVal, "varname", "v", "help message")
```
Shorthand letters can be used with single dashes on the command line.
Boolean shorthand flags can be combined with other shorthand flags.
The default set of command-line flags is controlled by
top-level functions. The FlagSet type allows one to define
independent sets of flags, such as to implement subcommands
in a command-line interface. The methods of FlagSet are
analogous to the top-level functions for the command-line
flag set.
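As a rough sketch of using a FlagSet for a subcommand (the `serve` subcommand and its `port` flag are invented for illustration and are not part of pflag):
``` go
package main

import (
	"fmt"
	"os"

	flag "github.com/spf13/pflag"
)

func main() {
	// An independent flag set for a hypothetical "serve" subcommand,
	// kept separate from the default CommandLine flag set.
	serveFlags := flag.NewFlagSet("serve", flag.ExitOnError)
	port := serveFlags.Int("port", 8080, "port to listen on")

	if len(os.Args) > 1 && os.Args[1] == "serve" {
		serveFlags.Parse(os.Args[2:])
		fmt.Println("serving on port", *port)
	}
}
```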
## Setting no option default values for flags
After you create a flag it is possible to set the pflag.NoOptDefVal for
the given flag. Doing this changes the meaning of the flag slightly. If
a flag has a NoOptDefVal and the flag is set on the command line without
an option the flag will be set to the NoOptDefVal. For example given:
``` go
var ip = flag.IntP("flagname", "f", 1234, "help message")
flag.Lookup("flagname").NoOptDefVal = "4321"
```
This would result in something like:
| Parsed Arguments | Resulting Value |
| ------------- | ------------- |
| --flagname=1357 | ip=1357 |
| --flagname | ip=4321 |
| [nothing] | ip=1234 |
## Command line flag syntax
```
--flag // boolean flags, or flags with no option default values
--flag x // only on flags without a default value
--flag=x
```
Unlike the flag package, a single dash before an option means something
different than a double dash. Single dashes signify a series of shorthand
letters for flags. All but the last shorthand letter must be boolean flags
or flags with a 'no option default value'.
```
// boolean or flags where the 'no option default value' is set
-f
-f=true
-abc
but
-b true is INVALID
// non-boolean and flags without a 'no option default value'
-n 1234
-n=1234
-n1234
// mixed
-abcs "hello"
-absd="hello"
-abcs1234
```
Flag parsing stops after the terminator "--". Unlike the flag package,
flags can be interspersed with arguments anywhere on the command line
before this terminator.
Integer flags accept 1234, 0664, 0x1234 and may be negative.
Boolean flags (in their long form) accept 1, 0, t, f, true, false,
TRUE, FALSE, True, False.
Duration flags accept any input valid for time.ParseDuration.
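As a small illustration of the duration case (the `timeout` flag name is invented), `--timeout=1m30s`, `--timeout 90s`, and omitting the flag would yield 1m30s, 90s, and the 5s default respectively:
``` go
package main

import (
	"fmt"
	"time"

	flag "github.com/spf13/pflag"
)

func main() {
	// Any string accepted by time.ParseDuration works, e.g. "250ms" or "1h15m".
	timeout := flag.Duration("timeout", 5*time.Second, "how long to wait")
	flag.Parse()
	fmt.Println("timeout is", *timeout)
}
```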
## Mutating or "Normalizing" Flag names
It is possible to set a custom flag name 'normalization function.' It allows flag names to be mutated both when created in the code and when used on the command line to some 'normalized' form. The 'normalized' form is used for comparison. Two examples of using the custom normalization func follow.
**Example #1**: You want -, _, and . in flags to compare the same. aka --my-flag == --my_flag == --my.flag
``` go
func wordSepNormalizeFunc(f *pflag.FlagSet, name string) pflag.NormalizedName {
from := []string{"-", "_"}
to := "."
for _, sep := range from {
name = strings.Replace(name, sep, to, -1)
}
return pflag.NormalizedName(name)
}
myFlagSet.SetNormalizeFunc(wordSepNormalizeFunc)
```
**Example #2**: You want to alias two flags. aka --old-flag-name == --new-flag-name
``` go
func aliasNormalizeFunc(f *pflag.FlagSet, name string) pflag.NormalizedName {
switch name {
case "old-flag-name":
name = "new-flag-name"
break
}
return pflag.NormalizedName(name)
}
myFlagSet.SetNormalizeFunc(aliasNormalizeFunc)
```
## Deprecating a flag or its shorthand
It is possible to deprecate a flag, or just its shorthand. Deprecating a flag/shorthand hides it from help text and prints a usage message when the deprecated flag/shorthand is used.
**Example #1**: You want to deprecate a flag named "badflag" as well as inform the users what flag they should use instead.
```go
// deprecate a flag by specifying its name and a usage message
flags.MarkDeprecated("badflag", "please use --good-flag instead")
```
This hides "badflag" from help text, and prints `Flag --badflag has been deprecated, please use --good-flag instead` when "badflag" is used.
**Example #2**: You want to keep a flag name "noshorthandflag" but deprecate its shortname "n".
```go
// deprecate a flag shorthand by specifying its flag name and a usage message
flags.MarkShorthandDeprecated("noshorthandflag", "please use --noshorthandflag only")
```
This hides the shortname "n" from help text, and prints `Flag shorthand -n has been deprecated, please use --noshorthandflag only` when the shorthand "n" is used.
Note that the usage message is essential here, and it should not be empty.
## Hidden flags
It is possible to mark a flag as hidden, meaning it will still function as normal, but it will not show up in usage/help text.
**Example**: You have a flag named "secretFlag" that you need for internal use only, and you don't want it showing up in help text or its usage text to be available.
```go
// hide a flag by specifying its name
flags.MarkHidden("secretFlag")
```
## Disable sorting of flags
`pflag` allows you to disable sorting of flags for help and usage messages.
**Example**:
```go
flags.BoolP("verbose", "v", false, "verbose output")
flags.String("coolflag", "yeaah", "it's really cool flag")
flags.Int("usefulflag", 777, "sometimes it's very useful")
flags.SortFlags = false
flags.PrintDefaults()
```
**Output**:
```
-v, --verbose verbose output
--coolflag string it's really cool flag (default "yeaah")
--usefulflag int sometimes it's very useful (default 777)
```
## Supporting Go flags when using pflag
In order to support flags defined using Go's `flag` package, they must be added to the `pflag` flagset. This is usually necessary
to support flags defined by third-party dependencies (e.g. `golang/glog`).
**Example**: You want to add the Go flags to the `CommandLine` flagset
```go
import (
goflag "flag"
flag "github.com/spf13/pflag"
)
var ip *int = flag.Int("flagname", 1234, "help message for flagname")
func main() {
flag.CommandLine.AddGoFlagSet(goflag.CommandLine)
flag.Parse()
}
```
## More info
You can see the full reference documentation of the pflag package
[at godoc.org][3], or through go's standard documentation system by
running `godoc -http=:6060` and browsing to
[http://localhost:6060/pkg/github.com/spf13/pflag][2] after
installation.
[2]: http://localhost:6060/pkg/github.com/spf13/pflag
[3]: http://godoc.org/github.com/spf13/pflag


@@ -1,94 +0,0 @@
package pflag
import "strconv"
// optional interface to indicate boolean flags that can be
// supplied without "=value" text
type boolFlag interface {
Value
IsBoolFlag() bool
}
// -- bool Value
type boolValue bool
func newBoolValue(val bool, p *bool) *boolValue {
*p = val
return (*boolValue)(p)
}
func (b *boolValue) Set(s string) error {
v, err := strconv.ParseBool(s)
*b = boolValue(v)
return err
}
func (b *boolValue) Type() string {
return "bool"
}
func (b *boolValue) String() string { return strconv.FormatBool(bool(*b)) }
func (b *boolValue) IsBoolFlag() bool { return true }
func boolConv(sval string) (interface{}, error) {
return strconv.ParseBool(sval)
}
// GetBool return the bool value of a flag with the given name
func (f *FlagSet) GetBool(name string) (bool, error) {
val, err := f.getFlagType(name, "bool", boolConv)
if err != nil {
return false, err
}
return val.(bool), nil
}
// BoolVar defines a bool flag with specified name, default value, and usage string.
// The argument p points to a bool variable in which to store the value of the flag.
func (f *FlagSet) BoolVar(p *bool, name string, value bool, usage string) {
f.BoolVarP(p, name, "", value, usage)
}
// BoolVarP is like BoolVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) BoolVarP(p *bool, name, shorthand string, value bool, usage string) {
flag := f.VarPF(newBoolValue(value, p), name, shorthand, usage)
flag.NoOptDefVal = "true"
}
// BoolVar defines a bool flag with specified name, default value, and usage string.
// The argument p points to a bool variable in which to store the value of the flag.
func BoolVar(p *bool, name string, value bool, usage string) {
BoolVarP(p, name, "", value, usage)
}
// BoolVarP is like BoolVar, but accepts a shorthand letter that can be used after a single dash.
func BoolVarP(p *bool, name, shorthand string, value bool, usage string) {
flag := CommandLine.VarPF(newBoolValue(value, p), name, shorthand, usage)
flag.NoOptDefVal = "true"
}
// Bool defines a bool flag with specified name, default value, and usage string.
// The return value is the address of a bool variable that stores the value of the flag.
func (f *FlagSet) Bool(name string, value bool, usage string) *bool {
return f.BoolP(name, "", value, usage)
}
// BoolP is like Bool, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) BoolP(name, shorthand string, value bool, usage string) *bool {
p := new(bool)
f.BoolVarP(p, name, shorthand, value, usage)
return p
}
// Bool defines a bool flag with specified name, default value, and usage string.
// The return value is the address of a bool variable that stores the value of the flag.
func Bool(name string, value bool, usage string) *bool {
return BoolP(name, "", value, usage)
}
// BoolP is like Bool, but accepts a shorthand letter that can be used after a single dash.
func BoolP(name, shorthand string, value bool, usage string) *bool {
b := CommandLine.BoolP(name, shorthand, value, usage)
return b
}


@@ -1,147 +0,0 @@
package pflag
import (
"io"
"strconv"
"strings"
)
// -- boolSlice Value
type boolSliceValue struct {
value *[]bool
changed bool
}
func newBoolSliceValue(val []bool, p *[]bool) *boolSliceValue {
bsv := new(boolSliceValue)
bsv.value = p
*bsv.value = val
return bsv
}
// Set converts, and assigns, the comma-separated boolean argument string representation as the []bool value of this flag.
// If Set is called on a flag that already has a []bool assigned, the newly converted values will be appended.
func (s *boolSliceValue) Set(val string) error {
// remove all quote characters
rmQuote := strings.NewReplacer(`"`, "", `'`, "", "`", "")
// read flag arguments with CSV parser
boolStrSlice, err := readAsCSV(rmQuote.Replace(val))
if err != nil && err != io.EOF {
return err
}
// parse boolean values into slice
out := make([]bool, 0, len(boolStrSlice))
for _, boolStr := range boolStrSlice {
b, err := strconv.ParseBool(strings.TrimSpace(boolStr))
if err != nil {
return err
}
out = append(out, b)
}
if !s.changed {
*s.value = out
} else {
*s.value = append(*s.value, out...)
}
s.changed = true
return nil
}
// Type returns a string that uniquely represents this flag's type.
func (s *boolSliceValue) Type() string {
return "boolSlice"
}
// String defines a "native" format for this boolean slice flag value.
func (s *boolSliceValue) String() string {
boolStrSlice := make([]string, len(*s.value))
for i, b := range *s.value {
boolStrSlice[i] = strconv.FormatBool(b)
}
out, _ := writeAsCSV(boolStrSlice)
return "[" + out + "]"
}
func boolSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
// Empty string would cause a slice with one (empty) entry
if len(val) == 0 {
return []bool{}, nil
}
ss := strings.Split(val, ",")
out := make([]bool, len(ss))
for i, t := range ss {
var err error
out[i], err = strconv.ParseBool(t)
if err != nil {
return nil, err
}
}
return out, nil
}
// GetBoolSlice returns the []bool value of a flag with the given name.
func (f *FlagSet) GetBoolSlice(name string) ([]bool, error) {
val, err := f.getFlagType(name, "boolSlice", boolSliceConv)
if err != nil {
return []bool{}, err
}
return val.([]bool), nil
}
// BoolSliceVar defines a boolSlice flag with specified name, default value, and usage string.
// The argument p points to a []bool variable in which to store the value of the flag.
func (f *FlagSet) BoolSliceVar(p *[]bool, name string, value []bool, usage string) {
f.VarP(newBoolSliceValue(value, p), name, "", usage)
}
// BoolSliceVarP is like BoolSliceVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) BoolSliceVarP(p *[]bool, name, shorthand string, value []bool, usage string) {
f.VarP(newBoolSliceValue(value, p), name, shorthand, usage)
}
// BoolSliceVar defines a []bool flag with specified name, default value, and usage string.
// The argument p points to a []bool variable in which to store the value of the flag.
func BoolSliceVar(p *[]bool, name string, value []bool, usage string) {
CommandLine.VarP(newBoolSliceValue(value, p), name, "", usage)
}
// BoolSliceVarP is like BoolSliceVar, but accepts a shorthand letter that can be used after a single dash.
func BoolSliceVarP(p *[]bool, name, shorthand string, value []bool, usage string) {
CommandLine.VarP(newBoolSliceValue(value, p), name, shorthand, usage)
}
// BoolSlice defines a []bool flag with specified name, default value, and usage string.
// The return value is the address of a []bool variable that stores the value of the flag.
func (f *FlagSet) BoolSlice(name string, value []bool, usage string) *[]bool {
p := []bool{}
f.BoolSliceVarP(&p, name, "", value, usage)
return &p
}
// BoolSliceP is like BoolSlice, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) BoolSliceP(name, shorthand string, value []bool, usage string) *[]bool {
p := []bool{}
f.BoolSliceVarP(&p, name, shorthand, value, usage)
return &p
}
// BoolSlice defines a []bool flag with specified name, default value, and usage string.
// The return value is the address of a []bool variable that stores the value of the flag.
func BoolSlice(name string, value []bool, usage string) *[]bool {
return CommandLine.BoolSliceP(name, "", value, usage)
}
// BoolSliceP is like BoolSlice, but accepts a shorthand letter that can be used after a single dash.
func BoolSliceP(name, shorthand string, value []bool, usage string) *[]bool {
return CommandLine.BoolSliceP(name, shorthand, value, usage)
}
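For reference, a brief usage sketch of the slice behavior implemented above (the flag set name `example` and flag name `toggle` are only illustrative): values may be comma-separated, and repeating the flag appends to the slice rather than replacing it.
```go
package main

import (
	"fmt"

	flag "github.com/spf13/pflag"
)

func main() {
	fs := flag.NewFlagSet("example", flag.ContinueOnError)
	toggles := fs.BoolSlice("toggle", []bool{}, "list of booleans")

	// Values may be comma-separated and/or given by repeating the flag;
	// repeated occurrences append to the slice.
	_ = fs.Parse([]string{"--toggle=true,false", "--toggle=true"})
	fmt.Println(*toggles) // [true false true]
}
```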


@@ -1,96 +0,0 @@
package pflag
import "strconv"
// -- count Value
type countValue int
func newCountValue(val int, p *int) *countValue {
*p = val
return (*countValue)(p)
}
func (i *countValue) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 64)
// -1 means that no specific value was passed, so increment
if v == -1 {
*i = countValue(*i + 1)
} else {
*i = countValue(v)
}
return err
}
func (i *countValue) Type() string {
return "count"
}
func (i *countValue) String() string { return strconv.Itoa(int(*i)) }
func countConv(sval string) (interface{}, error) {
i, err := strconv.Atoi(sval)
if err != nil {
return nil, err
}
return i, nil
}
// GetCount return the int value of a flag with the given name
func (f *FlagSet) GetCount(name string) (int, error) {
val, err := f.getFlagType(name, "count", countConv)
if err != nil {
return 0, err
}
return val.(int), nil
}
// CountVar defines a count flag with specified name, default value, and usage string.
// The argument p points to an int variable in which to store the value of the flag.
// A count flag will add 1 to its value every time it is found on the command line
func (f *FlagSet) CountVar(p *int, name string, usage string) {
f.CountVarP(p, name, "", usage)
}
// CountVarP is like CountVar only takes a shorthand for the flag name.
func (f *FlagSet) CountVarP(p *int, name, shorthand string, usage string) {
flag := f.VarPF(newCountValue(0, p), name, shorthand, usage)
flag.NoOptDefVal = "-1"
}
// CountVar is like FlagSet.CountVar, only the flag is placed on the CommandLine instead of a given flag set
func CountVar(p *int, name string, usage string) {
CommandLine.CountVar(p, name, usage)
}
// CountVarP is like CountVar only takes a shorthand for the flag name.
func CountVarP(p *int, name, shorthand string, usage string) {
CommandLine.CountVarP(p, name, shorthand, usage)
}
// Count defines a count flag with specified name, default value, and usage string.
// The return value is the address of an int variable that stores the value of the flag.
// A count flag will add 1 to its value every time it is found on the command line
func (f *FlagSet) Count(name string, usage string) *int {
p := new(int)
f.CountVarP(p, name, "", usage)
return p
}
// CountP is like Count only takes a shorthand for the flag name.
func (f *FlagSet) CountP(name, shorthand string, usage string) *int {
p := new(int)
f.CountVarP(p, name, shorthand, usage)
return p
}
// Count defines a count flag with specified name, default value, and usage string.
// The return value is the address of an int variable that stores the value of the flag.
// A count flag will add 1 to its value every time it is found on the command line
func Count(name string, usage string) *int {
return CommandLine.CountP(name, "", usage)
}
// CountP is like Count only takes a shorthand for the flag name.
func CountP(name, shorthand string, usage string) *int {
return CommandLine.CountP(name, shorthand, usage)
}
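Similarly, a short usage sketch of the count behavior above (names are illustrative only): each bare occurrence passes the "-1" NoOptDefVal to Set, which increments the value, while an explicit value such as `--verbose=7` overrides it.
```go
package main

import (
	"fmt"

	flag "github.com/spf13/pflag"
)

func main() {
	fs := flag.NewFlagSet("example", flag.ContinueOnError)
	verbosity := fs.CountP("verbose", "v", "increase verbosity")

	// Each bare -v increments the counter, so -vvv yields 3.
	_ = fs.Parse([]string{"-vvv"})
	fmt.Println(*verbosity) // 3
}
```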


@@ -1,86 +0,0 @@
package pflag
import (
"time"
)
// -- time.Duration Value
type durationValue time.Duration
func newDurationValue(val time.Duration, p *time.Duration) *durationValue {
*p = val
return (*durationValue)(p)
}
func (d *durationValue) Set(s string) error {
v, err := time.ParseDuration(s)
*d = durationValue(v)
return err
}
func (d *durationValue) Type() string {
return "duration"
}
func (d *durationValue) String() string { return (*time.Duration)(d).String() }
func durationConv(sval string) (interface{}, error) {
return time.ParseDuration(sval)
}
// GetDuration return the duration value of a flag with the given name
func (f *FlagSet) GetDuration(name string) (time.Duration, error) {
val, err := f.getFlagType(name, "duration", durationConv)
if err != nil {
return 0, err
}
return val.(time.Duration), nil
}
// DurationVar defines a time.Duration flag with specified name, default value, and usage string.
// The argument p points to a time.Duration variable in which to store the value of the flag.
func (f *FlagSet) DurationVar(p *time.Duration, name string, value time.Duration, usage string) {
f.VarP(newDurationValue(value, p), name, "", usage)
}
// DurationVarP is like DurationVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) DurationVarP(p *time.Duration, name, shorthand string, value time.Duration, usage string) {
f.VarP(newDurationValue(value, p), name, shorthand, usage)
}
// DurationVar defines a time.Duration flag with specified name, default value, and usage string.
// The argument p points to a time.Duration variable in which to store the value of the flag.
func DurationVar(p *time.Duration, name string, value time.Duration, usage string) {
CommandLine.VarP(newDurationValue(value, p), name, "", usage)
}
// DurationVarP is like DurationVar, but accepts a shorthand letter that can be used after a single dash.
func DurationVarP(p *time.Duration, name, shorthand string, value time.Duration, usage string) {
CommandLine.VarP(newDurationValue(value, p), name, shorthand, usage)
}
// Duration defines a time.Duration flag with specified name, default value, and usage string.
// The return value is the address of a time.Duration variable that stores the value of the flag.
func (f *FlagSet) Duration(name string, value time.Duration, usage string) *time.Duration {
p := new(time.Duration)
f.DurationVarP(p, name, "", value, usage)
return p
}
// DurationP is like Duration, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) DurationP(name, shorthand string, value time.Duration, usage string) *time.Duration {
p := new(time.Duration)
f.DurationVarP(p, name, shorthand, value, usage)
return p
}
// Duration defines a time.Duration flag with specified name, default value, and usage string.
// The return value is the address of a time.Duration variable that stores the value of the flag.
func Duration(name string, value time.Duration, usage string) *time.Duration {
return CommandLine.DurationP(name, "", value, usage)
}
// DurationP is like Duration, but accepts a shorthand letter that can be used after a single dash.
func DurationP(name, shorthand string, value time.Duration, usage string) *time.Duration {
return CommandLine.DurationP(name, shorthand, value, usage)
}

vendor/github.com/spf13/pflag/flag.go (1128 lines, generated and vendored): diff suppressed because it is too large.


@@ -1,88 +0,0 @@
package pflag
import "strconv"
// -- float32 Value
type float32Value float32
func newFloat32Value(val float32, p *float32) *float32Value {
*p = val
return (*float32Value)(p)
}
func (f *float32Value) Set(s string) error {
v, err := strconv.ParseFloat(s, 32)
*f = float32Value(v)
return err
}
func (f *float32Value) Type() string {
return "float32"
}
func (f *float32Value) String() string { return strconv.FormatFloat(float64(*f), 'g', -1, 32) }
func float32Conv(sval string) (interface{}, error) {
v, err := strconv.ParseFloat(sval, 32)
if err != nil {
return 0, err
}
return float32(v), nil
}
// GetFloat32 return the float32 value of a flag with the given name
func (f *FlagSet) GetFloat32(name string) (float32, error) {
val, err := f.getFlagType(name, "float32", float32Conv)
if err != nil {
return 0, err
}
return val.(float32), nil
}
// Float32Var defines a float32 flag with specified name, default value, and usage string.
// The argument p points to a float32 variable in which to store the value of the flag.
func (f *FlagSet) Float32Var(p *float32, name string, value float32, usage string) {
f.VarP(newFloat32Value(value, p), name, "", usage)
}
// Float32VarP is like Float32Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Float32VarP(p *float32, name, shorthand string, value float32, usage string) {
f.VarP(newFloat32Value(value, p), name, shorthand, usage)
}
// Float32Var defines a float32 flag with specified name, default value, and usage string.
// The argument p points to a float32 variable in which to store the value of the flag.
func Float32Var(p *float32, name string, value float32, usage string) {
CommandLine.VarP(newFloat32Value(value, p), name, "", usage)
}
// Float32VarP is like Float32Var, but accepts a shorthand letter that can be used after a single dash.
func Float32VarP(p *float32, name, shorthand string, value float32, usage string) {
CommandLine.VarP(newFloat32Value(value, p), name, shorthand, usage)
}
// Float32 defines a float32 flag with specified name, default value, and usage string.
// The return value is the address of a float32 variable that stores the value of the flag.
func (f *FlagSet) Float32(name string, value float32, usage string) *float32 {
p := new(float32)
f.Float32VarP(p, name, "", value, usage)
return p
}
// Float32P is like Float32, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Float32P(name, shorthand string, value float32, usage string) *float32 {
p := new(float32)
f.Float32VarP(p, name, shorthand, value, usage)
return p
}
// Float32 defines a float32 flag with specified name, default value, and usage string.
// The return value is the address of a float32 variable that stores the value of the flag.
func Float32(name string, value float32, usage string) *float32 {
return CommandLine.Float32P(name, "", value, usage)
}
// Float32P is like Float32, but accepts a shorthand letter that can be used after a single dash.
func Float32P(name, shorthand string, value float32, usage string) *float32 {
return CommandLine.Float32P(name, shorthand, value, usage)
}


@@ -1,84 +0,0 @@
package pflag
import "strconv"
// -- float64 Value
type float64Value float64
func newFloat64Value(val float64, p *float64) *float64Value {
*p = val
return (*float64Value)(p)
}
func (f *float64Value) Set(s string) error {
v, err := strconv.ParseFloat(s, 64)
*f = float64Value(v)
return err
}
func (f *float64Value) Type() string {
return "float64"
}
func (f *float64Value) String() string { return strconv.FormatFloat(float64(*f), 'g', -1, 64) }
func float64Conv(sval string) (interface{}, error) {
return strconv.ParseFloat(sval, 64)
}
// GetFloat64 return the float64 value of a flag with the given name
func (f *FlagSet) GetFloat64(name string) (float64, error) {
val, err := f.getFlagType(name, "float64", float64Conv)
if err != nil {
return 0, err
}
return val.(float64), nil
}
// Float64Var defines a float64 flag with specified name, default value, and usage string.
// The argument p points to a float64 variable in which to store the value of the flag.
func (f *FlagSet) Float64Var(p *float64, name string, value float64, usage string) {
f.VarP(newFloat64Value(value, p), name, "", usage)
}
// Float64VarP is like Float64Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Float64VarP(p *float64, name, shorthand string, value float64, usage string) {
f.VarP(newFloat64Value(value, p), name, shorthand, usage)
}
// Float64Var defines a float64 flag with specified name, default value, and usage string.
// The argument p points to a float64 variable in which to store the value of the flag.
func Float64Var(p *float64, name string, value float64, usage string) {
CommandLine.VarP(newFloat64Value(value, p), name, "", usage)
}
// Float64VarP is like Float64Var, but accepts a shorthand letter that can be used after a single dash.
func Float64VarP(p *float64, name, shorthand string, value float64, usage string) {
CommandLine.VarP(newFloat64Value(value, p), name, shorthand, usage)
}
// Float64 defines a float64 flag with specified name, default value, and usage string.
// The return value is the address of a float64 variable that stores the value of the flag.
func (f *FlagSet) Float64(name string, value float64, usage string) *float64 {
p := new(float64)
f.Float64VarP(p, name, "", value, usage)
return p
}
// Float64P is like Float64, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Float64P(name, shorthand string, value float64, usage string) *float64 {
p := new(float64)
f.Float64VarP(p, name, shorthand, value, usage)
return p
}
// Float64 defines a float64 flag with specified name, default value, and usage string.
// The return value is the address of a float64 variable that stores the value of the flag.
func Float64(name string, value float64, usage string) *float64 {
return CommandLine.Float64P(name, "", value, usage)
}
// Float64P is like Float64, but accepts a shorthand letter that can be used after a single dash.
func Float64P(name, shorthand string, value float64, usage string) *float64 {
return CommandLine.Float64P(name, shorthand, value, usage)
}


@@ -1,101 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package pflag
import (
goflag "flag"
"reflect"
"strings"
)
// flagValueWrapper implements pflag.Value around a flag.Value. The main
// difference here is the addition of the Type method that returns a string
// name of the type. As this is generally unknown, we approximate that with
// reflection.
type flagValueWrapper struct {
inner goflag.Value
flagType string
}
// We are just copying the boolFlag interface out of goflag as that is what
// they use to decide if a flag should get "true" when no arg is given.
type goBoolFlag interface {
goflag.Value
IsBoolFlag() bool
}
func wrapFlagValue(v goflag.Value) Value {
// If the flag.Value happens to also be a pflag.Value, just use it directly.
if pv, ok := v.(Value); ok {
return pv
}
pv := &flagValueWrapper{
inner: v,
}
t := reflect.TypeOf(v)
if t.Kind() == reflect.Interface || t.Kind() == reflect.Ptr {
t = t.Elem()
}
pv.flagType = strings.TrimSuffix(t.Name(), "Value")
return pv
}
func (v *flagValueWrapper) String() string {
return v.inner.String()
}
func (v *flagValueWrapper) Set(s string) error {
return v.inner.Set(s)
}
func (v *flagValueWrapper) Type() string {
return v.flagType
}
// PFlagFromGoFlag will return a *pflag.Flag given a *flag.Flag
// If the *flag.Flag.Name was a single character (ex: `v`) it will be accessible
// with both `-v` and `--v` in flags. If the golang flag was more than a single
// character (ex: `verbose`) it will only be accessible via `--verbose`
func PFlagFromGoFlag(goflag *goflag.Flag) *Flag {
// Remember the default value as a string; it won't change.
flag := &Flag{
Name: goflag.Name,
Usage: goflag.Usage,
Value: wrapFlagValue(goflag.Value),
// Looks like golang flags don't set DefValue correctly :-(
//DefValue: goflag.DefValue,
DefValue: goflag.Value.String(),
}
// Ex: if the golang flag was -v, allow both -v and --v to work
if len(flag.Name) == 1 {
flag.Shorthand = flag.Name
}
if fv, ok := goflag.Value.(goBoolFlag); ok && fv.IsBoolFlag() {
flag.NoOptDefVal = "true"
}
return flag
}
// AddGoFlag will add the given *flag.Flag to the pflag.FlagSet
func (f *FlagSet) AddGoFlag(goflag *goflag.Flag) {
if f.Lookup(goflag.Name) != nil {
return
}
newflag := PFlagFromGoFlag(goflag)
f.AddFlag(newflag)
}
// AddGoFlagSet will add the given *flag.FlagSet to the pflag.FlagSet
func (f *FlagSet) AddGoFlagSet(newSet *goflag.FlagSet) {
if newSet == nil {
return
}
newSet.VisitAll(func(goflag *goflag.Flag) {
f.AddGoFlag(goflag)
})
}

vendor/github.com/spf13/pflag/int.go (84 lines, generated and vendored)

@@ -1,84 +0,0 @@
package pflag
import "strconv"
// -- int Value
type intValue int
func newIntValue(val int, p *int) *intValue {
*p = val
return (*intValue)(p)
}
func (i *intValue) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 64)
*i = intValue(v)
return err
}
func (i *intValue) Type() string {
return "int"
}
func (i *intValue) String() string { return strconv.Itoa(int(*i)) }
func intConv(sval string) (interface{}, error) {
return strconv.Atoi(sval)
}
// GetInt return the int value of a flag with the given name
func (f *FlagSet) GetInt(name string) (int, error) {
val, err := f.getFlagType(name, "int", intConv)
if err != nil {
return 0, err
}
return val.(int), nil
}
// IntVar defines an int flag with specified name, default value, and usage string.
// The argument p points to an int variable in which to store the value of the flag.
func (f *FlagSet) IntVar(p *int, name string, value int, usage string) {
f.VarP(newIntValue(value, p), name, "", usage)
}
// IntVarP is like IntVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IntVarP(p *int, name, shorthand string, value int, usage string) {
f.VarP(newIntValue(value, p), name, shorthand, usage)
}
// IntVar defines an int flag with specified name, default value, and usage string.
// The argument p points to an int variable in which to store the value of the flag.
func IntVar(p *int, name string, value int, usage string) {
CommandLine.VarP(newIntValue(value, p), name, "", usage)
}
// IntVarP is like IntVar, but accepts a shorthand letter that can be used after a single dash.
func IntVarP(p *int, name, shorthand string, value int, usage string) {
CommandLine.VarP(newIntValue(value, p), name, shorthand, usage)
}
// Int defines an int flag with specified name, default value, and usage string.
// The return value is the address of an int variable that stores the value of the flag.
func (f *FlagSet) Int(name string, value int, usage string) *int {
p := new(int)
f.IntVarP(p, name, "", value, usage)
return p
}
// IntP is like Int, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IntP(name, shorthand string, value int, usage string) *int {
p := new(int)
f.IntVarP(p, name, shorthand, value, usage)
return p
}
// Int defines an int flag with specified name, default value, and usage string.
// The return value is the address of an int variable that stores the value of the flag.
func Int(name string, value int, usage string) *int {
return CommandLine.IntP(name, "", value, usage)
}
// IntP is like Int, but accepts a shorthand letter that can be used after a single dash.
func IntP(name, shorthand string, value int, usage string) *int {
return CommandLine.IntP(name, shorthand, value, usage)
}


@@ -1,88 +0,0 @@
package pflag
import "strconv"
// -- int32 Value
type int32Value int32
func newInt32Value(val int32, p *int32) *int32Value {
*p = val
return (*int32Value)(p)
}
func (i *int32Value) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 32)
*i = int32Value(v)
return err
}
func (i *int32Value) Type() string {
return "int32"
}
func (i *int32Value) String() string { return strconv.FormatInt(int64(*i), 10) }
func int32Conv(sval string) (interface{}, error) {
v, err := strconv.ParseInt(sval, 0, 32)
if err != nil {
return 0, err
}
return int32(v), nil
}
// GetInt32 return the int32 value of a flag with the given name
func (f *FlagSet) GetInt32(name string) (int32, error) {
val, err := f.getFlagType(name, "int32", int32Conv)
if err != nil {
return 0, err
}
return val.(int32), nil
}
// Int32Var defines an int32 flag with specified name, default value, and usage string.
// The argument p points to an int32 variable in which to store the value of the flag.
func (f *FlagSet) Int32Var(p *int32, name string, value int32, usage string) {
f.VarP(newInt32Value(value, p), name, "", usage)
}
// Int32VarP is like Int32Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Int32VarP(p *int32, name, shorthand string, value int32, usage string) {
f.VarP(newInt32Value(value, p), name, shorthand, usage)
}
// Int32Var defines an int32 flag with specified name, default value, and usage string.
// The argument p points to an int32 variable in which to store the value of the flag.
func Int32Var(p *int32, name string, value int32, usage string) {
CommandLine.VarP(newInt32Value(value, p), name, "", usage)
}
// Int32VarP is like Int32Var, but accepts a shorthand letter that can be used after a single dash.
func Int32VarP(p *int32, name, shorthand string, value int32, usage string) {
CommandLine.VarP(newInt32Value(value, p), name, shorthand, usage)
}
// Int32 defines an int32 flag with specified name, default value, and usage string.
// The return value is the address of an int32 variable that stores the value of the flag.
func (f *FlagSet) Int32(name string, value int32, usage string) *int32 {
p := new(int32)
f.Int32VarP(p, name, "", value, usage)
return p
}
// Int32P is like Int32, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Int32P(name, shorthand string, value int32, usage string) *int32 {
p := new(int32)
f.Int32VarP(p, name, shorthand, value, usage)
return p
}
// Int32 defines an int32 flag with specified name, default value, and usage string.
// The return value is the address of an int32 variable that stores the value of the flag.
func Int32(name string, value int32, usage string) *int32 {
return CommandLine.Int32P(name, "", value, usage)
}
// Int32P is like Int32, but accepts a shorthand letter that can be used after a single dash.
func Int32P(name, shorthand string, value int32, usage string) *int32 {
return CommandLine.Int32P(name, shorthand, value, usage)
}


@@ -1,84 +0,0 @@
package pflag
import "strconv"
// -- int64 Value
type int64Value int64
func newInt64Value(val int64, p *int64) *int64Value {
*p = val
return (*int64Value)(p)
}
func (i *int64Value) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 64)
*i = int64Value(v)
return err
}
func (i *int64Value) Type() string {
return "int64"
}
func (i *int64Value) String() string { return strconv.FormatInt(int64(*i), 10) }
func int64Conv(sval string) (interface{}, error) {
return strconv.ParseInt(sval, 0, 64)
}
// GetInt64 return the int64 value of a flag with the given name
func (f *FlagSet) GetInt64(name string) (int64, error) {
val, err := f.getFlagType(name, "int64", int64Conv)
if err != nil {
return 0, err
}
return val.(int64), nil
}
// Int64Var defines an int64 flag with specified name, default value, and usage string.
// The argument p points to an int64 variable in which to store the value of the flag.
func (f *FlagSet) Int64Var(p *int64, name string, value int64, usage string) {
f.VarP(newInt64Value(value, p), name, "", usage)
}
// Int64VarP is like Int64Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Int64VarP(p *int64, name, shorthand string, value int64, usage string) {
f.VarP(newInt64Value(value, p), name, shorthand, usage)
}
// Int64Var defines an int64 flag with specified name, default value, and usage string.
// The argument p points to an int64 variable in which to store the value of the flag.
func Int64Var(p *int64, name string, value int64, usage string) {
CommandLine.VarP(newInt64Value(value, p), name, "", usage)
}
// Int64VarP is like Int64Var, but accepts a shorthand letter that can be used after a single dash.
func Int64VarP(p *int64, name, shorthand string, value int64, usage string) {
CommandLine.VarP(newInt64Value(value, p), name, shorthand, usage)
}
// Int64 defines an int64 flag with specified name, default value, and usage string.
// The return value is the address of an int64 variable that stores the value of the flag.
func (f *FlagSet) Int64(name string, value int64, usage string) *int64 {
p := new(int64)
f.Int64VarP(p, name, "", value, usage)
return p
}
// Int64P is like Int64, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Int64P(name, shorthand string, value int64, usage string) *int64 {
p := new(int64)
f.Int64VarP(p, name, shorthand, value, usage)
return p
}
// Int64 defines an int64 flag with specified name, default value, and usage string.
// The return value is the address of an int64 variable that stores the value of the flag.
func Int64(name string, value int64, usage string) *int64 {
return CommandLine.Int64P(name, "", value, usage)
}
// Int64P is like Int64, but accepts a shorthand letter that can be used after a single dash.
func Int64P(name, shorthand string, value int64, usage string) *int64 {
return CommandLine.Int64P(name, shorthand, value, usage)
}


@@ -1,88 +0,0 @@
package pflag
import "strconv"
// -- int8 Value
type int8Value int8
func newInt8Value(val int8, p *int8) *int8Value {
*p = val
return (*int8Value)(p)
}
func (i *int8Value) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 8)
*i = int8Value(v)
return err
}
func (i *int8Value) Type() string {
return "int8"
}
func (i *int8Value) String() string { return strconv.FormatInt(int64(*i), 10) }
func int8Conv(sval string) (interface{}, error) {
v, err := strconv.ParseInt(sval, 0, 8)
if err != nil {
return 0, err
}
return int8(v), nil
}
// GetInt8 return the int8 value of a flag with the given name
func (f *FlagSet) GetInt8(name string) (int8, error) {
val, err := f.getFlagType(name, "int8", int8Conv)
if err != nil {
return 0, err
}
return val.(int8), nil
}
// Int8Var defines an int8 flag with specified name, default value, and usage string.
// The argument p points to an int8 variable in which to store the value of the flag.
func (f *FlagSet) Int8Var(p *int8, name string, value int8, usage string) {
f.VarP(newInt8Value(value, p), name, "", usage)
}
// Int8VarP is like Int8Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Int8VarP(p *int8, name, shorthand string, value int8, usage string) {
f.VarP(newInt8Value(value, p), name, shorthand, usage)
}
// Int8Var defines an int8 flag with specified name, default value, and usage string.
// The argument p points to an int8 variable in which to store the value of the flag.
func Int8Var(p *int8, name string, value int8, usage string) {
CommandLine.VarP(newInt8Value(value, p), name, "", usage)
}
// Int8VarP is like Int8Var, but accepts a shorthand letter that can be used after a single dash.
func Int8VarP(p *int8, name, shorthand string, value int8, usage string) {
CommandLine.VarP(newInt8Value(value, p), name, shorthand, usage)
}
// Int8 defines an int8 flag with specified name, default value, and usage string.
// The return value is the address of an int8 variable that stores the value of the flag.
func (f *FlagSet) Int8(name string, value int8, usage string) *int8 {
p := new(int8)
f.Int8VarP(p, name, "", value, usage)
return p
}
// Int8P is like Int8, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Int8P(name, shorthand string, value int8, usage string) *int8 {
p := new(int8)
f.Int8VarP(p, name, shorthand, value, usage)
return p
}
// Int8 defines an int8 flag with specified name, default value, and usage string.
// The return value is the address of an int8 variable that stores the value of the flag.
func Int8(name string, value int8, usage string) *int8 {
return CommandLine.Int8P(name, "", value, usage)
}
// Int8P is like Int8, but accepts a shorthand letter that can be used after a single dash.
func Int8P(name, shorthand string, value int8, usage string) *int8 {
return CommandLine.Int8P(name, shorthand, value, usage)
}


@@ -1,128 +0,0 @@
package pflag
import (
"fmt"
"strconv"
"strings"
)
// -- intSlice Value
type intSliceValue struct {
value *[]int
changed bool
}
func newIntSliceValue(val []int, p *[]int) *intSliceValue {
isv := new(intSliceValue)
isv.value = p
*isv.value = val
return isv
}
func (s *intSliceValue) Set(val string) error {
ss := strings.Split(val, ",")
out := make([]int, len(ss))
for i, d := range ss {
var err error
out[i], err = strconv.Atoi(d)
if err != nil {
return err
}
}
if !s.changed {
*s.value = out
} else {
*s.value = append(*s.value, out...)
}
s.changed = true
return nil
}
func (s *intSliceValue) Type() string {
return "intSlice"
}
func (s *intSliceValue) String() string {
out := make([]string, len(*s.value))
for i, d := range *s.value {
out[i] = fmt.Sprintf("%d", d)
}
return "[" + strings.Join(out, ",") + "]"
}
func intSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
// Empty string would cause a slice with one (empty) entry
if len(val) == 0 {
return []int{}, nil
}
ss := strings.Split(val, ",")
out := make([]int, len(ss))
for i, d := range ss {
var err error
out[i], err = strconv.Atoi(d)
if err != nil {
return nil, err
}
}
return out, nil
}
// GetIntSlice return the []int value of a flag with the given name
func (f *FlagSet) GetIntSlice(name string) ([]int, error) {
val, err := f.getFlagType(name, "intSlice", intSliceConv)
if err != nil {
return []int{}, err
}
return val.([]int), nil
}
// IntSliceVar defines a intSlice flag with specified name, default value, and usage string.
// The argument p points to a []int variable in which to store the value of the flag.
func (f *FlagSet) IntSliceVar(p *[]int, name string, value []int, usage string) {
f.VarP(newIntSliceValue(value, p), name, "", usage)
}
// IntSliceVarP is like IntSliceVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IntSliceVarP(p *[]int, name, shorthand string, value []int, usage string) {
f.VarP(newIntSliceValue(value, p), name, shorthand, usage)
}
// IntSliceVar defines a []int flag with specified name, default value, and usage string.
// The argument p points to a []int variable in which to store the value of the flag.
func IntSliceVar(p *[]int, name string, value []int, usage string) {
CommandLine.VarP(newIntSliceValue(value, p), name, "", usage)
}
// IntSliceVarP is like IntSliceVar, but accepts a shorthand letter that can be used after a single dash.
func IntSliceVarP(p *[]int, name, shorthand string, value []int, usage string) {
CommandLine.VarP(newIntSliceValue(value, p), name, shorthand, usage)
}
// IntSlice defines a []int flag with specified name, default value, and usage string.
// The return value is the address of a []int variable that stores the value of the flag.
func (f *FlagSet) IntSlice(name string, value []int, usage string) *[]int {
p := []int{}
f.IntSliceVarP(&p, name, "", value, usage)
return &p
}
// IntSliceP is like IntSlice, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IntSliceP(name, shorthand string, value []int, usage string) *[]int {
p := []int{}
f.IntSliceVarP(&p, name, shorthand, value, usage)
return &p
}
// IntSlice defines a []int flag with specified name, default value, and usage string.
// The return value is the address of a []int variable that stores the value of the flag.
func IntSlice(name string, value []int, usage string) *[]int {
return CommandLine.IntSliceP(name, "", value, usage)
}
// IntSliceP is like IntSlice, but accepts a shorthand letter that can be used after a single dash.
func IntSliceP(name, shorthand string, value []int, usage string) *[]int {
return CommandLine.IntSliceP(name, shorthand, value, usage)
}

vendor/github.com/spf13/pflag/ip.go (94 lines, generated and vendored)

@@ -1,94 +0,0 @@
package pflag
import (
"fmt"
"net"
"strings"
)
// -- net.IP value
type ipValue net.IP
func newIPValue(val net.IP, p *net.IP) *ipValue {
*p = val
return (*ipValue)(p)
}
func (i *ipValue) String() string { return net.IP(*i).String() }
func (i *ipValue) Set(s string) error {
ip := net.ParseIP(strings.TrimSpace(s))
if ip == nil {
return fmt.Errorf("failed to parse IP: %q", s)
}
*i = ipValue(ip)
return nil
}
func (i *ipValue) Type() string {
return "ip"
}
func ipConv(sval string) (interface{}, error) {
ip := net.ParseIP(sval)
if ip != nil {
return ip, nil
}
return nil, fmt.Errorf("invalid string being converted to IP address: %s", sval)
}
// GetIP return the net.IP value of a flag with the given name
func (f *FlagSet) GetIP(name string) (net.IP, error) {
val, err := f.getFlagType(name, "ip", ipConv)
if err != nil {
return nil, err
}
return val.(net.IP), nil
}
// IPVar defines an net.IP flag with specified name, default value, and usage string.
// The argument p points to an net.IP variable in which to store the value of the flag.
func (f *FlagSet) IPVar(p *net.IP, name string, value net.IP, usage string) {
f.VarP(newIPValue(value, p), name, "", usage)
}
// IPVarP is like IPVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IPVarP(p *net.IP, name, shorthand string, value net.IP, usage string) {
f.VarP(newIPValue(value, p), name, shorthand, usage)
}
// IPVar defines an net.IP flag with specified name, default value, and usage string.
// The argument p points to an net.IP variable in which to store the value of the flag.
func IPVar(p *net.IP, name string, value net.IP, usage string) {
CommandLine.VarP(newIPValue(value, p), name, "", usage)
}
// IPVarP is like IPVar, but accepts a shorthand letter that can be used after a single dash.
func IPVarP(p *net.IP, name, shorthand string, value net.IP, usage string) {
CommandLine.VarP(newIPValue(value, p), name, shorthand, usage)
}
// IP defines an net.IP flag with specified name, default value, and usage string.
// The return value is the address of an net.IP variable that stores the value of the flag.
func (f *FlagSet) IP(name string, value net.IP, usage string) *net.IP {
p := new(net.IP)
f.IPVarP(p, name, "", value, usage)
return p
}
// IPP is like IP, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IPP(name, shorthand string, value net.IP, usage string) *net.IP {
p := new(net.IP)
f.IPVarP(p, name, shorthand, value, usage)
return p
}
// IP defines an net.IP flag with specified name, default value, and usage string.
// The return value is the address of an net.IP variable that stores the value of the flag.
func IP(name string, value net.IP, usage string) *net.IP {
return CommandLine.IPP(name, "", value, usage)
}
// IPP is like IP, but accepts a shorthand letter that can be used after a single dash.
func IPP(name, shorthand string, value net.IP, usage string) *net.IP {
return CommandLine.IPP(name, shorthand, value, usage)
}


@@ -1,148 +0,0 @@
package pflag
import (
"fmt"
"io"
"net"
"strings"
)
// -- ipSlice Value
type ipSliceValue struct {
value *[]net.IP
changed bool
}
func newIPSliceValue(val []net.IP, p *[]net.IP) *ipSliceValue {
ipsv := new(ipSliceValue)
ipsv.value = p
*ipsv.value = val
return ipsv
}
// Set converts, and assigns, the comma-separated IP argument string representation as the []net.IP value of this flag.
// If Set is called on a flag that already has a []net.IP assigned, the newly converted values will be appended.
func (s *ipSliceValue) Set(val string) error {
// remove all quote characters
rmQuote := strings.NewReplacer(`"`, "", `'`, "", "`", "")
// read flag arguments with CSV parser
ipStrSlice, err := readAsCSV(rmQuote.Replace(val))
if err != nil && err != io.EOF {
return err
}
// parse ip values into slice
out := make([]net.IP, 0, len(ipStrSlice))
for _, ipStr := range ipStrSlice {
ip := net.ParseIP(strings.TrimSpace(ipStr))
if ip == nil {
return fmt.Errorf("invalid string being converted to IP address: %s", ipStr)
}
out = append(out, ip)
}
if !s.changed {
*s.value = out
} else {
*s.value = append(*s.value, out...)
}
s.changed = true
return nil
}
// Type returns a string that uniquely represents this flag's type.
func (s *ipSliceValue) Type() string {
return "ipSlice"
}
// String defines a "native" format for this net.IP slice flag value.
func (s *ipSliceValue) String() string {
ipStrSlice := make([]string, len(*s.value))
for i, ip := range *s.value {
ipStrSlice[i] = ip.String()
}
out, _ := writeAsCSV(ipStrSlice)
return "[" + out + "]"
}
func ipSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
// Empty string would cause a slice with one (empty) entry
if len(val) == 0 {
return []net.IP{}, nil
}
ss := strings.Split(val, ",")
out := make([]net.IP, len(ss))
for i, sval := range ss {
ip := net.ParseIP(strings.TrimSpace(sval))
if ip == nil {
return nil, fmt.Errorf("invalid string being converted to IP address: %s", sval)
}
out[i] = ip
}
return out, nil
}
// GetIPSlice returns the []net.IP value of a flag with the given name
func (f *FlagSet) GetIPSlice(name string) ([]net.IP, error) {
val, err := f.getFlagType(name, "ipSlice", ipSliceConv)
if err != nil {
return []net.IP{}, err
}
return val.([]net.IP), nil
}
// IPSliceVar defines a ipSlice flag with specified name, default value, and usage string.
// The argument p points to a []net.IP variable in which to store the value of the flag.
func (f *FlagSet) IPSliceVar(p *[]net.IP, name string, value []net.IP, usage string) {
f.VarP(newIPSliceValue(value, p), name, "", usage)
}
// IPSliceVarP is like IPSliceVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IPSliceVarP(p *[]net.IP, name, shorthand string, value []net.IP, usage string) {
f.VarP(newIPSliceValue(value, p), name, shorthand, usage)
}
// IPSliceVar defines a []net.IP flag with specified name, default value, and usage string.
// The argument p points to a []net.IP variable in which to store the value of the flag.
func IPSliceVar(p *[]net.IP, name string, value []net.IP, usage string) {
CommandLine.VarP(newIPSliceValue(value, p), name, "", usage)
}
// IPSliceVarP is like IPSliceVar, but accepts a shorthand letter that can be used after a single dash.
func IPSliceVarP(p *[]net.IP, name, shorthand string, value []net.IP, usage string) {
CommandLine.VarP(newIPSliceValue(value, p), name, shorthand, usage)
}
// IPSlice defines a []net.IP flag with specified name, default value, and usage string.
// The return value is the address of a []net.IP variable that stores the value of that flag.
func (f *FlagSet) IPSlice(name string, value []net.IP, usage string) *[]net.IP {
p := []net.IP{}
f.IPSliceVarP(&p, name, "", value, usage)
return &p
}
// IPSliceP is like IPSlice, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IPSliceP(name, shorthand string, value []net.IP, usage string) *[]net.IP {
p := []net.IP{}
f.IPSliceVarP(&p, name, shorthand, value, usage)
return &p
}
// IPSlice defines a []net.IP flag with specified name, default value, and usage string.
// The return value is the address of a []net.IP variable that stores the value of the flag.
func IPSlice(name string, value []net.IP, usage string) *[]net.IP {
return CommandLine.IPSliceP(name, "", value, usage)
}
// IPSliceP is like IPSlice, but accepts a shorthand letter that can be used after a single dash.
func IPSliceP(name, shorthand string, value []net.IP, usage string) *[]net.IP {
return CommandLine.IPSliceP(name, shorthand, value, usage)
}
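A short sketch of the ipSlice flag in use (assuming the usual pflag import path): each comma-separated value is split by the CSV reader, and repeated flags append.
package main
import (
	"fmt"
	flag "github.com/spf13/pflag"
)
func main() {
	// e.g. --dns=8.8.8.8,1.1.1.1 --dns=9.9.9.9 yields three entries.
	dns := flag.IPSlice("dns", nil, "DNS servers")
	flag.Parse()
	fmt.Println(*dns)
}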

View File

@ -1,122 +0,0 @@
package pflag
import (
"fmt"
"net"
"strconv"
)
// -- net.IPMask value
type ipMaskValue net.IPMask
func newIPMaskValue(val net.IPMask, p *net.IPMask) *ipMaskValue {
*p = val
return (*ipMaskValue)(p)
}
func (i *ipMaskValue) String() string { return net.IPMask(*i).String() }
func (i *ipMaskValue) Set(s string) error {
ip := ParseIPv4Mask(s)
if ip == nil {
return fmt.Errorf("failed to parse IP mask: %q", s)
}
*i = ipMaskValue(ip)
return nil
}
func (i *ipMaskValue) Type() string {
return "ipMask"
}
// ParseIPv4Mask parses an IPv4 netmask written in IP form (e.g. 255.255.255.0).
// This function should really belong to the net package.
func ParseIPv4Mask(s string) net.IPMask {
mask := net.ParseIP(s)
if mask == nil {
if len(s) != 8 {
return nil
}
// net.IPMask.String() actually outputs things like ffffff00
// so write a horrible parser for that as well :-(
m := []int{}
for i := 0; i < 4; i++ {
b := "0x" + s[2*i:2*i+2]
d, err := strconv.ParseInt(b, 0, 0)
if err != nil {
return nil
}
m = append(m, int(d))
}
s := fmt.Sprintf("%d.%d.%d.%d", m[0], m[1], m[2], m[3])
mask = net.ParseIP(s)
if mask == nil {
return nil
}
}
return net.IPv4Mask(mask[12], mask[13], mask[14], mask[15])
}
func parseIPv4Mask(sval string) (interface{}, error) {
mask := ParseIPv4Mask(sval)
if mask == nil {
return nil, fmt.Errorf("unable to parse %s as net.IPMask", sval)
}
return mask, nil
}
// GetIPv4Mask returns the net.IPMask value of a flag with the given name
func (f *FlagSet) GetIPv4Mask(name string) (net.IPMask, error) {
val, err := f.getFlagType(name, "ipMask", parseIPv4Mask)
if err != nil {
return nil, err
}
return val.(net.IPMask), nil
}
// IPMaskVar defines an net.IPMask flag with specified name, default value, and usage string.
// The argument p points to an net.IPMask variable in which to store the value of the flag.
func (f *FlagSet) IPMaskVar(p *net.IPMask, name string, value net.IPMask, usage string) {
f.VarP(newIPMaskValue(value, p), name, "", usage)
}
// IPMaskVarP is like IPMaskVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IPMaskVarP(p *net.IPMask, name, shorthand string, value net.IPMask, usage string) {
f.VarP(newIPMaskValue(value, p), name, shorthand, usage)
}
// IPMaskVar defines an net.IPMask flag with specified name, default value, and usage string.
// The argument p points to an net.IPMask variable in which to store the value of the flag.
func IPMaskVar(p *net.IPMask, name string, value net.IPMask, usage string) {
CommandLine.VarP(newIPMaskValue(value, p), name, "", usage)
}
// IPMaskVarP is like IPMaskVar, but accepts a shorthand letter that can be used after a single dash.
func IPMaskVarP(p *net.IPMask, name, shorthand string, value net.IPMask, usage string) {
CommandLine.VarP(newIPMaskValue(value, p), name, shorthand, usage)
}
// IPMask defines an net.IPMask flag with specified name, default value, and usage string.
// The return value is the address of an net.IPMask variable that stores the value of the flag.
func (f *FlagSet) IPMask(name string, value net.IPMask, usage string) *net.IPMask {
p := new(net.IPMask)
f.IPMaskVarP(p, name, "", value, usage)
return p
}
// IPMaskP is like IPMask, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IPMaskP(name, shorthand string, value net.IPMask, usage string) *net.IPMask {
p := new(net.IPMask)
f.IPMaskVarP(p, name, shorthand, value, usage)
return p
}
// IPMask defines an net.IPMask flag with specified name, default value, and usage string.
// The return value is the address of an net.IPMask variable that stores the value of the flag.
func IPMask(name string, value net.IPMask, usage string) *net.IPMask {
return CommandLine.IPMaskP(name, "", value, usage)
}
// IPMaskP is like IPMask, but accepts a shorthand letter that can be used after a single dash.
func IPMaskP(name, shorthand string, value net.IPMask, usage string) *net.IPMask {
return CommandLine.IPMaskP(name, shorthand, value, usage)
}
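ParseIPv4Mask accepts both the dotted-decimal form and the hex form that net.IPMask.String() prints; a small sketch (import path assumed):
package main
import (
	"fmt"
	flag "github.com/spf13/pflag"
)
func main() {
	fmt.Println(flag.ParseIPv4Mask("255.255.255.0")) // ffffff00
	fmt.Println(flag.ParseIPv4Mask("ffffff00"))      // ffffff00
	// --netmask is stored as a net.IPMask and validated by ParseIPv4Mask.
	mask := flag.IPMask("netmask", flag.ParseIPv4Mask("255.255.255.0"), "subnet mask")
	flag.Parse()
	fmt.Println(*mask)
}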

View File

@ -1,98 +0,0 @@
package pflag
import (
"fmt"
"net"
"strings"
)
// IPNet adapts net.IPNet for use as a flag.
type ipNetValue net.IPNet
func (ipnet ipNetValue) String() string {
n := net.IPNet(ipnet)
return n.String()
}
func (ipnet *ipNetValue) Set(value string) error {
_, n, err := net.ParseCIDR(strings.TrimSpace(value))
if err != nil {
return err
}
*ipnet = ipNetValue(*n)
return nil
}
func (*ipNetValue) Type() string {
return "ipNet"
}
func newIPNetValue(val net.IPNet, p *net.IPNet) *ipNetValue {
*p = val
return (*ipNetValue)(p)
}
func ipNetConv(sval string) (interface{}, error) {
_, n, err := net.ParseCIDR(strings.TrimSpace(sval))
if err == nil {
return *n, nil
}
return nil, fmt.Errorf("invalid string being converted to IPNet: %s", sval)
}
// GetIPNet returns the net.IPNet value of a flag with the given name
func (f *FlagSet) GetIPNet(name string) (net.IPNet, error) {
val, err := f.getFlagType(name, "ipNet", ipNetConv)
if err != nil {
return net.IPNet{}, err
}
return val.(net.IPNet), nil
}
// IPNetVar defines an net.IPNet flag with specified name, default value, and usage string.
// The argument p points to an net.IPNet variable in which to store the value of the flag.
func (f *FlagSet) IPNetVar(p *net.IPNet, name string, value net.IPNet, usage string) {
f.VarP(newIPNetValue(value, p), name, "", usage)
}
// IPNetVarP is like IPNetVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IPNetVarP(p *net.IPNet, name, shorthand string, value net.IPNet, usage string) {
f.VarP(newIPNetValue(value, p), name, shorthand, usage)
}
// IPNetVar defines an net.IPNet flag with specified name, default value, and usage string.
// The argument p points to an net.IPNet variable in which to store the value of the flag.
func IPNetVar(p *net.IPNet, name string, value net.IPNet, usage string) {
CommandLine.VarP(newIPNetValue(value, p), name, "", usage)
}
// IPNetVarP is like IPNetVar, but accepts a shorthand letter that can be used after a single dash.
func IPNetVarP(p *net.IPNet, name, shorthand string, value net.IPNet, usage string) {
CommandLine.VarP(newIPNetValue(value, p), name, shorthand, usage)
}
// IPNet defines an net.IPNet flag with specified name, default value, and usage string.
// The return value is the address of an net.IPNet variable that stores the value of the flag.
func (f *FlagSet) IPNet(name string, value net.IPNet, usage string) *net.IPNet {
p := new(net.IPNet)
f.IPNetVarP(p, name, "", value, usage)
return p
}
// IPNetP is like IPNet, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) IPNetP(name, shorthand string, value net.IPNet, usage string) *net.IPNet {
p := new(net.IPNet)
f.IPNetVarP(p, name, shorthand, value, usage)
return p
}
// IPNet defines an net.IPNet flag with specified name, default value, and usage string.
// The return value is the address of an net.IPNet variable that stores the value of the flag.
func IPNet(name string, value net.IPNet, usage string) *net.IPNet {
return CommandLine.IPNetP(name, "", value, usage)
}
// IPNetP is like IPNet, but accepts a shorthand letter that can be used after a single dash.
func IPNetP(name, shorthand string, value net.IPNet, usage string) *net.IPNet {
return CommandLine.IPNetP(name, shorthand, value, usage)
}
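The ipNet value is parsed with net.ParseCIDR, so invalid CIDRs fail flag parsing; a sketch (import path assumed):
package main
import (
	"fmt"
	"net"
	flag "github.com/spf13/pflag"
)
func main() {
	_, def, _ := net.ParseCIDR("10.0.0.0/8")
	// --cidr defaults to 10.0.0.0/8 and accepts any value net.ParseCIDR accepts.
	cidr := flag.IPNet("cidr", *def, "service CIDR")
	flag.Parse()
	fmt.Println(cidr.String())
}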

View File

@ -1,80 +0,0 @@
package pflag
// -- string Value
type stringValue string
func newStringValue(val string, p *string) *stringValue {
*p = val
return (*stringValue)(p)
}
func (s *stringValue) Set(val string) error {
*s = stringValue(val)
return nil
}
func (s *stringValue) Type() string {
return "string"
}
func (s *stringValue) String() string { return string(*s) }
func stringConv(sval string) (interface{}, error) {
return sval, nil
}
// GetString returns the string value of a flag with the given name
func (f *FlagSet) GetString(name string) (string, error) {
val, err := f.getFlagType(name, "string", stringConv)
if err != nil {
return "", err
}
return val.(string), nil
}
// StringVar defines a string flag with specified name, default value, and usage string.
// The argument p points to a string variable in which to store the value of the flag.
func (f *FlagSet) StringVar(p *string, name string, value string, usage string) {
f.VarP(newStringValue(value, p), name, "", usage)
}
// StringVarP is like StringVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) StringVarP(p *string, name, shorthand string, value string, usage string) {
f.VarP(newStringValue(value, p), name, shorthand, usage)
}
// StringVar defines a string flag with specified name, default value, and usage string.
// The argument p points to a string variable in which to store the value of the flag.
func StringVar(p *string, name string, value string, usage string) {
CommandLine.VarP(newStringValue(value, p), name, "", usage)
}
// StringVarP is like StringVar, but accepts a shorthand letter that can be used after a single dash.
func StringVarP(p *string, name, shorthand string, value string, usage string) {
CommandLine.VarP(newStringValue(value, p), name, shorthand, usage)
}
// String defines a string flag with specified name, default value, and usage string.
// The return value is the address of a string variable that stores the value of the flag.
func (f *FlagSet) String(name string, value string, usage string) *string {
p := new(string)
f.StringVarP(p, name, "", value, usage)
return p
}
// StringP is like String, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) StringP(name, shorthand string, value string, usage string) *string {
p := new(string)
f.StringVarP(p, name, shorthand, value, usage)
return p
}
// String defines a string flag with specified name, default value, and usage string.
// The return value is the address of a string variable that stores the value of the flag.
func String(name string, value string, usage string) *string {
return CommandLine.StringP(name, "", value, usage)
}
// StringP is like String, but accepts a shorthand letter that can be used after a single dash.
func StringP(name, shorthand string, value string, usage string) *string {
return CommandLine.StringP(name, shorthand, value, usage)
}
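The string flag is the simplest Value implementation; a sketch of the Var/shorthand form (import path assumed):
package main
import (
	"fmt"
	flag "github.com/spf13/pflag"
)
func main() {
	var out string
	// -o / --output writes directly into the out variable; "json" is the default.
	flag.StringVarP(&out, "output", "o", "json", "output format")
	flag.Parse()
	fmt.Println("output format:", out)
}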

View File

@ -1,103 +0,0 @@
package pflag
// -- stringArray Value
type stringArrayValue struct {
value *[]string
changed bool
}
func newStringArrayValue(val []string, p *[]string) *stringArrayValue {
ssv := new(stringArrayValue)
ssv.value = p
*ssv.value = val
return ssv
}
func (s *stringArrayValue) Set(val string) error {
if !s.changed {
*s.value = []string{val}
s.changed = true
} else {
*s.value = append(*s.value, val)
}
return nil
}
func (s *stringArrayValue) Type() string {
return "stringArray"
}
func (s *stringArrayValue) String() string {
str, _ := writeAsCSV(*s.value)
return "[" + str + "]"
}
func stringArrayConv(sval string) (interface{}, error) {
sval = sval[1 : len(sval)-1]
// An empty string would cause an array with one (empty) string
if len(sval) == 0 {
return []string{}, nil
}
return readAsCSV(sval)
}
// GetStringArray returns the []string value of a flag with the given name
func (f *FlagSet) GetStringArray(name string) ([]string, error) {
val, err := f.getFlagType(name, "stringArray", stringArrayConv)
if err != nil {
return []string{}, err
}
return val.([]string), nil
}
// StringArrayVar defines a string flag with specified name, default value, and usage string.
// The argument p points to a []string variable in which to store the values of the multiple flags.
// The value of each argument is not split on commas.
func (f *FlagSet) StringArrayVar(p *[]string, name string, value []string, usage string) {
f.VarP(newStringArrayValue(value, p), name, "", usage)
}
// StringArrayVarP is like StringArrayVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) StringArrayVarP(p *[]string, name, shorthand string, value []string, usage string) {
f.VarP(newStringArrayValue(value, p), name, shorthand, usage)
}
// StringArrayVar defines a string flag with specified name, default value, and usage string.
// The argument p points to a []string variable in which to store the value of the flag.
// The value of each argument is not split on commas.
func StringArrayVar(p *[]string, name string, value []string, usage string) {
CommandLine.VarP(newStringArrayValue(value, p), name, "", usage)
}
// StringArrayVarP is like StringArrayVar, but accepts a shorthand letter that can be used after a single dash.
func StringArrayVarP(p *[]string, name, shorthand string, value []string, usage string) {
CommandLine.VarP(newStringArrayValue(value, p), name, shorthand, usage)
}
// StringArray defines a string flag with specified name, default value, and usage string.
// The return value is the address of a []string variable that stores the value of the flag.
// The value of each argument is not split on commas.
func (f *FlagSet) StringArray(name string, value []string, usage string) *[]string {
p := []string{}
f.StringArrayVarP(&p, name, "", value, usage)
return &p
}
// StringArrayP is like StringArray, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) StringArrayP(name, shorthand string, value []string, usage string) *[]string {
p := []string{}
f.StringArrayVarP(&p, name, shorthand, value, usage)
return &p
}
// StringArray defines a string flag with specified name, default value, and usage string.
// The return value is the address of a []string variable that stores the value of the flag.
// The value of each argument is not split on commas.
func StringArray(name string, value []string, usage string) *[]string {
return CommandLine.StringArrayP(name, "", value, usage)
}
// StringArrayP is like StringArray, but accepts a shorthand letter that can be used after a single dash.
func StringArrayP(name, shorthand string, value []string, usage string) *[]string {
return CommandLine.StringArrayP(name, shorthand, value, usage)
}
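Unlike stringSlice below, stringArray keeps each argument verbatim; a sketch (import path assumed):
package main
import (
	"fmt"
	flag "github.com/spf13/pflag"
)
func main() {
	// --label=a,b --label=c yields ["a,b" "c"]; values are never split on commas.
	labels := flag.StringArray("label", nil, "labels (repeatable)")
	flag.Parse()
	fmt.Println(*labels)
}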

View File

@ -1,129 +0,0 @@
package pflag
import (
"bytes"
"encoding/csv"
"strings"
)
// -- stringSlice Value
type stringSliceValue struct {
value *[]string
changed bool
}
func newStringSliceValue(val []string, p *[]string) *stringSliceValue {
ssv := new(stringSliceValue)
ssv.value = p
*ssv.value = val
return ssv
}
func readAsCSV(val string) ([]string, error) {
if val == "" {
return []string{}, nil
}
stringReader := strings.NewReader(val)
csvReader := csv.NewReader(stringReader)
return csvReader.Read()
}
func writeAsCSV(vals []string) (string, error) {
b := &bytes.Buffer{}
w := csv.NewWriter(b)
err := w.Write(vals)
if err != nil {
return "", err
}
w.Flush()
return strings.TrimSuffix(b.String(), "\n"), nil
}
func (s *stringSliceValue) Set(val string) error {
v, err := readAsCSV(val)
if err != nil {
return err
}
if !s.changed {
*s.value = v
} else {
*s.value = append(*s.value, v...)
}
s.changed = true
return nil
}
func (s *stringSliceValue) Type() string {
return "stringSlice"
}
func (s *stringSliceValue) String() string {
str, _ := writeAsCSV(*s.value)
return "[" + str + "]"
}
func stringSliceConv(sval string) (interface{}, error) {
sval = sval[1 : len(sval)-1]
// An empty string would cause a slice with one (empty) string
if len(sval) == 0 {
return []string{}, nil
}
return readAsCSV(sval)
}
// GetStringSlice returns the []string value of a flag with the given name
func (f *FlagSet) GetStringSlice(name string) ([]string, error) {
val, err := f.getFlagType(name, "stringSlice", stringSliceConv)
if err != nil {
return []string{}, err
}
return val.([]string), nil
}
// StringSliceVar defines a string flag with specified name, default value, and usage string.
// The argument p points to a []string variable in which to store the value of the flag.
func (f *FlagSet) StringSliceVar(p *[]string, name string, value []string, usage string) {
f.VarP(newStringSliceValue(value, p), name, "", usage)
}
// StringSliceVarP is like StringSliceVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) StringSliceVarP(p *[]string, name, shorthand string, value []string, usage string) {
f.VarP(newStringSliceValue(value, p), name, shorthand, usage)
}
// StringSliceVar defines a string flag with specified name, default value, and usage string.
// The argument p points to a []string variable in which to store the value of the flag.
func StringSliceVar(p *[]string, name string, value []string, usage string) {
CommandLine.VarP(newStringSliceValue(value, p), name, "", usage)
}
// StringSliceVarP is like StringSliceVar, but accepts a shorthand letter that can be used after a single dash.
func StringSliceVarP(p *[]string, name, shorthand string, value []string, usage string) {
CommandLine.VarP(newStringSliceValue(value, p), name, shorthand, usage)
}
// StringSlice defines a string flag with specified name, default value, and usage string.
// The return value is the address of a []string variable that stores the value of the flag.
func (f *FlagSet) StringSlice(name string, value []string, usage string) *[]string {
p := []string{}
f.StringSliceVarP(&p, name, "", value, usage)
return &p
}
// StringSliceP is like StringSlice, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) StringSliceP(name, shorthand string, value []string, usage string) *[]string {
p := []string{}
f.StringSliceVarP(&p, name, shorthand, value, usage)
return &p
}
// StringSlice defines a string flag with specified name, default value, and usage string.
// The return value is the address of a []string variable that stores the value of the flag.
func StringSlice(name string, value []string, usage string) *[]string {
return CommandLine.StringSliceP(name, "", value, usage)
}
// StringSliceP is like StringSlice, but accepts a shorthand letter that can be used after a single dash.
func StringSliceP(name, shorthand string, value []string, usage string) *[]string {
return CommandLine.StringSliceP(name, shorthand, value, usage)
}
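For comparison with stringArray, stringSlice splits each value with the CSV reader and appends across repeats; a sketch (import path assumed):
package main
import (
	"fmt"
	flag "github.com/spf13/pflag"
)
func main() {
	// --tag=a,b --tag=c yields ["a" "b" "c"].
	tags := flag.StringSlice("tag", []string{"default"}, "tags")
	flag.Parse()
	fmt.Println(*tags)
}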

View File

@ -1,88 +0,0 @@
package pflag
import "strconv"
// -- uint Value
type uintValue uint
func newUintValue(val uint, p *uint) *uintValue {
*p = val
return (*uintValue)(p)
}
func (i *uintValue) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 64)
*i = uintValue(v)
return err
}
func (i *uintValue) Type() string {
return "uint"
}
func (i *uintValue) String() string { return strconv.FormatUint(uint64(*i), 10) }
func uintConv(sval string) (interface{}, error) {
v, err := strconv.ParseUint(sval, 0, 0)
if err != nil {
return 0, err
}
return uint(v), nil
}
// GetUint returns the uint value of a flag with the given name
func (f *FlagSet) GetUint(name string) (uint, error) {
val, err := f.getFlagType(name, "uint", uintConv)
if err != nil {
return 0, err
}
return val.(uint), nil
}
// UintVar defines a uint flag with specified name, default value, and usage string.
// The argument p points to a uint variable in which to store the value of the flag.
func (f *FlagSet) UintVar(p *uint, name string, value uint, usage string) {
f.VarP(newUintValue(value, p), name, "", usage)
}
// UintVarP is like UintVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) UintVarP(p *uint, name, shorthand string, value uint, usage string) {
f.VarP(newUintValue(value, p), name, shorthand, usage)
}
// UintVar defines a uint flag with specified name, default value, and usage string.
// The argument p points to a uint variable in which to store the value of the flag.
func UintVar(p *uint, name string, value uint, usage string) {
CommandLine.VarP(newUintValue(value, p), name, "", usage)
}
// UintVarP is like UintVar, but accepts a shorthand letter that can be used after a single dash.
func UintVarP(p *uint, name, shorthand string, value uint, usage string) {
CommandLine.VarP(newUintValue(value, p), name, shorthand, usage)
}
// Uint defines a uint flag with specified name, default value, and usage string.
// The return value is the address of a uint variable that stores the value of the flag.
func (f *FlagSet) Uint(name string, value uint, usage string) *uint {
p := new(uint)
f.UintVarP(p, name, "", value, usage)
return p
}
// UintP is like Uint, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) UintP(name, shorthand string, value uint, usage string) *uint {
p := new(uint)
f.UintVarP(p, name, shorthand, value, usage)
return p
}
// Uint defines a uint flag with specified name, default value, and usage string.
// The return value is the address of a uint variable that stores the value of the flag.
func Uint(name string, value uint, usage string) *uint {
return CommandLine.UintP(name, "", value, usage)
}
// UintP is like Uint, but accepts a shorthand letter that can be used after a single dash.
func UintP(name, shorthand string, value uint, usage string) *uint {
return CommandLine.UintP(name, shorthand, value, usage)
}
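The Get* helpers go through getFlagType, so the registered type name must match; a sketch using a dedicated FlagSet (NewFlagSet and ContinueOnError are assumed to be pflag's stdlib-style entry points):
package main
import (
	"fmt"
	flag "github.com/spf13/pflag"
)
func main() {
	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
	fs.Uint("workers", 4, "number of workers")
	_ = fs.Parse([]string{"--workers=8"})
	// GetUint fails if "workers" had been registered with a type name other than "uint".
	n, err := fs.GetUint("workers")
	fmt.Println(n, err)
}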

View File

@ -1,88 +0,0 @@
package pflag
import "strconv"
// -- uint16 value
type uint16Value uint16
func newUint16Value(val uint16, p *uint16) *uint16Value {
*p = val
return (*uint16Value)(p)
}
func (i *uint16Value) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 16)
*i = uint16Value(v)
return err
}
func (i *uint16Value) Type() string {
return "uint16"
}
func (i *uint16Value) String() string { return strconv.FormatUint(uint64(*i), 10) }
func uint16Conv(sval string) (interface{}, error) {
v, err := strconv.ParseUint(sval, 0, 16)
if err != nil {
return 0, err
}
return uint16(v), nil
}
// GetUint16 returns the uint16 value of a flag with the given name
func (f *FlagSet) GetUint16(name string) (uint16, error) {
val, err := f.getFlagType(name, "uint16", uint16Conv)
if err != nil {
return 0, err
}
return val.(uint16), nil
}
// Uint16Var defines a uint16 flag with specified name, default value, and usage string.
// The argument p points to a uint16 variable in which to store the value of the flag.
func (f *FlagSet) Uint16Var(p *uint16, name string, value uint16, usage string) {
f.VarP(newUint16Value(value, p), name, "", usage)
}
// Uint16VarP is like Uint16Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Uint16VarP(p *uint16, name, shorthand string, value uint16, usage string) {
f.VarP(newUint16Value(value, p), name, shorthand, usage)
}
// Uint16Var defines a uint16 flag with specified name, default value, and usage string.
// The argument p points to a uint16 variable in which to store the value of the flag.
func Uint16Var(p *uint16, name string, value uint16, usage string) {
CommandLine.VarP(newUint16Value(value, p), name, "", usage)
}
// Uint16VarP is like Uint16Var, but accepts a shorthand letter that can be used after a single dash.
func Uint16VarP(p *uint16, name, shorthand string, value uint16, usage string) {
CommandLine.VarP(newUint16Value(value, p), name, shorthand, usage)
}
// Uint16 defines a uint16 flag with specified name, default value, and usage string.
// The return value is the address of a uint16 variable that stores the value of the flag.
func (f *FlagSet) Uint16(name string, value uint16, usage string) *uint16 {
p := new(uint16)
f.Uint16VarP(p, name, "", value, usage)
return p
}
// Uint16P is like Uint16, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Uint16P(name, shorthand string, value uint16, usage string) *uint16 {
p := new(uint16)
f.Uint16VarP(p, name, shorthand, value, usage)
return p
}
// Uint16 defines a uint16 flag with specified name, default value, and usage string.
// The return value is the address of a uint16 variable that stores the value of the flag.
func Uint16(name string, value uint16, usage string) *uint16 {
return CommandLine.Uint16P(name, "", value, usage)
}
// Uint16P is like Uint16, but accepts a shorthand letter that can be used after a single dash.
func Uint16P(name, shorthand string, value uint16, usage string) *uint16 {
return CommandLine.Uint16P(name, shorthand, value, usage)
}

View File

@ -1,88 +0,0 @@
package pflag
import "strconv"
// -- uint32 value
type uint32Value uint32
func newUint32Value(val uint32, p *uint32) *uint32Value {
*p = val
return (*uint32Value)(p)
}
func (i *uint32Value) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 32)
*i = uint32Value(v)
return err
}
func (i *uint32Value) Type() string {
return "uint32"
}
func (i *uint32Value) String() string { return strconv.FormatUint(uint64(*i), 10) }
func uint32Conv(sval string) (interface{}, error) {
v, err := strconv.ParseUint(sval, 0, 32)
if err != nil {
return 0, err
}
return uint32(v), nil
}
// GetUint32 returns the uint32 value of a flag with the given name
func (f *FlagSet) GetUint32(name string) (uint32, error) {
val, err := f.getFlagType(name, "uint32", uint32Conv)
if err != nil {
return 0, err
}
return val.(uint32), nil
}
// Uint32Var defines a uint32 flag with specified name, default value, and usage string.
// The argument p points to a uint32 variable in which to store the value of the flag.
func (f *FlagSet) Uint32Var(p *uint32, name string, value uint32, usage string) {
f.VarP(newUint32Value(value, p), name, "", usage)
}
// Uint32VarP is like Uint32Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Uint32VarP(p *uint32, name, shorthand string, value uint32, usage string) {
f.VarP(newUint32Value(value, p), name, shorthand, usage)
}
// Uint32Var defines a uint32 flag with specified name, default value, and usage string.
// The argument p points to a uint32 variable in which to store the value of the flag.
func Uint32Var(p *uint32, name string, value uint32, usage string) {
CommandLine.VarP(newUint32Value(value, p), name, "", usage)
}
// Uint32VarP is like Uint32Var, but accepts a shorthand letter that can be used after a single dash.
func Uint32VarP(p *uint32, name, shorthand string, value uint32, usage string) {
CommandLine.VarP(newUint32Value(value, p), name, shorthand, usage)
}
// Uint32 defines a uint32 flag with specified name, default value, and usage string.
// The return value is the address of a uint32 variable that stores the value of the flag.
func (f *FlagSet) Uint32(name string, value uint32, usage string) *uint32 {
p := new(uint32)
f.Uint32VarP(p, name, "", value, usage)
return p
}
// Uint32P is like Uint32, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Uint32P(name, shorthand string, value uint32, usage string) *uint32 {
p := new(uint32)
f.Uint32VarP(p, name, shorthand, value, usage)
return p
}
// Uint32 defines a uint32 flag with specified name, default value, and usage string.
// The return value is the address of a uint32 variable that stores the value of the flag.
func Uint32(name string, value uint32, usage string) *uint32 {
return CommandLine.Uint32P(name, "", value, usage)
}
// Uint32P is like Uint32, but accepts a shorthand letter that can be used after a single dash.
func Uint32P(name, shorthand string, value uint32, usage string) *uint32 {
return CommandLine.Uint32P(name, shorthand, value, usage)
}

View File

@ -1,88 +0,0 @@
package pflag
import "strconv"
// -- uint64 Value
type uint64Value uint64
func newUint64Value(val uint64, p *uint64) *uint64Value {
*p = val
return (*uint64Value)(p)
}
func (i *uint64Value) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 64)
*i = uint64Value(v)
return err
}
func (i *uint64Value) Type() string {
return "uint64"
}
func (i *uint64Value) String() string { return strconv.FormatUint(uint64(*i), 10) }
func uint64Conv(sval string) (interface{}, error) {
v, err := strconv.ParseUint(sval, 0, 64)
if err != nil {
return 0, err
}
return uint64(v), nil
}
// GetUint64 returns the uint64 value of a flag with the given name
func (f *FlagSet) GetUint64(name string) (uint64, error) {
val, err := f.getFlagType(name, "uint64", uint64Conv)
if err != nil {
return 0, err
}
return val.(uint64), nil
}
// Uint64Var defines a uint64 flag with specified name, default value, and usage string.
// The argument p points to a uint64 variable in which to store the value of the flag.
func (f *FlagSet) Uint64Var(p *uint64, name string, value uint64, usage string) {
f.VarP(newUint64Value(value, p), name, "", usage)
}
// Uint64VarP is like Uint64Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Uint64VarP(p *uint64, name, shorthand string, value uint64, usage string) {
f.VarP(newUint64Value(value, p), name, shorthand, usage)
}
// Uint64Var defines a uint64 flag with specified name, default value, and usage string.
// The argument p points to a uint64 variable in which to store the value of the flag.
func Uint64Var(p *uint64, name string, value uint64, usage string) {
CommandLine.VarP(newUint64Value(value, p), name, "", usage)
}
// Uint64VarP is like Uint64Var, but accepts a shorthand letter that can be used after a single dash.
func Uint64VarP(p *uint64, name, shorthand string, value uint64, usage string) {
CommandLine.VarP(newUint64Value(value, p), name, shorthand, usage)
}
// Uint64 defines a uint64 flag with specified name, default value, and usage string.
// The return value is the address of a uint64 variable that stores the value of the flag.
func (f *FlagSet) Uint64(name string, value uint64, usage string) *uint64 {
p := new(uint64)
f.Uint64VarP(p, name, "", value, usage)
return p
}
// Uint64P is like Uint64, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Uint64P(name, shorthand string, value uint64, usage string) *uint64 {
p := new(uint64)
f.Uint64VarP(p, name, shorthand, value, usage)
return p
}
// Uint64 defines a uint64 flag with specified name, default value, and usage string.
// The return value is the address of a uint64 variable that stores the value of the flag.
func Uint64(name string, value uint64, usage string) *uint64 {
return CommandLine.Uint64P(name, "", value, usage)
}
// Uint64P is like Uint64, but accepts a shorthand letter that can be used after a single dash.
func Uint64P(name, shorthand string, value uint64, usage string) *uint64 {
return CommandLine.Uint64P(name, shorthand, value, usage)
}

View File

@ -1,88 +0,0 @@
package pflag
import "strconv"
// -- uint8 Value
type uint8Value uint8
func newUint8Value(val uint8, p *uint8) *uint8Value {
*p = val
return (*uint8Value)(p)
}
func (i *uint8Value) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 8)
*i = uint8Value(v)
return err
}
func (i *uint8Value) Type() string {
return "uint8"
}
func (i *uint8Value) String() string { return strconv.FormatUint(uint64(*i), 10) }
func uint8Conv(sval string) (interface{}, error) {
v, err := strconv.ParseUint(sval, 0, 8)
if err != nil {
return 0, err
}
return uint8(v), nil
}
// GetUint8 returns the uint8 value of a flag with the given name
func (f *FlagSet) GetUint8(name string) (uint8, error) {
val, err := f.getFlagType(name, "uint8", uint8Conv)
if err != nil {
return 0, err
}
return val.(uint8), nil
}
// Uint8Var defines a uint8 flag with specified name, default value, and usage string.
// The argument p points to a uint8 variable in which to store the value of the flag.
func (f *FlagSet) Uint8Var(p *uint8, name string, value uint8, usage string) {
f.VarP(newUint8Value(value, p), name, "", usage)
}
// Uint8VarP is like Uint8Var, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Uint8VarP(p *uint8, name, shorthand string, value uint8, usage string) {
f.VarP(newUint8Value(value, p), name, shorthand, usage)
}
// Uint8Var defines a uint8 flag with specified name, default value, and usage string.
// The argument p points to a uint8 variable in which to store the value of the flag.
func Uint8Var(p *uint8, name string, value uint8, usage string) {
CommandLine.VarP(newUint8Value(value, p), name, "", usage)
}
// Uint8VarP is like Uint8Var, but accepts a shorthand letter that can be used after a single dash.
func Uint8VarP(p *uint8, name, shorthand string, value uint8, usage string) {
CommandLine.VarP(newUint8Value(value, p), name, shorthand, usage)
}
// Uint8 defines a uint8 flag with specified name, default value, and usage string.
// The return value is the address of a uint8 variable that stores the value of the flag.
func (f *FlagSet) Uint8(name string, value uint8, usage string) *uint8 {
p := new(uint8)
f.Uint8VarP(p, name, "", value, usage)
return p
}
// Uint8P is like Uint8, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) Uint8P(name, shorthand string, value uint8, usage string) *uint8 {
p := new(uint8)
f.Uint8VarP(p, name, shorthand, value, usage)
return p
}
// Uint8 defines a uint8 flag with specified name, default value, and usage string.
// The return value is the address of a uint8 variable that stores the value of the flag.
func Uint8(name string, value uint8, usage string) *uint8 {
return CommandLine.Uint8P(name, "", value, usage)
}
// Uint8P is like Uint8, but accepts a shorthand letter that can be used after a single dash.
func Uint8P(name, shorthand string, value uint8, usage string) *uint8 {
return CommandLine.Uint8P(name, shorthand, value, usage)
}

View File

@ -1,126 +0,0 @@
package pflag
import (
"fmt"
"strconv"
"strings"
)
// -- uintSlice Value
type uintSliceValue struct {
value *[]uint
changed bool
}
func newUintSliceValue(val []uint, p *[]uint) *uintSliceValue {
uisv := new(uintSliceValue)
uisv.value = p
*uisv.value = val
return uisv
}
func (s *uintSliceValue) Set(val string) error {
ss := strings.Split(val, ",")
out := make([]uint, len(ss))
for i, d := range ss {
u, err := strconv.ParseUint(d, 10, 0)
if err != nil {
return err
}
out[i] = uint(u)
}
if !s.changed {
*s.value = out
} else {
*s.value = append(*s.value, out...)
}
s.changed = true
return nil
}
func (s *uintSliceValue) Type() string {
return "uintSlice"
}
func (s *uintSliceValue) String() string {
out := make([]string, len(*s.value))
for i, d := range *s.value {
out[i] = fmt.Sprintf("%d", d)
}
return "[" + strings.Join(out, ",") + "]"
}
func uintSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
// Empty string would cause a slice with one (empty) entry
if len(val) == 0 {
return []uint{}, nil
}
ss := strings.Split(val, ",")
out := make([]uint, len(ss))
for i, d := range ss {
u, err := strconv.ParseUint(d, 10, 0)
if err != nil {
return nil, err
}
out[i] = uint(u)
}
return out, nil
}
// GetUintSlice returns the []uint value of a flag with the given name.
func (f *FlagSet) GetUintSlice(name string) ([]uint, error) {
val, err := f.getFlagType(name, "uintSlice", uintSliceConv)
if err != nil {
return []uint{}, err
}
return val.([]uint), nil
}
// UintSliceVar defines a uintSlice flag with specified name, default value, and usage string.
// The argument p points to a []uint variable in which to store the value of the flag.
func (f *FlagSet) UintSliceVar(p *[]uint, name string, value []uint, usage string) {
f.VarP(newUintSliceValue(value, p), name, "", usage)
}
// UintSliceVarP is like UintSliceVar, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) UintSliceVarP(p *[]uint, name, shorthand string, value []uint, usage string) {
f.VarP(newUintSliceValue(value, p), name, shorthand, usage)
}
// UintSliceVar defines a []uint flag with specified name, default value, and usage string.
// The argument p points to a []uint variable in which to store the value of the flag.
func UintSliceVar(p *[]uint, name string, value []uint, usage string) {
CommandLine.VarP(newUintSliceValue(value, p), name, "", usage)
}
// UintSliceVarP is like UintSliceVar, but accepts a shorthand letter that can be used after a single dash.
func UintSliceVarP(p *[]uint, name, shorthand string, value []uint, usage string) {
CommandLine.VarP(newUintSliceValue(value, p), name, shorthand, usage)
}
// UintSlice defines a []uint flag with specified name, default value, and usage string.
// The return value is the address of a []uint variable that stores the value of the flag.
func (f *FlagSet) UintSlice(name string, value []uint, usage string) *[]uint {
p := []uint{}
f.UintSliceVarP(&p, name, "", value, usage)
return &p
}
// UintSliceP is like UintSlice, but accepts a shorthand letter that can be used after a single dash.
func (f *FlagSet) UintSliceP(name, shorthand string, value []uint, usage string) *[]uint {
p := []uint{}
f.UintSliceVarP(&p, name, shorthand, value, usage)
return &p
}
// UintSlice defines a []uint flag with specified name, default value, and usage string.
// The return value is the address of a []uint variable that stores the value of the flag.
func UintSlice(name string, value []uint, usage string) *[]uint {
return CommandLine.UintSliceP(name, "", value, usage)
}
// UintSliceP is like UintSlice, but accepts a shorthand letter that can be used after a single dash.
func UintSliceP(name, shorthand string, value []uint, usage string) *[]uint {
return CommandLine.UintSliceP(name, shorthand, value, usage)
}
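uintSlice splits on plain commas (no CSV quoting, unlike stringSlice); a sketch (import path assumed):
package main
import (
	"fmt"
	flag "github.com/spf13/pflag"
)
func main() {
	// --port=8080,8443 parses into []uint{8080, 8443}; repeats append.
	ports := flag.UintSlice("port", []uint{80}, "ports to listen on")
	flag.Parse()
	fmt.Println(*ports)
}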

View File

@ -45,24 +45,17 @@ const (
// to one container of a pod.
SeccompContainerAnnotationKeyPrefix string = "container.seccomp.security.alpha.kubernetes.io/"
// SeccompProfileRuntimeDefault represents the default seccomp profile used by container runtime.
SeccompProfileRuntimeDefault string = "runtime/default"
// DeprecatedSeccompProfileDockerDefault represents the default seccomp profile used by docker.
// This is now deprecated and should be replaced by SeccompProfileRuntimeDefault.
DeprecatedSeccompProfileDockerDefault string = "docker/default"
// PreferAvoidPodsAnnotationKey represents the key of preferAvoidPods data (json serialized)
// in the Annotations of a Node.
PreferAvoidPodsAnnotationKey string = "scheduler.alpha.kubernetes.io/preferAvoidPods"
// SysctlsPodAnnotationKey represents the key of sysctls which are set for the infrastructure
// container of a pod. The annotation value is a comma separated list of sysctl_name=value
// key-value pairs. Only a limited set of whitelisted and isolated sysctls is supported by
// the kubelet. Pods with other sysctls will fail to launch.
SysctlsPodAnnotationKey string = "security.alpha.kubernetes.io/sysctls"
// UnsafeSysctlsPodAnnotationKey represents the key of sysctls which are set for the infrastructure
// container of a pod. The annotation value is a comma separated list of sysctl_name=value
// key-value pairs. Unsafe sysctls must be explicitly enabled for a kubelet. They are properly
// namespaced to a pod or a container, but their isolation is usually unclear or weak. Their use
// is at-your-own-risk. Pods that attempt to set an unsafe sysctl that is not enabled for a kubelet
// will fail to launch.
UnsafeSysctlsPodAnnotationKey string = "security.alpha.kubernetes.io/unsafe-sysctls"
// ObjectTTLAnnotations represents a suggestion for kubelet for how long it can cache
// an object (e.g. secret, config map) before fetching it again from apiserver.
// This annotation can be attached to node.

File diff suppressed because it is too large

View File

@ -298,6 +298,34 @@ message CephFSVolumeSource {
optional bool readOnly = 6;
}
// Represents a cinder volume resource in Openstack.
// A Cinder volume must exist before mounting to a container.
// The volume must also be in the same region as the kubelet.
// Cinder volumes support ownership management and SELinux relabeling.
message CinderPersistentVolumeSource {
// volume id used to identify the volume in cinder
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
optional string volumeID = 1;
// Filesystem type to mount.
// Must be a filesystem type supported by the host operating system.
// Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
// +optional
optional string fsType = 2;
// Optional: Defaults to false (read/write). ReadOnly here will force
// the ReadOnly setting in VolumeMounts.
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
// +optional
optional bool readOnly = 3;
// Optional: points to a secret object containing parameters used to connect
// to OpenStack.
// +optional
optional SecretReference secretRef = 4;
}
// Represents a cinder volume resource in Openstack.
// A Cinder volume must exist before mounting to a container.
// The volume must also be in the same region as the kubelet.
@ -319,6 +347,11 @@ message CinderVolumeSource {
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
// +optional
optional bool readOnly = 3;
// Optional: points to a secret object containing parameters used to connect
// to OpenStack.
// +optional
optional LocalObjectReference secretRef = 4;
}
// ClientIPConfig represents the configurations of Client IP based session affinity.
@ -439,6 +472,31 @@ message ConfigMapList {
repeated ConfigMap items = 2;
}
// ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node.
message ConfigMapNodeConfigSource {
// Namespace is the metadata.namespace of the referenced ConfigMap.
// This field is required in all cases.
optional string namespace = 1;
// Name is the metadata.name of the referenced ConfigMap.
// This field is required in all cases.
optional string name = 2;
// UID is the metadata.UID of the referenced ConfigMap.
// This field is forbidden in Node.Spec, and required in Node.Status.
// +optional
optional string uid = 3;
// ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap.
// This field is forbidden in Node.Spec, and required in Node.Status.
// +optional
optional string resourceVersion = 4;
// KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure
// This field is required in all cases.
optional string kubeletConfigKey = 5;
}
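For orientation, a sketch of this config source expressed with the corresponding Go types in k8s.io/api/core/v1 (field names are assumed to mirror the proto; UID and ResourceVersion are left empty because they are forbidden in Node.Spec, and the ConfigMap name is hypothetical):
package main
import (
	"fmt"
	corev1 "k8s.io/api/core/v1"
)
func main() {
	// A Node.Spec.ConfigSource pointing the kubelet at a ConfigMap-based config.
	src := corev1.NodeConfigSource{
		ConfigMap: &corev1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "node-config-pool-a", // hypothetical ConfigMap name
			KubeletConfigKey: "kubelet",
		},
	}
	fmt.Printf("%+v\n", src)
}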
// Adapts a ConfigMap into a projected volume.
//
// The contents of the target ConfigMap's Data field will be presented in a
@ -809,41 +867,6 @@ message DaemonEndpoint {
optional int32 Port = 1;
}
// DeleteOptions may be provided when deleting an API object
// DEPRECATED: This type has been moved to meta/v1 and will be removed soon.
// +k8s:openapi-gen=false
message DeleteOptions {
// The duration in seconds before the object should be deleted. Value must be non-negative integer.
// The value zero indicates delete immediately. If this value is nil, the default grace period for the
// specified type will be used.
// Defaults to a per object value if not specified. zero means delete immediately.
// +optional
optional int64 gracePeriodSeconds = 1;
// Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be
// returned.
// +optional
optional Preconditions preconditions = 2;
// Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7.
// Should the dependent objects be orphaned. If true/false, the "orphan"
// finalizer will be added to/removed from the object's finalizers list.
// Either this field or PropagationPolicy may be set, but not both.
// +optional
optional bool orphanDependents = 3;
// Whether and how garbage collection will be performed.
// Either this field or OrphanDependents may be set, but not both.
// The default policy is decided by the existing finalizer set in the
// metadata.finalizers and the resource-specific default policy.
// Acceptable values are: 'Orphan' - orphan the dependents; 'Background' -
// allow the garbage collector to delete the dependents in the background;
// 'Foreground' - a cascading policy that deletes all dependents in the
// foreground.
// +optional
optional string propagationPolicy = 4;
}
// Represents downward API info for projecting into a projected volume.
// Note that this is identical to a downwardAPI volume source without the default
// mode.
@ -1328,6 +1351,10 @@ message GCEPersistentDiskVolumeSource {
// Represents a volume that is populated with the contents of a git repository.
// Git repo volumes do not support ownership management.
// Git repo volumes support SELinux relabeling.
//
// DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an
// EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir
// into the Pod's container.
message GitRepoVolumeSource {
// Repository URL
optional string repository = 1;
@ -1662,43 +1689,6 @@ message List {
repeated k8s.io.apimachinery.pkg.runtime.RawExtension items = 2;
}
// ListOptions is the query options to a standard REST list call.
// DEPRECATED: This type has been moved to meta/v1 and will be removed soon.
// +k8s:openapi-gen=false
message ListOptions {
// A selector to restrict the list of returned objects by their labels.
// Defaults to everything.
// +optional
optional string labelSelector = 1;
// A selector to restrict the list of returned objects by their fields.
// Defaults to everything.
// +optional
optional string fieldSelector = 2;
// If true, partially initialized resources are included in the response.
// +optional
optional bool includeUninitialized = 6;
// Watch for changes to the described resources and return them as a stream of
// add, update, and remove notifications. Specify resourceVersion.
// +optional
optional bool watch = 3;
// When specified with a watch call, shows changes that occur after that particular version of a resource.
// Defaults to changes from the beginning of history.
// When specified for list:
// - if unset, then the result is returned from remote storage based on quorum-read flag;
// - if it's 0, then we simply return what we currently have in cache, no guarantee;
// - if set to non zero, then the result is at least as fresh as given rv.
// +optional
optional string resourceVersion = 4;
// Timeout for the list/watch call.
// +optional
optional int64 timeoutSeconds = 5;
}
// LoadBalancerIngress represents the status of a load-balancer ingress point:
// traffic intended for the service should be sent to an ingress point.
message LoadBalancerIngress {
@ -1731,11 +1721,13 @@ message LocalObjectReference {
optional string name = 1;
}
// Local represents directly-attached storage with node affinity
// Local represents directly-attached storage with node affinity (Beta feature)
message LocalVolumeSource {
// The full path to the volume on the node
// For alpha, this path must be a directory
// Once block as a source is supported, then this path can point to a block device
// The full path to the volume on the node.
// It can be either a directory or block device (disk, partition, ...).
// Directories can be represented only by PersistentVolume with VolumeMode=Filesystem.
// Block devices can be represented only by VolumeMode=Block, which also requires the
// BlockVolume alpha feature gate to be enabled.
optional string path = 1;
}
@ -1885,7 +1877,58 @@ message NodeCondition {
// NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil.
message NodeConfigSource {
optional ObjectReference configMapRef = 1;
// ConfigMap is a reference to a Node's ConfigMap
optional ConfigMapNodeConfigSource configMap = 2;
}
// NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource.
message NodeConfigStatus {
// Assigned reports the checkpointed config the node will try to use.
// When Node.Spec.ConfigSource is updated, the node checkpoints the associated
// config payload to local disk, along with a record indicating intended
// config. The node refers to this record to choose its config checkpoint, and
// reports this record in Assigned. Assigned only updates in the status after
// the record has been checkpointed to disk. When the Kubelet is restarted,
// it tries to make the Assigned config the Active config by loading and
// validating the checkpointed payload identified by Assigned.
// +optional
optional NodeConfigSource assigned = 1;
// Active reports the checkpointed config the node is actively using.
// Active will represent either the current version of the Assigned config,
// or the current LastKnownGood config, depending on whether attempting to use the
// Assigned config results in an error.
// +optional
optional NodeConfigSource active = 2;
// LastKnownGood reports the checkpointed config the node will fall back to
// when it encounters an error attempting to use the Assigned config.
// The Assigned config becomes the LastKnownGood config when the node determines
// that the Assigned config is stable and correct.
// This is currently implemented as a 10-minute soak period starting when the local
// record of Assigned config is updated. If the Assigned config is Active at the end
// of this period, it becomes the LastKnownGood. Note that if Spec.ConfigSource is
// reset to nil (use local defaults), the LastKnownGood is also immediately reset to nil,
// because the local default config is always assumed good.
// You should not make assumptions about the node's method of determining config stability
// and correctness, as this may change or become configurable in the future.
// +optional
optional NodeConfigSource lastKnownGood = 3;
// Error describes any problems reconciling the Spec.ConfigSource to the Active config.
// Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned
// record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting
// to load or validate the Assigned config, etc.
// Errors may occur at different points while syncing config. Earlier errors (e.g. download or
// checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across
// Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in
// a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error
// by fixing the config assigned in Spec.ConfigSource.
// You can find additional information for debugging by searching the error message in the Kubelet log.
// Error is a human-readable description of the error state; machines can check whether or not Error
// is empty, but should not rely on the stability of the Error text across Kubelet versions.
// +optional
optional string error = 4;
}
// NodeDaemonEndpoints lists ports opened by daemons running on the Node.
@ -1947,10 +1990,17 @@ message NodeSelectorRequirement {
repeated string values = 3;
}
// A null or empty node selector term matches no objects.
// A null or empty node selector term matches no objects. Its requirements
// are ANDed.
// The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
message NodeSelectorTerm {
// Required. A list of node selector requirements. The requirements are ANDed.
// A list of node selector requirements by node's labels.
// +optional
repeated NodeSelectorRequirement matchExpressions = 1;
// A list of node selector requirements by node's fields.
// +optional
repeated NodeSelectorRequirement matchFields = 2;
}
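A sketch of a term combining both requirement kinds, using the Go types in k8s.io/api/core/v1 (the label key and node name are illustrative; field names are assumed to mirror the proto):
package main
import (
	"fmt"
	corev1 "k8s.io/api/core/v1"
)
func main() {
	// matchExpressions selects by labels, matchFields by node fields such as
	// metadata.name; the two lists are ANDed within a single term.
	term := corev1.NodeSelectorTerm{
		MatchExpressions: []corev1.NodeSelectorRequirement{{
			Key:      "beta.kubernetes.io/os", // illustrative label key
			Operator: corev1.NodeSelectorOpIn,
			Values:   []string{"linux"},
		}},
		MatchFields: []corev1.NodeSelectorRequirement{{
			Key:      "metadata.name",
			Operator: corev1.NodeSelectorOpIn,
			Values:   []string{"node-1"}, // illustrative node name
		}},
	}
	fmt.Printf("%+v\n", term)
}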
// NodeSpec describes the attributes that a node is created with.
@ -1959,11 +2009,6 @@ message NodeSpec {
// +optional
optional string podCIDR = 1;
// External ID of the node assigned by some machine database (e.g. a cloud provider).
// Deprecated.
// +optional
optional string externalID = 2;
// ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID>
// +optional
optional string providerID = 3;
@ -1981,6 +2026,11 @@ message NodeSpec {
// The DynamicKubeletConfig feature gate must be enabled for the Kubelet to use this field
// +optional
optional NodeConfigSource configSource = 6;
// Deprecated. Not all kubelets will set this field. Remove field after 1.13.
// see: https://issues.k8s.io/61966
// +optional
optional string externalID = 2;
}
// NodeStatus is information about the current status of a node.
@ -2036,6 +2086,10 @@ message NodeStatus {
// List of volumes that are attached to the node.
// +optional
repeated AttachedVolume volumesAttached = 10;
// Status of the config assigned to the node via the dynamic Kubelet config feature.
// +optional
optional NodeConfigStatus config = 11;
}
// NodeSystemInfo is a set of ids/uuids to uniquely identify the node.
@ -2085,170 +2139,6 @@ message ObjectFieldSelector {
optional string fieldPath = 2;
}
// ObjectMeta is metadata that all persisted resources must have, which includes all objects
// users must create.
// DEPRECATED: Use k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta instead - this type will be removed soon.
// +k8s:openapi-gen=false
message ObjectMeta {
// Name must be unique within a namespace. Is required when creating resources, although
// some resources may allow a client to request the generation of an appropriate name
// automatically. Name is primarily intended for creation idempotence and configuration
// definition.
// Cannot be updated.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
// +optional
optional string name = 1;
// GenerateName is an optional prefix, used by the server, to generate a unique
// name ONLY IF the Name field has not been provided.
// If this field is used, the name returned to the client will be different
// than the name passed. This value will also be combined with a unique suffix.
// The provided value has the same validation rules as the Name field,
// and may be truncated by the length of the suffix required to make the value
// unique on the server.
//
// If this field is specified and the generated name exists, the server will
// NOT return a 409 - instead, it will either return 201 Created or 500 with Reason
// ServerTimeout indicating a unique name could not be found in the time allotted, and the client
// should retry (optionally after the time indicated in the Retry-After header).
//
// Applied only if Name is not specified.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency
// +optional
optional string generateName = 2;
// Namespace defines the space within which each name must be unique. An empty namespace is
// equivalent to the "default" namespace, but "default" is the canonical representation.
// Not all objects are required to be scoped to a namespace - the value of this field for
// those objects will be empty.
//
// Must be a DNS_LABEL.
// Cannot be updated.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
// +optional
optional string namespace = 3;
// SelfLink is a URL representing this object.
// Populated by the system.
// Read-only.
// +optional
optional string selfLink = 4;
// UID is the unique in time and space value for this object. It is typically generated by
// the server on successful creation of a resource and is not allowed to change on PUT
// operations.
//
// Populated by the system.
// Read-only.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
// +optional
optional string uid = 5;
// An opaque value that represents the internal version of this object that can
// be used by clients to determine when objects have changed. May be used for optimistic
// concurrency, change detection, and the watch operation on a resource or set of resources.
// Clients must treat these values as opaque and passed unmodified back to the server.
// They may only be valid for a particular resource or set of resources.
//
// Populated by the system.
// Read-only.
// Value must be treated as opaque by clients.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
// +optional
optional string resourceVersion = 6;
// A sequence number representing a specific generation of the desired state.
// Populated by the system. Read-only.
// +optional
optional int64 generation = 7;
// CreationTimestamp is a timestamp representing the server time when this object was
// created. It is not guaranteed to be set in happens-before order across separate operations.
// Clients may not set this value. It is represented in RFC3339 form and is in UTC.
//
// Populated by the system.
// Read-only.
// Null for lists.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.Time creationTimestamp = 8;
// DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This
// field is set by the server when a graceful deletion is requested by the user, and is not
// directly settable by a client. The resource is expected to be deleted (no longer visible
// from resource lists, and not reachable by name) after the time in this field. Once set,
// this value may not be unset or be set further into the future, although it may be shortened
// or the resource may be deleted prior to this time. For example, a user may request that
// a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination
// signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard
// termination signal (SIGKILL) to the container and after cleanup, remove the pod from the
// API. In the presence of network partitions, this object may still exist after this
// timestamp, until an administrator or automated process can determine the resource is
// fully terminated.
// If not set, graceful deletion of the object has not been requested.
//
// Populated by the system when a graceful deletion is requested.
// Read-only.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.Time deletionTimestamp = 9;
// Number of seconds allowed for this object to gracefully terminate before
// it will be removed from the system. Only set when deletionTimestamp is also set.
// May only be shortened.
// Read-only.
// +optional
optional int64 deletionGracePeriodSeconds = 10;
// Map of string keys and values that can be used to organize and categorize
// (scope and select) objects. May match selectors of replication controllers
// and services.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
// +optional
map<string, string> labels = 11;
// Annotations is an unstructured key value map stored with a resource that may be
// set by external tools to store and retrieve arbitrary metadata. They are not
// queryable and should be preserved when modifying objects.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
// +optional
map<string, string> annotations = 12;
// List of objects depended by this object. If ALL objects in the list have
// been deleted, this object will be garbage collected. If this object is managed by a controller,
// then an entry in this list will point to this controller, with the controller field set to true.
// There cannot be more than one managing controller.
// +optional
// +patchMergeKey=uid
// +patchStrategy=merge
repeated k8s.io.apimachinery.pkg.apis.meta.v1.OwnerReference ownerReferences = 13;
// An initializer is a controller which enforces some system invariant at object creation time.
// This field is a list of initializers that have not yet acted on this object. If nil or empty,
// this object has been completely initialized. Otherwise, the object is considered uninitialized
// and is hidden (in list/watch and get calls) from clients that haven't explicitly asked to
// observe uninitialized objects.
//
// When an object is created, the system will populate this list with the current set of initializers.
// Only privileged users may set or modify this list. Once it is empty, it may not be modified further
// by any user.
optional k8s.io.apimachinery.pkg.apis.meta.v1.Initializers initializers = 16;
// Must be empty before the object is deleted from the registry. Each entry
// is an identifier for the responsible component that will remove the entry
// from the list. If the deletionTimestamp of the object is non-nil, entries
// in this list can only be removed.
// +optional
// +patchStrategy=merge
repeated string finalizers = 14;
// The name of the cluster which the object belongs to.
// This is used to distinguish resources with the same name and namespace in different clusters.
// This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request.
// +optional
optional string clusterName = 15;
}
// ObjectReference contains enough information to let you inspect or modify the referred object.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
message ObjectReference {
@ -2502,7 +2392,7 @@ message PersistentVolumeSource {
// Cinder represents a cinder volume attached and mounted on the kubelet's host machine
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
// +optional
optional CinderVolumeSource cinder = 8;
optional CinderPersistentVolumeSource cinder = 8;
// CephFS represents a Ceph FS mount on the host that shares a pod's lifetime
// +optional
@ -2775,7 +2665,6 @@ message PodAttachOptions {
// PodCondition contains details for the current condition of this pod.
message PodCondition {
// Type is the type of the condition.
// Currently only Ready.
// More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions
optional string type = 1;
@ -2944,6 +2833,12 @@ message PodProxyOptions {
optional string path = 1;
}
// PodReadinessGate contains the reference to a pod condition
message PodReadinessGate {
// ConditionType refers to a condition in the pod's condition list with matching type.
optional string conditionType = 1;
}
// PodSecurityContext holds pod-level security attributes and common container settings.
// Some fields are also present in container.securityContext. Field values of
// container.securityContext take precedence over field values of PodSecurityContext.
@ -2998,6 +2893,11 @@ message PodSecurityContext {
// If unset, the Kubelet will not modify the ownership and permissions of any volume.
// +optional
optional int64 fsGroup = 5;
// Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported
// sysctls (by the container runtime) might fail to launch.
// +optional
repeated Sysctl sysctls = 7;
}
// Describes the class of pods that should avoid this node.
@ -3196,12 +3096,36 @@ message PodSpec {
// configuration based on DNSPolicy.
// +optional
optional PodDNSConfig dnsConfig = 26;
// If specified, all readiness gates will be evaluated for pod readiness.
// A pod is ready when all its containers are ready AND
// all conditions specified in the readiness gates have status equal to "True"
// More info: https://github.com/kubernetes/community/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md
// +optional
repeated PodReadinessGate readinessGates = 28;
}
// PodStatus represents information about the status of a pod. Status may trail the actual
// state of a system.
// state of a system, especially if the node that hosts the pod cannot contact the control
// plane.
message PodStatus {
// Current condition of the pod.
// The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle.
// The conditions array, the reason and message fields, and the individual container status
// arrays contain more detail about the pod's status.
// There are five possible phase values:
//
// Pending: The pod has been accepted by the Kubernetes system, but one or more of the
// container images has not been created. This includes time before being scheduled as
// well as time spent downloading images over the network, which could take a while.
// Running: The pod has been bound to a node, and all of the containers have been created.
// At least one container is still running, or is in the process of starting or restarting.
// Succeeded: All containers in the pod have terminated in success, and will not be restarted.
// Failed: All containers in the pod have terminated, and at least one container has
// terminated in failure. The container either exited with non-zero status or was terminated
// by the system.
// Unknown: For some reason the state of the pod could not be obtained, typically due to an
// error in communicating with the host of the pod.
//
// More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-phase
// +optional
optional string phase = 1;
@ -3721,7 +3645,7 @@ message ResourceQuotaList {
// ResourceQuotaSpec defines the desired hard limits to enforce for Quota.
message ResourceQuotaSpec {
// Hard is the set of desired hard limits for each named resource.
// hard is the set of desired hard limits for each named resource.
// More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
// +optional
map<string, k8s.io.apimachinery.pkg.api.resource.Quantity> hard = 1;
@ -3730,6 +3654,12 @@ message ResourceQuotaSpec {
// If not specified, the quota matches all objects.
// +optional
repeated string scopes = 2;
// scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota
// but expressed using ScopeSelectorOperator in combination with possible values.
// For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched.
// +optional
optional ScopeSelector scopeSelector = 3;
}
// ResourceQuotaStatus defines the enforced hard limits and observed use.
@ -3866,6 +3796,32 @@ message ScaleIOVolumeSource {
optional bool readOnly = 10;
}
// A scope selector represents the AND of the selectors represented
// by the scoped-resource selector requirements.
message ScopeSelector {
// A list of scope selector requirements by scope of the resources.
// +optional
repeated ScopedResourceSelectorRequirement matchExpressions = 1;
}
// A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator
// that relates the scope name and values.
message ScopedResourceSelectorRequirement {
// The name of the scope that the selector applies to.
optional string scopeName = 1;
// Represents a scope's relationship to a set of values.
// Valid operators are In, NotIn, Exists, DoesNotExist.
optional string operator = 2;
// An array of string values. If the operator is In or NotIn,
// the values array must be non-empty. If the operator is Exists or DoesNotExist,
// the values array must be empty.
// This array is replaced during a strategic merge patch.
// +optional
repeated string values = 3;
}
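
As an illustration only (not part of this vendoring), a minimal Go sketch of how a quota could use the new scopeSelector field via the corresponding k8s.io/api/core/v1 types; the PriorityClass scope, the class name "high", and the pod limit are assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A quota intended to track only pods whose priorityClassName is "high".
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority-quota", Namespace: "default"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("10")},
			// scopeSelector narrows the quota the same way scopes does, but with
			// operator/values semantics (assumed to sit behind the
			// ResourceQuotaScopeSelectors feature gate at this release).
			ScopeSelector: &corev1.ScopeSelector{
				MatchExpressions: []corev1.ScopedResourceSelectorRequirement{{
					ScopeName: corev1.ResourceQuotaScopePriorityClass,
					Operator:  corev1.ScopeSelectorOpIn,
					Values:    []string{"high"},
				}},
			},
		},
	}
	_ = quota
}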
// Secret holds secret data of a certain type. The total bytes of the values in
// the Data field must be less than MaxSecretSize bytes.
message Secret {
@ -4134,6 +4090,32 @@ message ServiceAccountList {
repeated ServiceAccount items = 2;
}
// ServiceAccountTokenProjection represents a projected service account token
// volume. This projection can be used to insert a service account token into
// the pod's runtime filesystem for use against APIs (Kubernetes API Server or
// otherwise).
message ServiceAccountTokenProjection {
// Audience is the intended audience of the token. A recipient of a token
// must identify itself with an identifier specified in the audience of the
// token, and otherwise should reject the token. The audience defaults to the
// identifier of the apiserver.
// +optional
optional string audience = 1;
// ExpirationSeconds is the requested duration of validity of the service
// account token. As the token approaches expiration, the kubelet volume
// plugin will proactively rotate the service account token. The kubelet will
// start trying to rotate the token if the token is older than 80 percent of
// its time to live or if the token is older than 24 hours. Defaults to 1 hour
// and must be at least 10 minutes.
// +optional
optional int64 expirationSeconds = 2;
// Path is the path relative to the mount point of the file to project the
// token into.
optional string path = 3;
}
// ServiceList holds a list of services.
message ServiceList {
// Standard list metadata.
@ -4300,9 +4282,6 @@ message ServiceSpec {
// The primary use case for setting this field is to use a StatefulSet's Headless Service
// to propagate SRV records for its Pods without respect to their readiness for purpose
// of peer discovery.
// This field will replace the service.alpha.kubernetes.io/tolerate-unready-endpoints
// when that annotation is deprecated and all clients have been converted to use this
// field.
// +optional
optional bool publishNotReadyAddresses = 13;
@ -4465,6 +4444,28 @@ message Toleration {
optional int64 tolerationSeconds = 5;
}
// A topology selector requirement is a selector that matches the given label.
// This is an alpha feature and may change in the future.
message TopologySelectorLabelRequirement {
// The label key that the selector applies to.
optional string key = 1;
// An array of string values. One value must match the label to be selected.
// Each entry in Values is ORed.
repeated string values = 2;
}
// A topology selector term represents the result of label queries.
// A null or empty topology selector term matches no objects.
// Its requirements are ANDed.
// It provides a subset of the functionality of NodeSelectorTerm.
// This is an alpha feature and may change in the future.
message TopologySelectorTerm {
// A list of topology selector requirements by labels.
// +optional
repeated TopologySelectorLabelRequirement matchLabelExpressions = 1;
}
// Volume represents a named volume in a pod that may be accessed by any container in the pod.
message Volume {
// Volume's name.
@ -4523,13 +4524,20 @@ message VolumeNodeAffinity {
// Projection that may be projected along with other supported volume types
message VolumeProjection {
// information about the secret data to project
// +optional
optional SecretProjection secret = 1;
// information about the downwardAPI data to project
// +optional
optional DownwardAPIProjection downwardAPI = 2;
// information about the configMap data to project
// +optional
optional ConfigMapProjection configMap = 3;
// information about the serviceAccountToken data to project
// +optional
optional ServiceAccountTokenProjection serviceAccountToken = 4;
}
// Represents the source of a volume to mount.
@ -4564,6 +4572,9 @@ message VolumeSource {
optional AWSElasticBlockStoreVolumeSource awsElasticBlockStore = 4;
// GitRepo represents a git repository at a particular revision.
// DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an
// EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir
// into the Pod's container.
// +optional
optional GitRepoVolumeSource gitRepo = 5;

108
vendor/k8s.io/api/core/v1/meta.go generated vendored
View File

@ -1,108 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
)
func (obj *ObjectMeta) GetObjectMeta() metav1.Object { return obj }
// Namespace implements metav1.Object for any object with an ObjectMeta typed field. Allows
// fast, direct access to metadata fields for API objects.
func (meta *ObjectMeta) GetNamespace() string { return meta.Namespace }
func (meta *ObjectMeta) SetNamespace(namespace string) { meta.Namespace = namespace }
func (meta *ObjectMeta) GetName() string { return meta.Name }
func (meta *ObjectMeta) SetName(name string) { meta.Name = name }
func (meta *ObjectMeta) GetGenerateName() string { return meta.GenerateName }
func (meta *ObjectMeta) SetGenerateName(generateName string) { meta.GenerateName = generateName }
func (meta *ObjectMeta) GetUID() types.UID { return meta.UID }
func (meta *ObjectMeta) SetUID(uid types.UID) { meta.UID = uid }
func (meta *ObjectMeta) GetResourceVersion() string { return meta.ResourceVersion }
func (meta *ObjectMeta) SetResourceVersion(version string) { meta.ResourceVersion = version }
func (meta *ObjectMeta) GetGeneration() int64 { return meta.Generation }
func (meta *ObjectMeta) SetGeneration(generation int64) { meta.Generation = generation }
func (meta *ObjectMeta) GetSelfLink() string { return meta.SelfLink }
func (meta *ObjectMeta) SetSelfLink(selfLink string) { meta.SelfLink = selfLink }
func (meta *ObjectMeta) GetCreationTimestamp() metav1.Time { return meta.CreationTimestamp }
func (meta *ObjectMeta) SetCreationTimestamp(creationTimestamp metav1.Time) {
meta.CreationTimestamp = creationTimestamp
}
func (meta *ObjectMeta) GetDeletionTimestamp() *metav1.Time { return meta.DeletionTimestamp }
func (meta *ObjectMeta) SetDeletionTimestamp(deletionTimestamp *metav1.Time) {
meta.DeletionTimestamp = deletionTimestamp
}
func (meta *ObjectMeta) GetDeletionGracePeriodSeconds() *int64 { return meta.DeletionGracePeriodSeconds }
func (meta *ObjectMeta) SetDeletionGracePeriodSeconds(deletionGracePeriodSeconds *int64) {
meta.DeletionGracePeriodSeconds = deletionGracePeriodSeconds
}
func (meta *ObjectMeta) GetLabels() map[string]string { return meta.Labels }
func (meta *ObjectMeta) SetLabels(labels map[string]string) { meta.Labels = labels }
func (meta *ObjectMeta) GetAnnotations() map[string]string { return meta.Annotations }
func (meta *ObjectMeta) SetAnnotations(annotations map[string]string) { meta.Annotations = annotations }
func (meta *ObjectMeta) GetInitializers() *metav1.Initializers { return meta.Initializers }
func (meta *ObjectMeta) SetInitializers(initializers *metav1.Initializers) {
meta.Initializers = initializers
}
func (meta *ObjectMeta) GetFinalizers() []string { return meta.Finalizers }
func (meta *ObjectMeta) SetFinalizers(finalizers []string) { meta.Finalizers = finalizers }
func (meta *ObjectMeta) GetOwnerReferences() []metav1.OwnerReference {
ret := make([]metav1.OwnerReference, len(meta.OwnerReferences))
for i := 0; i < len(meta.OwnerReferences); i++ {
ret[i].Kind = meta.OwnerReferences[i].Kind
ret[i].Name = meta.OwnerReferences[i].Name
ret[i].UID = meta.OwnerReferences[i].UID
ret[i].APIVersion = meta.OwnerReferences[i].APIVersion
if meta.OwnerReferences[i].Controller != nil {
value := *meta.OwnerReferences[i].Controller
ret[i].Controller = &value
}
if meta.OwnerReferences[i].BlockOwnerDeletion != nil {
value := *meta.OwnerReferences[i].BlockOwnerDeletion
ret[i].BlockOwnerDeletion = &value
}
}
return ret
}
func (meta *ObjectMeta) SetOwnerReferences(references []metav1.OwnerReference) {
newReferences := make([]metav1.OwnerReference, len(references))
for i := 0; i < len(references); i++ {
newReferences[i].Kind = references[i].Kind
newReferences[i].Name = references[i].Name
newReferences[i].UID = references[i].UID
newReferences[i].APIVersion = references[i].APIVersion
if references[i].Controller != nil {
value := *references[i].Controller
newReferences[i].Controller = &value
}
if references[i].BlockOwnerDeletion != nil {
value := *references[i].BlockOwnerDeletion
newReferences[i].BlockOwnerDeletion = &value
}
}
meta.OwnerReferences = newReferences
}
func (meta *ObjectMeta) GetClusterName() string {
return meta.ClusterName
}
func (meta *ObjectMeta) SetClusterName(clusterName string) {
meta.ClusterName = clusterName
}

View File

@ -57,7 +57,6 @@ func addKnownTypes(scheme *runtime.Scheme) error {
&Endpoints{},
&EndpointsList{},
&Node{},
&NodeConfigSource{},
&NodeList{},
&NodeProxyOptions{},
&Binding{},

View File

@ -48,13 +48,6 @@ func (self *ResourceList) Pods() *resource.Quantity {
return &resource.Quantity{}
}
func (self *ResourceList) NvidiaGPU() *resource.Quantity {
if val, ok := (*self)[ResourceNvidiaGPU]; ok {
return &val
}
return &resource.Quantity{}
}
func (self *ResourceList) StorageEphemeral() *resource.Quantity {
if val, ok := (*self)[ResourceEphemeralStorage]; ok {
return &val

614
vendor/k8s.io/api/core/v1/types.go generated vendored
View File

@ -23,209 +23,6 @@ import (
"k8s.io/apimachinery/pkg/util/intstr"
)
// The comments for the structs and fields can be used from go-restful to
// generate Swagger API documentation for its models. Please read this PR for more
// information on the implementation: https://github.com/emicklei/go-restful/pull/215
//
// TODOs are ignored from the parser (e.g. TODO(andronat):... || TODO:...) if and only if
// they are on one line! For multiple line or blocks that you want to ignore use ---.
// Any context after a --- is ignored and not exported to the SwaggerAPI.
//
// The aforementioned methods can be generated by hack/update-generated-swagger-docs.sh
// Common string formats
// ---------------------
// Many fields in this API have formatting requirements. The commonly used
// formats are defined here.
//
// C_IDENTIFIER: This is a string that conforms to the definition of an "identifier"
// in the C language. This is captured by the following regex:
// [A-Za-z_][A-Za-z0-9_]*
// This defines the format, but not the length restriction, which should be
// specified at the definition of any field of this type.
//
// DNS_LABEL: This is a string, no more than 63 characters long, that conforms
// to the definition of a "label" in RFCs 1035 and 1123. This is captured
// by the following regex:
// [a-z0-9]([-a-z0-9]*[a-z0-9])?
//
// DNS_SUBDOMAIN: This is a string, no more than 253 characters long, that conforms
// to the definition of a "subdomain" in RFCs 1035 and 1123. This is captured
// by the following regex:
// [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*
// or more simply:
// DNS_LABEL(\.DNS_LABEL)*
//
// IANA_SVC_NAME: This is a string, no more than 15 characters long, that
// conforms to the definition of IANA service name in RFC 6335.
// It must contain at least one letter [a-z] and it must contain only [a-z0-9-].
// Hyphens ('-') cannot be the leading or trailing character of the string
// and cannot be adjacent to other hyphens.
// ObjectMeta is metadata that all persisted resources must have, which includes all objects
// users must create.
// DEPRECATED: Use k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta instead - this type will be removed soon.
// +k8s:openapi-gen=false
type ObjectMeta struct {
// Name must be unique within a namespace. Is required when creating resources, although
// some resources may allow a client to request the generation of an appropriate name
// automatically. Name is primarily intended for creation idempotence and configuration
// definition.
// Cannot be updated.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
// +optional
Name string `json:"name,omitempty" protobuf:"bytes,1,opt,name=name"`
// GenerateName is an optional prefix, used by the server, to generate a unique
// name ONLY IF the Name field has not been provided.
// If this field is used, the name returned to the client will be different
// than the name passed. This value will also be combined with a unique suffix.
// The provided value has the same validation rules as the Name field,
// and may be truncated by the length of the suffix required to make the value
// unique on the server.
//
// If this field is specified and the generated name exists, the server will
// NOT return a 409 - instead, it will either return 201 Created or 500 with Reason
// ServerTimeout indicating a unique name could not be found in the time allotted, and the client
// should retry (optionally after the time indicated in the Retry-After header).
//
// Applied only if Name is not specified.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency
// +optional
GenerateName string `json:"generateName,omitempty" protobuf:"bytes,2,opt,name=generateName"`
// Namespace defines the space within which each name must be unique. An empty namespace is
// equivalent to the "default" namespace, but "default" is the canonical representation.
// Not all objects are required to be scoped to a namespace - the value of this field for
// those objects will be empty.
//
// Must be a DNS_LABEL.
// Cannot be updated.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
// +optional
Namespace string `json:"namespace,omitempty" protobuf:"bytes,3,opt,name=namespace"`
// SelfLink is a URL representing this object.
// Populated by the system.
// Read-only.
// +optional
SelfLink string `json:"selfLink,omitempty" protobuf:"bytes,4,opt,name=selfLink"`
// UID is the unique in time and space value for this object. It is typically generated by
// the server on successful creation of a resource and is not allowed to change on PUT
// operations.
//
// Populated by the system.
// Read-only.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
// +optional
UID types.UID `json:"uid,omitempty" protobuf:"bytes,5,opt,name=uid,casttype=k8s.io/apimachinery/pkg/types.UID"`
// An opaque value that represents the internal version of this object that can
// be used by clients to determine when objects have changed. May be used for optimistic
// concurrency, change detection, and the watch operation on a resource or set of resources.
// Clients must treat these values as opaque and pass them unmodified back to the server.
// They may only be valid for a particular resource or set of resources.
//
// Populated by the system.
// Read-only.
// Value must be treated as opaque by clients.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
// +optional
ResourceVersion string `json:"resourceVersion,omitempty" protobuf:"bytes,6,opt,name=resourceVersion"`
// A sequence number representing a specific generation of the desired state.
// Populated by the system. Read-only.
// +optional
Generation int64 `json:"generation,omitempty" protobuf:"varint,7,opt,name=generation"`
// CreationTimestamp is a timestamp representing the server time when this object was
// created. It is not guaranteed to be set in happens-before order across separate operations.
// Clients may not set this value. It is represented in RFC3339 form and is in UTC.
//
// Populated by the system.
// Read-only.
// Null for lists.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
// +optional
CreationTimestamp metav1.Time `json:"creationTimestamp,omitempty" protobuf:"bytes,8,opt,name=creationTimestamp"`
// DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This
// field is set by the server when a graceful deletion is requested by the user, and is not
// directly settable by a client. The resource is expected to be deleted (no longer visible
// from resource lists, and not reachable by name) after the time in this field. Once set,
// this value may not be unset or be set further into the future, although it may be shortened
// or the resource may be deleted prior to this time. For example, a user may request that
// a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination
// signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard
// termination signal (SIGKILL) to the container and after cleanup, remove the pod from the
// API. In the presence of network partitions, this object may still exist after this
// timestamp, until an administrator or automated process can determine the resource is
// fully terminated.
// If not set, graceful deletion of the object has not been requested.
//
// Populated by the system when a graceful deletion is requested.
// Read-only.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
// +optional
DeletionTimestamp *metav1.Time `json:"deletionTimestamp,omitempty" protobuf:"bytes,9,opt,name=deletionTimestamp"`
// Number of seconds allowed for this object to gracefully terminate before
// it will be removed from the system. Only set when deletionTimestamp is also set.
// May only be shortened.
// Read-only.
// +optional
DeletionGracePeriodSeconds *int64 `json:"deletionGracePeriodSeconds,omitempty" protobuf:"varint,10,opt,name=deletionGracePeriodSeconds"`
// Map of string keys and values that can be used to organize and categorize
// (scope and select) objects. May match selectors of replication controllers
// and services.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
// +optional
Labels map[string]string `json:"labels,omitempty" protobuf:"bytes,11,rep,name=labels"`
// Annotations is an unstructured key value map stored with a resource that may be
// set by external tools to store and retrieve arbitrary metadata. They are not
// queryable and should be preserved when modifying objects.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
// +optional
Annotations map[string]string `json:"annotations,omitempty" protobuf:"bytes,12,rep,name=annotations"`
// List of objects depended by this object. If ALL objects in the list have
// been deleted, this object will be garbage collected. If this object is managed by a controller,
// then an entry in this list will point to this controller, with the controller field set to true.
// There cannot be more than one managing controller.
// +optional
// +patchMergeKey=uid
// +patchStrategy=merge
OwnerReferences []metav1.OwnerReference `json:"ownerReferences,omitempty" patchStrategy:"merge" patchMergeKey:"uid" protobuf:"bytes,13,rep,name=ownerReferences"`
// An initializer is a controller which enforces some system invariant at object creation time.
// This field is a list of initializers that have not yet acted on this object. If nil or empty,
// this object has been completely initialized. Otherwise, the object is considered uninitialized
// and is hidden (in list/watch and get calls) from clients that haven't explicitly asked to
// observe uninitialized objects.
//
// When an object is created, the system will populate this list with the current set of initializers.
// Only privileged users may set or modify this list. Once it is empty, it may not be modified further
// by any user.
Initializers *metav1.Initializers `json:"initializers,omitempty" patchStrategy:"merge" protobuf:"bytes,16,rep,name=initializers"`
// Must be empty before the object is deleted from the registry. Each entry
// is an identifier for the responsible component that will remove the entry
// from the list. If the deletionTimestamp of the object is non-nil, entries
// in this list can only be removed.
// +optional
// +patchStrategy=merge
Finalizers []string `json:"finalizers,omitempty" patchStrategy:"merge" protobuf:"bytes,14,rep,name=finalizers"`
// The name of the cluster which the object belongs to.
// This is used to distinguish resources with the same name and namespace in different clusters.
// This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request.
// +optional
ClusterName string `json:"clusterName,omitempty" protobuf:"bytes,15,opt,name=clusterName"`
}
const (
// NamespaceDefault means the object is in the default namespace which is applied when not specified by clients
NamespaceDefault string = "default"
@ -273,6 +70,9 @@ type VolumeSource struct {
// +optional
AWSElasticBlockStore *AWSElasticBlockStoreVolumeSource `json:"awsElasticBlockStore,omitempty" protobuf:"bytes,4,opt,name=awsElasticBlockStore"`
// GitRepo represents a git repository at a particular revision.
// DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an
// EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir
// into the Pod's container.
// +optional
GitRepo *GitRepoVolumeSource `json:"gitRepo,omitempty" protobuf:"bytes,5,opt,name=gitRepo"`
// Secret represents a secret that should populate this volume.
@ -405,7 +205,7 @@ type PersistentVolumeSource struct {
// Cinder represents a cinder volume attached and mounted on the kubelet's host machine
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
// +optional
Cinder *CinderVolumeSource `json:"cinder,omitempty" protobuf:"bytes,8,opt,name=cinder"`
Cinder *CinderPersistentVolumeSource `json:"cinder,omitempty" protobuf:"bytes,8,opt,name=cinder"`
// CephFS represents a Ceph FS mount on the host that shares a pod's lifetime
// +optional
CephFS *CephFSPersistentVolumeSource `json:"cephfs,omitempty" protobuf:"bytes,9,opt,name=cephfs"`
@ -458,10 +258,6 @@ const (
// MountOptionAnnotation defines mount option annotation used in PVs
MountOptionAnnotation = "volume.beta.kubernetes.io/mount-options"
// AlphaStorageNodeAffinityAnnotation defines node affinity policies for a PersistentVolume.
// Value is a string of the json representation of type NodeAffinity
AlphaStorageNodeAffinityAnnotation = "volume.alpha.kubernetes.io/node-affinity"
)
// +genclient
@ -935,6 +731,35 @@ type CinderVolumeSource struct {
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
// +optional
ReadOnly bool `json:"readOnly,omitempty" protobuf:"varint,3,opt,name=readOnly"`
// Optional: points to a secret object containing parameters used to connect
// to OpenStack.
// +optional
SecretRef *LocalObjectReference `json:"secretRef,omitempty" protobuf:"bytes,4,opt,name=secretRef"`
}
// Represents a cinder volume resource in OpenStack.
// A Cinder volume must exist before mounting to a container.
// The volume must also be in the same region as the kubelet.
// Cinder volumes support ownership management and SELinux relabeling.
type CinderPersistentVolumeSource struct {
// volume id used to identify the volume in cinder
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
VolumeID string `json:"volumeID" protobuf:"bytes,1,opt,name=volumeID"`
// Filesystem type to mount.
// Must be a filesystem type supported by the host operating system.
// Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
// +optional
FSType string `json:"fsType,omitempty" protobuf:"bytes,2,opt,name=fsType"`
// Optional: Defaults to false (read/write). ReadOnly here will force
// the ReadOnly setting in VolumeMounts.
// More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md
// +optional
ReadOnly bool `json:"readOnly,omitempty" protobuf:"varint,3,opt,name=readOnly"`
// Optional: points to a secret object containing parameters used to connect
// to OpenStack.
// +optional
SecretRef *SecretReference `json:"secretRef,omitempty" protobuf:"bytes,4,opt,name=secretRef"`
}
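
A minimal sketch, for illustration only, of a PersistentVolume built with the new CinderPersistentVolumeSource and its namespaced SecretRef; the volume ID, secret name, and namespace below are placeholders, not values from this diff.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "cinder-pv"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:    corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("10Gi")},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// The persistent-volume variant can carry a cross-namespace
				// SecretReference, unlike the pod-level CinderVolumeSource.
				Cinder: &corev1.CinderPersistentVolumeSource{
					VolumeID: "00000000-0000-0000-0000-000000000000", // placeholder volume ID
					FSType:   "ext4",
					SecretRef: &corev1.SecretReference{
						Name:      "openstack-credentials", // assumed secret name
						Namespace: "kube-system",
					},
				},
			},
		},
	}
	_ = pv
}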
// Represents a Ceph Filesystem mount that lasts the lifetime of a pod
@ -1179,6 +1004,10 @@ type AWSElasticBlockStoreVolumeSource struct {
// Represents a volume that is populated with the contents of a git repository.
// Git repo volumes do not support ownership management.
// Git repo volumes support SELinux relabeling.
//
// DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an
// EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir
// into the Pod's container.
type GitRepoVolumeSource struct {
// Repository URL
Repository string `json:"repository" protobuf:"bytes,1,opt,name=repository"`
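
The deprecation note above points at an emptyDir-plus-init-container pattern instead of gitRepo; the sketch below illustrates that pattern, with the image (alpine/git) and repository URL chosen here as assumptions rather than anything from this diff.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "git-clone-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "repo",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			// The init container clones the repository into the shared emptyDir.
			InitContainers: []corev1.Container{{
				Name:         "clone",
				Image:        "alpine/git", // assumed image whose entrypoint is git
				Args:         []string{"clone", "https://github.com/example/repo.git", "/repo"}, // placeholder repo
				VolumeMounts: []corev1.VolumeMount{{Name: "repo", MountPath: "/repo"}},
			}},
			// The main container sees the checked-out tree at /repo.
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls /repo && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "repo", MountPath: "/repo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	_ = pod
}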
@ -1673,6 +1502,30 @@ type ConfigMapProjection struct {
Optional *bool `json:"optional,omitempty" protobuf:"varint,4,opt,name=optional"`
}
// ServiceAccountTokenProjection represents a projected service account token
// volume. This projection can be used to insert a service account token into
// the pod's runtime filesystem for use against APIs (Kubernetes API Server or
// otherwise).
type ServiceAccountTokenProjection struct {
// Audience is the intended audience of the token. A recipient of a token
// must identify itself with an identifier specified in the audience of the
// token, and otherwise should reject the token. The audience defaults to the
// identifier of the apiserver.
//+optional
Audience string `json:"audience,omitempty" protobuf:"bytes,1,rep,name=audience"`
// ExpirationSeconds is the requested duration of validity of the service
// account token. As the token approaches expiration, the kubelet volume
// plugin will proactively rotate the service account token. The kubelet will
// start trying to rotate the token if the token is older than 80 percent of
// its time to live or if the token is older than 24 hours. Defaults to 1 hour
// and must be at least 10 minutes.
//+optional
ExpirationSeconds *int64 `json:"expirationSeconds,omitempty" protobuf:"varint,2,opt,name=expirationSeconds"`
// Path is the path relative to the mount point of the file to project the
// token into.
Path string `json:"path" protobuf:"bytes,3,opt,name=path"`
}
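
A minimal sketch of a volume using this projection to mount an audience-bound token; the audience string and path are hypothetical, and serving such tokens is assumed to require the TokenRequestProjection feature gate at this release.

package main

import corev1 "k8s.io/api/core/v1"

func main() {
	expiration := int64(3600) // one hour; the minimum allowed is 10 minutes
	vol := corev1.Volume{
		Name: "bound-token",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Audience:          "vault", // hypothetical audience expected by the token consumer
						ExpirationSeconds: &expiration,
						Path:              "token", // file name under the volume's mount path
					},
				}},
			},
		},
	}
	_ = vol
}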
// Represents a projected volume source
type ProjectedVolumeSource struct {
// list of volume projections
@ -1691,11 +1544,17 @@ type VolumeProjection struct {
// all types below are the supported types for projection into the same volume
// information about the secret data to project
// +optional
Secret *SecretProjection `json:"secret,omitempty" protobuf:"bytes,1,opt,name=secret"`
// information about the downwardAPI data to project
// +optional
DownwardAPI *DownwardAPIProjection `json:"downwardAPI,omitempty" protobuf:"bytes,2,opt,name=downwardAPI"`
// information about the configMap data to project
// +optional
ConfigMap *ConfigMapProjection `json:"configMap,omitempty" protobuf:"bytes,3,opt,name=configMap"`
// information about the serviceAccountToken data to project
// +optional
ServiceAccountToken *ServiceAccountTokenProjection `json:"serviceAccountToken,omitempty" protobuf:"bytes,4,opt,name=serviceAccountToken"`
}
const (
@ -1720,11 +1579,13 @@ type KeyToPath struct {
Mode *int32 `json:"mode,omitempty" protobuf:"varint,3,opt,name=mode"`
}
// Local represents directly-attached storage with node affinity
// Local represents directly-attached storage with node affinity (Beta feature)
type LocalVolumeSource struct {
// The full path to the volume on the node
// For alpha, this path must be a directory
// Once block as a source is supported, then this path can point to a block device
// The full path to the volume on the node.
// It can be either a directory or block device (disk, partition, ...).
// Directories can be represented only by PersistentVolume with VolumeMode=Filesystem.
// Block devices can be represented only by VolumeMode=Block, which also requires the
// BlockVolume alpha feature gate to be enabled.
Path string `json:"path" protobuf:"bytes,1,opt,name=path"`
}
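
For illustration, a sketch of a local PersistentVolume using the widened semantics above: a raw block device pinned to one node. The device path and node name are placeholders; switching VolumeMode to Filesystem would correspond to a directory path instead.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	block := corev1.PersistentVolumeBlock // requires the BlockVolume alpha gate per the comment above
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-ssd"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:    corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("100Gi")},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:  &block, // corev1.PersistentVolumeFilesystem for a directory path
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/dev/sdb"}, // placeholder device path
			},
			// Local volumes are pinned to a node through VolumeNodeAffinity.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node-1"}, // placeholder node name
						}},
					}},
				},
			},
		},
	}
	_ = pv
}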
@ -1831,6 +1692,12 @@ type VolumeMount struct {
type MountPropagationMode string
const (
// MountPropagationNone means that the volume in a container will
// not receive new mounts from the host or other containers, and filesystems
// mounted inside the container won't be propagated to the host or other
// containers.
// Note that this mode corresponds to "private" in Linux terminology.
MountPropagationNone MountPropagationMode = "None"
// MountPropagationHostToContainer means that the volume in a container will
// receive new mounts from the host or other containers, but filesystems
// mounted inside the container won't be propagated to the host or other
@ -2445,12 +2312,13 @@ const (
// PodReasonUnschedulable reason in PodScheduled PodCondition means that the scheduler
// can't schedule the pod right now, for example due to insufficient resources in the cluster.
PodReasonUnschedulable = "Unschedulable"
// ContainersReady indicates whether all containers in the pod are ready.
ContainersReady PodConditionType = "ContainersReady"
)
// PodCondition contains details for the current condition of this pod.
type PodCondition struct {
// Type is the type of the condition.
// Currently only Ready.
// More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions
Type PodConditionType `json:"type" protobuf:"bytes,1,opt,name=type,casttype=PodConditionType"`
// Status is the status of the condition.
@ -2521,10 +2389,16 @@ type NodeSelector struct {
NodeSelectorTerms []NodeSelectorTerm `json:"nodeSelectorTerms" protobuf:"bytes,1,rep,name=nodeSelectorTerms"`
}
// A null or empty node selector term matches no objects.
// A null or empty node selector term matches no objects. Its requirements
// are ANDed.
// The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
type NodeSelectorTerm struct {
//Required. A list of node selector requirements. The requirements are ANDed.
MatchExpressions []NodeSelectorRequirement `json:"matchExpressions" protobuf:"bytes,1,rep,name=matchExpressions"`
// A list of node selector requirements by node's labels.
// +optional
MatchExpressions []NodeSelectorRequirement `json:"matchExpressions,omitempty" protobuf:"bytes,1,rep,name=matchExpressions"`
// A list of node selector requirements by node's fields.
// +optional
MatchFields []NodeSelectorRequirement `json:"matchFields,omitempty" protobuf:"bytes,2,rep,name=matchFields"`
}
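
A minimal sketch of node affinity using the new matchFields selector; the node name is a placeholder, and metadata.name is assumed here to be the only field key the scheduler understands for this selector.

package main

import corev1 "k8s.io/api/core/v1"

func main() {
	// Pin scheduling to a specific node by field rather than by label.
	affinity := corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchFields: []corev1.NodeSelectorRequirement{{
						Key:      "metadata.name", // assumed supported field key
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"node-1"}, // placeholder node name
					}},
				}},
			},
		},
	}
	_ = affinity
}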
// A node selector requirement is a selector that contains values, a key, and an operator
@ -2557,6 +2431,27 @@ const (
NodeSelectorOpLt NodeSelectorOperator = "Lt"
)
// A topology selector term represents the result of label queries.
// A null or empty topology selector term matches no objects.
// Its requirements are ANDed.
// It provides a subset of the functionality of NodeSelectorTerm.
// This is an alpha feature and may change in the future.
type TopologySelectorTerm struct {
// A list of topology selector requirements by labels.
// +optional
MatchLabelExpressions []TopologySelectorLabelRequirement `json:"matchLabelExpressions,omitempty" protobuf:"bytes,1,rep,name=matchLabelExpressions"`
}
// A topology selector requirement is a selector that matches the given label.
// This is an alpha feature and may change in the future.
type TopologySelectorLabelRequirement struct {
// The label key that the selector applies to.
Key string `json:"key" protobuf:"bytes,1,opt,name=key"`
// An array of string values. One value must match the label to be selected.
// Each entry in Values is ORed.
Values []string `json:"values" protobuf:"bytes,2,rep,name=values"`
}
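
For illustration, a sketch that builds one of these terms over the beta zone label; which APIs consume TopologySelectorTerm (for example topology-aware volume provisioning) is outside this diff and treated as an assumption.

package main

import corev1 "k8s.io/api/core/v1"

func main() {
	// One term: values inside a single requirement are ORed, while multiple
	// label requirements in the same term would be ANDed.
	term := corev1.TopologySelectorTerm{
		MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
			Key:    "failure-domain.beta.kubernetes.io/zone",
			Values: []string{"us-central1-a", "us-central1-b"},
		}},
	}
	_ = term
}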
// Affinity is a group of affinity scheduling rules.
type Affinity struct {
// Describes node affinity scheduling rules for the pod.
@ -2789,6 +2684,12 @@ const (
TolerationOpEqual TolerationOperator = "Equal"
)
// PodReadinessGate contains the reference to a pod condition
type PodReadinessGate struct {
// ConditionType refers to a condition in the pod's condition list with matching type.
ConditionType PodConditionType `json:"conditionType" protobuf:"bytes,1,opt,name=conditionType,casttype=PodConditionType"`
}
// PodSpec is a description of a pod.
type PodSpec struct {
// List of volumes that can be mounted by containers belonging to the pod.
@ -2953,6 +2854,13 @@ type PodSpec struct {
// configuration based on DNSPolicy.
// +optional
DNSConfig *PodDNSConfig `json:"dnsConfig,omitempty" protobuf:"bytes,26,opt,name=dnsConfig"`
// If specified, all readiness gates will be evaluated for pod readiness.
// A pod is ready when all its containers are ready AND
// all conditions specified in the readiness gates have status equal to "True"
// More info: https://github.com/kubernetes/community/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md
// +optional
ReadinessGates []PodReadinessGate `json:"readinessGates,omitempty" protobuf:"bytes,28,opt,name=readinessGates"`
}
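
A minimal sketch of a pod declaring a readiness gate; the condition type is hypothetical, and some external controller is assumed to patch the pod's status to set that condition to "True" before the pod reports Ready.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "gated-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}}},
			// The pod reports Ready only when its containers are ready AND this
			// condition has been set to "True" in the pod's status.
			ReadinessGates: []corev1.PodReadinessGate{{
				ConditionType: corev1.PodConditionType("example.com/lb-registered"), // hypothetical condition type
			}},
		},
	}
	_ = pod
}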
// HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the
@ -3013,6 +2921,10 @@ type PodSecurityContext struct {
// If unset, the Kubelet will not modify the ownership and permissions of any volume.
// +optional
FSGroup *int64 `json:"fsGroup,omitempty" protobuf:"varint,5,opt,name=fsGroup"`
// Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported
// sysctls (by the container runtime) might fail to launch.
// +optional
Sysctls []Sysctl `json:"sysctls,omitempty" protobuf:"bytes,7,rep,name=sysctls"`
}
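
A minimal sketch of requesting a namespaced sysctl through the new pod-level field; whether net.core.somaxconn is treated as safe is cluster configuration and an assumption here.

package main

import corev1 "k8s.io/api/core/v1"

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		SecurityContext: &corev1.PodSecurityContext{
			// Namespaced sysctls are applied when the pod sandbox is created;
			// pods asking for sysctls the runtime does not support may fail to launch.
			Sysctls: []corev1.Sysctl{{Name: "net.core.somaxconn", Value: "1024"}},
		},
	}
	_ = spec
}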
// PodQOSClass defines the supported qos classes of Pods.
@ -3057,9 +2969,26 @@ type PodDNSConfigOption struct {
}
// PodStatus represents information about the status of a pod. Status may trail the actual
// state of a system.
// state of a system, especially if the node that hosts the pod cannot contact the control
// plane.
type PodStatus struct {
// Current condition of the pod.
// The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle.
// The conditions array, the reason and message fields, and the individual container status
// arrays contain more detail about the pod's status.
// There are five possible phase values:
//
// Pending: The pod has been accepted by the Kubernetes system, but one or more of the
// container images has not been created. This includes time before being scheduled as
// well as time spent downloading images over the network, which could take a while.
// Running: The pod has been bound to a node, and all of the containers have been created.
// At least one container is still running, or is in the process of starting or restarting.
// Succeeded: All containers in the pod have terminated in success, and will not be restarted.
// Failed: All containers in the pod have terminated, and at least one container has
// terminated in failure. The container either exited with non-zero status or was terminated
// by the system.
// Unknown: For some reason the state of the pod could not be obtained, typically due to an
// error in communicating with the host of the pod.
//
// More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-phase
// +optional
Phase PodPhase `json:"phase,omitempty" protobuf:"bytes,1,opt,name=phase,casttype=PodPhase"`
@ -3556,9 +3485,6 @@ type ServiceSpec struct {
// The primary use case for setting this field is to use a StatefulSet's Headless Service
// to propagate SRV records for its Pods without respect to their readiness for purpose
// of peer discovery.
// This field will replace the service.alpha.kubernetes.io/tolerate-unready-endpoints
// when that annotation is deprecated and all clients have been converted to use this
// field.
// +optional
PublishNotReadyAddresses bool `json:"publishNotReadyAddresses,omitempty" protobuf:"varint,13,opt,name=publishNotReadyAddresses"`
// sessionAffinityConfig contains the configurations of session affinity.
@ -3604,6 +3530,7 @@ type ServicePort struct {
}
// +genclient
// +genclient:skipVerbs=deleteCollection
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Service is a named abstraction of software service (for example, mysql) consisting of local port
@ -3813,10 +3740,6 @@ type NodeSpec struct {
// PodCIDR represents the pod IP range assigned to the node.
// +optional
PodCIDR string `json:"podCIDR,omitempty" protobuf:"bytes,1,opt,name=podCIDR"`
// External ID of the node assigned by some machine database (e.g. a cloud provider).
// Deprecated.
// +optional
ExternalID string `json:"externalID,omitempty" protobuf:"bytes,2,opt,name=externalID"`
// ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID>
// +optional
ProviderID string `json:"providerID,omitempty" protobuf:"bytes,3,opt,name=providerID"`
@ -3831,14 +3754,53 @@ type NodeSpec struct {
// The DynamicKubeletConfig feature gate must be enabled for the Kubelet to use this field
// +optional
ConfigSource *NodeConfigSource `json:"configSource,omitempty" protobuf:"bytes,6,opt,name=configSource"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Deprecated. Not all kubelets will set this field. Remove field after 1.13.
// see: https://issues.k8s.io/61966
// +optional
DoNotUse_ExternalID string `json:"externalID,omitempty" protobuf:"bytes,2,opt,name=externalID"`
}
// NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil.
type NodeConfigSource struct {
metav1.TypeMeta `json:",inline"`
ConfigMapRef *ObjectReference `json:"configMapRef,omitempty" protobuf:"bytes,1,opt,name=configMapRef"`
// For historical context, regarding the below kind, apiVersion, and configMapRef deprecation tags:
// 1. kind/apiVersion were used by the kubelet to persist this struct to disk (they had no protobuf tags)
// 2. configMapRef and proto tag 1 were used by the API to refer to a configmap,
// but used a generic ObjectReference type that didn't really have the fields we needed
// All uses/persistence of the NodeConfigSource struct prior to 1.11 were gated by alpha feature flags,
// so there was no persisted data for these fields that needed to be migrated/handled.
// +k8s:deprecated=kind
// +k8s:deprecated=apiVersion
// +k8s:deprecated=configMapRef,protobuf=1
// ConfigMap is a reference to a Node's ConfigMap
ConfigMap *ConfigMapNodeConfigSource `json:"configMap,omitempty" protobuf:"bytes,2,opt,name=configMap"`
}
// ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node.
type ConfigMapNodeConfigSource struct {
// Namespace is the metadata.namespace of the referenced ConfigMap.
// This field is required in all cases.
Namespace string `json:"namespace" protobuf:"bytes,1,opt,name=namespace"`
// Name is the metadata.name of the referenced ConfigMap.
// This field is required in all cases.
Name string `json:"name" protobuf:"bytes,2,opt,name=name"`
// UID is the metadata.UID of the referenced ConfigMap.
// This field is forbidden in Node.Spec, and required in Node.Status.
// +optional
UID types.UID `json:"uid,omitempty" protobuf:"bytes,3,opt,name=uid"`
// ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap.
// This field is forbidden in Node.Spec, and required in Node.Status.
// +optional
ResourceVersion string `json:"resourceVersion,omitempty" protobuf:"bytes,4,opt,name=resourceVersion"`
// KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure
// This field is required in all cases.
KubeletConfigKey string `json:"kubeletConfigKey" protobuf:"bytes,5,opt,name=kubeletConfigKey"`
}
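
A minimal sketch of pointing a node at a kubelet ConfigMap through the new ConfigMap subfield; the ConfigMap name and key are assumptions, and the DynamicKubeletConfig feature gate is assumed to be enabled.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node-1"}} // placeholder node
	// Only Namespace, Name and KubeletConfigKey may be set in Spec; UID and
	// ResourceVersion are forbidden here and reported back in Node.Status.Config.
	node.Spec.ConfigSource = &corev1.NodeConfigSource{
		ConfigMap: &corev1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "node-1-kubelet-config", // assumed ConfigMap holding a KubeletConfiguration
			KubeletConfigKey: "kubelet",               // assumed key inside that ConfigMap
		},
	}
	_ = node
}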
// DaemonEndpoint contains information about a single Daemon endpoint.
@ -3888,6 +3850,53 @@ type NodeSystemInfo struct {
Architecture string `json:"architecture" protobuf:"bytes,10,opt,name=architecture"`
}
// NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource.
type NodeConfigStatus struct {
// Assigned reports the checkpointed config the node will try to use.
// When Node.Spec.ConfigSource is updated, the node checkpoints the associated
// config payload to local disk, along with a record indicating intended
// config. The node refers to this record to choose its config checkpoint, and
// reports this record in Assigned. Assigned only updates in the status after
// the record has been checkpointed to disk. When the Kubelet is restarted,
// it tries to make the Assigned config the Active config by loading and
// validating the checkpointed payload identified by Assigned.
// +optional
Assigned *NodeConfigSource `json:"assigned,omitempty" protobuf:"bytes,1,opt,name=assigned"`
// Active reports the checkpointed config the node is actively using.
// Active will represent either the current version of the Assigned config,
// or the current LastKnownGood config, depending on whether attempting to use the
// Assigned config results in an error.
// +optional
Active *NodeConfigSource `json:"active,omitempty" protobuf:"bytes,2,opt,name=active"`
// LastKnownGood reports the checkpointed config the node will fall back to
// when it encounters an error attempting to use the Assigned config.
// The Assigned config becomes the LastKnownGood config when the node determines
// that the Assigned config is stable and correct.
// This is currently implemented as a 10-minute soak period starting when the local
// record of Assigned config is updated. If the Assigned config is Active at the end
// of this period, it becomes the LastKnownGood. Note that if Spec.ConfigSource is
// reset to nil (use local defaults), the LastKnownGood is also immediately reset to nil,
// because the local default config is always assumed good.
// You should not make assumptions about the node's method of determining config stability
// and correctness, as this may change or become configurable in the future.
// +optional
LastKnownGood *NodeConfigSource `json:"lastKnownGood,omitempty" protobuf:"bytes,3,opt,name=lastKnownGood"`
// Error describes any problems reconciling the Spec.ConfigSource to the Active config.
// Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned
// record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting
// to load or validate the Assigned config, etc.
// Errors may occur at different points while syncing config. Earlier errors (e.g. download or
// checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across
// Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in
// a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error
// by fixing the config assigned in Spec.ConfigSource.
// You can find additional information for debugging by searching the error message in the Kubelet log.
// Error is a human-readable description of the error state; machines can check whether or not Error
// is empty, but should not rely on the stability of the Error text across Kubelet versions.
// +optional
Error string `json:"error,omitempty" protobuf:"bytes,4,opt,name=error"`
}
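
A minimal sketch of reading the new Node.Status.Config with the client-go vendored alongside this change (whose Get calls take no context argument); the kubeconfig path and node name are placeholders.

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := client.CoreV1().Nodes().Get("node-1", metav1.GetOptions{}) // placeholder node name
	if err != nil {
		log.Fatal(err)
	}
	if c := node.Status.Config; c != nil {
		// Assigned is what the node intends to use, Active is what it is using,
		// and LastKnownGood is the fallback if the Assigned config fails.
		fmt.Printf("assigned=%v active=%v lastKnownGood=%v\n", c.Assigned, c.Active, c.LastKnownGood)
		if c.Error != "" {
			fmt.Println("config error:", c.Error)
		}
	}
}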
// NodeStatus is information about the current status of a node.
type NodeStatus struct {
// Capacity represents the total resources of a node.
@ -3932,6 +3941,9 @@ type NodeStatus struct {
// List of volumes that are attached to the node.
// +optional
VolumesAttached []AttachedVolume `json:"volumesAttached,omitempty" protobuf:"bytes,10,rep,name=volumesAttached"`
// Status of the config assigned to the node via the dynamic Kubelet config feature.
// +optional
Config *NodeConfigStatus `json:"config,omitempty" protobuf:"bytes,11,opt,name=config"`
}
type UniqueVolumeName string
@ -4019,8 +4031,6 @@ const (
NodePIDPressure NodeConditionType = "PIDPressure"
// NodeNetworkUnavailable means that network for the node is not correctly configured.
NodeNetworkUnavailable NodeConditionType = "NetworkUnavailable"
// NodeKubeletConfigOk indicates whether the kubelet is correctly configured
NodeKubeletConfigOk NodeConditionType = "KubeletConfigOk"
)
// NodeCondition contains condition information for a node.
@ -4080,8 +4090,6 @@ const (
// Local ephemeral storage, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024)
// The resource name for ResourceEphemeralStorage is alpha and it can change across releases.
ResourceEphemeralStorage ResourceName = "ephemeral-storage"
// NVIDIA GPU, in devices. Alpha, might change: although fractional and allowing values >1, only one whole device per node is assigned.
ResourceNvidiaGPU ResourceName = "alpha.kubernetes.io/nvidia-gpu"
)
const (
@ -4089,6 +4097,8 @@ const (
ResourceDefaultNamespacePrefix = "kubernetes.io/"
// Name prefix for huge page resources (alpha).
ResourceHugePagesPrefix = "hugepages-"
// Name prefix for storage resource limits
ResourceAttachableVolumesPrefix = "attachable-volumes-"
)
// ResourceList is a set of (resource name, quantity) pairs.
@ -4171,6 +4181,7 @@ const (
// +genclient
// +genclient:nonNamespaced
// +genclient:skipVerbs=deleteCollection
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Namespace provides a scope for Names.
@ -4231,95 +4242,6 @@ type Preconditions struct {
UID *types.UID `json:"uid,omitempty" protobuf:"bytes,1,opt,name=uid,casttype=k8s.io/apimachinery/pkg/types.UID"`
}
// DeletionPropagation decides if a deletion will propagate to the dependents of the object, and how the garbage collector will handle the propagation.
type DeletionPropagation string
const (
// Orphans the dependents.
DeletePropagationOrphan DeletionPropagation = "Orphan"
// Deletes the object from the key-value store, the garbage collector will delete the dependents in the background.
DeletePropagationBackground DeletionPropagation = "Background"
// The object exists in the key-value store until the garbage collector deletes all the dependents whose ownerReference.blockOwnerDeletion=true from the key-value store.
	// API server will put the "DeletingDependents" finalizer on the object and set its deletionTimestamp.
// This policy is cascading, i.e., the dependents will be deleted with Foreground.
DeletePropagationForeground DeletionPropagation = "Foreground"
)
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// DeleteOptions may be provided when deleting an API object
// DEPRECATED: This type has been moved to meta/v1 and will be removed soon.
// +k8s:openapi-gen=false
type DeleteOptions struct {
metav1.TypeMeta `json:",inline"`
	// The duration in seconds before the object should be deleted. Value must be a non-negative integer.
	// The value zero indicates delete immediately. If this value is nil, the default grace period for the
	// specified type will be used.
	// Defaults to a per object value if not specified. Zero means delete immediately.
// +optional
GracePeriodSeconds *int64 `json:"gracePeriodSeconds,omitempty" protobuf:"varint,1,opt,name=gracePeriodSeconds"`
// Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be
// returned.
// +optional
Preconditions *Preconditions `json:"preconditions,omitempty" protobuf:"bytes,2,opt,name=preconditions"`
// Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7.
// Should the dependent objects be orphaned. If true/false, the "orphan"
// finalizer will be added to/removed from the object's finalizers list.
// Either this field or PropagationPolicy may be set, but not both.
// +optional
OrphanDependents *bool `json:"orphanDependents,omitempty" protobuf:"varint,3,opt,name=orphanDependents"`
// Whether and how garbage collection will be performed.
// Either this field or OrphanDependents may be set, but not both.
// The default policy is decided by the existing finalizer set in the
// metadata.finalizers and the resource-specific default policy.
// Acceptable values are: 'Orphan' - orphan the dependents; 'Background' -
// allow the garbage collector to delete the dependents in the background;
// 'Foreground' - a cascading policy that deletes all dependents in the
// foreground.
// +optional
PropagationPolicy *DeletionPropagation `protobuf:"bytes,4,opt,name=propagationPolicy,casttype=DeletionPropagation"`
}
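// Example (illustrative sketch): constructing the deprecated core DeleteOptions above with
// foreground cascading deletion and a 30-second grace period. As noted in the type comment,
// new code should use the equivalent type in k8s.io/apimachinery/pkg/apis/meta/v1 instead;
// the values here are placeholders.
func exampleForegroundDeleteOptions() *DeleteOptions {
	grace := int64(30)
	policy := DeletePropagationForeground
	return &DeleteOptions{
		GracePeriodSeconds: &grace,
		PropagationPolicy:  &policy,
	}
}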
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ListOptions is the query options to a standard REST list call.
// DEPRECATED: This type has been moved to meta/v1 and will be removed soon.
// +k8s:openapi-gen=false
type ListOptions struct {
metav1.TypeMeta `json:",inline"`
// A selector to restrict the list of returned objects by their labels.
// Defaults to everything.
// +optional
LabelSelector string `json:"labelSelector,omitempty" protobuf:"bytes,1,opt,name=labelSelector"`
// A selector to restrict the list of returned objects by their fields.
// Defaults to everything.
// +optional
FieldSelector string `json:"fieldSelector,omitempty" protobuf:"bytes,2,opt,name=fieldSelector"`
// If true, partially initialized resources are included in the response.
// +optional
IncludeUninitialized bool `json:"includeUninitialized,omitempty" protobuf:"varint,6,opt,name=includeUninitialized"`
// Watch for changes to the described resources and return them as a stream of
// add, update, and remove notifications. Specify resourceVersion.
// +optional
Watch bool `json:"watch,omitempty" protobuf:"varint,3,opt,name=watch"`
// When specified with a watch call, shows changes that occur after that particular version of a resource.
// Defaults to changes from the beginning of history.
// When specified for list:
// - if unset, then the result is returned from remote storage based on quorum-read flag;
// - if it's 0, then we simply return what we currently have in cache, no guarantee;
// - if set to non zero, then the result is at least as fresh as given rv.
// +optional
ResourceVersion string `json:"resourceVersion,omitempty" protobuf:"bytes,4,opt,name=resourceVersion"`
// Timeout for the list/watch call.
// +optional
TimeoutSeconds *int64 `json:"timeoutSeconds,omitempty" protobuf:"varint,5,opt,name=timeoutSeconds"`
}
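// Example (illustrative sketch): a ListOptions value for the deprecated core type above,
// asking to watch resources matching a label selector at least as fresh as a known
// resourceVersion. The selector, resourceVersion, and timeout are placeholders; an empty
// ResourceVersion would instead request a quorum read, as described in the field comment.
func exampleWatchListOptions(rv string) ListOptions {
	timeout := int64(60)
	return ListOptions{
		LabelSelector:   "app=nginx",
		Watch:           true,
		ResourceVersion: rv,
		TimeoutSeconds:  &timeout,
	}
}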
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PodLogOptions is the query options for a Pod's logs REST call.
@ -4804,11 +4726,13 @@ const (
ResourceQuotaScopeBestEffort ResourceQuotaScope = "BestEffort"
// Match all pod objects that do not have best effort quality of service
ResourceQuotaScopeNotBestEffort ResourceQuotaScope = "NotBestEffort"
// Match all pod objects that have priority class mentioned
ResourceQuotaScopePriorityClass ResourceQuotaScope = "PriorityClass"
)
// ResourceQuotaSpec defines the desired hard limits to enforce for Quota.
type ResourceQuotaSpec struct {
// Hard is the set of desired hard limits for each named resource.
// hard is the set of desired hard limits for each named resource.
// More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
// +optional
Hard ResourceList `json:"hard,omitempty" protobuf:"bytes,1,rep,name=hard,casttype=ResourceList,castkey=ResourceName"`
@ -4816,8 +4740,48 @@ type ResourceQuotaSpec struct {
// If not specified, the quota matches all objects.
// +optional
Scopes []ResourceQuotaScope `json:"scopes,omitempty" protobuf:"bytes,2,rep,name=scopes,casttype=ResourceQuotaScope"`
// scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota
// but expressed using ScopeSelectorOperator in combination with possible values.
// For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched.
// +optional
ScopeSelector *ScopeSelector `json:"scopeSelector,omitempty" protobuf:"bytes,3,opt,name=scopeSelector"`
}
// A scope selector represents the AND of the selectors represented
// by the scoped-resource selector requirements.
type ScopeSelector struct {
// A list of scope selector requirements by scope of the resources.
// +optional
MatchExpressions []ScopedResourceSelectorRequirement `json:"matchExpressions,omitempty" protobuf:"bytes,1,rep,name=matchExpressions"`
}
// A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator
// that relates the scope name and values.
type ScopedResourceSelectorRequirement struct {
// The name of the scope that the selector applies to.
ScopeName ResourceQuotaScope `json:"scopeName" protobuf:"bytes,1,opt,name=scopeName"`
// Represents a scope's relationship to a set of values.
// Valid operators are In, NotIn, Exists, DoesNotExist.
Operator ScopeSelectorOperator `json:"operator" protobuf:"bytes,2,opt,name=operator,casttype=ScopedResourceSelectorOperator"`
// An array of string values. If the operator is In or NotIn,
// the values array must be non-empty. If the operator is Exists or DoesNotExist,
// the values array must be empty.
// This array is replaced during a strategic merge patch.
// +optional
Values []string `json:"values,omitempty" protobuf:"bytes,3,rep,name=values"`
}
// A scope selector operator is the set of operators that can be used in
// a scope selector requirement.
type ScopeSelectorOperator string
const (
ScopeSelectorOpIn ScopeSelectorOperator = "In"
ScopeSelectorOpNotIn ScopeSelectorOperator = "NotIn"
ScopeSelectorOpExists ScopeSelectorOperator = "Exists"
ScopeSelectorOpDoesNotExist ScopeSelectorOperator = "DoesNotExist"
)
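// Example (illustrative sketch): a ResourceQuotaSpec that combines the PriorityClass scope
// with a scopeSelector, limiting pods in a particular priority class as described above.
// The pod limit and the "high" class name are placeholders; ResourceList and resource.MustParse
// come from this file's existing k8s.io/apimachinery/pkg/api/resource import.
func examplePriorityClassQuotaSpec() ResourceQuotaSpec {
	return ResourceQuotaSpec{
		Hard: ResourceList{
			ResourcePods: resource.MustParse("10"),
		},
		ScopeSelector: &ScopeSelector{
			MatchExpressions: []ScopedResourceSelectorRequirement{
				{
					ScopeName: ResourceQuotaScopePriorityClass,
					Operator:  ScopeSelectorOpIn,
					Values:    []string{"high"},
				},
			},
		},
	}
}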
// ResourceQuotaStatus defines the enforced hard limits and observed use.
type ResourceQuotaStatus struct {
// Hard is the set of enforced hard limits for each named resource.
@ -5245,9 +5209,9 @@ const (
// Sysctl defines a kernel parameter to be set
type Sysctl struct {
// Name of a property to set
Name string `protobuf:"bytes,1,opt,name=name"`
Name string `json:"name" protobuf:"bytes,1,opt,name=name"`
// Value of a property to set
Value string `protobuf:"bytes,2,opt,name=value"`
Value string `json:"value" protobuf:"bytes,2,opt,name=value"`
}
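// Example (illustrative sketch): using the Sysctl type above via PodSecurityContext.Sysctls
// to set a namespaced kernel parameter for a pod. The sysctl name and value are placeholders;
// as documented elsewhere in this file, pods with sysctls unsupported by the container
// runtime might fail to launch.
func exampleSysctlSecurityContext() *PodSecurityContext {
	return &PodSecurityContext{
		Sysctls: []Sysctl{
			{Name: "net.core.somaxconn", Value: "1024"},
		},
	}
}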
// NodeResources is an object for conveying resource information about a node.

View File

@ -170,11 +170,24 @@ func (CephFSVolumeSource) SwaggerDoc() map[string]string {
return map_CephFSVolumeSource
}
var map_CinderPersistentVolumeSource = map[string]string{
"": "Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling.",
"volumeID": "volume id used to identify the volume in cinder More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md",
"fsType": "Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md",
"readOnly": "Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md",
"secretRef": "Optional: points to a secret object containing parameters used to connect to OpenStack.",
}
func (CinderPersistentVolumeSource) SwaggerDoc() map[string]string {
return map_CinderPersistentVolumeSource
}
var map_CinderVolumeSource = map[string]string{
"": "Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling.",
"volumeID": "volume id used to identify the volume in cinder More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md",
"fsType": "Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md",
"readOnly": "Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md",
"secretRef": "Optional: points to a secret object containing parameters used to connect to OpenStack.",
}
func (CinderVolumeSource) SwaggerDoc() map[string]string {
@ -262,6 +275,19 @@ func (ConfigMapList) SwaggerDoc() map[string]string {
return map_ConfigMapList
}
var map_ConfigMapNodeConfigSource = map[string]string{
"": "ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node.",
"namespace": "Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases.",
"name": "Name is the metadata.name of the referenced ConfigMap. This field is required in all cases.",
"uid": "UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status.",
"resourceVersion": "ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status.",
"kubeletConfigKey": "KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases.",
}
func (ConfigMapNodeConfigSource) SwaggerDoc() map[string]string {
return map_ConfigMapNodeConfigSource
}
var map_ConfigMapProjection = map[string]string{
"": "Adapts a ConfigMap into a projected volume.\n\nThe contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode.",
"items": "If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.",
@ -405,18 +431,6 @@ func (DaemonEndpoint) SwaggerDoc() map[string]string {
return map_DaemonEndpoint
}
var map_DeleteOptions = map[string]string{
"": "DeleteOptions may be provided when deleting an API object DEPRECATED: This type has been moved to meta/v1 and will be removed soon.",
"gracePeriodSeconds": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.",
"preconditions": "Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned.",
"orphanDependents": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.",
"PropagationPolicy": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.",
}
func (DeleteOptions) SwaggerDoc() map[string]string {
return map_DeleteOptions
}
var map_DownwardAPIProjection = map[string]string{
"": "Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode.",
"items": "Items is a list of DownwardAPIVolume file",
@ -671,7 +685,7 @@ func (GCEPersistentDiskVolumeSource) SwaggerDoc() map[string]string {
}
var map_GitRepoVolumeSource = map[string]string{
"": "Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling.",
"": "Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling.\n\nDEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.",
"repository": "Repository URL",
"revision": "Commit hash for the specified revision.",
"directory": "Target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.",
@ -848,20 +862,6 @@ func (LimitRangeSpec) SwaggerDoc() map[string]string {
return map_LimitRangeSpec
}
var map_ListOptions = map[string]string{
"": "ListOptions is the query options to a standard REST list call. DEPRECATED: This type has been moved to meta/v1 and will be removed soon.",
"labelSelector": "A selector to restrict the list of returned objects by their labels. Defaults to everything.",
"fieldSelector": "A selector to restrict the list of returned objects by their fields. Defaults to everything.",
"includeUninitialized": "If true, partially initialized resources are included in the response.",
"watch": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
"resourceVersion": "When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.",
"timeoutSeconds": "Timeout for the list/watch call.",
}
func (ListOptions) SwaggerDoc() map[string]string {
return map_ListOptions
}
var map_LoadBalancerIngress = map[string]string{
"": "LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point.",
"ip": "IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers)",
@ -891,8 +891,8 @@ func (LocalObjectReference) SwaggerDoc() map[string]string {
}
var map_LocalVolumeSource = map[string]string{
"": "Local represents directly-attached storage with node affinity",
"path": "The full path to the volume on the node For alpha, this path must be a directory Once block as a source is supported, then this path can point to a block device",
"": "Local represents directly-attached storage with node affinity (Beta feature)",
"path": "The full path to the volume on the node. It can be either a directory or block device (disk, partition, ...). Directories can be represented only by PersistentVolume with VolumeMode=Filesystem. Block devices can be represented only by VolumeMode=Block, which also requires the BlockVolume alpha feature gate to be enabled.",
}
func (LocalVolumeSource) SwaggerDoc() map[string]string {
@ -996,12 +996,25 @@ func (NodeCondition) SwaggerDoc() map[string]string {
var map_NodeConfigSource = map[string]string{
"": "NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil.",
"configMap": "ConfigMap is a reference to a Node's ConfigMap",
}
func (NodeConfigSource) SwaggerDoc() map[string]string {
return map_NodeConfigSource
}
var map_NodeConfigStatus = map[string]string{
"": "NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource.",
"assigned": "Assigned reports the checkpointed config the node will try to use. When Node.Spec.ConfigSource is updated, the node checkpoints the associated config payload to local disk, along with a record indicating intended config. The node refers to this record to choose its config checkpoint, and reports this record in Assigned. Assigned only updates in the status after the record has been checkpointed to disk. When the Kubelet is restarted, it tries to make the Assigned config the Active config by loading and validating the checkpointed payload identified by Assigned.",
"active": "Active reports the checkpointed config the node is actively using. Active will represent either the current version of the Assigned config, or the current LastKnownGood config, depending on whether attempting to use the Assigned config results in an error.",
"lastKnownGood": "LastKnownGood reports the checkpointed config the node will fall back to when it encounters an error attempting to use the Assigned config. The Assigned config becomes the LastKnownGood config when the node determines that the Assigned config is stable and correct. This is currently implemented as a 10-minute soak period starting when the local record of Assigned config is updated. If the Assigned config is Active at the end of this period, it becomes the LastKnownGood. Note that if Spec.ConfigSource is reset to nil (use local defaults), the LastKnownGood is also immediately reset to nil, because the local default config is always assumed good. You should not make assumptions about the node's method of determining config stability and correctness, as this may change or become configurable in the future.",
"error": "Error describes any problems reconciling the Spec.ConfigSource to the Active config. Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting to load or validate the Assigned config, etc. Errors may occur at different points while syncing config. Earlier errors (e.g. download or checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error by fixing the config assigned in Spec.ConfigSource. You can find additional information for debugging by searching the error message in the Kubelet log. Error is a human-readable description of the error state; machines can check whether or not Error is empty, but should not rely on the stability of the Error text across Kubelet versions.",
}
func (NodeConfigStatus) SwaggerDoc() map[string]string {
return map_NodeConfigStatus
}
var map_NodeDaemonEndpoints = map[string]string{
"": "NodeDaemonEndpoints lists ports opened by daemons running on the Node.",
"kubeletEndpoint": "Endpoint on which Kubelet is listening.",
@ -1060,8 +1073,9 @@ func (NodeSelectorRequirement) SwaggerDoc() map[string]string {
}
var map_NodeSelectorTerm = map[string]string{
"": "A null or empty node selector term matches no objects.",
"matchExpressions": "Required. A list of node selector requirements. The requirements are ANDed.",
"": "A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.",
"matchExpressions": "A list of node selector requirements by node's labels.",
"matchFields": "A list of node selector requirements by node's fields.",
}
func (NodeSelectorTerm) SwaggerDoc() map[string]string {
@ -1071,11 +1085,11 @@ func (NodeSelectorTerm) SwaggerDoc() map[string]string {
var map_NodeSpec = map[string]string{
"": "NodeSpec describes the attributes that a node is created with.",
"podCIDR": "PodCIDR represents the pod IP range assigned to the node.",
"externalID": "External ID of the node assigned by some machine database (e.g. a cloud provider). Deprecated.",
"providerID": "ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID>",
"unschedulable": "Unschedulable controls node schedulability of new pods. By default, node is schedulable. More info: https://kubernetes.io/docs/concepts/nodes/node/#manual-node-administration",
"taints": "If specified, the node's taints.",
"configSource": "If specified, the source to get node configuration from The DynamicKubeletConfig feature gate must be enabled for the Kubelet to use this field",
"externalID": "Deprecated. Not all kubelets will set this field. Remove field after 1.13. see: https://issues.k8s.io/61966",
}
func (NodeSpec) SwaggerDoc() map[string]string {
@ -1094,6 +1108,7 @@ var map_NodeStatus = map[string]string{
"images": "List of container images on this node",
"volumesInUse": "List of attachable volumes in use (mounted) by the node.",
"volumesAttached": "List of volumes that are attached to the node.",
"config": "Status of the config assigned to the node via the dynamic Kubelet config feature.",
}
func (NodeStatus) SwaggerDoc() map[string]string {
@ -1128,30 +1143,6 @@ func (ObjectFieldSelector) SwaggerDoc() map[string]string {
return map_ObjectFieldSelector
}
var map_ObjectMeta = map[string]string{
"": "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. DEPRECATED: Use k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta instead - this type will be removed soon.",
"name": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names",
"generateName": "GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency",
"namespace": "Namespace defines the space within each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/",
"selfLink": "SelfLink is a URL representing this object. Populated by the system. Read-only.",
"uid": "UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids",
"resourceVersion": "An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency",
"generation": "A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.",
"creationTimestamp": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
"deletionTimestamp": "DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n\nPopulated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
"deletionGracePeriodSeconds": "Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.",
"labels": "Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"annotations": "Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/",
"ownerReferences": "List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.",
"initializers": "An initializer is a controller which enforces some system invariant at object creation time. This field is a list of initializers that have not yet acted on this object. If nil or empty, this object has been completely initialized. Otherwise, the object is considered uninitialized and is hidden (in list/watch and get calls) from clients that haven't explicitly asked to observe uninitialized objects.\n\nWhen an object is created, the system will populate this list with the current set of initializers. Only privileged users may set or modify this list. Once it is empty, it may not be modified further by any user.",
"finalizers": "Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed.",
"clusterName": "The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request.",
}
func (ObjectMeta) SwaggerDoc() map[string]string {
return map_ObjectMeta
}
var map_ObjectReference = map[string]string{
"": "ObjectReference contains enough information to let you inspect or modify the referred object.",
"kind": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
@ -1381,7 +1372,7 @@ func (PodAttachOptions) SwaggerDoc() map[string]string {
var map_PodCondition = map[string]string{
"": "PodCondition contains details for the current condition of this pod.",
"type": "Type is the type of the condition. Currently only Ready. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions",
"type": "Type is the type of the condition. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions",
"status": "Status is the status of the condition. Can be True, False, Unknown. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions",
"lastProbeTime": "Last time we probed the condition.",
"lastTransitionTime": "Last time the condition transitioned from one status to another.",
@ -1471,6 +1462,15 @@ func (PodProxyOptions) SwaggerDoc() map[string]string {
return map_PodProxyOptions
}
var map_PodReadinessGate = map[string]string{
"": "PodReadinessGate contains the reference to a pod condition",
"conditionType": "ConditionType refers to a condition in the pod's condition list with matching type.",
}
func (PodReadinessGate) SwaggerDoc() map[string]string {
return map_PodReadinessGate
}
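// Example (illustrative sketch): declaring a custom readiness gate on a PodSpec, matching the
// PodReadinessGate documentation above. The condition type "example.com/feature-ready" is a
// placeholder; an external controller would be expected to set a pod condition of this type
// to "True" before the pod is reported ready.
func exampleReadinessGates() []PodReadinessGate {
	return []PodReadinessGate{
		{ConditionType: PodConditionType("example.com/feature-ready")},
	}
}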
var map_PodSecurityContext = map[string]string{
"": "PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.",
"seLinuxOptions": "The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container.",
@ -1479,6 +1479,7 @@ var map_PodSecurityContext = map[string]string{
"runAsNonRoot": "Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.",
"supplementalGroups": "A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container.",
"fsGroup": "A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:\n\n1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw ",
"sysctls": "Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch.",
}
func (PodSecurityContext) SwaggerDoc() map[string]string {
@ -1523,6 +1524,7 @@ var map_PodSpec = map[string]string{
"priorityClassName": "If specified, indicates the pod's priority. \"system-node-critical\" and \"system-cluster-critical\" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default.",
"priority": "The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority.",
"dnsConfig": "Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy.",
"readinessGates": "If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to \"True\" More info: https://github.com/kubernetes/community/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md",
}
func (PodSpec) SwaggerDoc() map[string]string {
@ -1530,8 +1532,8 @@ func (PodSpec) SwaggerDoc() map[string]string {
}
var map_PodStatus = map[string]string{
"": "PodStatus represents information about the status of a pod. Status may trail the actual state of a system.",
"phase": "Current condition of the pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-phase",
"": "PodStatus represents information about the status of a pod. Status may trail the actual state of a system, especially if the node that hosts the pod cannot contact the control plane.",
"phase": "The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The conditions array, the reason and message fields, and the individual container status arrays contain more detail about the pod's status. There are five possible phase values:\n\nPending: The pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while. Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting. Succeeded: All containers in the pod have terminated in success, and will not be restarted. Failed: All containers in the pod have terminated, and at least one container has terminated in failure. The container either exited with non-zero status or was terminated by the system. Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.\n\nMore info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-phase",
"conditions": "Current service state of pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions",
"message": "A human readable message indicating details about why the pod is in this condition.",
"reason": "A brief CamelCase message indicating details about why the pod is in this state. e.g. 'Evicted'",
@ -1803,8 +1805,9 @@ func (ResourceQuotaList) SwaggerDoc() map[string]string {
var map_ResourceQuotaSpec = map[string]string{
"": "ResourceQuotaSpec defines the desired hard limits to enforce for Quota.",
"hard": "Hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/",
"hard": "hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/",
"scopes": "A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects.",
"scopeSelector": "scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched.",
}
func (ResourceQuotaSpec) SwaggerDoc() map[string]string {
@ -1879,6 +1882,26 @@ func (ScaleIOVolumeSource) SwaggerDoc() map[string]string {
return map_ScaleIOVolumeSource
}
var map_ScopeSelector = map[string]string{
"": "A scope selector represents the AND of the selectors represented by the scoped-resource selector requirements.",
"matchExpressions": "A list of scope selector requirements by scope of the resources.",
}
func (ScopeSelector) SwaggerDoc() map[string]string {
return map_ScopeSelector
}
var map_ScopedResourceSelectorRequirement = map[string]string{
"": "A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values.",
"scopeName": "The name of the scope that the selector applies to.",
"operator": "Represents a scope's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist.",
"values": "An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.",
}
func (ScopedResourceSelectorRequirement) SwaggerDoc() map[string]string {
return map_ScopedResourceSelectorRequirement
}
var map_Secret = map[string]string{
"": "Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes.",
"metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
@ -2010,6 +2033,17 @@ func (ServiceAccountList) SwaggerDoc() map[string]string {
return map_ServiceAccountList
}
var map_ServiceAccountTokenProjection = map[string]string{
"": "ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise).",
"audience": "Audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver.",
"expirationSeconds": "ExpirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes.",
"path": "Path is the path relative to the mount point of the file to project the token into.",
}
func (ServiceAccountTokenProjection) SwaggerDoc() map[string]string {
return map_ServiceAccountTokenProjection
}
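// Example (illustrative sketch): a projected volume source requesting a bound service account
// token with a one-hour lifetime, per the ServiceAccountTokenProjection documentation above.
// The audience and path are placeholders.
func exampleServiceAccountTokenProjection() VolumeProjection {
	expiry := int64(3600)
	return VolumeProjection{
		ServiceAccountToken: &ServiceAccountTokenProjection{
			Audience:          "https://example.com",
			ExpirationSeconds: &expiry,
			Path:              "token",
		},
	}
}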
var map_ServiceList = map[string]string{
"": "ServiceList holds a list of services.",
"metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
@ -2055,7 +2089,7 @@ var map_ServiceSpec = map[string]string{
"externalName": "externalName is the external reference that kubedns or equivalent will return as a CNAME record for this service. No proxying will be involved. Must be a valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) and requires Type to be ExternalName.",
"externalTrafficPolicy": "externalTrafficPolicy denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. \"Local\" preserves the client source IP and avoids a second hop for LoadBalancer and Nodeport type services, but risks potentially imbalanced traffic spreading. \"Cluster\" obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading.",
"healthCheckNodePort": "healthCheckNodePort specifies the healthcheck nodePort for the service. If not specified, HealthCheckNodePort is created by the service api backend with the allocated nodePort. Will use user-specified nodePort value if specified by the client. Only effects when Type is set to LoadBalancer and ExternalTrafficPolicy is set to Local.",
"publishNotReadyAddresses": "publishNotReadyAddresses, when set to true, indicates that DNS implementations must publish the notReadyAddresses of subsets for the Endpoints associated with the Service. The default value is false. The primary use case for setting this field is to use a StatefulSet's Headless Service to propagate SRV records for its Pods without respect to their readiness for purpose of peer discovery. This field will replace the service.alpha.kubernetes.io/tolerate-unready-endpoints when that annotation is deprecated and all clients have been converted to use this field.",
"publishNotReadyAddresses": "publishNotReadyAddresses, when set to true, indicates that DNS implementations must publish the notReadyAddresses of subsets for the Endpoints associated with the Service. The default value is false. The primary use case for setting this field is to use a StatefulSet's Headless Service to propagate SRV records for its Pods without respect to their readiness for purpose of peer discovery.",
"sessionAffinityConfig": "sessionAffinityConfig contains the configurations of session affinity.",
}
@ -2109,8 +2143,8 @@ func (StorageOSVolumeSource) SwaggerDoc() map[string]string {
var map_Sysctl = map[string]string{
"": "Sysctl defines a kernel parameter to be set",
"Name": "Name of a property to set",
"Value": "Value of a property to set",
"name": "Name of a property to set",
"value": "Value of a property to set",
}
func (Sysctl) SwaggerDoc() map[string]string {
@ -2152,6 +2186,25 @@ func (Toleration) SwaggerDoc() map[string]string {
return map_Toleration
}
var map_TopologySelectorLabelRequirement = map[string]string{
"": "A topology selector requirement is a selector that matches given label. This is an alpha feature and may change in the future.",
"key": "The label key that the selector applies to.",
"values": "An array of string values. One value must match the label to be selected. Each entry in Values is ORed.",
}
func (TopologySelectorLabelRequirement) SwaggerDoc() map[string]string {
return map_TopologySelectorLabelRequirement
}
var map_TopologySelectorTerm = map[string]string{
"": "A topology selector term represents the result of label queries. A null or empty topology selector term matches no objects. The requirements of them are ANDed. It provides a subset of functionality as NodeSelectorTerm. This is an alpha feature and may change in the future.",
"matchLabelExpressions": "A list of topology selector requirements by labels.",
}
func (TopologySelectorTerm) SwaggerDoc() map[string]string {
return map_TopologySelectorTerm
}
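// Example (illustrative sketch): a TopologySelectorTerm restricting provisioning to two zones,
// matching the alpha topology selector documented above. The label key and zone names are
// placeholders.
func exampleTopologySelectorTerm() TopologySelectorTerm {
	return TopologySelectorTerm{
		MatchLabelExpressions: []TopologySelectorLabelRequirement{
			{
				Key:    "failure-domain.beta.kubernetes.io/zone",
				Values: []string{"us-central1-a", "us-central1-b"},
			},
		},
	}
}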
var map_Volume = map[string]string{
"": "Volume represents a named volume in a pod that may be accessed by any container in the pod.",
"name": "Volume's name. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names",
@ -2198,6 +2251,7 @@ var map_VolumeProjection = map[string]string{
"secret": "information about the secret data to project",
"downwardAPI": "information about the downwardAPI data to project",
"configMap": "information about the configMap data to project",
"serviceAccountToken": "information about the serviceAccountToken data to project",
}
func (VolumeProjection) SwaggerDoc() map[string]string {
@ -2210,7 +2264,7 @@ var map_VolumeSource = map[string]string{
"emptyDir": "EmptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir",
"gcePersistentDisk": "GCEPersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk",
"awsElasticBlockStore": "AWSElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore",
"gitRepo": "GitRepo represents a git repository at a particular revision.",
"gitRepo": "GitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.",
"secret": "Secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret",
"nfs": "NFS represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs",
"iscsi": "ISCSI represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://releases.k8s.io/HEAD/examples/volumes/iscsi/README.md",

View File

@ -380,9 +380,43 @@ func (in *CephFSVolumeSource) DeepCopy() *CephFSVolumeSource {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CinderPersistentVolumeSource) DeepCopyInto(out *CinderPersistentVolumeSource) {
*out = *in
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(SecretReference)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CinderPersistentVolumeSource.
func (in *CinderPersistentVolumeSource) DeepCopy() *CinderPersistentVolumeSource {
if in == nil {
return nil
}
out := new(CinderPersistentVolumeSource)
in.DeepCopyInto(out)
return out
}
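// Example (illustrative, not generated): typical use of these autogenerated helpers. DeepCopy
// returns an independent copy, so mutations cannot leak back into a shared (e.g. cached) object;
// the ReadOnly flip below is a placeholder mutation.
func exampleDeepCopyUsage(shared *CinderPersistentVolumeSource) *CinderPersistentVolumeSource {
	copied := shared.DeepCopy()
	copied.ReadOnly = true
	return copied
}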
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CinderVolumeSource) DeepCopyInto(out *CinderVolumeSource) {
*out = *in
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(LocalObjectReference)
**out = **in
}
}
return
}
@ -631,6 +665,22 @@ func (in *ConfigMapList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ConfigMapNodeConfigSource) DeepCopyInto(out *ConfigMapNodeConfigSource) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ConfigMapNodeConfigSource.
func (in *ConfigMapNodeConfigSource) DeepCopy() *ConfigMapNodeConfigSource {
if in == nil {
return nil
}
out := new(ConfigMapNodeConfigSource)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ConfigMapProjection) DeepCopyInto(out *ConfigMapProjection) {
*out = *in
@ -965,67 +1015,6 @@ func (in *DaemonEndpoint) DeepCopy() *DaemonEndpoint {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeleteOptions) DeepCopyInto(out *DeleteOptions) {
*out = *in
out.TypeMeta = in.TypeMeta
if in.GracePeriodSeconds != nil {
in, out := &in.GracePeriodSeconds, &out.GracePeriodSeconds
if *in == nil {
*out = nil
} else {
*out = new(int64)
**out = **in
}
}
if in.Preconditions != nil {
in, out := &in.Preconditions, &out.Preconditions
if *in == nil {
*out = nil
} else {
*out = new(Preconditions)
(*in).DeepCopyInto(*out)
}
}
if in.OrphanDependents != nil {
in, out := &in.OrphanDependents, &out.OrphanDependents
if *in == nil {
*out = nil
} else {
*out = new(bool)
**out = **in
}
}
if in.PropagationPolicy != nil {
in, out := &in.PropagationPolicy, &out.PropagationPolicy
if *in == nil {
*out = nil
} else {
*out = new(DeletionPropagation)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeleteOptions.
func (in *DeleteOptions) DeepCopy() *DeleteOptions {
if in == nil {
return nil
}
out := new(DeleteOptions)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *DeleteOptions) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DownwardAPIProjection) DeepCopyInto(out *DownwardAPIProjection) {
*out = *in
@ -2141,40 +2130,6 @@ func (in *List) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ListOptions) DeepCopyInto(out *ListOptions) {
*out = *in
out.TypeMeta = in.TypeMeta
if in.TimeoutSeconds != nil {
in, out := &in.TimeoutSeconds, &out.TimeoutSeconds
if *in == nil {
*out = nil
} else {
*out = new(int64)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ListOptions.
func (in *ListOptions) DeepCopy() *ListOptions {
if in == nil {
return nil
}
out := new(ListOptions)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ListOptions) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *LoadBalancerIngress) DeepCopyInto(out *LoadBalancerIngress) {
*out = *in
@ -2455,13 +2410,12 @@ func (in *NodeCondition) DeepCopy() *NodeCondition {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NodeConfigSource) DeepCopyInto(out *NodeConfigSource) {
*out = *in
out.TypeMeta = in.TypeMeta
if in.ConfigMapRef != nil {
in, out := &in.ConfigMapRef, &out.ConfigMapRef
if in.ConfigMap != nil {
in, out := &in.ConfigMap, &out.ConfigMap
if *in == nil {
*out = nil
} else {
*out = new(ObjectReference)
*out = new(ConfigMapNodeConfigSource)
**out = **in
}
}
@ -2478,12 +2432,47 @@ func (in *NodeConfigSource) DeepCopy() *NodeConfigSource {
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *NodeConfigSource) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NodeConfigStatus) DeepCopyInto(out *NodeConfigStatus) {
*out = *in
if in.Assigned != nil {
in, out := &in.Assigned, &out.Assigned
if *in == nil {
*out = nil
} else {
*out = new(NodeConfigSource)
(*in).DeepCopyInto(*out)
}
}
if in.Active != nil {
in, out := &in.Active, &out.Active
if *in == nil {
*out = nil
} else {
*out = new(NodeConfigSource)
(*in).DeepCopyInto(*out)
}
}
if in.LastKnownGood != nil {
in, out := &in.LastKnownGood, &out.LastKnownGood
if *in == nil {
*out = nil
} else {
*out = new(NodeConfigSource)
(*in).DeepCopyInto(*out)
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeConfigStatus.
func (in *NodeConfigStatus) DeepCopy() *NodeConfigStatus {
if in == nil {
return nil
}
out := new(NodeConfigStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
@ -2638,6 +2627,13 @@ func (in *NodeSelectorTerm) DeepCopyInto(out *NodeSelectorTerm) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.MatchFields != nil {
in, out := &in.MatchFields, &out.MatchFields
*out = make([]NodeSelectorRequirement, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
@ -2731,6 +2727,15 @@ func (in *NodeStatus) DeepCopyInto(out *NodeStatus) {
*out = make([]AttachedVolume, len(*in))
copy(*out, *in)
}
if in.Config != nil {
in, out := &in.Config, &out.Config
if *in == nil {
*out = nil
} else {
*out = new(NodeConfigStatus)
(*in).DeepCopyInto(*out)
}
}
return
}
@ -2776,75 +2781,6 @@ func (in *ObjectFieldSelector) DeepCopy() *ObjectFieldSelector {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ObjectMeta) DeepCopyInto(out *ObjectMeta) {
*out = *in
in.CreationTimestamp.DeepCopyInto(&out.CreationTimestamp)
if in.DeletionTimestamp != nil {
in, out := &in.DeletionTimestamp, &out.DeletionTimestamp
if *in == nil {
*out = nil
} else {
*out = (*in).DeepCopy()
}
}
if in.DeletionGracePeriodSeconds != nil {
in, out := &in.DeletionGracePeriodSeconds, &out.DeletionGracePeriodSeconds
if *in == nil {
*out = nil
} else {
*out = new(int64)
**out = **in
}
}
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Annotations != nil {
in, out := &in.Annotations, &out.Annotations
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.OwnerReferences != nil {
in, out := &in.OwnerReferences, &out.OwnerReferences
*out = make([]meta_v1.OwnerReference, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Initializers != nil {
in, out := &in.Initializers, &out.Initializers
if *in == nil {
*out = nil
} else {
*out = new(meta_v1.Initializers)
(*in).DeepCopyInto(*out)
}
}
if in.Finalizers != nil {
in, out := &in.Finalizers, &out.Finalizers
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ObjectMeta.
func (in *ObjectMeta) DeepCopy() *ObjectMeta {
if in == nil {
return nil
}
out := new(ObjectMeta)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ObjectReference) DeepCopyInto(out *ObjectReference) {
*out = *in
@ -3180,8 +3116,8 @@ func (in *PersistentVolumeSource) DeepCopyInto(out *PersistentVolumeSource) {
if *in == nil {
*out = nil
} else {
*out = new(CinderVolumeSource)
**out = **in
*out = new(CinderPersistentVolumeSource)
(*in).DeepCopyInto(*out)
}
}
if in.CephFS != nil {
@ -3813,6 +3749,22 @@ func (in *PodProxyOptions) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodReadinessGate) DeepCopyInto(out *PodReadinessGate) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodReadinessGate.
func (in *PodReadinessGate) DeepCopy() *PodReadinessGate {
if in == nil {
return nil
}
out := new(PodReadinessGate)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodSecurityContext) DeepCopyInto(out *PodSecurityContext) {
*out = *in
@ -3866,6 +3818,11 @@ func (in *PodSecurityContext) DeepCopyInto(out *PodSecurityContext) {
**out = **in
}
}
if in.Sysctls != nil {
in, out := &in.Sysctls, &out.Sysctls
*out = make([]Sysctl, len(*in))
copy(*out, *in)
}
return
}
@ -4026,6 +3983,11 @@ func (in *PodSpec) DeepCopyInto(out *PodSpec) {
(*in).DeepCopyInto(*out)
}
}
if in.ReadinessGates != nil {
in, out := &in.ReadinessGates, &out.ReadinessGates
*out = make([]PodReadinessGate, len(*in))
copy(*out, *in)
}
return
}
@ -4678,6 +4640,15 @@ func (in *ResourceQuotaSpec) DeepCopyInto(out *ResourceQuotaSpec) {
*out = make([]ResourceQuotaScope, len(*in))
copy(*out, *in)
}
if in.ScopeSelector != nil {
in, out := &in.ScopeSelector, &out.ScopeSelector
if *in == nil {
*out = nil
} else {
*out = new(ScopeSelector)
(*in).DeepCopyInto(*out)
}
}
return
}
@ -4817,6 +4788,50 @@ func (in *ScaleIOVolumeSource) DeepCopy() *ScaleIOVolumeSource {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScopeSelector) DeepCopyInto(out *ScopeSelector) {
*out = *in
if in.MatchExpressions != nil {
in, out := &in.MatchExpressions, &out.MatchExpressions
*out = make([]ScopedResourceSelectorRequirement, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScopeSelector.
func (in *ScopeSelector) DeepCopy() *ScopeSelector {
if in == nil {
return nil
}
out := new(ScopeSelector)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScopedResourceSelectorRequirement) DeepCopyInto(out *ScopedResourceSelectorRequirement) {
*out = *in
if in.Values != nil {
in, out := &in.Values, &out.Values
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScopedResourceSelectorRequirement.
func (in *ScopedResourceSelectorRequirement) DeepCopy() *ScopedResourceSelectorRequirement {
if in == nil {
return nil
}
out := new(ScopedResourceSelectorRequirement)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Secret) DeepCopyInto(out *Secret) {
*out = *in
@ -5257,6 +5272,31 @@ func (in *ServiceAccountList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ServiceAccountTokenProjection) DeepCopyInto(out *ServiceAccountTokenProjection) {
*out = *in
if in.ExpirationSeconds != nil {
in, out := &in.ExpirationSeconds, &out.ExpirationSeconds
if *in == nil {
*out = nil
} else {
*out = new(int64)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ServiceAccountTokenProjection.
func (in *ServiceAccountTokenProjection) DeepCopy() *ServiceAccountTokenProjection {
if in == nil {
return nil
}
out := new(ServiceAccountTokenProjection)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ServiceList) DeepCopyInto(out *ServiceList) {
*out = *in
@ -5553,6 +5593,50 @@ func (in *Toleration) DeepCopy() *Toleration {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TopologySelectorLabelRequirement) DeepCopyInto(out *TopologySelectorLabelRequirement) {
*out = *in
if in.Values != nil {
in, out := &in.Values, &out.Values
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TopologySelectorLabelRequirement.
func (in *TopologySelectorLabelRequirement) DeepCopy() *TopologySelectorLabelRequirement {
if in == nil {
return nil
}
out := new(TopologySelectorLabelRequirement)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TopologySelectorTerm) DeepCopyInto(out *TopologySelectorTerm) {
*out = *in
if in.MatchLabelExpressions != nil {
in, out := &in.MatchLabelExpressions, &out.MatchLabelExpressions
*out = make([]TopologySelectorLabelRequirement, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TopologySelectorTerm.
func (in *TopologySelectorTerm) DeepCopy() *TopologySelectorTerm {
if in == nil {
return nil
}
out := new(TopologySelectorTerm)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Volume) DeepCopyInto(out *Volume) {
*out = *in
@ -5666,6 +5750,15 @@ func (in *VolumeProjection) DeepCopyInto(out *VolumeProjection) {
(*in).DeepCopyInto(*out)
}
}
if in.ServiceAccountToken != nil {
in, out := &in.ServiceAccountToken, &out.ServiceAccountToken
if *in == nil {
*out = nil
} else {
*out = new(ServiceAccountTokenProjection)
(*in).DeepCopyInto(*out)
}
}
return
}
@ -5796,7 +5889,7 @@ func (in *VolumeSource) DeepCopyInto(out *VolumeSource) {
*out = nil
} else {
*out = new(CinderVolumeSource)
**out = **in
(*in).DeepCopyInto(*out)
}
}
if in.CephFS != nil {
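The generated DeepCopyInto bodies above allocate a fresh object for every non-nil pointer field (Assigned, Active, LastKnownGood, Config, ...) instead of aliasing it. A minimal sketch of what that buys a caller, assuming the vendored k8s.io/api/core/v1 package for the types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Original status with one pointer field set.
	orig := &corev1.NodeConfigStatus{Assigned: &corev1.NodeConfigSource{}}

	// DeepCopy allocates a new NodeConfigSource for Assigned,
	// so mutating the copy never touches the original.
	cp := orig.DeepCopy()
	fmt.Println(cp.Assigned == orig.Assigned) // false: distinct allocations
}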

View File

@ -25,8 +25,6 @@ import (
"strconv"
"strings"
flag "github.com/spf13/pflag"
inf "gopkg.in/inf.v0"
)
@ -747,43 +745,3 @@ func (q *Quantity) Copy() *Quantity {
Format: q.Format,
}
}
// qFlag is a helper type for the Flag function
type qFlag struct {
dest *Quantity
}
// Sets the value of the internal Quantity. (used by flag & pflag)
func (qf qFlag) Set(val string) error {
q, err := ParseQuantity(val)
if err != nil {
return err
}
// This copy is OK because q will not be referenced again.
*qf.dest = q
return nil
}
// Converts the value of the internal Quantity to a string. (used by flag & pflag)
func (qf qFlag) String() string {
return qf.dest.String()
}
// States the type of flag this is (Quantity). (used by pflag)
func (qf qFlag) Type() string {
return "quantity"
}
// QuantityFlag is a helper that makes a quantity flag (using standard flag package).
// Will panic if defaultValue is not a valid quantity.
func QuantityFlag(flagName, defaultValue, description string) *Quantity {
q := MustParse(defaultValue)
flag.Var(NewQuantityFlagValue(&q), flagName, description)
return &q
}
// NewQuantityFlagValue returns an object that can be used to back a flag,
// pointing at the given Quantity variable.
func NewQuantityFlagValue(q *Quantity) flag.Value {
return qFlag{q}
}
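With qFlag, QuantityFlag and NewQuantityFlagValue removed (they were this package's only pflag dependency), a caller that still wants a Quantity-valued flag can recreate the same adapter locally. A minimal sketch against the standard flag package; the quantityValue type here is hypothetical, not part of the library:

package main

import (
	"flag"
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// quantityValue adapts a resource.Quantity to the standard flag.Value
// interface, mirroring the helper that was removed above.
type quantityValue struct{ q *resource.Quantity }

func (v quantityValue) String() string { return v.q.String() }

func (v quantityValue) Set(s string) error {
	q, err := resource.ParseQuantity(s)
	if err != nil {
		return err
	}
	*v.q = q
	return nil
}

func main() {
	limit := resource.MustParse("500Mi")
	flag.Var(quantityValue{&limit}, "memory-limit", "memory limit as a resource quantity")
	flag.Parse()
	fmt.Println("memory-limit =", limit.String())
}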

View File

@ -75,6 +75,8 @@ func AddConversionFuncs(scheme *runtime.Scheme) error {
Convert_unversioned_LabelSelector_to_map,
Convert_Slice_string_To_Slice_int32,
Convert_Slice_string_To_v1_DeletionPropagation,
)
}
@ -304,3 +306,13 @@ func Convert_Slice_string_To_Slice_int32(in *[]string, out *[]int32, s conversio
}
return nil
}
// Convert_Slice_string_To_v1_DeletionPropagation allows converting a URL query parameter propagationPolicy
func Convert_Slice_string_To_v1_DeletionPropagation(input *[]string, out *DeletionPropagation, s conversion.Scope) error {
if len(*input) > 0 {
*out = DeletionPropagation((*input)[0])
} else {
*out = ""
}
return nil
}
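What the new converter does with the propagationPolicy query parameter: only the first value is used, and the conversion.Scope argument is ignored, so nil is passed in this sketch:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// e.g. DELETE ...?propagationPolicy=Foreground
	params := []string{"Foreground"}

	var policy metav1.DeletionPropagation
	// The scope argument is unused by this converter, so nil is fine here.
	if err := metav1.Convert_Slice_string_To_v1_DeletionPropagation(&params, &policy, nil); err != nil {
		panic(err)
	}
	fmt.Println(policy) // Foreground

	// An empty parameter list yields the empty propagation policy.
	empty := []string{}
	_ = metav1.Convert_Slice_string_To_v1_DeletionPropagation(&empty, &policy, nil)
	fmt.Printf("%q\n", policy) // ""
}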

View File

@ -31,7 +31,10 @@ type Duration struct {
// UnmarshalJSON implements the json.Unmarshaller interface.
func (d *Duration) UnmarshalJSON(b []byte) error {
var str string
json.Unmarshal(b, &str)
err := json.Unmarshal(b, &str)
if err != nil {
return err
}
pd, err := time.ParseDuration(str)
if err != nil {
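With the json.Unmarshal error now returned immediately, a malformed value reports the decode failure itself instead of being masked by the follow-on time.ParseDuration failure on an empty string. A small sketch, assuming the metav1.Duration type defined in this file; the same pattern applies to the MicroTime and Time changes below:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var d metav1.Duration

	// A valid duration string decodes exactly as before.
	if err := json.Unmarshal([]byte(`"1h30m"`), &d); err != nil {
		panic(err)
	}
	fmt.Println(d.Duration) // 1h30m0s

	// A non-string value now surfaces the json error directly.
	err := json.Unmarshal([]byte(`42`), &d)
	fmt.Println(err != nil) // true
}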

View File

@ -49,6 +49,7 @@ message APIGroup {
// The server returns only those CIDRs that it thinks that the client can match.
// For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP.
// Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP.
// +optional
repeated ServerAddressByClientCIDR serverAddressByClientCIDRs = 4;
}

View File

@ -104,7 +104,10 @@ func (t *MicroTime) UnmarshalJSON(b []byte) error {
}
var str string
json.Unmarshal(b, &str)
err := json.Unmarshal(b, &str)
if err != nil {
return err
}
pt, err := time.Parse(RFC3339Micro, str)
if err != nil {

View File

@ -106,7 +106,10 @@ func (t *Time) UnmarshalJSON(b []byte) error {
}
var str string
json.Unmarshal(b, &str)
err := json.Unmarshal(b, &str)
if err != nil {
return err
}
pt, err := time.Parse(time.RFC3339, str)
if err != nil {

View File

@ -799,7 +799,8 @@ type APIGroup struct {
// The server returns only those CIDRs that it thinks that the client can match.
// For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP.
// Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP.
ServerAddressByClientCIDRs []ServerAddressByClientCIDR `json:"serverAddressByClientCIDRs" protobuf:"bytes,4,rep,name=serverAddressByClientCIDRs"`
// +optional
ServerAddressByClientCIDRs []ServerAddressByClientCIDR `json:"serverAddressByClientCIDRs,omitempty" protobuf:"bytes,4,rep,name=serverAddressByClientCIDRs"`
}
// ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.

View File

@ -0,0 +1,451 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package unstructured
import (
gojson "encoding/json"
"fmt"
"io"
"strings"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/json"
)
// NestedFieldCopy returns a deep copy of the value of a nested field.
// Returns false if the value is missing.
// No error is returned for a nil field.
func NestedFieldCopy(obj map[string]interface{}, fields ...string) (interface{}, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
return nil, found, err
}
return runtime.DeepCopyJSONValue(val), true, nil
}
// NestedFieldNoCopy returns a reference to a nested field.
// Returns false if value is not found and an error if unable
// to traverse obj.
func NestedFieldNoCopy(obj map[string]interface{}, fields ...string) (interface{}, bool, error) {
var val interface{} = obj
for i, field := range fields {
if m, ok := val.(map[string]interface{}); ok {
val, ok = m[field]
if !ok {
return nil, false, nil
}
} else {
return nil, false, fmt.Errorf("%v accessor error: %v is of the type %T, expected map[string]interface{}", jsonPath(fields[:i+1]), val, val)
}
}
return val, true, nil
}
// NestedString returns the string value of a nested field.
// Returns false if value is not found and an error if not a string.
func NestedString(obj map[string]interface{}, fields ...string) (string, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
return "", found, err
}
s, ok := val.(string)
if !ok {
return "", false, fmt.Errorf("%v accessor error: %v is of the type %T, expected string", jsonPath(fields), val, val)
}
return s, true, nil
}
// NestedBool returns the bool value of a nested field.
// Returns false if value is not found and an error if not a bool.
func NestedBool(obj map[string]interface{}, fields ...string) (bool, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
return false, found, err
}
b, ok := val.(bool)
if !ok {
return false, false, fmt.Errorf("%v accessor error: %v is of the type %T, expected bool", jsonPath(fields), val, val)
}
return b, true, nil
}
// NestedFloat64 returns the float64 value of a nested field.
// Returns false if value is not found and an error if not a float64.
func NestedFloat64(obj map[string]interface{}, fields ...string) (float64, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
return 0, found, err
}
f, ok := val.(float64)
if !ok {
return 0, false, fmt.Errorf("%v accessor error: %v is of the type %T, expected float64", jsonPath(fields), val, val)
}
return f, true, nil
}
// NestedInt64 returns the int64 value of a nested field.
// Returns false if value is not found and an error if not an int64.
func NestedInt64(obj map[string]interface{}, fields ...string) (int64, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
return 0, found, err
}
i, ok := val.(int64)
if !ok {
return 0, false, fmt.Errorf("%v accessor error: %v is of the type %T, expected int64", jsonPath(fields), val, val)
}
return i, true, nil
}
// NestedStringSlice returns a copy of []string value of a nested field.
// Returns false if value is not found and an error if not a []interface{} or contains non-string items in the slice.
func NestedStringSlice(obj map[string]interface{}, fields ...string) ([]string, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
return nil, found, err
}
m, ok := val.([]interface{})
if !ok {
return nil, false, fmt.Errorf("%v accessor error: %v is of the type %T, expected []interface{}", jsonPath(fields), val, val)
}
strSlice := make([]string, 0, len(m))
for _, v := range m {
if str, ok := v.(string); ok {
strSlice = append(strSlice, str)
} else {
return nil, false, fmt.Errorf("%v accessor error: contains non-string key in the slice: %v is of the type %T, expected string", jsonPath(fields), v, v)
}
}
return strSlice, true, nil
}
// NestedSlice returns a deep copy of []interface{} value of a nested field.
// Returns false if value is not found and an error if not a []interface{}.
func NestedSlice(obj map[string]interface{}, fields ...string) ([]interface{}, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
return nil, found, err
}
_, ok := val.([]interface{})
if !ok {
return nil, false, fmt.Errorf("%v accessor error: %v is of the type %T, expected []interface{}", jsonPath(fields), val, val)
}
return runtime.DeepCopyJSONValue(val).([]interface{}), true, nil
}
// NestedStringMap returns a copy of map[string]string value of a nested field.
// Returns false if value is not found and an error if not a map[string]interface{} or contains non-string values in the map.
func NestedStringMap(obj map[string]interface{}, fields ...string) (map[string]string, bool, error) {
m, found, err := nestedMapNoCopy(obj, fields...)
if !found || err != nil {
return nil, found, err
}
strMap := make(map[string]string, len(m))
for k, v := range m {
if str, ok := v.(string); ok {
strMap[k] = str
} else {
return nil, false, fmt.Errorf("%v accessor error: contains non-string key in the map: %v is of the type %T, expected string", jsonPath(fields), v, v)
}
}
return strMap, true, nil
}
// NestedMap returns a deep copy of map[string]interface{} value of a nested field.
// Returns false if value is not found and an error if not a map[string]interface{}.
func NestedMap(obj map[string]interface{}, fields ...string) (map[string]interface{}, bool, error) {
m, found, err := nestedMapNoCopy(obj, fields...)
if !found || err != nil {
return nil, found, err
}
return runtime.DeepCopyJSON(m), true, nil
}
// nestedMapNoCopy returns a map[string]interface{} value of a nested field.
// Returns false if value is not found and an error if not a map[string]interface{}.
func nestedMapNoCopy(obj map[string]interface{}, fields ...string) (map[string]interface{}, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
return nil, found, err
}
m, ok := val.(map[string]interface{})
if !ok {
return nil, false, fmt.Errorf("%v accessor error: %v is of the type %T, expected map[string]interface{}", jsonPath(fields), val, val)
}
return m, true, nil
}
// SetNestedField sets the value of a nested field to a deep copy of the value provided.
// Returns an error if value cannot be set because one of the nesting levels is not a map[string]interface{}.
func SetNestedField(obj map[string]interface{}, value interface{}, fields ...string) error {
return setNestedFieldNoCopy(obj, runtime.DeepCopyJSONValue(value), fields...)
}
func setNestedFieldNoCopy(obj map[string]interface{}, value interface{}, fields ...string) error {
m := obj
for i, field := range fields[:len(fields)-1] {
if val, ok := m[field]; ok {
if valMap, ok := val.(map[string]interface{}); ok {
m = valMap
} else {
return fmt.Errorf("value cannot be set because %v is not a map[string]interface{}", jsonPath(fields[:i+1]))
}
} else {
newVal := make(map[string]interface{})
m[field] = newVal
m = newVal
}
}
m[fields[len(fields)-1]] = value
return nil
}
// SetNestedStringSlice sets the string slice value of a nested field.
// Returns an error if value cannot be set because one of the nesting levels is not a map[string]interface{}.
func SetNestedStringSlice(obj map[string]interface{}, value []string, fields ...string) error {
m := make([]interface{}, 0, len(value)) // convert []string into []interface{}
for _, v := range value {
m = append(m, v)
}
return setNestedFieldNoCopy(obj, m, fields...)
}
// SetNestedSlice sets the slice value of a nested field.
// Returns an error if value cannot be set because one of the nesting levels is not a map[string]interface{}.
func SetNestedSlice(obj map[string]interface{}, value []interface{}, fields ...string) error {
return SetNestedField(obj, value, fields...)
}
// SetNestedStringMap sets the map[string]string value of a nested field.
// Returns an error if value cannot be set because one of the nesting levels is not a map[string]interface{}.
func SetNestedStringMap(obj map[string]interface{}, value map[string]string, fields ...string) error {
m := make(map[string]interface{}, len(value)) // convert map[string]string into map[string]interface{}
for k, v := range value {
m[k] = v
}
return setNestedFieldNoCopy(obj, m, fields...)
}
// SetNestedMap sets the map[string]interface{} value of a nested field.
// Returns an error if value cannot be set because one of the nesting levels is not a map[string]interface{}.
func SetNestedMap(obj map[string]interface{}, value map[string]interface{}, fields ...string) error {
return SetNestedField(obj, value, fields...)
}
// RemoveNestedField removes the nested field from the obj.
func RemoveNestedField(obj map[string]interface{}, fields ...string) {
m := obj
for _, field := range fields[:len(fields)-1] {
if x, ok := m[field].(map[string]interface{}); ok {
m = x
} else {
return
}
}
delete(m, fields[len(fields)-1])
}
func getNestedString(obj map[string]interface{}, fields ...string) string {
val, found, err := NestedString(obj, fields...)
if !found || err != nil {
return ""
}
return val
}
func jsonPath(fields []string) string {
return "." + strings.Join(fields, ".")
}
func extractOwnerReference(v map[string]interface{}) metav1.OwnerReference {
// This field is a *bool, but when decoded from JSON it is
// unmarshalled as a bool.
var controllerPtr *bool
if controller, found, err := NestedBool(v, "controller"); err == nil && found {
controllerPtr = &controller
}
var blockOwnerDeletionPtr *bool
if blockOwnerDeletion, found, err := NestedBool(v, "blockOwnerDeletion"); err == nil && found {
blockOwnerDeletionPtr = &blockOwnerDeletion
}
return metav1.OwnerReference{
Kind: getNestedString(v, "kind"),
Name: getNestedString(v, "name"),
APIVersion: getNestedString(v, "apiVersion"),
UID: types.UID(getNestedString(v, "uid")),
Controller: controllerPtr,
BlockOwnerDeletion: blockOwnerDeletionPtr,
}
}
// UnstructuredJSONScheme is capable of converting JSON data into the Unstructured
// type, which can be used for generic access to objects without a predefined scheme.
// TODO: move into serializer/json.
var UnstructuredJSONScheme runtime.Codec = unstructuredJSONScheme{}
type unstructuredJSONScheme struct{}
func (s unstructuredJSONScheme) Decode(data []byte, _ *schema.GroupVersionKind, obj runtime.Object) (runtime.Object, *schema.GroupVersionKind, error) {
var err error
if obj != nil {
err = s.decodeInto(data, obj)
} else {
obj, err = s.decode(data)
}
if err != nil {
return nil, nil, err
}
gvk := obj.GetObjectKind().GroupVersionKind()
if len(gvk.Kind) == 0 {
return nil, &gvk, runtime.NewMissingKindErr(string(data))
}
return obj, &gvk, nil
}
func (unstructuredJSONScheme) Encode(obj runtime.Object, w io.Writer) error {
switch t := obj.(type) {
case *Unstructured:
return json.NewEncoder(w).Encode(t.Object)
case *UnstructuredList:
items := make([]interface{}, 0, len(t.Items))
for _, i := range t.Items {
items = append(items, i.Object)
}
listObj := make(map[string]interface{}, len(t.Object)+1)
for k, v := range t.Object { // Make a shallow copy
listObj[k] = v
}
listObj["items"] = items
return json.NewEncoder(w).Encode(listObj)
case *runtime.Unknown:
// TODO: Unstructured needs to deal with ContentType.
_, err := w.Write(t.Raw)
return err
default:
return json.NewEncoder(w).Encode(t)
}
}
func (s unstructuredJSONScheme) decode(data []byte) (runtime.Object, error) {
type detector struct {
Items gojson.RawMessage
}
var det detector
if err := json.Unmarshal(data, &det); err != nil {
return nil, err
}
if det.Items != nil {
list := &UnstructuredList{}
err := s.decodeToList(data, list)
return list, err
}
// No Items field, so it wasn't a list.
unstruct := &Unstructured{}
err := s.decodeToUnstructured(data, unstruct)
return unstruct, err
}
func (s unstructuredJSONScheme) decodeInto(data []byte, obj runtime.Object) error {
switch x := obj.(type) {
case *Unstructured:
return s.decodeToUnstructured(data, x)
case *UnstructuredList:
return s.decodeToList(data, x)
case *runtime.VersionedObjects:
o, err := s.decode(data)
if err == nil {
x.Objects = []runtime.Object{o}
}
return err
default:
return json.Unmarshal(data, x)
}
}
func (unstructuredJSONScheme) decodeToUnstructured(data []byte, unstruct *Unstructured) error {
m := make(map[string]interface{})
if err := json.Unmarshal(data, &m); err != nil {
return err
}
unstruct.Object = m
return nil
}
func (s unstructuredJSONScheme) decodeToList(data []byte, list *UnstructuredList) error {
type decodeList struct {
Items []gojson.RawMessage
}
var dList decodeList
if err := json.Unmarshal(data, &dList); err != nil {
return err
}
if err := json.Unmarshal(data, &list.Object); err != nil {
return err
}
// For typed lists, e.g. a PodList, the API server doesn't set each item's
// APIVersion and Kind, so we set them here.
listAPIVersion := list.GetAPIVersion()
listKind := list.GetKind()
itemKind := strings.TrimSuffix(listKind, "List")
delete(list.Object, "items")
list.Items = make([]Unstructured, 0, len(dList.Items))
for _, i := range dList.Items {
unstruct := &Unstructured{}
if err := s.decodeToUnstructured([]byte(i), unstruct); err != nil {
return err
}
// This is hacky. Set the item's Kind and APIVersion to those inferred
// from the List.
if len(unstruct.GetKind()) == 0 && len(unstruct.GetAPIVersion()) == 0 {
unstruct.SetKind(itemKind)
unstruct.SetAPIVersion(listAPIVersion)
}
list.Items = append(list.Items, *unstruct)
}
return nil
}
type JSONFallbackEncoder struct {
runtime.Encoder
}
func (c JSONFallbackEncoder) Encode(obj runtime.Object, w io.Writer) error {
err := c.Encoder.Encode(obj, w)
if runtime.IsNotRegisteredError(err) {
switch obj.(type) {
case *Unstructured, *UnstructuredList:
return UnstructuredJSONScheme.Encode(obj, w)
}
}
return err
}
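Taken together, the nested accessors above give typed, error-reporting access into a plain map[string]interface{}. A small usage sketch against a hand-built object (the field names are illustrative only):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	obj := map[string]interface{}{
		"metadata": map[string]interface{}{
			"name":   "demo",
			"labels": map[string]interface{}{"app": "web"},
		},
	}

	// Typed read of a nested string; found reports whether the path existed.
	name, found, err := unstructured.NestedString(obj, "metadata", "name")
	fmt.Println(name, found, err) // demo true <nil>

	// SetNestedField stores a deep copy of the value, creating intermediate maps.
	if err := unstructured.SetNestedField(obj, int64(3), "spec", "replicas"); err != nil {
		panic(err)
	}

	// A type mismatch is reported as an error rather than a silent miss.
	_, _, err = unstructured.NestedInt64(obj, "metadata", "name")
	fmt.Println(err != nil) // true

	// RemoveNestedField deletes the leaf and is a no-op for missing paths.
	unstructured.RemoveNestedField(obj, "metadata", "labels", "app")
}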

View File

@ -0,0 +1,399 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package unstructured
import (
"bytes"
"errors"
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)
// Unstructured allows objects that do not have Golang structs registered to be manipulated
// generically. This can be used to deal with the API objects from a plug-in. Unstructured
// objects still have functioning TypeMeta features-- kind, version, etc.
//
// WARNING: This object has accessors for the v1 standard metadata. You *MUST NOT* use this
// type if you are dealing with objects that are not in the server meta v1 schema.
//
// TODO: make the serialization part of this type distinct from the field accessors.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +k8s:deepcopy-gen=true
type Unstructured struct {
// Object is a JSON compatible map with string, float, int, bool, []interface{}, or
// map[string]interface{}
// children.
Object map[string]interface{}
}
var _ metav1.Object = &Unstructured{}
var _ runtime.Unstructured = &Unstructured{}
func (obj *Unstructured) GetObjectKind() schema.ObjectKind { return obj }
func (obj *Unstructured) IsList() bool {
field, ok := obj.Object["items"]
if !ok {
return false
}
_, ok = field.([]interface{})
return ok
}
func (obj *Unstructured) ToList() (*UnstructuredList, error) {
if !obj.IsList() {
// return an empty list back
return &UnstructuredList{Object: obj.Object}, nil
}
ret := &UnstructuredList{}
ret.Object = obj.Object
err := obj.EachListItem(func(item runtime.Object) error {
castItem := item.(*Unstructured)
ret.Items = append(ret.Items, *castItem)
return nil
})
if err != nil {
return nil, err
}
return ret, nil
}
func (obj *Unstructured) EachListItem(fn func(runtime.Object) error) error {
field, ok := obj.Object["items"]
if !ok {
return errors.New("content is not a list")
}
items, ok := field.([]interface{})
if !ok {
return fmt.Errorf("content is not a list: %T", field)
}
for _, item := range items {
child, ok := item.(map[string]interface{})
if !ok {
return fmt.Errorf("items member is not an object: %T", child)
}
if err := fn(&Unstructured{Object: child}); err != nil {
return err
}
}
return nil
}
func (obj *Unstructured) UnstructuredContent() map[string]interface{} {
if obj.Object == nil {
return make(map[string]interface{})
}
return obj.Object
}
func (obj *Unstructured) SetUnstructuredContent(content map[string]interface{}) {
obj.Object = content
}
// MarshalJSON ensures that the unstructured object produces proper
// JSON when passed to Go's standard JSON library.
func (u *Unstructured) MarshalJSON() ([]byte, error) {
var buf bytes.Buffer
err := UnstructuredJSONScheme.Encode(u, &buf)
return buf.Bytes(), err
}
// UnmarshalJSON ensures that the unstructured object properly decodes
// JSON when passed to Go's standard JSON library.
func (u *Unstructured) UnmarshalJSON(b []byte) error {
_, _, err := UnstructuredJSONScheme.Decode(b, nil, u)
return err
}
func (in *Unstructured) DeepCopy() *Unstructured {
if in == nil {
return nil
}
out := new(Unstructured)
*out = *in
out.Object = runtime.DeepCopyJSON(in.Object)
return out
}
func (u *Unstructured) setNestedField(value interface{}, fields ...string) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
SetNestedField(u.Object, value, fields...)
}
func (u *Unstructured) setNestedSlice(value []string, fields ...string) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
SetNestedStringSlice(u.Object, value, fields...)
}
func (u *Unstructured) setNestedMap(value map[string]string, fields ...string) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
SetNestedStringMap(u.Object, value, fields...)
}
func (u *Unstructured) GetOwnerReferences() []metav1.OwnerReference {
field, found, err := NestedFieldNoCopy(u.Object, "metadata", "ownerReferences")
if !found || err != nil {
return nil
}
original, ok := field.([]interface{})
if !ok {
return nil
}
ret := make([]metav1.OwnerReference, 0, len(original))
for _, obj := range original {
o, ok := obj.(map[string]interface{})
if !ok {
// expected map[string]interface{}, got something else
return nil
}
ret = append(ret, extractOwnerReference(o))
}
return ret
}
func (u *Unstructured) SetOwnerReferences(references []metav1.OwnerReference) {
newReferences := make([]interface{}, 0, len(references))
for _, reference := range references {
out, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&reference)
if err != nil {
utilruntime.HandleError(fmt.Errorf("unable to convert Owner Reference: %v", err))
continue
}
newReferences = append(newReferences, out)
}
u.setNestedField(newReferences, "metadata", "ownerReferences")
}
func (u *Unstructured) GetAPIVersion() string {
return getNestedString(u.Object, "apiVersion")
}
func (u *Unstructured) SetAPIVersion(version string) {
u.setNestedField(version, "apiVersion")
}
func (u *Unstructured) GetKind() string {
return getNestedString(u.Object, "kind")
}
func (u *Unstructured) SetKind(kind string) {
u.setNestedField(kind, "kind")
}
func (u *Unstructured) GetNamespace() string {
return getNestedString(u.Object, "metadata", "namespace")
}
func (u *Unstructured) SetNamespace(namespace string) {
u.setNestedField(namespace, "metadata", "namespace")
}
func (u *Unstructured) GetName() string {
return getNestedString(u.Object, "metadata", "name")
}
func (u *Unstructured) SetName(name string) {
u.setNestedField(name, "metadata", "name")
}
func (u *Unstructured) GetGenerateName() string {
return getNestedString(u.Object, "metadata", "generateName")
}
func (u *Unstructured) SetGenerateName(name string) {
u.setNestedField(name, "metadata", "generateName")
}
func (u *Unstructured) GetUID() types.UID {
return types.UID(getNestedString(u.Object, "metadata", "uid"))
}
func (u *Unstructured) SetUID(uid types.UID) {
u.setNestedField(string(uid), "metadata", "uid")
}
func (u *Unstructured) GetResourceVersion() string {
return getNestedString(u.Object, "metadata", "resourceVersion")
}
func (u *Unstructured) SetResourceVersion(version string) {
u.setNestedField(version, "metadata", "resourceVersion")
}
func (u *Unstructured) GetGeneration() int64 {
val, found, err := NestedInt64(u.Object, "metadata", "generation")
if !found || err != nil {
return 0
}
return val
}
func (u *Unstructured) SetGeneration(generation int64) {
u.setNestedField(generation, "metadata", "generation")
}
func (u *Unstructured) GetSelfLink() string {
return getNestedString(u.Object, "metadata", "selfLink")
}
func (u *Unstructured) SetSelfLink(selfLink string) {
u.setNestedField(selfLink, "metadata", "selfLink")
}
func (u *Unstructured) GetContinue() string {
return getNestedString(u.Object, "metadata", "continue")
}
func (u *Unstructured) SetContinue(c string) {
u.setNestedField(c, "metadata", "continue")
}
func (u *Unstructured) GetCreationTimestamp() metav1.Time {
var timestamp metav1.Time
timestamp.UnmarshalQueryParameter(getNestedString(u.Object, "metadata", "creationTimestamp"))
return timestamp
}
func (u *Unstructured) SetCreationTimestamp(timestamp metav1.Time) {
ts, _ := timestamp.MarshalQueryParameter()
if len(ts) == 0 || timestamp.Time.IsZero() {
RemoveNestedField(u.Object, "metadata", "creationTimestamp")
return
}
u.setNestedField(ts, "metadata", "creationTimestamp")
}
func (u *Unstructured) GetDeletionTimestamp() *metav1.Time {
var timestamp metav1.Time
timestamp.UnmarshalQueryParameter(getNestedString(u.Object, "metadata", "deletionTimestamp"))
if timestamp.IsZero() {
return nil
}
return &timestamp
}
func (u *Unstructured) SetDeletionTimestamp(timestamp *metav1.Time) {
if timestamp == nil {
RemoveNestedField(u.Object, "metadata", "deletionTimestamp")
return
}
ts, _ := timestamp.MarshalQueryParameter()
u.setNestedField(ts, "metadata", "deletionTimestamp")
}
func (u *Unstructured) GetDeletionGracePeriodSeconds() *int64 {
val, found, err := NestedInt64(u.Object, "metadata", "deletionGracePeriodSeconds")
if !found || err != nil {
return nil
}
return &val
}
func (u *Unstructured) SetDeletionGracePeriodSeconds(deletionGracePeriodSeconds *int64) {
if deletionGracePeriodSeconds == nil {
RemoveNestedField(u.Object, "metadata", "deletionGracePeriodSeconds")
return
}
u.setNestedField(*deletionGracePeriodSeconds, "metadata", "deletionGracePeriodSeconds")
}
func (u *Unstructured) GetLabels() map[string]string {
m, _, _ := NestedStringMap(u.Object, "metadata", "labels")
return m
}
func (u *Unstructured) SetLabels(labels map[string]string) {
u.setNestedMap(labels, "metadata", "labels")
}
func (u *Unstructured) GetAnnotations() map[string]string {
m, _, _ := NestedStringMap(u.Object, "metadata", "annotations")
return m
}
func (u *Unstructured) SetAnnotations(annotations map[string]string) {
u.setNestedMap(annotations, "metadata", "annotations")
}
func (u *Unstructured) SetGroupVersionKind(gvk schema.GroupVersionKind) {
u.SetAPIVersion(gvk.GroupVersion().String())
u.SetKind(gvk.Kind)
}
func (u *Unstructured) GroupVersionKind() schema.GroupVersionKind {
gv, err := schema.ParseGroupVersion(u.GetAPIVersion())
if err != nil {
return schema.GroupVersionKind{}
}
gvk := gv.WithKind(u.GetKind())
return gvk
}
func (u *Unstructured) GetInitializers() *metav1.Initializers {
m, found, err := nestedMapNoCopy(u.Object, "metadata", "initializers")
if !found || err != nil {
return nil
}
out := &metav1.Initializers{}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(m, out); err != nil {
utilruntime.HandleError(fmt.Errorf("unable to retrieve initializers for object: %v", err))
return nil
}
return out
}
func (u *Unstructured) SetInitializers(initializers *metav1.Initializers) {
if initializers == nil {
RemoveNestedField(u.Object, "metadata", "initializers")
return
}
out, err := runtime.DefaultUnstructuredConverter.ToUnstructured(initializers)
if err != nil {
utilruntime.HandleError(fmt.Errorf("unable to retrieve initializers for object: %v", err))
}
u.setNestedField(out, "metadata", "initializers")
}
func (u *Unstructured) GetFinalizers() []string {
val, _, _ := NestedStringSlice(u.Object, "metadata", "finalizers")
return val
}
func (u *Unstructured) SetFinalizers(finalizers []string) {
u.setNestedSlice(finalizers, "metadata", "finalizers")
}
func (u *Unstructured) GetClusterName() string {
return getNestedString(u.Object, "metadata", "clusterName")
}
func (u *Unstructured) SetClusterName(clusterName string) {
u.setNestedField(clusterName, "metadata", "clusterName")
}
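All of the metadata accessors above route through the nested-field helpers, so an Unstructured behaves like a typed object for standard metadata. A brief sketch; the apps/v1 Deployment GVK is only an example value:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	u := &unstructured.Unstructured{}
	u.SetGroupVersionKind(schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"})
	u.SetNamespace("default")
	u.SetName("web")
	u.SetLabels(map[string]string{"app": "web"})

	fmt.Println(u.GetAPIVersion(), u.GetKind()) // apps/v1 Deployment
	fmt.Println(u.GetNamespace(), u.GetName())  // default web
	fmt.Println(u.GetLabels()["app"])           // web

	// DeepCopy clones the backing map, so edits to the copy do not leak back.
	cp := u.DeepCopy()
	cp.SetName("web-copy")
	fmt.Println(u.GetName(), cp.GetName()) // web web-copy
}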

View File

@ -0,0 +1,188 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package unstructured
import (
"bytes"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
var _ runtime.Unstructured = &UnstructuredList{}
var _ metav1.ListInterface = &UnstructuredList{}
// UnstructuredList allows lists that do not have Golang structs
// registered to be manipulated generically. This can be used to deal
// with the API lists from a plug-in.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +k8s:deepcopy-gen=true
type UnstructuredList struct {
Object map[string]interface{}
// Items is a list of unstructured objects.
Items []Unstructured `json:"items"`
}
func (u *UnstructuredList) GetObjectKind() schema.ObjectKind { return u }
func (u *UnstructuredList) IsList() bool { return true }
func (u *UnstructuredList) EachListItem(fn func(runtime.Object) error) error {
for i := range u.Items {
if err := fn(&u.Items[i]); err != nil {
return err
}
}
return nil
}
// UnstructuredContent returns a map containing an overlay of the Items field onto
// the Object field. Items always overwrites the overlay.
func (u *UnstructuredList) UnstructuredContent() map[string]interface{} {
out := make(map[string]interface{}, len(u.Object)+1)
// shallow copy every property
for k, v := range u.Object {
out[k] = v
}
items := make([]interface{}, len(u.Items))
for i, item := range u.Items {
items[i] = item.UnstructuredContent()
}
out["items"] = items
return out
}
// SetUnstructuredContent obeys the conventions of List and keeps Items and the items
// array in sync. If items is not an array of objects in the incoming map, then any
// mismatched item will be removed.
func (obj *UnstructuredList) SetUnstructuredContent(content map[string]interface{}) {
obj.Object = content
if content == nil {
obj.Items = nil
return
}
items, ok := obj.Object["items"].([]interface{})
if !ok || items == nil {
items = []interface{}{}
}
unstructuredItems := make([]Unstructured, 0, len(items))
newItems := make([]interface{}, 0, len(items))
for _, item := range items {
o, ok := item.(map[string]interface{})
if !ok {
continue
}
unstructuredItems = append(unstructuredItems, Unstructured{Object: o})
newItems = append(newItems, o)
}
obj.Items = unstructuredItems
obj.Object["items"] = newItems
}
func (u *UnstructuredList) DeepCopy() *UnstructuredList {
if u == nil {
return nil
}
out := new(UnstructuredList)
*out = *u
out.Object = runtime.DeepCopyJSON(u.Object)
out.Items = make([]Unstructured, len(u.Items))
for i := range u.Items {
u.Items[i].DeepCopyInto(&out.Items[i])
}
return out
}
// MarshalJSON ensures that the unstructured list object produces proper
// JSON when passed to Go's standard JSON library.
func (u *UnstructuredList) MarshalJSON() ([]byte, error) {
var buf bytes.Buffer
err := UnstructuredJSONScheme.Encode(u, &buf)
return buf.Bytes(), err
}
// UnmarshalJSON ensures that the unstructured list object properly
// decodes JSON when passed to Go's standard JSON library.
func (u *UnstructuredList) UnmarshalJSON(b []byte) error {
_, _, err := UnstructuredJSONScheme.Decode(b, nil, u)
return err
}
func (u *UnstructuredList) GetAPIVersion() string {
return getNestedString(u.Object, "apiVersion")
}
func (u *UnstructuredList) SetAPIVersion(version string) {
u.setNestedField(version, "apiVersion")
}
func (u *UnstructuredList) GetKind() string {
return getNestedString(u.Object, "kind")
}
func (u *UnstructuredList) SetKind(kind string) {
u.setNestedField(kind, "kind")
}
func (u *UnstructuredList) GetResourceVersion() string {
return getNestedString(u.Object, "metadata", "resourceVersion")
}
func (u *UnstructuredList) SetResourceVersion(version string) {
u.setNestedField(version, "metadata", "resourceVersion")
}
func (u *UnstructuredList) GetSelfLink() string {
return getNestedString(u.Object, "metadata", "selfLink")
}
func (u *UnstructuredList) SetSelfLink(selfLink string) {
u.setNestedField(selfLink, "metadata", "selfLink")
}
func (u *UnstructuredList) GetContinue() string {
return getNestedString(u.Object, "metadata", "continue")
}
func (u *UnstructuredList) SetContinue(c string) {
u.setNestedField(c, "metadata", "continue")
}
func (u *UnstructuredList) SetGroupVersionKind(gvk schema.GroupVersionKind) {
u.SetAPIVersion(gvk.GroupVersion().String())
u.SetKind(gvk.Kind)
}
func (u *UnstructuredList) GroupVersionKind() schema.GroupVersionKind {
gv, err := schema.ParseGroupVersion(u.GetAPIVersion())
if err != nil {
return schema.GroupVersionKind{}
}
gvk := gv.WithKind(u.GetKind())
return gvk
}
func (u *UnstructuredList) setNestedField(value interface{}, fields ...string) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
SetNestedField(u.Object, value, fields...)
}
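A sketch of round-tripping a list through UnmarshalJSON and EachListItem; per the decoder in helpers.go, items that carry no kind/apiVersion inherit them from the list. The PodList document is hand-written test data:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	data := []byte(`{
		"apiVersion": "v1",
		"kind": "PodList",
		"items": [
			{"metadata": {"name": "a"}},
			{"metadata": {"name": "b"}}
		]
	}`)

	list := &unstructured.UnstructuredList{}
	if err := list.UnmarshalJSON(data); err != nil {
		panic(err)
	}

	// Each item inherited kind "Pod" and apiVersion "v1" from the list.
	_ = list.EachListItem(func(obj runtime.Object) error {
		u := obj.(*unstructured.Unstructured)
		fmt.Println(u.GetKind(), u.GetAPIVersion(), u.GetName())
		return nil
	})
}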

View File

@ -0,0 +1,55 @@
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by deepcopy-gen. DO NOT EDIT.
package unstructured
import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Unstructured) DeepCopyInto(out *Unstructured) {
clone := in.DeepCopy()
*out = *clone
return
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Unstructured) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *UnstructuredList) DeepCopyInto(out *UnstructuredList) {
clone := in.DeepCopy()
*out = *clone
return
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *UnstructuredList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}

View File

@ -16,6 +16,8 @@ limitations under the License.
package v1beta1
import "k8s.io/apimachinery/pkg/runtime"
func (in *TableRow) DeepCopy() *TableRow {
if in == nil {
return nil
@ -26,7 +28,7 @@ func (in *TableRow) DeepCopy() *TableRow {
if in.Cells != nil {
out.Cells = make([]interface{}, len(in.Cells))
for i := range in.Cells {
out.Cells[i] = deepCopyJSON(in.Cells[i])
out.Cells[i] = runtime.DeepCopyJSONValue(in.Cells[i])
}
}
@ -40,22 +42,3 @@ func (in *TableRow) DeepCopy() *TableRow {
in.Object.DeepCopyInto(&out.Object)
return out
}
func deepCopyJSON(x interface{}) interface{} {
switch x := x.(type) {
case map[string]interface{}:
clone := make(map[string]interface{}, len(x))
for k, v := range x {
clone[k] = deepCopyJSON(v)
}
return clone
case []interface{}:
clone := make([]interface{}, len(x))
for i := range x {
clone[i] = deepCopyJSON(x[i])
}
return clone
default:
return x
}
}

View File

@ -66,8 +66,8 @@ type TableColumnDefinition struct {
// TableRow is an individual row in a table.
// +protobuf=false
type TableRow struct {
// cells will be as wide as headers and may contain strings, numbers, booleans, simple maps, or lists, or
// null. See the type field of the column definition for a more detailed description.
// cells will be as wide as headers and may contain strings, numbers (float64 or int64), booleans, simple
// maps, or lists, or null. See the type field of the column definition for a more detailed description.
Cells []interface{} `json:"cells"`
// conditions describe additional status of a row that are relevant for a human user.
// +optional

View File

@ -80,7 +80,7 @@ func (TableOptions) SwaggerDoc() map[string]string {
var map_TableRow = map[string]string{
"": "TableRow is an individual row in a table.",
"cells": "cells will be as wide as headers and may contain strings, numbers, booleans, simple maps, or lists, or null. See the type field of the column definition for a more detailed description.",
"cells": "cells will be as wide as headers and may contain strings, numbers (float64 or int64), booleans, simple maps, or lists, or null. See the type field of the column definition for a more detailed description.",
"conditions": "conditions describe additional status of a row that are relevant for a human user.",
"object": "This field contains the requested additional information about each object based on the includeObject policy when requesting the Table. If \"None\", this field is empty, if \"Object\" this will be the default serialization of the object for the current API version, and if \"Metadata\" (the default) will contain the object metadata. Check the returned kind and apiVersion of the object before parsing.",
}

View File

@ -55,6 +55,21 @@ type Selector interface {
DeepCopySelector() Selector
}
type nothingSelector struct{}
func (n nothingSelector) Matches(_ Fields) bool { return false }
func (n nothingSelector) Empty() bool { return false }
func (n nothingSelector) String() string { return "" }
func (n nothingSelector) Requirements() Requirements { return nil }
func (n nothingSelector) DeepCopySelector() Selector { return n }
func (n nothingSelector) RequiresExactMatch(field string) (value string, found bool) { return "", false }
func (n nothingSelector) Transform(fn TransformFunc) (Selector, error) { return n, nil }
// Nothing returns a selector that matches no fields
func Nothing() Selector {
return nothingSelector{}
}
// Everything returns a selector that matches all fields.
func Everything() Selector {
return andTerm{}
@ -449,6 +464,12 @@ func OneTermEqualSelector(k, v string) Selector {
return &hasTerm{field: k, value: v}
}
// OneTermNotEqualSelector returns an object that matches objects where the named field does not equal the given value.
// Cannot return an error.
func OneTermNotEqualSelector(k, v string) Selector {
return &notHasTerm{field: k, value: v}
}
// AndSelectors creates a selector that is the logical AND of all the given selectors
func AndSelectors(selectors ...Selector) Selector {
return andTerm(selectors)
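A short sketch of the two additions: Nothing() matches no object, and OneTermNotEqualSelector matches exactly when the named field differs from the given value. fields.Set below is the package's existing map-backed Fields implementation:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	pod := fields.Set{"metadata.name": "web-1", "status.phase": "Running"}

	// Nothing() is a selector that matches no object at all.
	fmt.Println(fields.Nothing().Matches(pod)) // false

	// OneTermNotEqualSelector matches when the field value differs.
	fmt.Println(fields.OneTermNotEqualSelector("metadata.name", "web-2").Matches(pod)) // true
	fmt.Println(fields.OneTermNotEqualSelector("metadata.name", "web-1").Matches(pod)) // false
}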

View File

@ -73,7 +73,6 @@ var (
mapStringInterfaceType = reflect.TypeOf(map[string]interface{}{})
stringType = reflect.TypeOf(string(""))
int64Type = reflect.TypeOf(int64(0))
uint64Type = reflect.TypeOf(uint64(0))
float64Type = reflect.TypeOf(float64(0))
boolType = reflect.TypeOf(bool(false))
fieldCache = newFieldsCache()
@ -411,8 +410,7 @@ func (c *unstructuredConverter) ToUnstructured(obj interface{}) (map[string]inte
var u map[string]interface{}
var err error
if unstr, ok := obj.(Unstructured); ok {
// UnstructuredContent() mutates the object so we need to make a copy first
u = unstr.DeepCopyObject().(Unstructured).UnstructuredContent()
u = unstr.UnstructuredContent()
} else {
t := reflect.TypeOf(obj)
value := reflect.ValueOf(obj)
@ -439,22 +437,32 @@ func (c *unstructuredConverter) ToUnstructured(obj interface{}) (map[string]inte
}
// DeepCopyJSON deep copies the passed value, assuming it is a valid JSON representation i.e. only contains
// types produced by json.Unmarshal().
// types produced by json.Unmarshal() and also int64.
// bool, int64, float64, string, []interface{}, map[string]interface{}, json.Number and nil
func DeepCopyJSON(x map[string]interface{}) map[string]interface{} {
return DeepCopyJSONValue(x).(map[string]interface{})
}
// DeepCopyJSONValue deep copies the passed value, assuming it is a valid JSON representation i.e. only contains
// types produced by json.Unmarshal().
// types produced by json.Unmarshal() and also int64.
// bool, int64, float64, string, []interface{}, map[string]interface{}, json.Number and nil
func DeepCopyJSONValue(x interface{}) interface{} {
switch x := x.(type) {
case map[string]interface{}:
if x == nil {
// Typed nil - an interface{} that contains a type map[string]interface{} with a value of nil
return x
}
clone := make(map[string]interface{}, len(x))
for k, v := range x {
clone[k] = DeepCopyJSONValue(v)
}
return clone
case []interface{}:
if x == nil {
// Typed nil - an interface{} that contains a type []interface{} with a value of nil
return x
}
clone := make([]interface{}, len(x))
for i, v := range x {
clone[i] = DeepCopyJSONValue(v)
@ -584,10 +592,14 @@ func toUnstructured(sv, dv reflect.Value) error {
dv.Set(reflect.ValueOf(sv.Int()))
return nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
dv.Set(reflect.New(uint64Type))
uVal := sv.Uint()
if uVal > math.MaxInt64 {
return fmt.Errorf("unsigned value %d does not fit into int64 (overflow)", uVal)
}
dv.Set(reflect.ValueOf(sv.Uint()))
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
dv.Set(reflect.New(int64Type))
}
dv.Set(reflect.ValueOf(int64(uVal)))
return nil
case reflect.Float32, reflect.Float64:
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
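The net effect of dropping uint64Type: unsigned struct fields now surface as int64 in unstructured content, and values above math.MaxInt64 return an error instead of wrapping. A sketch using the default converter; the widget struct is a local stand-in, not a real API type:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

type widget struct {
	Count uint32 `json:"count"`
}

func main() {
	u, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&widget{Count: 7})
	if err != nil {
		panic(err)
	}
	// The unsigned field comes back as int64, matching what
	// json.Unmarshal of the equivalent document would now produce.
	v := u["count"]
	fmt.Printf("%T %v\n", v, v) // int64 7
}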

View File

@ -41,10 +41,18 @@ func NewNotRegisteredErrForTarget(t reflect.Type, target GroupVersioner) error {
return &notRegisteredErr{t: t, target: target}
}
func NewNotRegisteredGVKErrForTarget(gvk schema.GroupVersionKind, target GroupVersioner) error {
return &notRegisteredErr{gvk: gvk, target: target}
}
func (k *notRegisteredErr) Error() string {
if k.t != nil && k.target != nil {
return fmt.Sprintf("%v is not suitable for converting to %q", k.t, k.target)
}
nullGVK := schema.GroupVersionKind{}
if k.gvk != nullGVK && k.target != nil {
return fmt.Sprintf("%q is not suitable for converting to %q", k.gvk.GroupVersion(), k.target)
}
if k.t != nil {
return fmt.Sprintf("no kind is registered for the type %v", k.t)
}

View File

@ -174,13 +174,16 @@ type ObjectVersioner interface {
// ObjectConvertor converts an object to a different version.
type ObjectConvertor interface {
// Convert attempts to convert one object into another, or returns an error. This method does
// not guarantee the in object is not mutated. The context argument will be passed to
// all nested conversions.
// Convert attempts to convert one object into another, or returns an error. This
// method does not mutate the in object, but the in and out object might share data structures,
// i.e. the out object cannot be mutated without mutating the in object as well.
// The context argument will be passed to all nested conversions.
Convert(in, out, context interface{}) error
// ConvertToVersion takes the provided object and converts it the provided version. This
// method does not guarantee that the in object is not mutated. This method is similar to
// Convert() but handles specific details of choosing the correct output version.
// method does not mutate the in object, but the in and out object might share data structures,
// i.e. the out object cannot be mutated without mutating the in object as well.
// This method is similar to Convert() but handles specific details of choosing the correct
// output version.
ConvertToVersion(in Object, gv GroupVersioner) (out Object, err error)
ConvertFieldLabel(version, kind, label, value string) (string, string, error)
}
@ -234,9 +237,9 @@ type Object interface {
// to JSON allowed.
type Unstructured interface {
Object
// UnstructuredContent returns a non-nil, mutable map of the contents of this object. Values may be
// UnstructuredContent returns a non-nil map with this object's contents. Values may be
// []interface{}, map[string]interface{}, or any primitive type. Contents are typically serialized to
// and from JSON.
// and from JSON. SetUnstructuredContent should be used to mutate the contents.
UnstructuredContent() map[string]interface{}
// SetUnstructuredContent updates the object content to match the provided map.
SetUnstructuredContent(map[string]interface{})

View File

@ -21,8 +21,11 @@ import (
"net/url"
"reflect"
"strings"
"k8s.io/apimachinery/pkg/conversion"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/sets"
)
// Scheme defines methods for serializing and deserializing API objects, a type
@ -68,6 +71,13 @@ type Scheme struct {
// converter stores all registered conversion functions. It also has
// default converting behavior.
converter *conversion.Converter
// versionPriority is a map of groups to ordered lists of versions for those groups indicating the
// default priorities of these versions as registered in the scheme
versionPriority map[string][]string
// observedVersions keeps track of the order we've seen versions during type registration
observedVersions []schema.GroupVersion
}
// Function to convert a field selector to internal representation.
@ -82,6 +92,7 @@ func NewScheme() *Scheme {
unversionedKinds: map[string]reflect.Type{},
fieldLabelConversionFuncs: map[string]map[string]FieldLabelConversionFunc{},
defaulterFuncs: map[reflect.Type]func(interface{}){},
versionPriority: map[string][]string{},
}
s.converter = conversion.NewConverter(s.nameFunc)
@ -111,7 +122,7 @@ func (s *Scheme) nameFunc(t reflect.Type) string {
for _, gvk := range gvks {
internalGV := gvk.GroupVersion()
internalGV.Version = "__internal" // this is hacky and maybe should be passed in
internalGV.Version = APIVersionInternal // this is hacky and maybe should be passed in
internalGVK := internalGV.WithKind(gvk.Kind)
if internalType, exists := s.gvkToType[internalGVK]; exists {
@ -141,6 +152,7 @@ func (s *Scheme) Converter() *conversion.Converter {
// TODO: there is discussion about removing unversioned and replacing it with objects that are manifest into
// every version with particular schemas. Resolve this method at that point.
func (s *Scheme) AddUnversionedTypes(version schema.GroupVersion, types ...Object) {
s.addObservedVersion(version)
s.AddKnownTypes(version, types...)
for _, obj := range types {
t := reflect.TypeOf(obj).Elem()
@ -158,6 +170,7 @@ func (s *Scheme) AddUnversionedTypes(version schema.GroupVersion, types ...Objec
// the struct becomes the "kind" field when encoding. Version may not be empty - use the
// APIVersionInternal constant if you have a type that does not have a formal version.
func (s *Scheme) AddKnownTypes(gv schema.GroupVersion, types ...Object) {
s.addObservedVersion(gv)
for _, obj := range types {
t := reflect.TypeOf(obj)
if t.Kind() != reflect.Ptr {
@ -173,6 +186,7 @@ func (s *Scheme) AddKnownTypes(gv schema.GroupVersion, types ...Object) {
// your structs. Version may not be empty - use the APIVersionInternal constant if you have a
// type that does not have a formal version.
func (s *Scheme) AddKnownTypeWithName(gvk schema.GroupVersionKind, obj Object) {
s.addObservedVersion(gvk.GroupVersion())
t := reflect.TypeOf(obj)
if len(gvk.Version) == 0 {
panic(fmt.Sprintf("version is required on all types: %s %v", gvk, t))
@ -620,3 +634,133 @@ func setTargetKind(obj Object, kind schema.GroupVersionKind) {
}
obj.GetObjectKind().SetGroupVersionKind(kind)
}
// SetVersionPriority allows specifying a precise order of priority. All specified versions must be in the same group,
// and the specified order overwrites any previously specified order for this group
func (s *Scheme) SetVersionPriority(versions ...schema.GroupVersion) error {
groups := sets.String{}
order := []string{}
for _, version := range versions {
if len(version.Version) == 0 || version.Version == APIVersionInternal {
return fmt.Errorf("internal versions cannot be prioritized: %v", version)
}
groups.Insert(version.Group)
order = append(order, version.Version)
}
if len(groups) != 1 {
return fmt.Errorf("must register versions for exactly one group: %v", strings.Join(groups.List(), ", "))
}
s.versionPriority[groups.List()[0]] = order
return nil
}
// PrioritizedVersionsForGroup returns versions for a single group in priority order
func (s *Scheme) PrioritizedVersionsForGroup(group string) []schema.GroupVersion {
ret := []schema.GroupVersion{}
for _, version := range s.versionPriority[group] {
ret = append(ret, schema.GroupVersion{Group: group, Version: version})
}
for _, observedVersion := range s.observedVersions {
if observedVersion.Group != group {
continue
}
found := false
for _, existing := range ret {
if existing == observedVersion {
found = true
break
}
}
if !found {
ret = append(ret, observedVersion)
}
}
return ret
}
// PrioritizedVersionsAllGroups returns all known versions in their priority order. Group ordering is random,
// but versions within a single group are returned in priority order.
func (s *Scheme) PrioritizedVersionsAllGroups() []schema.GroupVersion {
ret := []schema.GroupVersion{}
for group, versions := range s.versionPriority {
for _, version := range versions {
ret = append(ret, schema.GroupVersion{Group: group, Version: version})
}
}
for _, observedVersion := range s.observedVersions {
found := false
for _, existing := range ret {
if existing == observedVersion {
found = true
break
}
}
if !found {
ret = append(ret, observedVersion)
}
}
return ret
}
// PreferredVersionAllGroups returns the most preferred version for every group.
// Group ordering is random.
func (s *Scheme) PreferredVersionAllGroups() []schema.GroupVersion {
ret := []schema.GroupVersion{}
for group, versions := range s.versionPriority {
for _, version := range versions {
ret = append(ret, schema.GroupVersion{Group: group, Version: version})
break
}
}
for _, observedVersion := range s.observedVersions {
found := false
for _, existing := range ret {
if existing.Group == observedVersion.Group {
found = true
break
}
}
if !found {
ret = append(ret, observedVersion)
}
}
return ret
}
// IsGroupRegistered returns true if types for the group have been registered with the scheme
func (s *Scheme) IsGroupRegistered(group string) bool {
for _, observedVersion := range s.observedVersions {
if observedVersion.Group == group {
return true
}
}
return false
}
// IsVersionRegistered returns true if types for the version have been registered with the scheme
func (s *Scheme) IsVersionRegistered(version schema.GroupVersion) bool {
for _, observedVersion := range s.observedVersions {
if observedVersion == version {
return true
}
}
return false
}
func (s *Scheme) addObservedVersion(version schema.GroupVersion) {
if len(version.Version) == 0 || version.Version == APIVersionInternal {
return
}
for _, observedVersion := range s.observedVersions {
if observedVersion == version {
return
}
}
s.observedVersions = append(s.observedVersions, version)
}
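For orientation, a minimal sketch of how the new priority helpers fit together. The group example.io and the versions below are purely illustrative and not part of this change.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	s := runtime.NewScheme()

	// Hypothetical group/versions used only for illustration.
	v1 := schema.GroupVersion{Group: "example.io", Version: "v1"}
	v1beta1 := schema.GroupVersion{Group: "example.io", Version: "v1beta1"}

	// Declare that v1 is preferred over v1beta1 within the group.
	if err := s.SetVersionPriority(v1, v1beta1); err != nil {
		panic(err)
	}

	// Prints the prioritized versions first (v1, then v1beta1), followed by any
	// other observed versions of the group that were not explicitly prioritized.
	fmt.Println(s.PrioritizedVersionsForGroup("example.io"))
}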

View File

@ -75,11 +75,6 @@ func init() {
case jsoniter.NumberValue:
var number json.Number
iter.ReadVal(&number)
u64, err := strconv.ParseUint(string(number), 10, 64)
if err == nil {
*(*interface{})(ptr) = u64
return
}
i64, err := strconv.ParseInt(string(number), 10, 64)
if err == nil {
*(*interface{})(ptr) = i64

View File

@ -19,6 +19,7 @@ package versioning
import (
"io"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
@ -166,9 +167,27 @@ func (c *codec) Decode(data []byte, defaultGVK *schema.GroupVersionKind, into ru
// Encode ensures the provided object is output in the appropriate group and version, invoking
// conversion if necessary. Unversioned objects (according to the ObjectTyper) are output as is.
func (c *codec) Encode(obj runtime.Object, w io.Writer) error {
switch obj.(type) {
case *runtime.Unknown, runtime.Unstructured:
switch obj := obj.(type) {
case *runtime.Unknown:
return c.encoder.Encode(obj, w)
case runtime.Unstructured:
// An unstructured list can contain objects of multiple group version kinds. Don't short-circuit just
// because the top-level type matches our desired destination type. Actually send the object to the converter
// to give it a chance to convert the list items if needed.
if _, ok := obj.(*unstructured.UnstructuredList); !ok {
// Avoid a conversion roundtrip if the GVK is already the right one or is empty (yes, this is a hack, but it is the old behaviour that kubectl relies on).
objGVK := obj.GetObjectKind().GroupVersionKind()
if len(objGVK.Version) == 0 {
return c.encoder.Encode(obj, w)
}
targetGVK, ok := c.encodeVersion.KindForGroupVersionKinds([]schema.GroupVersionKind{objGVK})
if !ok {
return runtime.NewNotRegisteredGVKErrForTarget(objGVK, c.encodeVersion)
}
if targetGVK == objGVK {
return c.encoder.Encode(obj, w)
}
}
}
gvks, isUnversioned, err := c.typer.ObjectKinds(obj)

View File

@ -18,7 +18,6 @@ package types
import (
"fmt"
"strings"
)
// NamespacedName comprises a resource name, with a mandatory namespace,
@ -42,19 +41,3 @@ const (
func (n NamespacedName) String() string {
return fmt.Sprintf("%s%c%s", n.Namespace, Separator, n.Name)
}
// NewNamespacedNameFromString parses the provided string and returns a NamespacedName.
// The expected format is as per String() above.
// If the input string is invalid, the returned NamespacedName has all empty string field values.
// This allows a single-value return from this function, while still allowing error checks in the caller.
// Note that an input string which does not include exactly one Separator is not a valid input (as it could never
// have been returned by String()).
func NewNamespacedNameFromString(s string) NamespacedName {
nn := NamespacedName{}
result := strings.Split(s, string(Separator))
if len(result) == 2 {
nn.Namespace = result[0]
nn.Name = result[1]
}
return nn
}

View File

@ -26,18 +26,12 @@ import (
type Clock interface {
Now() time.Time
Since(time.Time) time.Duration
After(d time.Duration) <-chan time.Time
NewTimer(d time.Duration) Timer
Sleep(d time.Duration)
Tick(d time.Duration) <-chan time.Time
After(time.Duration) <-chan time.Time
NewTimer(time.Duration) Timer
Sleep(time.Duration)
NewTicker(time.Duration) Ticker
}
var (
_ = Clock(RealClock{})
_ = Clock(&FakeClock{})
_ = Clock(&IntervalClock{})
)
// RealClock really calls time.Now()
type RealClock struct{}
@ -62,8 +56,10 @@ func (RealClock) NewTimer(d time.Duration) Timer {
}
}
func (RealClock) Tick(d time.Duration) <-chan time.Time {
return time.Tick(d)
func (RealClock) NewTicker(d time.Duration) Ticker {
return &realTicker{
ticker: time.NewTicker(d),
}
}
func (RealClock) Sleep(d time.Duration) {
@ -137,7 +133,7 @@ func (f *FakeClock) NewTimer(d time.Duration) Timer {
return timer
}
func (f *FakeClock) Tick(d time.Duration) <-chan time.Time {
func (f *FakeClock) NewTicker(d time.Duration) Ticker {
f.lock.Lock()
defer f.lock.Unlock()
tickTime := f.time.Add(d)
@ -149,7 +145,9 @@ func (f *FakeClock) Tick(d time.Duration) <-chan time.Time {
destChan: ch,
})
return ch
return &fakeTicker{
c: ch,
}
}
// Move clock by Duration, notify anyone that's called After, Tick, or NewTimer
@ -242,8 +240,8 @@ func (*IntervalClock) NewTimer(d time.Duration) Timer {
// Unimplemented, will panic.
// TODO: make interval clock use FakeClock so this can be implemented.
func (*IntervalClock) Tick(d time.Duration) <-chan time.Time {
panic("IntervalClock doesn't implement Tick")
func (*IntervalClock) NewTicker(d time.Duration) Ticker {
panic("IntervalClock doesn't implement NewTicker")
}
func (*IntervalClock) Sleep(d time.Duration) {
@ -258,11 +256,6 @@ type Timer interface {
Reset(d time.Duration) bool
}
var (
_ = Timer(&realTimer{})
_ = Timer(&fakeTimer{})
)
// realTimer is backed by an actual time.Timer.
type realTimer struct {
timer *time.Timer
@ -325,3 +318,31 @@ func (f *fakeTimer) Reset(d time.Duration) bool {
return active
}
type Ticker interface {
C() <-chan time.Time
Stop()
}
type realTicker struct {
ticker *time.Ticker
}
func (t *realTicker) C() <-chan time.Time {
return t.ticker.C
}
func (t *realTicker) Stop() {
t.ticker.Stop()
}
type fakeTicker struct {
c <-chan time.Time
}
func (t *fakeTicker) C() <-chan time.Time {
return t.c
}
func (t *fakeTicker) Stop() {
}
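The Ticker interface replaces the channel-only Tick method so callers can stop the ticker and release its resources. A small usage sketch, assuming this is the clock package at k8s.io/apimachinery/pkg/util/clock:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/clock"
)

func main() {
	var c clock.Clock = clock.RealClock{}

	// Unlike the old Tick method, the Ticker returned here can be stopped,
	// so the underlying time.Ticker does not leak.
	ticker := c.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		t := <-ticker.C()
		fmt.Println("tick at", t)
	}
}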

View File

@ -19,6 +19,7 @@ package spdy
import (
"bufio"
"bytes"
"context"
"crypto/tls"
"encoding/base64"
"fmt"
@ -118,7 +119,7 @@ func (s *SpdyRoundTripper) dial(req *http.Request) (net.Conn, error) {
}
if proxyURL == nil {
return s.dialWithoutProxy(req.URL)
return s.dialWithoutProxy(req.Context(), req.URL)
}
// ensure we use a canonical host with proxyReq
@ -136,7 +137,7 @@ func (s *SpdyRoundTripper) dial(req *http.Request) (net.Conn, error) {
proxyReq.Header.Set("Proxy-Authorization", pa)
}
proxyDialConn, err := s.dialWithoutProxy(proxyURL)
proxyDialConn, err := s.dialWithoutProxy(req.Context(), proxyURL)
if err != nil {
return nil, err
}
@ -187,14 +188,15 @@ func (s *SpdyRoundTripper) dial(req *http.Request) (net.Conn, error) {
}
// dialWithoutProxy dials the host specified by url, using TLS if appropriate.
func (s *SpdyRoundTripper) dialWithoutProxy(url *url.URL) (net.Conn, error) {
func (s *SpdyRoundTripper) dialWithoutProxy(ctx context.Context, url *url.URL) (net.Conn, error) {
dialAddr := netutil.CanonicalAddr(url)
if url.Scheme == "http" {
if s.Dialer == nil {
return net.Dial("tcp", dialAddr)
var d net.Dialer
return d.DialContext(ctx, "tcp", dialAddr)
} else {
return s.Dialer.Dial("tcp", dialAddr)
return s.Dialer.DialContext(ctx, "tcp", dialAddr)
}
}

View File

@ -19,6 +19,7 @@ package net
import (
"bufio"
"bytes"
"context"
"crypto/tls"
"fmt"
"io"
@ -90,8 +91,8 @@ func SetOldTransportDefaults(t *http.Transport) *http.Transport {
// ProxierWithNoProxyCIDR allows CIDR rules in NO_PROXY
t.Proxy = NewProxierWithNoProxyCIDR(http.ProxyFromEnvironment)
}
if t.Dial == nil {
t.Dial = defaultTransport.Dial
if t.DialContext == nil {
t.DialContext = defaultTransport.DialContext
}
if t.TLSHandshakeTimeout == 0 {
t.TLSHandshakeTimeout = defaultTransport.TLSHandshakeTimeout
@ -119,7 +120,7 @@ type RoundTripperWrapper interface {
WrappedRoundTripper() http.RoundTripper
}
type DialFunc func(net, addr string) (net.Conn, error)
type DialFunc func(ctx context.Context, net, addr string) (net.Conn, error)
func DialerFor(transport http.RoundTripper) (DialFunc, error) {
if transport == nil {
@ -128,7 +129,7 @@ func DialerFor(transport http.RoundTripper) (DialFunc, error) {
switch transport := transport.(type) {
case *http.Transport:
return transport.Dial, nil
return transport.DialContext, nil
case RoundTripperWrapper:
return DialerFor(transport.WrappedRoundTripper())
default:

View File

@ -160,3 +160,10 @@ func RecoverFromPanic(err *error) {
callers)
}
}
// Must panics on non-nil errors. Useful for handling programmer-level errors.
func Must(err error) {
if err != nil {
panic(err)
}
}
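Must is typically used in startup code whose errors can only be programmer mistakes. A tiny hedged sketch, assuming this is the package at k8s.io/apimachinery/pkg/util/runtime (imported as utilruntime); registerTypes is a placeholder:

package main

import (
	"fmt"

	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)

// registerTypes stands in for a setup step (e.g. scheme registration) that can
// only fail on programmer error, so panicking via Must is the right response.
func registerTypes() error { return nil }

func main() {
	utilruntime.Must(registerTypes())
	fmt.Println("setup complete")
}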

View File

@ -284,12 +284,32 @@ func PollImmediateInfinite(interval time.Duration, condition ConditionFunc) erro
// PollUntil tries a condition func until it returns true, an error or stopCh is
// closed.
//
// PolUntil always waits interval before the first run of 'condition'.
// PollUntil always waits interval before the first run of 'condition'.
// 'condition' will always be invoked at least once.
func PollUntil(interval time.Duration, condition ConditionFunc, stopCh <-chan struct{}) error {
return WaitFor(poller(interval, 0), condition, stopCh)
}
// PollImmediateUntil tries a condition func until it returns true, an error or stopCh is closed.
//
// PollImmediateUntil runs the 'condition' before waiting for the interval.
// 'condition' will always be invoked at least once.
func PollImmediateUntil(interval time.Duration, condition ConditionFunc, stopCh <-chan struct{}) error {
done, err := condition()
if err != nil {
return err
}
if done {
return nil
}
select {
case <-stopCh:
return ErrWaitTimeout
default:
return PollUntil(interval, condition, stopCh)
}
}
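A brief usage sketch of PollImmediateUntil, assuming this is the wait package at k8s.io/apimachinery/pkg/util/wait; the readiness condition is a placeholder:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})

	attempts := 0
	// The condition runs immediately, then once per interval, until it returns
	// true, returns an error, or stopCh is closed.
	err := wait.PollImmediateUntil(200*time.Millisecond, func() (bool, error) {
		attempts++
		return attempts >= 3, nil // placeholder readiness check
	}, stopCh)
	fmt.Println("done after", attempts, "attempts, err:", err)
}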
// WaitFunc creates a channel that receives an item every time a test
// should be executed and is closed when the last test should be invoked.
type WaitFunc func(done <-chan struct{}) <-chan struct{}

88
vendor/k8s.io/apimachinery/pkg/version/helpers.go generated vendored Normal file
View File

@ -0,0 +1,88 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package version
import (
"regexp"
"strconv"
"strings"
)
type versionType int
const (
// The bigger the version type number, the higher its priority.
versionTypeAlpha versionType = iota
versionTypeBeta
versionTypeGA
)
var kubeVersionRegex = regexp.MustCompile("^v([\\d]+)(?:(alpha|beta)([\\d]+))?$")
func parseKubeVersion(v string) (majorVersion int, vType versionType, minorVersion int, ok bool) {
var err error
submatches := kubeVersionRegex.FindStringSubmatch(v)
if len(submatches) != 4 {
return 0, 0, 0, false
}
switch submatches[2] {
case "alpha":
vType = versionTypeAlpha
case "beta":
vType = versionTypeBeta
case "":
vType = versionTypeGA
default:
return 0, 0, 0, false
}
if majorVersion, err = strconv.Atoi(submatches[1]); err != nil {
return 0, 0, 0, false
}
if vType != versionTypeGA {
if minorVersion, err = strconv.Atoi(submatches[3]); err != nil {
return 0, 0, 0, false
}
}
return majorVersion, vType, minorVersion, true
}
// CompareKubeAwareVersionStrings compares two kube-like version strings.
// Kube-like version strings are starting with a v, followed by a major version, optional "alpha" or "beta" strings
// followed by a minor version (e.g. v1, v2beta1). Versions will be sorted based on GA/alpha/beta first and then major
// and minor versions. e.g. v2, v1, v1beta2, v1beta1, v1alpha1.
func CompareKubeAwareVersionStrings(v1, v2 string) int {
if v1 == v2 {
return 0
}
v1major, v1type, v1minor, ok1 := parseKubeVersion(v1)
v2major, v2type, v2minor, ok2 := parseKubeVersion(v2)
switch {
case !ok1 && !ok2:
return strings.Compare(v2, v1)
case !ok1 && ok2:
return -1
case ok1 && !ok2:
return 1
}
if v1type != v2type {
return int(v1type) - int(v2type)
}
if v1major != v2major {
return v1major - v2major
}
return v1minor - v2minor
}
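An illustrative sort built on CompareKubeAwareVersionStrings; the version list is arbitrary, and the import path follows this file's location under k8s.io/apimachinery/pkg/version:

package main

import (
	"fmt"
	"sort"

	"k8s.io/apimachinery/pkg/version"
)

func main() {
	versions := []string{"v1alpha1", "v2", "v1beta1", "v1", "v1beta2"}

	// A positive comparison result means the first argument has higher priority,
	// so sorting descending yields GA before beta before alpha.
	sort.Slice(versions, func(i, j int) bool {
		return version.CompareKubeAwareVersionStrings(versions[i], versions[j]) > 0
	})

	fmt.Println(versions) // [v2 v1 v1beta2 v1beta1 v1alpha1]
}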

View File

@ -62,11 +62,7 @@ func (fw *filteredWatch) Stop() {
// loop waits for new values, filters them, and resends them.
func (fw *filteredWatch) loop() {
defer close(fw.result)
for {
event, ok := <-fw.incoming.ResultChan()
if !ok {
break
}
for event := range fw.incoming.ResultChan() {
filtered, keep := fw.f(event)
if keep {
fw.result <- filtered

View File

@ -204,11 +204,7 @@ func (m *Broadcaster) Shutdown() {
func (m *Broadcaster) loop() {
// Deliberately not catching crashes here. Yes, bring down the process if there's a
// bug in watch.Broadcaster.
for {
event, ok := <-m.incoming
if !ok {
break
}
for event := range m.incoming {
if event.Type == internalRunFunctionMarker {
event.Object.(functionFakeRuntimeObject)()
continue

View File

@ -44,7 +44,7 @@ func (e Equalities) AddFunc(eqFunc interface{}) error {
return fmt.Errorf("expected func, got: %v", ft)
}
if ft.NumIn() != 2 {
return fmt.Errorf("expected three 'in' params, got: %v", ft)
return fmt.Errorf("expected two 'in' params, got: %v", ft)
}
if ft.NumOut() != 1 {
return fmt.Errorf("expected one 'out' param, got: %v", ft)

View File

@ -146,7 +146,7 @@ func (rl *respLogger) Addf(format string, data ...interface{}) {
// Log is intended to be called once at the end of your request handler, via defer
func (rl *respLogger) Log() {
latency := time.Since(rl.startTime)
if glog.V(2) {
if glog.V(3) {
if !rl.hijacked {
glog.InfoDepth(1, fmt.Sprintf("%s %s: (%v) %v%v%v [%s %s]", rl.req.Method, rl.req.RequestURI, latency, rl.status, rl.statusStack, rl.addedInfo, rl.req.Header["User-Agent"], rl.req.RemoteAddr))
} else {

30
vendor/k8s.io/client-go/README.md generated vendored
View File

@ -2,9 +2,9 @@
Go clients for talking to a [kubernetes](http://kubernetes.io/) cluster.
We currently recommend using the v6.0.0 tag. See [INSTALL.md](/INSTALL.md) for
We currently recommend using the v7.0.0 tag. See [INSTALL.md](/INSTALL.md) for
detailed installation instructions. `go get k8s.io/client-go/...` works, but
will give you head and doesn't handle the dependencies well.
will build `master`, which doesn't handle the dependencies well.
[![BuildStatus Widget]][BuildStatus Result]
[![GoReport Widget]][GoReport Status]
@ -91,16 +91,17 @@ We will backport bugfixes--but not new features--into older versions of
#### Compatibility matrix
| | Kubernetes 1.4 | Kubernetes 1.5 | Kubernetes 1.6 | Kubernetes 1.7 | Kubernetes 1.8 | Kubernetes 1.9 |
|---------------------|----------------|----------------|----------------|----------------|----------------|----------------|
| client-go 1.4 | ✓ | - | - | - | - | - |
| client-go 1.5 | + | - | - | - | - | - |
| client-go 2.0 | +- | ✓ | +- | +- | +- | +- |
| client-go 3.0 | +- | +- | ✓ | - | +- | +- |
| client-go 4.0 | +- | +- | +- | ✓ | +- | +- |
| client-go 5.0 | +- | +- | +- | +- | ✓ | +- |
| client-go 6.0 | +- | +- | +- | +- | +- | ✓ |
| client-go HEAD | +- | +- | +- | +- | +- | + |
| | Kubernetes 1.4 | Kubernetes 1.5 | Kubernetes 1.6 | Kubernetes 1.7 | Kubernetes 1.8 | Kubernetes 1.9 | Kubernetes 1.10 |
|---------------------|----------------|----------------|----------------|----------------|----------------|----------------|-----------------|
| client-go 1.4 | ✓ | - | - | - | - | - | - |
| client-go 1.5 | + | - | - | - | - | - | - |
| client-go 2.0 | +- | ✓ | +- | +- | +- | +- | +- |
| client-go 3.0 | +- | +- | ✓ | - | +- | +- | +- |
| client-go 4.0 | +- | +- | +- | ✓ | +- | +- | +- |
| client-go 5.0 | +- | +- | +- | +- | ✓ | +- | +- |
| client-go 6.0 | +- | +- | +- | +- | +- | ✓ | +- |
| client-go 7.0 | +- | +- | +- | +- | +- | +- | ✓ |
| client-go HEAD | +- | +- | +- | +- | +- | + | + |
Key:
@ -125,9 +126,10 @@ between client-go versions.
| client-go 1.5 | Kubernetes main repo, 1.5 branch | = - |
| client-go 2.0 | Kubernetes main repo, 1.5 branch | = - |
| client-go 3.0 | Kubernetes main repo, 1.6 branch | = - |
| client-go 4.0 | Kubernetes main repo, 1.7 branch | |
| client-go 4.0 | Kubernetes main repo, 1.7 branch | = - |
| client-go 5.0 | Kubernetes main repo, 1.8 branch | ✓ |
| client-go 6.0 | Kubernetes main repo, 1.9 branch | ✓ |
| client-go 7.0 | Kubernetes main repo, 1.10 branch | ✓ |
| client-go HEAD | Kubernetes main repo, master branch | ✓ |
Key:
@ -152,7 +154,7 @@ existing users won't be broken.
### Kubernetes tags
As of October 2017, client-go is still a mirror of
As of April 2018, client-go is still a mirror of
[k8s.io/kubernetes/staging/src/client-go](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go),
the code development is still done in the staging area. Since Kubernetes 1.8
release, when syncing the code from the staging area, we also sync the Kubernetes

View File

@ -57,7 +57,14 @@ type ExecCredentialStatus struct {
// +optional
ExpirationTimestamp *metav1.Time
// Token is a bearer token used by the client for request authentication.
// +optional
Token string
// PEM-encoded client TLS certificate.
// +optional
ClientCertificateData string
// PEM-encoded client TLS private key.
// +optional
ClientKeyData string
}
// Response defines metadata about a failed request, including HTTP status code and

View File

@ -52,12 +52,20 @@ type ExecCredentialSpec struct {
}
// ExecCredentialStatus holds credentials for the transport to use.
//
// Token and ClientKeyData are sensitive fields. This data should only be
// transmitted in-memory between the client and the exec plugin process. The exec
// plugin itself should at least be protected via file permissions.
type ExecCredentialStatus struct {
// ExpirationTimestamp indicates a time when the provided credentials expire.
// +optional
ExpirationTimestamp *metav1.Time `json:"expirationTimestamp,omitempty"`
// Token is a bearer token used by the client for request authentication.
Token string `json:"token,omitempty"`
// PEM-encoded client TLS certificates (including intermediates, if any).
ClientCertificateData string `json:"clientCertificateData,omitempty"`
// PEM-encoded private key for the above certificate.
ClientKeyData string `json:"clientKeyData,omitempty"`
}
// Response defines metadata about a failed request, including HTTP status code and

View File

@ -99,6 +99,8 @@ func Convert_clientauthentication_ExecCredentialSpec_To_v1alpha1_ExecCredentialS
func autoConvert_v1alpha1_ExecCredentialStatus_To_clientauthentication_ExecCredentialStatus(in *ExecCredentialStatus, out *clientauthentication.ExecCredentialStatus, s conversion.Scope) error {
out.ExpirationTimestamp = (*v1.Time)(unsafe.Pointer(in.ExpirationTimestamp))
out.Token = in.Token
out.ClientCertificateData = in.ClientCertificateData
out.ClientKeyData = in.ClientKeyData
return nil
}
@ -110,6 +112,8 @@ func Convert_v1alpha1_ExecCredentialStatus_To_clientauthentication_ExecCredentia
func autoConvert_clientauthentication_ExecCredentialStatus_To_v1alpha1_ExecCredentialStatus(in *clientauthentication.ExecCredentialStatus, out *ExecCredentialStatus, s conversion.Scope) error {
out.ExpirationTimestamp = (*v1.Time)(unsafe.Pointer(in.ExpirationTimestamp))
out.Token = in.Token
out.ClientCertificateData = in.ClientCertificateData
out.ClientKeyData = in.ClientKeyData
return nil
}

View File

@ -0,0 +1,26 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
conversion "k8s.io/apimachinery/pkg/conversion"
clientauthentication "k8s.io/client-go/pkg/apis/clientauthentication"
)
func Convert_clientauthentication_ExecCredentialSpec_To_v1beta1_ExecCredentialSpec(in *clientauthentication.ExecCredentialSpec, out *ExecCredentialSpec, s conversion.Scope) error {
return nil
}

View File

@ -0,0 +1,23 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// +k8s:deepcopy-gen=package
// +k8s:conversion-gen=k8s.io/client-go/pkg/apis/clientauthentication
// +k8s:openapi-gen=true
// +k8s:defaulter-gen=TypeMeta
// +groupName=client.authentication.k8s.io
package v1beta1 // import "k8s.io/client-go/pkg/apis/clientauthentication/v1beta1"

View File

@ -0,0 +1,55 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
// GroupName is the group name used in this package
const GroupName = "client.authentication.k8s.io"
// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1beta1"}
// Resource takes an unqualified resource and returns a Group qualified GroupResource
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
}
var (
SchemeBuilder runtime.SchemeBuilder
localSchemeBuilder = &SchemeBuilder
AddToScheme = localSchemeBuilder.AddToScheme
)
func init() {
// We only register manually written functions here. The registration of the
// generated functions takes place in the generated files. The separation
// makes the code compile even when the generated files are missing.
localSchemeBuilder.Register(addKnownTypes)
}
func addKnownTypes(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(SchemeGroupVersion,
&ExecCredential{},
)
metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
return nil
}

View File

@ -0,0 +1,59 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ExecCredential is used by exec-based plugins to communicate credentials to
// HTTP transports.
type ExecCredential struct {
metav1.TypeMeta `json:",inline"`
// Spec holds information passed to the plugin by the transport. This contains
// request and runtime specific information, such as if the session is interactive.
Spec ExecCredentialSpec `json:"spec,omitempty"`
// Status is filled in by the plugin and holds the credentials that the transport
// should use to contact the API.
// +optional
Status *ExecCredentialStatus `json:"status,omitempty"`
}
// ExecCredentialSpec holds request and runtime specific information provided by
// the transport.
type ExecCredentialSpec struct{}
// ExecCredentialStatus holds credentials for the transport to use.
//
// Token and ClientKeyData are sensitive fields. This data should only be
// transmitted in-memory between the client and the exec plugin process. The exec
// plugin itself should at least be protected via file permissions.
type ExecCredentialStatus struct {
// ExpirationTimestamp indicates a time when the provided credentials expire.
// +optional
ExpirationTimestamp *metav1.Time `json:"expirationTimestamp,omitempty"`
// Token is a bearer token used by the client for request authentication.
Token string `json:"token,omitempty"`
// PEM-encoded client TLS certificates (including intermediates, if any).
ClientCertificateData string `json:"clientCertificateData,omitempty"`
// PEM-encoded private key for the above certificate.
ClientKeyData string `json:"clientKeyData,omitempty"`
}
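For illustration only, a sketch of what an exec credential plugin might write to stdout using these v1beta1 types. The token and expiry are made-up values, and a real plugin would normally go through the serializer machinery rather than plain encoding/json.

package main

import (
	"encoding/json"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientauthv1beta1 "k8s.io/client-go/pkg/apis/clientauthentication/v1beta1"
)

func main() {
	expiry := metav1.NewTime(time.Now().Add(10 * time.Minute))
	cred := clientauthv1beta1.ExecCredential{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "client.authentication.k8s.io/v1beta1",
			Kind:       "ExecCredential",
		},
		Status: &clientauthv1beta1.ExecCredentialStatus{
			ExpirationTimestamp: &expiry,
			Token:               "example-bearer-token", // placeholder value
		},
	}
	// The client decodes this JSON from the plugin's stdout.
	if err := json.NewEncoder(os.Stdout).Encode(&cred); err != nil {
		os.Exit(1)
	}
}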

View File

@ -0,0 +1,114 @@
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by conversion-gen. DO NOT EDIT.
package v1beta1
import (
unsafe "unsafe"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
conversion "k8s.io/apimachinery/pkg/conversion"
runtime "k8s.io/apimachinery/pkg/runtime"
clientauthentication "k8s.io/client-go/pkg/apis/clientauthentication"
)
func init() {
localSchemeBuilder.Register(RegisterConversions)
}
// RegisterConversions adds conversion functions to the given scheme.
// Public to allow building arbitrary schemes.
func RegisterConversions(scheme *runtime.Scheme) error {
return scheme.AddGeneratedConversionFuncs(
Convert_v1beta1_ExecCredential_To_clientauthentication_ExecCredential,
Convert_clientauthentication_ExecCredential_To_v1beta1_ExecCredential,
Convert_v1beta1_ExecCredentialSpec_To_clientauthentication_ExecCredentialSpec,
Convert_clientauthentication_ExecCredentialSpec_To_v1beta1_ExecCredentialSpec,
Convert_v1beta1_ExecCredentialStatus_To_clientauthentication_ExecCredentialStatus,
Convert_clientauthentication_ExecCredentialStatus_To_v1beta1_ExecCredentialStatus,
)
}
func autoConvert_v1beta1_ExecCredential_To_clientauthentication_ExecCredential(in *ExecCredential, out *clientauthentication.ExecCredential, s conversion.Scope) error {
if err := Convert_v1beta1_ExecCredentialSpec_To_clientauthentication_ExecCredentialSpec(&in.Spec, &out.Spec, s); err != nil {
return err
}
out.Status = (*clientauthentication.ExecCredentialStatus)(unsafe.Pointer(in.Status))
return nil
}
// Convert_v1beta1_ExecCredential_To_clientauthentication_ExecCredential is an autogenerated conversion function.
func Convert_v1beta1_ExecCredential_To_clientauthentication_ExecCredential(in *ExecCredential, out *clientauthentication.ExecCredential, s conversion.Scope) error {
return autoConvert_v1beta1_ExecCredential_To_clientauthentication_ExecCredential(in, out, s)
}
func autoConvert_clientauthentication_ExecCredential_To_v1beta1_ExecCredential(in *clientauthentication.ExecCredential, out *ExecCredential, s conversion.Scope) error {
if err := Convert_clientauthentication_ExecCredentialSpec_To_v1beta1_ExecCredentialSpec(&in.Spec, &out.Spec, s); err != nil {
return err
}
out.Status = (*ExecCredentialStatus)(unsafe.Pointer(in.Status))
return nil
}
// Convert_clientauthentication_ExecCredential_To_v1beta1_ExecCredential is an autogenerated conversion function.
func Convert_clientauthentication_ExecCredential_To_v1beta1_ExecCredential(in *clientauthentication.ExecCredential, out *ExecCredential, s conversion.Scope) error {
return autoConvert_clientauthentication_ExecCredential_To_v1beta1_ExecCredential(in, out, s)
}
func autoConvert_v1beta1_ExecCredentialSpec_To_clientauthentication_ExecCredentialSpec(in *ExecCredentialSpec, out *clientauthentication.ExecCredentialSpec, s conversion.Scope) error {
return nil
}
// Convert_v1beta1_ExecCredentialSpec_To_clientauthentication_ExecCredentialSpec is an autogenerated conversion function.
func Convert_v1beta1_ExecCredentialSpec_To_clientauthentication_ExecCredentialSpec(in *ExecCredentialSpec, out *clientauthentication.ExecCredentialSpec, s conversion.Scope) error {
return autoConvert_v1beta1_ExecCredentialSpec_To_clientauthentication_ExecCredentialSpec(in, out, s)
}
func autoConvert_clientauthentication_ExecCredentialSpec_To_v1beta1_ExecCredentialSpec(in *clientauthentication.ExecCredentialSpec, out *ExecCredentialSpec, s conversion.Scope) error {
// WARNING: in.Response requires manual conversion: does not exist in peer-type
// WARNING: in.Interactive requires manual conversion: does not exist in peer-type
return nil
}
func autoConvert_v1beta1_ExecCredentialStatus_To_clientauthentication_ExecCredentialStatus(in *ExecCredentialStatus, out *clientauthentication.ExecCredentialStatus, s conversion.Scope) error {
out.ExpirationTimestamp = (*v1.Time)(unsafe.Pointer(in.ExpirationTimestamp))
out.Token = in.Token
out.ClientCertificateData = in.ClientCertificateData
out.ClientKeyData = in.ClientKeyData
return nil
}
// Convert_v1beta1_ExecCredentialStatus_To_clientauthentication_ExecCredentialStatus is an autogenerated conversion function.
func Convert_v1beta1_ExecCredentialStatus_To_clientauthentication_ExecCredentialStatus(in *ExecCredentialStatus, out *clientauthentication.ExecCredentialStatus, s conversion.Scope) error {
return autoConvert_v1beta1_ExecCredentialStatus_To_clientauthentication_ExecCredentialStatus(in, out, s)
}
func autoConvert_clientauthentication_ExecCredentialStatus_To_v1beta1_ExecCredentialStatus(in *clientauthentication.ExecCredentialStatus, out *ExecCredentialStatus, s conversion.Scope) error {
out.ExpirationTimestamp = (*v1.Time)(unsafe.Pointer(in.ExpirationTimestamp))
out.Token = in.Token
out.ClientCertificateData = in.ClientCertificateData
out.ClientKeyData = in.ClientKeyData
return nil
}
// Convert_clientauthentication_ExecCredentialStatus_To_v1beta1_ExecCredentialStatus is an autogenerated conversion function.
func Convert_clientauthentication_ExecCredentialStatus_To_v1beta1_ExecCredentialStatus(in *clientauthentication.ExecCredentialStatus, out *ExecCredentialStatus, s conversion.Scope) error {
return autoConvert_clientauthentication_ExecCredentialStatus_To_v1beta1_ExecCredentialStatus(in, out, s)
}

View File

@ -0,0 +1,100 @@
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by deepcopy-gen. DO NOT EDIT.
package v1beta1
import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ExecCredential) DeepCopyInto(out *ExecCredential) {
*out = *in
out.TypeMeta = in.TypeMeta
out.Spec = in.Spec
if in.Status != nil {
in, out := &in.Status, &out.Status
if *in == nil {
*out = nil
} else {
*out = new(ExecCredentialStatus)
(*in).DeepCopyInto(*out)
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExecCredential.
func (in *ExecCredential) DeepCopy() *ExecCredential {
if in == nil {
return nil
}
out := new(ExecCredential)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ExecCredential) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ExecCredentialSpec) DeepCopyInto(out *ExecCredentialSpec) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExecCredentialSpec.
func (in *ExecCredentialSpec) DeepCopy() *ExecCredentialSpec {
if in == nil {
return nil
}
out := new(ExecCredentialSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ExecCredentialStatus) DeepCopyInto(out *ExecCredentialStatus) {
*out = *in
if in.ExpirationTimestamp != nil {
in, out := &in.ExpirationTimestamp, &out.ExpirationTimestamp
if *in == nil {
*out = nil
} else {
*out = (*in).DeepCopy()
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExecCredentialStatus.
func (in *ExecCredentialStatus) DeepCopy() *ExecCredentialStatus {
if in == nil {
return nil
}
out := new(ExecCredentialStatus)
in.DeepCopyInto(out)
return out
}

View File

@ -0,0 +1,32 @@
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by defaulter-gen. DO NOT EDIT.
package v1beta1
import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// RegisterDefaults adds defaulters functions to the given scheme.
// Public to allow building arbitrary schemes.
// All generated defaulters are covering - they call all nested defaulters.
func RegisterDefaults(scheme *runtime.Scheme) error {
return nil
}

View File

@ -43,7 +43,7 @@ var (
gitMinor string = "" // minor version, numeric possibly followed by "+"
// semantic version, derived by build scripts (see
// https://github.com/kubernetes/kubernetes/blob/master/docs/design/versioning.md
// https://git.k8s.io/community/contributors/design-proposals/release/versioning.md
// for a detailed discussion of this field)
//
// TODO: This field is still called "gitVersion" for legacy

View File

@ -18,11 +18,15 @@ package exec
import (
"bytes"
"context"
"crypto/tls"
"fmt"
"io"
"net"
"net/http"
"os"
"os/exec"
"reflect"
"sync"
"time"
@ -34,7 +38,10 @@ import (
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/client-go/pkg/apis/clientauthentication"
"k8s.io/client-go/pkg/apis/clientauthentication/v1alpha1"
"k8s.io/client-go/pkg/apis/clientauthentication/v1beta1"
"k8s.io/client-go/tools/clientcmd/api"
"k8s.io/client-go/transport"
"k8s.io/client-go/util/connrotation"
)
const execInfoEnv = "KUBERNETES_EXEC_INFO"
@ -45,6 +52,7 @@ var codecs = serializer.NewCodecFactory(scheme)
func init() {
v1.AddToGroupVersion(scheme, schema.GroupVersion{Version: "v1"})
v1alpha1.AddToScheme(scheme)
v1beta1.AddToScheme(scheme)
clientauthentication.AddToScheme(scheme)
}
@ -55,6 +63,7 @@ var (
// The list of API versions we accept.
apiVersions = map[string]schema.GroupVersion{
v1alpha1.SchemeGroupVersion.String(): v1alpha1.SchemeGroupVersion,
v1beta1.SchemeGroupVersion.String(): v1beta1.SchemeGroupVersion,
}
)
@ -147,14 +156,55 @@ type Authenticator struct {
// The mutex also guards calling the plugin. Since the plugin could be
// interactive we want to make sure it's only called once.
mu sync.Mutex
cachedToken string
cachedCreds *credentials
exp time.Time
onRotate func()
}
// WrapTransport instruments an existing http.RoundTripper with credentials returned
// by the plugin.
func (a *Authenticator) WrapTransport(rt http.RoundTripper) http.RoundTripper {
type credentials struct {
token string
cert *tls.Certificate
}
// UpdateTransportConfig updates the transport.Config to use credentials
// returned by the plugin.
func (a *Authenticator) UpdateTransportConfig(c *transport.Config) error {
wt := c.WrapTransport
c.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
if wt != nil {
rt = wt(rt)
}
return &roundTripper{a, rt}
}
getCert := c.TLS.GetCert
c.TLS.GetCert = func() (*tls.Certificate, error) {
// If a previous GetCert is present and returns a valid non-nil
// certificate, use that. Otherwise use the cert from the exec plugin.
if getCert != nil {
cert, err := getCert()
if err != nil {
return nil, err
}
if cert != nil {
return cert, nil
}
}
return a.cert()
}
var dial func(ctx context.Context, network, addr string) (net.Conn, error)
if c.Dial != nil {
dial = c.Dial
} else {
dial = (&net.Dialer{Timeout: 30 * time.Second, KeepAlive: 30 * time.Second}).DialContext
}
d := connrotation.NewDialer(dial)
a.onRotate = d.CloseAll
c.Dial = d.DialContext
return nil
}
type roundTripper struct {
@ -169,11 +219,13 @@ func (r *roundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
return r.base.RoundTrip(req)
}
token, err := r.a.token()
creds, err := r.a.getCreds()
if err != nil {
return nil, fmt.Errorf("getting token: %v", err)
return nil, fmt.Errorf("getting credentials: %v", err)
}
if creds.token != "" {
req.Header.Set("Authorization", "Bearer "+creds.token)
}
req.Header.Set("Authorization", "Bearer "+token)
res, err := r.base.RoundTrip(req)
if err != nil {
@ -184,47 +236,60 @@ func (r *roundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
Header: res.Header,
Code: int32(res.StatusCode),
}
if err := r.a.refresh(token, resp); err != nil {
glog.Errorf("refreshing token: %v", err)
if err := r.a.maybeRefreshCreds(creds, resp); err != nil {
glog.Errorf("refreshing credentials: %v", err)
}
}
return res, nil
}
func (a *Authenticator) tokenExpired() bool {
func (a *Authenticator) credsExpired() bool {
if a.exp.IsZero() {
return false
}
return a.now().After(a.exp)
}
func (a *Authenticator) token() (string, error) {
a.mu.Lock()
defer a.mu.Unlock()
if a.cachedToken != "" && !a.tokenExpired() {
return a.cachedToken, nil
func (a *Authenticator) cert() (*tls.Certificate, error) {
creds, err := a.getCreds()
if err != nil {
return nil, err
}
return a.getToken(nil)
return creds.cert, nil
}
// refresh executes the plugin to force a rotation of the token.
func (a *Authenticator) refresh(token string, r *clientauthentication.Response) error {
func (a *Authenticator) getCreds() (*credentials, error) {
a.mu.Lock()
defer a.mu.Unlock()
if a.cachedCreds != nil && !a.credsExpired() {
return a.cachedCreds, nil
}
if err := a.refreshCredsLocked(nil); err != nil {
return nil, err
}
return a.cachedCreds, nil
}
// maybeRefreshCreds executes the plugin to force a rotation of the
// credentials, unless they were rotated already.
func (a *Authenticator) maybeRefreshCreds(creds *credentials, r *clientauthentication.Response) error {
a.mu.Lock()
defer a.mu.Unlock()
if token != a.cachedToken {
// Token already rotated.
// Since we're not making a new pointer to a.cachedCreds in getCreds, no
// need to do deep comparison.
if creds != a.cachedCreds {
// Credentials already rotated.
return nil
}
_, err := a.getToken(r)
return err
return a.refreshCredsLocked(r)
}
// getToken executes the plugin and reads the credentials from stdout. It must be
// called while holding the Authenticator's mutex.
func (a *Authenticator) getToken(r *clientauthentication.Response) (string, error) {
// refreshCredsLocked executes the plugin and reads the credentials from
// stdout. It must be called while holding the Authenticator's mutex.
func (a *Authenticator) refreshCredsLocked(r *clientauthentication.Response) error {
cred := &clientauthentication.ExecCredential{
Spec: clientauthentication.ExecCredentialSpec{
Response: r,
@ -232,13 +297,18 @@ func (a *Authenticator) getToken(r *clientauthentication.Response) (string, erro
},
}
env := append(a.environ(), a.env...)
if a.group == v1alpha1.SchemeGroupVersion {
// Input spec disabled for beta due to lack of use. Possibly re-enable this later if
// someone wants it back.
//
// See: https://github.com/kubernetes/kubernetes/issues/61796
data, err := runtime.Encode(codecs.LegacyCodec(a.group), cred)
if err != nil {
return "", fmt.Errorf("encode ExecCredentials: %v", err)
return fmt.Errorf("encode ExecCredentials: %v", err)
}
env := append(a.environ(), a.env...)
env = append(env, fmt.Sprintf("%s=%s", execInfoEnv, data))
}
stdout := &bytes.Buffer{}
cmd := exec.Command(a.cmd, a.args...)
@ -250,23 +320,26 @@ func (a *Authenticator) getToken(r *clientauthentication.Response) (string, erro
}
if err := cmd.Run(); err != nil {
return "", fmt.Errorf("exec: %v", err)
return fmt.Errorf("exec: %v", err)
}
_, gvk, err := codecs.UniversalDecoder(a.group).Decode(stdout.Bytes(), nil, cred)
if err != nil {
return "", fmt.Errorf("decode stdout: %v", err)
return fmt.Errorf("decoding stdout: %v", err)
}
if gvk.Group != a.group.Group || gvk.Version != a.group.Version {
return "", fmt.Errorf("exec plugin is configured to use API version %s, plugin returned version %s",
return fmt.Errorf("exec plugin is configured to use API version %s, plugin returned version %s",
a.group, schema.GroupVersion{Group: gvk.Group, Version: gvk.Version})
}
if cred.Status == nil {
return "", fmt.Errorf("exec plugin didn't return a status field")
return fmt.Errorf("exec plugin didn't return a status field")
}
if cred.Status.Token == "" {
return "", fmt.Errorf("exec plugin didn't return a token")
if cred.Status.Token == "" && cred.Status.ClientCertificateData == "" && cred.Status.ClientKeyData == "" {
return fmt.Errorf("exec plugin didn't return a token or cert/key pair")
}
if (cred.Status.ClientCertificateData == "") != (cred.Status.ClientKeyData == "") {
return fmt.Errorf("exec plugin returned only certificate or key, not both")
}
if cred.Status.ExpirationTimestamp != nil {
@ -274,7 +347,24 @@ func (a *Authenticator) getToken(r *clientauthentication.Response) (string, erro
} else {
a.exp = time.Time{}
}
a.cachedToken = cred.Status.Token
return a.cachedToken, nil
newCreds := &credentials{
token: cred.Status.Token,
}
if cred.Status.ClientKeyData != "" && cred.Status.ClientCertificateData != "" {
cert, err := tls.X509KeyPair([]byte(cred.Status.ClientCertificateData), []byte(cred.Status.ClientKeyData))
if err != nil {
return fmt.Errorf("failed parsing client key/certificate: %v", err)
}
newCreds.cert = &cert
}
oldCreds := a.cachedCreds
a.cachedCreds = newCreds
// Only close all connections when TLS cert rotates. Token rotation doesn't
// need the extra noise.
if a.onRotate != nil && oldCreds != nil && !reflect.DeepEqual(oldCreds.cert, a.cachedCreds.cert) {
a.onRotate()
}
return nil
}

View File

@ -17,6 +17,7 @@ limitations under the License.
package rest
import (
"context"
"fmt"
"io/ioutil"
"net"
@ -29,7 +30,6 @@ import (
"github.com/golang/glog"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
@ -111,7 +111,7 @@ type Config struct {
Timeout time.Duration
// Dial specifies the dial function for creating unencrypted TCP connections.
Dial func(network, addr string) (net.Conn, error)
Dial func(ctx context.Context, network, address string) (net.Conn, error)
// Version forces a specific version to be used (if registered)
// Do we need this?
@ -316,12 +316,12 @@ func InClusterConfig() (*Config, error) {
return nil, fmt.Errorf("unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined")
}
token, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/" + v1.ServiceAccountTokenKey)
token, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
if err != nil {
return nil, err
}
tlsClientConfig := TLSClientConfig{}
rootCAFile := "/var/run/secrets/kubernetes.io/serviceaccount/" + v1.ServiceAccountRootCAKey
rootCAFile := "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
if _, err := certutil.NewPool(rootCAFile); err != nil {
glog.Errorf("Expected to load root CA config from %s, but got err: %v", rootCAFile, err)
} else {

View File

@ -317,10 +317,14 @@ func (r *Request) Param(paramName, s string) *Request {
// VersionedParams will not write query parameters that have omitempty set and are empty. If a
// parameter has already been set it is appended to (Params and VersionedParams are additive).
func (r *Request) VersionedParams(obj runtime.Object, codec runtime.ParameterCodec) *Request {
return r.SpecificallyVersionedParams(obj, codec, *r.content.GroupVersion)
}
func (r *Request) SpecificallyVersionedParams(obj runtime.Object, codec runtime.ParameterCodec, version schema.GroupVersion) *Request {
if r.err != nil {
return r
}
params, err := codec.EncodeParameters(obj, *r.content.GroupVersion)
params, err := codec.EncodeParameters(obj, version)
if err != nil {
r.err = err
return r
@ -353,8 +357,8 @@ func (r *Request) SetHeader(key string, values ...string) *Request {
return r
}
// Timeout makes the request use the given duration as a timeout. Sets the "timeout"
// parameter.
// Timeout makes the request use the given duration as an overall timeout for the
// request. Additionally, if set passes the value as "timeout" parameter in URL.
func (r *Request) Timeout(d time.Duration) *Request {
if r.err != nil {
return r
@ -485,6 +489,19 @@ func (r *Request) tryThrottle() {
// Watch attempts to begin watching the requested location.
// Returns a watch.Interface, or an error.
func (r *Request) Watch() (watch.Interface, error) {
return r.WatchWithSpecificDecoders(
func(body io.ReadCloser) streaming.Decoder {
framer := r.serializers.Framer.NewFrameReader(body)
return streaming.NewDecoder(framer, r.serializers.StreamingSerializer)
},
r.serializers.Decoder,
)
}
// WatchWithSpecificDecoders attempts to begin watching the requested location with a *different* decoder.
// Turns out that you want one "standard" decoder for the watch event and one "personal" decoder for the content
// Returns a watch.Interface, or an error.
func (r *Request) WatchWithSpecificDecoders(wrapperDecoderFn func(io.ReadCloser) streaming.Decoder, embeddedDecoder runtime.Decoder) (watch.Interface, error) {
// We specifically don't want to rate limit watches, so we
// don't use r.throttle here.
if r.err != nil {
@ -532,9 +549,8 @@ func (r *Request) Watch() (watch.Interface, error) {
}
return nil, fmt.Errorf("for request '%+v', got status: %v", url, resp.StatusCode)
}
framer := r.serializers.Framer.NewFrameReader(resp.Body)
decoder := streaming.NewDecoder(framer, r.serializers.StreamingSerializer)
return watch.NewStreamWatcher(restclientwatch.NewDecoder(decoder, r.serializers.Decoder)), nil
wrapperDecoder := wrapperDecoderFn(resp.Body)
return watch.NewStreamWatcher(restclientwatch.NewDecoder(wrapperDecoder, embeddedDecoder)), nil
}
// updateURLMetrics is a convenience function for pushing metrics.
@ -640,7 +656,6 @@ func (r *Request) request(fn func(*http.Request, *http.Response)) error {
}
// Right now we make about ten retry attempts if we get a Retry-After response.
// TODO: Change to a timeout based approach.
maxRetries := 10
retries := 0
for {
@ -649,6 +664,14 @@ func (r *Request) request(fn func(*http.Request, *http.Response)) error {
if err != nil {
return err
}
if r.timeout > 0 {
if r.ctx == nil {
r.ctx = context.Background()
}
var cancelFn context.CancelFunc
r.ctx, cancelFn = context.WithTimeout(r.ctx, r.timeout)
defer cancelFn()
}
if r.ctx != nil {
req = req.WithContext(r.ctx)
}

View File

@ -18,6 +18,7 @@ package rest
import (
"crypto/tls"
"errors"
"net/http"
"k8s.io/client-go/plugin/pkg/client/auth/exec"
@ -59,39 +60,10 @@ func HTTPWrappersForConfig(config *Config, rt http.RoundTripper) (http.RoundTrip
// TransportConfig converts a client config to an appropriate transport config.
func (c *Config) TransportConfig() (*transport.Config, error) {
wt := c.WrapTransport
if c.ExecProvider != nil {
provider, err := exec.GetAuthenticator(c.ExecProvider)
if err != nil {
return nil, err
}
if wt != nil {
previousWT := wt
wt = func(rt http.RoundTripper) http.RoundTripper {
return provider.WrapTransport(previousWT(rt))
}
} else {
wt = provider.WrapTransport
}
}
if c.AuthProvider != nil {
provider, err := GetAuthProvider(c.Host, c.AuthProvider, c.AuthConfigPersister)
if err != nil {
return nil, err
}
if wt != nil {
previousWT := wt
wt = func(rt http.RoundTripper) http.RoundTripper {
return provider.WrapTransport(previousWT(rt))
}
} else {
wt = provider.WrapTransport
}
}
return &transport.Config{
conf := &transport.Config{
UserAgent: c.UserAgent,
Transport: c.Transport,
WrapTransport: wt,
WrapTransport: c.WrapTransport,
TLS: transport.TLSConfig{
Insecure: c.Insecure,
ServerName: c.ServerName,
@ -111,5 +83,34 @@ func (c *Config) TransportConfig() (*transport.Config, error) {
Extra: c.Impersonate.Extra,
},
Dial: c.Dial,
}, nil
}
if c.ExecProvider != nil && c.AuthProvider != nil {
return nil, errors.New("execProvider and authProvider cannot be used in combination")
}
if c.ExecProvider != nil {
provider, err := exec.GetAuthenticator(c.ExecProvider)
if err != nil {
return nil, err
}
if err := provider.UpdateTransportConfig(conf); err != nil {
return nil, err
}
}
if c.AuthProvider != nil {
provider, err := GetAuthProvider(c.Host, c.AuthProvider, c.AuthConfigPersister)
if err != nil {
return nil, err
}
wt := conf.WrapTransport
if wt != nil {
conf.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
return provider.WrapTransport(wt(rt))
}
} else {
conf.WrapTransport = provider.WrapTransport
}
}
return conf, nil
}
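A hedged sketch of the client-side wiring this enables, assuming rest.Config and clientcmd/api keep the shapes used above; the host and plugin command are placeholders:

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	cfg := &rest.Config{
		Host: "https://example-cluster:6443", // placeholder
		ExecProvider: &clientcmdapi.ExecConfig{
			APIVersion: "client.authentication.k8s.io/v1beta1",
			Command:    "example-credential-plugin", // placeholder binary on PATH
		},
	}

	// TransportConfig hooks the exec authenticator into WrapTransport, TLS.GetCert
	// and Dial. Setting both ExecProvider and AuthProvider is rejected with an error.
	tc, err := cfg.TransportConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println("exec-backed transport config ready, cert callback set:", tc.HasCertCallback())
}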

View File

@ -43,7 +43,9 @@ type tlsCacheKey struct {
caData string
certData string
keyData string
getCert string
serverName string
dial string
}
func (t tlsCacheKey) String() string {
@ -51,7 +53,7 @@ func (t tlsCacheKey) String() string {
if len(t.keyData) > 0 {
keyText = "<redacted>"
}
return fmt.Sprintf("insecure:%v, caData:%#v, certData:%#v, keyData:%s, serverName:%s", t.insecure, t.caData, t.certData, keyText, t.serverName)
return fmt.Sprintf("insecure:%v, caData:%#v, certData:%#v, keyData:%s, getCert: %s, serverName:%s, dial:%s", t.insecure, t.caData, t.certData, keyText, t.getCert, t.serverName, t.dial)
}
func (c *tlsTransportCache) get(config *Config) (http.RoundTripper, error) {
@ -75,7 +77,7 @@ func (c *tlsTransportCache) get(config *Config) (http.RoundTripper, error) {
return nil, err
}
// The options didn't require a custom TLS config
if tlsConfig == nil {
if tlsConfig == nil && config.Dial == nil {
return http.DefaultTransport, nil
}
@ -84,7 +86,7 @@ func (c *tlsTransportCache) get(config *Config) (http.RoundTripper, error) {
dial = (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial
}).DialContext
}
// Cache a single transport for these options
c.transports[key] = utilnet.SetTransportDefaults(&http.Transport{
@ -92,7 +94,7 @@ func (c *tlsTransportCache) get(config *Config) (http.RoundTripper, error) {
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: tlsConfig,
MaxIdleConnsPerHost: idleConnsPerHost,
Dial: dial,
DialContext: dial,
})
return c.transports[key], nil
}
@ -108,6 +110,8 @@ func tlsConfigKey(c *Config) (tlsCacheKey, error) {
caData: string(c.TLS.CAData),
certData: string(c.TLS.CertData),
keyData: string(c.TLS.KeyData),
getCert: fmt.Sprintf("%p", c.TLS.GetCert),
serverName: c.TLS.ServerName,
dial: fmt.Sprintf("%p", c.Dial),
}, nil
}

View File

@ -17,6 +17,8 @@ limitations under the License.
package transport
import (
"context"
"crypto/tls"
"net"
"net/http"
)
@ -53,7 +55,7 @@ type Config struct {
WrapTransport func(rt http.RoundTripper) http.RoundTripper
// Dial specifies the dial function for creating unencrypted TCP connections.
Dial func(network, addr string) (net.Conn, error)
Dial func(ctx context.Context, network, address string) (net.Conn, error)
}
// ImpersonationConfig has all the available impersonation options
@ -83,7 +85,12 @@ func (c *Config) HasTokenAuth() bool {
// HasCertAuth returns whether the configuration has certificate authentication or not.
func (c *Config) HasCertAuth() bool {
return len(c.TLS.CertData) != 0 || len(c.TLS.CertFile) != 0
return (len(c.TLS.CertData) != 0 || len(c.TLS.CertFile) != 0) && (len(c.TLS.KeyData) != 0 || len(c.TLS.KeyFile) != 0)
}
// HasCertCallbacks returns whether the configuration has certificate callback or not.
func (c *Config) HasCertCallback() bool {
return c.TLS.GetCert != nil
}
// TLSConfig holds the information needed to set up a TLS transport.
@ -98,4 +105,6 @@ type TLSConfig struct {
CAData []byte // Bytes of the PEM-encoded server trusted root certificates. Supersedes CAFile.
CertData []byte // Bytes of the PEM-encoded client certificate. Supersedes CertFile.
KeyData []byte // Bytes of the PEM-encoded client key. Supersedes KeyFile.
GetCert func() (*tls.Certificate, error) // Callback that returns a TLS client certificate. CertData, CertFile, KeyData and KeyFile supersede this field.
}

View File

@ -331,7 +331,7 @@ func (r *requestInfo) toCurl() string {
headers := ""
for key, values := range r.RequestHeaders {
for _, value := range values {
headers += fmt.Sprintf(` -H %q`, fmt.Sprintf("%s: '%s'", key, value))
headers += fmt.Sprintf(` -H %q`, fmt.Sprintf("%s: %s", key, value))
}
}

View File

@ -28,7 +28,7 @@ import (
// or transport level security defined by the provided Config.
func New(config *Config) (http.RoundTripper, error) {
// Set transport level security
if config.Transport != nil && (config.HasCA() || config.HasCertAuth() || config.TLS.Insecure) {
if config.Transport != nil && (config.HasCA() || config.HasCertAuth() || config.HasCertCallback() || config.TLS.Insecure) {
return nil, fmt.Errorf("using a custom transport with TLS certificate options or the insecure flag is not allowed")
}
@ -52,7 +52,7 @@ func New(config *Config) (http.RoundTripper, error) {
// TLSConfigFor returns a tls.Config that will provide the transport level security defined
// by the provided Config. Will return nil if no transport level security is requested.
func TLSConfigFor(c *Config) (*tls.Config, error) {
if !(c.HasCA() || c.HasCertAuth() || c.TLS.Insecure) {
if !(c.HasCA() || c.HasCertAuth() || c.HasCertCallback() || c.TLS.Insecure || len(c.TLS.ServerName) > 0) {
return nil, nil
}
if c.HasCA() && c.TLS.Insecure {
@ -75,12 +75,40 @@ func TLSConfigFor(c *Config) (*tls.Config, error) {
tlsConfig.RootCAs = rootCertPool(c.TLS.CAData)
}
var staticCert *tls.Certificate
if c.HasCertAuth() {
// If key/cert were provided, verify them before setting up
// tlsConfig.GetClientCertificate.
cert, err := tls.X509KeyPair(c.TLS.CertData, c.TLS.KeyData)
if err != nil {
return nil, err
}
tlsConfig.Certificates = []tls.Certificate{cert}
staticCert = &cert
}
if c.HasCertAuth() || c.HasCertCallback() {
tlsConfig.GetClientCertificate = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
// Note: static key/cert data always take precedence over cert
// callback.
if staticCert != nil {
return staticCert, nil
}
if c.HasCertCallback() {
cert, err := c.TLS.GetCert()
if err != nil {
return nil, err
}
// GetCert may return an empty value, meaning no cert.
if cert != nil {
return cert, nil
}
}
// Both c.TLS.CertData/KeyData were unset and GetCert didn't return
// anything. Return an empty tls.Certificate, no client cert will
// be sent to the server.
return &tls.Certificate{}, nil
}
}
return tlsConfig, nil
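The hunk above builds on the standard library's `tls.Config.GetClientCertificate` hook. A stripped-down sketch of the same precedence idea, using only `crypto/tls` (the loader function is a hypothetical stand-in for `c.TLS.GetCert`):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// lazyClientTLS resolves the client certificate at handshake time; a nil
// result falls back to an empty certificate, i.e. no client cert is sent.
func lazyClientTLS(loadCert func() (*tls.Certificate, error)) *tls.Config {
	return &tls.Config{
		GetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
			cert, err := loadCert()
			if err != nil {
				return nil, err
			}
			if cert == nil {
				return &tls.Certificate{}, nil
			}
			return cert, nil
		},
	}
}

func main() {
	cfg := lazyClientTLS(func() (*tls.Certificate, error) { return nil, nil })
	cert, _ := cfg.GetClientCertificate(&tls.CertificateRequestInfo{})
	fmt.Println(len(cert.Certificate)) // 0: empty certificate, nothing is sent
}
```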

View File

@ -17,7 +17,11 @@ limitations under the License.
package cert
import (
"crypto"
"crypto/ecdsa"
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"fmt"
"io/ioutil"
"os"
@ -101,6 +105,27 @@ func LoadOrGenerateKeyFile(keyPath string) (data []byte, wasGenerated bool, err
return generatedData, true, nil
}
// MarshalPrivateKeyToPEM converts a known private key type of RSA or ECDSA to
// a PEM encoded block or returns an error.
func MarshalPrivateKeyToPEM(privateKey crypto.PrivateKey) ([]byte, error) {
switch t := privateKey.(type) {
case *ecdsa.PrivateKey:
derBytes, err := x509.MarshalECPrivateKey(t)
if err != nil {
return nil, err
}
privateKeyPemBlock := &pem.Block{
Type: ECPrivateKeyBlockType,
Bytes: derBytes,
}
return pem.EncodeToMemory(privateKeyPemBlock), nil
case *rsa.PrivateKey:
return EncodePrivateKeyPEM(t), nil
default:
return nil, fmt.Errorf("private key is not a recognized type: %T", privateKey)
}
}
// NewPool returns an x509.CertPool containing the certificates in the given PEM-encoded file.
// Returns an error if the file could not be read, a certificate could not be parsed, or if the file does not contain any certificates
func NewPool(filename string) (*x509.CertPool, error) {
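A brief usage sketch for the new helper, assuming the vendored import path `k8s.io/client-go/util/cert` for the package above; everything else is standard library:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"encoding/pem"
	"fmt"

	"k8s.io/client-go/util/cert" // assumed vendored import path
)

func main() {
	// Generate a throwaway ECDSA key and round-trip it through the helper.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	pemBytes, err := cert.MarshalPrivateKeyToPEM(key)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	fmt.Println(block.Type) // expected: "EC PRIVATE KEY"
}
```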

View File

@ -0,0 +1,105 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package connrotation implements a connection dialer that tracks and can close
// all created connections.
//
// This is used for credential rotation of long-lived connections, when there's
// no way to re-authenticate on a live connection.
package connrotation
import (
"context"
"net"
"sync"
)
// DialFunc is a shorthand for the signature of net.Dialer.DialContext.
type DialFunc func(ctx context.Context, network, address string) (net.Conn, error)
// Dialer opens connections through Dial and tracks them.
type Dialer struct {
dial DialFunc
mu sync.Mutex
conns map[*closableConn]struct{}
}
// NewDialer creates a new Dialer instance.
//
// If dial is not nil, it will be used to create new underlying connections.
// Otherwise net.DialContext is used.
func NewDialer(dial DialFunc) *Dialer {
return &Dialer{
dial: dial,
conns: make(map[*closableConn]struct{}),
}
}
// CloseAll forcibly closes all tracked connections.
//
// Note: new connections may get created before CloseAll returns.
func (d *Dialer) CloseAll() {
d.mu.Lock()
conns := d.conns
d.conns = make(map[*closableConn]struct{})
d.mu.Unlock()
for conn := range conns {
conn.Close()
}
}
// Dial creates a new tracked connection.
func (d *Dialer) Dial(network, address string) (net.Conn, error) {
return d.DialContext(context.Background(), network, address)
}
// DialContext creates a new tracked connection.
func (d *Dialer) DialContext(ctx context.Context, network, address string) (net.Conn, error) {
conn, err := d.dial(ctx, network, address)
if err != nil {
return nil, err
}
closable := &closableConn{Conn: conn}
// Start tracking the connection
d.mu.Lock()
d.conns[closable] = struct{}{}
d.mu.Unlock()
// When the connection is closed, remove it from the map. This will
// be a no-op if the connection isn't in the map, e.g. if CloseAll()
// is called.
closable.onClose = func() {
d.mu.Lock()
delete(d.conns, closable)
d.mu.Unlock()
}
return closable, nil
}
type closableConn struct {
onClose func()
net.Conn
}
func (c *closableConn) Close() error {
go c.onClose()
return c.Conn.Close()
}
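A minimal usage sketch for the new package; the import path is assumed to be the vendored `k8s.io/client-go/util/connrotation`, and the endpoint is hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"net"

	"k8s.io/client-go/util/connrotation" // assumed vendored import path
)

func main() {
	// Wrap the standard dialer so every connection it opens is tracked.
	d := connrotation.NewDialer((&net.Dialer{}).DialContext)

	conn, err := d.DialContext(context.Background(), "tcp", "example.com:443") // hypothetical endpoint
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// After rotating credentials, close every tracked connection so clients
	// reconnect (and re-authenticate) with the new credentials.
	d.CloseAll()
}
```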

View File

@ -5,8 +5,8 @@ Building Kubernetes is easy if you take advantage of the containerized build env
## Requirements
1. Docker, using one of the following configurations:
* **Mac OS X** You can either use Docker for Mac or docker-machine. See installation instructions [here](https://docs.docker.com/docker-for-mac/).
**Note**: You will want to set the Docker VM to have at least 3GB of initial memory or building will likely fail. (See: [#11852]( http://issue.k8s.io/11852)).
* **macOS** You can either use Docker for Mac or docker-machine. See installation instructions [here](https://docs.docker.com/docker-for-mac/).
**Note**: You will want to set the Docker VM to have at least 4.5GB of initial memory or building will likely fail. (See: [#11852]( http://issue.k8s.io/11852)).
* **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS.
* **Remote Docker engine** Use a big machine in the cloud to build faster. This is a little trickier so look at the section later on.
2. **Optional** [Google Cloud SDK](https://developers.google.com/cloud/sdk/)
@ -107,4 +107,23 @@ In addition, there are some other tar files that are created:
When building final release tars, they are first staged into `_output/release-stage` before being tar'd up and put into `_output/release-tars`.
## Reproducibility
`make release`, its variant `make quick-release`, and Bazel all provide a
hermetic build environment which should provide some level of reproducibility
for builds. `make` itself is **not** hermetic.
The Kubernetes build environment supports the [`SOURCE_DATE_EPOCH` environment
variable](https://reproducible-builds.org/specs/source-date-epoch/) specified by
the Reproducible Builds project, which can be set to a UNIX epoch timestamp.
This will be used for the build timestamps embedded in compiled Go binaries,
and maybe someday also Docker images.
One reasonable setting for this variable is to use the commit timestamp from the
tip of the tree being built; this is what the Kubernetes CI system uses. For
example, you could use the following one-liner:
```bash
SOURCE_DATE_EPOCH=$(git show -s --format=format:%ct HEAD)
```
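For instance, one might prefix the build with the same expression, e.g. `SOURCE_DATE_EPOCH=$(git show -s --format=format:%ct HEAD) make release`, so the embedded timestamps match the commit being built.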
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/README.md?pixel)]()

View File

@ -45,24 +45,17 @@ const (
// to one container of a pod.
SeccompContainerAnnotationKeyPrefix string = "container.seccomp.security.alpha.kubernetes.io/"
// SeccompProfileRuntimeDefault represents the default seccomp profile used by container runtime.
SeccompProfileRuntimeDefault string = "runtime/default"
// DeprecatedSeccompProfileDockerDefault represents the default seccomp profile used by docker.
// This is now deprecated and should be replaced by SeccompProfileRuntimeDefault.
DeprecatedSeccompProfileDockerDefault string = "docker/default"
// PreferAvoidPodsAnnotationKey represents the key of preferAvoidPods data (json serialized)
// in the Annotations of a Node.
PreferAvoidPodsAnnotationKey string = "scheduler.alpha.kubernetes.io/preferAvoidPods"
// SysctlsPodAnnotationKey represents the key of sysctls which are set for the infrastructure
// container of a pod. The annotation value is a comma separated list of sysctl_name=value
// key-value pairs. Only a limited set of whitelisted and isolated sysctls is supported by
// the kubelet. Pods with other sysctls will fail to launch.
SysctlsPodAnnotationKey string = "security.alpha.kubernetes.io/sysctls"
// UnsafeSysctlsPodAnnotationKey represents the key of sysctls which are set for the infrastructure
// container of a pod. The annotation value is a comma separated list of sysctl_name=value
// key-value pairs. Unsafe sysctls must be explicitly enabled for a kubelet. They are properly
// namespaced to a pod or a container, but their isolation is usually unclear or weak. Their use
// is at-your-own-risk. Pods that attempt to set an unsafe sysctl that is not enabled for a kubelet
// will fail to launch.
UnsafeSysctlsPodAnnotationKey string = "security.alpha.kubernetes.io/unsafe-sysctls"
// ObjectTTLAnnotations represents a suggestion for kubelet for how long it can cache
// an object (e.g. secret, config map) before fetching it again from apiserver.
// This annotation can be attached to node.

View File

@ -60,7 +60,6 @@ func addKnownTypes(scheme *runtime.Scheme) error {
&ServiceProxyOptions{},
&NodeList{},
&Node{},
&NodeConfigSource{},
&NodeProxyOptions{},
&Endpoints{},
&EndpointsList{},

View File

@ -47,13 +47,6 @@ func (self *ResourceList) Pods() *resource.Quantity {
return &resource.Quantity{}
}
func (self *ResourceList) NvidiaGPU() *resource.Quantity {
if val, ok := (*self)[ResourceNvidiaGPU]; ok {
return &val
}
return &resource.Quantity{}
}
func (self *ResourceList) StorageEphemeral() *resource.Quantity {
if val, ok := (*self)[ResourceEphemeralStorage]; ok {
return &val

View File

@ -20,177 +20,10 @@ import (
"k8s.io/apimachinery/pkg/api/resource"
metainternalversion "k8s.io/apimachinery/pkg/apis/meta/internalversion"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
)
// Common string formats
// ---------------------
// Many fields in this API have formatting requirements. The commonly used
// formats are defined here.
//
// C_IDENTIFIER: This is a string that conforms to the definition of an "identifier"
// in the C language. This is captured by the following regex:
// [A-Za-z_][A-Za-z0-9_]*
// This defines the format, but not the length restriction, which should be
// specified at the definition of any field of this type.
//
// DNS_LABEL: This is a string, no more than 63 characters long, that conforms
// to the definition of a "label" in RFCs 1035 and 1123. This is captured
// by the following regex:
// [a-z0-9]([-a-z0-9]*[a-z0-9])?
//
// DNS_SUBDOMAIN: This is a string, no more than 253 characters long, that conforms
// to the definition of a "subdomain" in RFCs 1035 and 1123. This is captured
// by the following regex:
// [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*
// or more simply:
// DNS_LABEL(\.DNS_LABEL)*
//
// IANA_SVC_NAME: This is a string, no more than 15 characters long, that
// conforms to the definition of IANA service name in RFC 6335.
// It must contain at least one letter [a-z] and it must contain only [a-z0-9-].
// Hyphens ('-') cannot be the leading or trailing character of the string
// and cannot be adjacent to other hyphens.
// ObjectMeta is metadata that all persisted resources must have, which includes all objects
// users must create.
// DEPRECATED: Use k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta instead - this type will be removed soon.
type ObjectMeta struct {
// Name is unique within a namespace. Name is required when creating resources, although
// some resources may allow a client to request the generation of an appropriate name
// automatically. Name is primarily intended for creation idempotence and configuration
// definition.
// +optional
Name string
// GenerateName indicates that the name should be made unique by the server prior to persisting
// it. A non-empty value for the field indicates the name will be made unique (and the name
// returned to the client will be different than the name passed). The value of this field will
// be combined with a unique suffix on the server if the Name field has not been provided.
// The provided value must be valid within the rules for Name, and may be truncated by the length
// of the suffix required to make the value unique on the server.
//
// If this field is specified, and Name is not present, the server will NOT return a 409 if the
// generated name exists - instead, it will either return 201 Created or 500 with Reason
// ServerTimeout indicating a unique name could not be found in the time allotted, and the client
// should retry (optionally after the time indicated in the Retry-After header).
// +optional
GenerateName string
// Namespace defines the space within which name must be unique. An empty namespace is
// equivalent to the "default" namespace, but "default" is the canonical representation.
// Not all objects are required to be scoped to a namespace - the value of this field for
// those objects will be empty.
// +optional
Namespace string
// SelfLink is a URL representing this object.
// +optional
SelfLink string
// UID is the unique in time and space value for this object. It is typically generated by
// the server on successful creation of a resource and is not allowed to change on PUT
// operations.
// +optional
UID types.UID
// An opaque value that represents the version of this resource. May be used for optimistic
// concurrency, change detection, and the watch operation on a resource or set of resources.
// Clients must treat these values as opaque and values may only be valid for a particular
// resource or set of resources. Only servers will generate resource versions.
// +optional
ResourceVersion string
// A sequence number representing a specific generation of the desired state.
// Populated by the system. Read-only.
// +optional
Generation int64
// CreationTimestamp is a timestamp representing the server time when this object was
// created. It is not guaranteed to be set in happens-before order across separate operations.
// Clients may not set this value. It is represented in RFC3339 form and is in UTC.
// +optional
CreationTimestamp metav1.Time
// DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This
// field is set by the server when a graceful deletion is requested by the user, and is not
// directly settable by a client. The resource is expected to be deleted (no longer visible
// from resource lists, and not reachable by name) after the time in this field. Once set,
// this value may not be unset or be set further into the future, although it may be shortened
// or the resource may be deleted prior to this time. For example, a user may request that
// a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination
// signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard
// termination signal (SIGKILL) to the container and after cleanup, remove the pod from the
// API. In the presence of network partitions, this object may still exist after this
// timestamp, until an administrator or automated process can determine the resource is
// fully terminated.
// If not set, graceful deletion of the object has not been requested.
//
// Populated by the system when a graceful deletion is requested.
// Read-only.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
// +optional
DeletionTimestamp *metav1.Time
// DeletionGracePeriodSeconds records the graceful deletion value set when graceful deletion
// was requested. Represents the most recent grace period, and may only be shortened once set.
// +optional
DeletionGracePeriodSeconds *int64
// Labels are key value pairs that may be used to scope and select individual resources.
// Label keys are of the form:
// label-key ::= prefixed-name | name
// prefixed-name ::= prefix '/' name
// prefix ::= DNS_SUBDOMAIN
// name ::= DNS_LABEL
// The prefix is optional. If the prefix is not specified, the key is assumed to be private
// to the user. Other system components that wish to use labels must specify a prefix. The
// "kubernetes.io/" prefix is reserved for use by kubernetes components.
// +optional
Labels map[string]string
// Annotations are unstructured key value data stored with a resource that may be set by
// external tooling. They are not queryable and should be preserved when modifying
// objects. Annotation keys have the same formatting restrictions as Label keys. See the
// comments on Labels for details.
// +optional
Annotations map[string]string
// List of objects depended by this object. If ALL objects in the list have
// been deleted, this object will be garbage collected. If this object is managed by a controller,
// then an entry in this list will point to this controller, with the controller field set to true.
// There cannot be more than one managing controller.
// +optional
OwnerReferences []metav1.OwnerReference
// An initializer is a controller which enforces some system invariant at object creation time.
// This field is a list of initializers that have not yet acted on this object. If nil or empty,
// this object has been completely initialized. Otherwise, the object is considered uninitialized
// and is hidden (in list/watch and get calls) from clients that haven't explicitly asked to
// observe uninitialized objects.
//
// When an object is created, the system will populate this list with the current set of initializers.
// Only privileged users may set or modify this list. Once it is empty, it may not be modified further
// by any user.
Initializers *metav1.Initializers
// Must be empty before the object is deleted from the registry. Each entry
// is an identifier for the responsible component that will remove the entry
// from the list. If the deletionTimestamp of the object is non-nil, entries
// in this list can only be removed.
// +optional
Finalizers []string
// The name of the cluster which the object belongs to.
// This is used to distinguish resources with same name and namespace in different clusters.
// This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request.
// +optional
ClusterName string
}
const (
// NamespaceDefault means the object is in the default namespace which is applied when not specified by clients
NamespaceDefault string = "default"
@ -242,6 +75,9 @@ type VolumeSource struct {
// +optional
AWSElasticBlockStore *AWSElasticBlockStoreVolumeSource
// GitRepo represents a git repository at a particular revision.
// DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an
// EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir
// into the Pod's container.
// +optional
GitRepo *GitRepoVolumeSource
// Secret represents a secret that should populate this volume.
@ -357,7 +193,7 @@ type PersistentVolumeSource struct {
FlexVolume *FlexPersistentVolumeSource
// Cinder represents a cinder volume attached and mounted on kubelets host machine
// +optional
Cinder *CinderVolumeSource
Cinder *CinderPersistentVolumeSource
// CephFS represents a Ceph FS mount on the host that shares a pod's lifetime
// +optional
CephFS *CephFSPersistentVolumeSource
@ -412,10 +248,6 @@ const (
// MountOptionAnnotation defines mount option annotation used in PVs
MountOptionAnnotation = "volume.beta.kubernetes.io/mount-options"
// AlphaStorageNodeAffinityAnnotation defines node affinity policies for a PersistentVolume.
// Value is a string of the json representation of type NodeAffinity
AlphaStorageNodeAffinityAnnotation = "volume.alpha.kubernetes.io/node-affinity"
)
// +genclient
@ -961,6 +793,10 @@ type AWSElasticBlockStoreVolumeSource struct {
// Represents a volume that is populated with the contents of a git repository.
// Git repo volumes do not support ownership management.
// Git repo volumes support SELinux relabeling.
//
// DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an
// EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir
// into the Pod's container.
type GitRepoVolumeSource struct {
// Repository URL
Repository string
@ -1163,6 +999,32 @@ type CinderVolumeSource struct {
// the ReadOnly setting in VolumeMounts.
// +optional
ReadOnly bool
// Optional: points to a secret object containing parameters used to connect
// to OpenStack.
// +optional
SecretRef *LocalObjectReference
}
// Represents a cinder volume resource in OpenStack. A Cinder volume
// must exist before mounting to a container. The volume must also be
// in the same region as the kubelet. Cinder volumes support ownership
// management and SELinux relabeling.
type CinderPersistentVolumeSource struct {
// Unique id of the volume used to identify the cinder volume
VolumeID string
// Filesystem type to mount.
// Must be a filesystem type supported by the host operating system.
// Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// +optional
FSType string
// Optional: Defaults to false (read/write). ReadOnly here will force
// the ReadOnly setting in VolumeMounts.
// +optional
ReadOnly bool
// Optional: points to a secret object containing parameters used to connect
// to OpenStack.
// +optional
SecretRef *SecretReference
}
// Represents a Ceph Filesystem mount that lasts the lifetime of a pod
@ -1562,6 +1424,28 @@ type ConfigMapProjection struct {
Optional *bool
}
// ServiceAccountTokenProjection represents a projected service account token
// volume. This projection can be used to insert a service account token into
// the pod's runtime filesystem for use against APIs (Kubernetes API Server or
// otherwise).
type ServiceAccountTokenProjection struct {
// Audience is the intended audience of the token. A recipient of a token
// must identify itself with an identifier specified in the audience of the
// token, and otherwise should reject the token. The audience defaults to the
// identifier of the apiserver.
Audience string
// ExpirationSeconds is the requested duration of validity of the service
// account token. As the token approaches expiration, the kubelet volume
// plugin will proactively rotate the service account token. The kubelet will
// start trying to rotate the token if the token is older than 80 percent of
// its time to live or if the token is older than 24 hours. Defaults to 1 hour
// and must be at least 10 minutes.
ExpirationSeconds int64
// Path is the path relative to the mount point of the file to project the
// token into.
Path string
}
// Represents a projected volume source
type ProjectedVolumeSource struct {
// list of volume projections
@ -1585,6 +1469,8 @@ type VolumeProjection struct {
DownwardAPI *DownwardAPIProjection
// information about the configMap data to project
ConfigMap *ConfigMapProjection
// information about the serviceAccountToken data to project
ServiceAccountToken *ServiceAccountTokenProjection
}
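A brief construction sketch, written as if it lived in the same package as the types above (so no import path is assumed); the field names and types come from the hunks, while the concrete values are hypothetical:

```go
// exampleTokenProjection projects a service account token into a pod volume.
func exampleTokenProjection() VolumeProjection {
	return VolumeProjection{
		ServiceAccountToken: &ServiceAccountTokenProjection{
			Audience:          "https://example.invalid", // hypothetical audience
			ExpirationSeconds: 3600,
			Path:              "token",
		},
	}
}
```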
// Maps a string key to a path within a volume.
@ -1605,11 +1491,13 @@ type KeyToPath struct {
Mode *int32
}
// Local represents directly-attached storage with node affinity
// Local represents directly-attached storage with node affinity (Beta feature)
type LocalVolumeSource struct {
// The full path to the volume on the node
// For alpha, this path must be a directory
// Once block as a source is supported, then this path can point to a block device
// The full path to the volume on the node.
// It can be either a directory or block device (disk, partition, ...).
// Directories can be represented only by PersistentVolume with VolumeMode=Filesystem.
// Block devices can be represented only by VolumeMode=Block, which also requires the
// BlockVolume alpha feature gate to be enabled.
Path string
}
@ -1711,6 +1599,12 @@ type VolumeMount struct {
type MountPropagationMode string
const (
// MountPropagationNone means that the volume in a container will
// not receive new mounts from the host or other containers, and filesystems
// mounted inside the container won't be propagated to the host or other
// containers.
// Note that this mode corresponds to "private" in Linux terminology.
MountPropagationNone MountPropagationMode = "None"
// MountPropagationHostToContainer means that the volume in a container will
// receive new mounts from the host or other containers, but filesystems
// mounted inside the container won't be propagated to the host or other
@ -2200,6 +2094,8 @@ const (
// PodReasonUnschedulable reason in PodScheduled PodCondition means that the scheduler
// can't schedule the pod right now, for example due to insufficient resources in the cluster.
PodReasonUnschedulable = "Unschedulable"
// ContainersReady indicates whether all containers in the pod are ready.
ContainersReady PodConditionType = "ContainersReady"
)
type PodCondition struct {
@ -2270,10 +2166,14 @@ type NodeSelector struct {
NodeSelectorTerms []NodeSelectorTerm
}
// A null or empty node selector term matches no objects.
// A null or empty node selector term matches no objects. Its requirements
// are ANDed.
// The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
type NodeSelectorTerm struct {
//Required. A list of node selector requirements. The requirements are ANDed.
// A list of node selector requirements by node's labels.
MatchExpressions []NodeSelectorRequirement
// A list of node selector requirements by node's fields.
MatchFields []NodeSelectorRequirement
}
// A node selector requirement is a selector that contains values, a key, and an operator
@ -2306,6 +2206,27 @@ const (
NodeSelectorOpLt NodeSelectorOperator = "Lt"
)
// A topology selector term represents the result of label queries.
// A null or empty topology selector term matches no objects.
// The requirements of them are ANDed.
// It provides a subset of the functionality of NodeSelectorTerm.
// This is an alpha feature and may change in the future.
type TopologySelectorTerm struct {
// A list of topology selector requirements by labels.
// +optional
MatchLabelExpressions []TopologySelectorLabelRequirement
}
// A topology selector requirement is a selector that matches a given label.
// This is an alpha feature and may change in the future.
type TopologySelectorLabelRequirement struct {
// The label key that the selector applies to.
Key string
// An array of string values. One value must match the label to be selected.
// Each entry in Values is ORed.
Values []string
}
// Affinity is a group of affinity scheduling rules.
type Affinity struct {
// Describes node affinity scheduling rules for the pod.
@ -2538,6 +2459,12 @@ const (
TolerationOpEqual TolerationOperator = "Equal"
)
// PodReadinessGate contains the reference to a pod condition
type PodReadinessGate struct {
// ConditionType refers to a condition in the pod's condition list with matching type.
ConditionType PodConditionType
}
// PodSpec is a description of a pod
type PodSpec struct {
Volumes []Volume
@ -2634,6 +2561,12 @@ type PodSpec struct {
// configuration based on DNSPolicy.
// +optional
DNSConfig *PodDNSConfig
// If specified, all readiness gates will be evaluated for pod readiness.
// A pod is ready when all its containers are ready AND
// all conditions specified in the readiness gates have status equal to "True"
// More info: https://github.com/kubernetes/community/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md
// +optional
ReadinessGates []PodReadinessGate
}
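A small sketch of the new field, again written as if in the same package as the types above; the condition type is hypothetical:

```go
// exampleReadinessGates gates pod readiness on a custom condition in addition
// to the built-in container conditions.
func exampleReadinessGates() []PodReadinessGate {
	return []PodReadinessGate{
		{ConditionType: PodConditionType("example.com/feature-ready")},
	}
}
```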
// HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the
@ -2726,6 +2659,10 @@ type PodSecurityContext struct {
// If unset, the Kubelet will not modify the ownership and permissions of any volume.
// +optional
FSGroup *int64
// Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported
// sysctls (by the container runtime) might fail to launch.
// +optional
Sysctls []Sysctl
}
// PodQOSClass defines the supported qos classes of Pods.
@ -3213,9 +3150,6 @@ type ServiceSpec struct {
// The primary use case for setting this field is to use a StatefulSet's Headless Service
// to propagate SRV records for its Pods without respect to their readiness for purpose
// of peer discovery.
// This field will replace the service.alpha.kubernetes.io/tolerate-unready-endpoints
// when that annotation is deprecated and all clients have been converted to use this
// field.
// +optional
PublishNotReadyAddresses bool
}
@ -3393,10 +3327,6 @@ type NodeSpec struct {
// +optional
PodCIDR string
// External ID of the node assigned by some machine database (e.g. a cloud provider)
// +optional
ExternalID string
// ID of the node assigned by the cloud provider
// Note: format is "<ProviderName>://<ProviderSpecificNodeID>"
// +optional
@ -3414,14 +3344,40 @@ type NodeSpec struct {
// The DynamicKubeletConfig feature gate must be enabled for the Kubelet to use this field
// +optional
ConfigSource *NodeConfigSource
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Deprecated. Not all kubelets will set this field. Remove field after 1.13.
// see: https://issues.k8s.io/61966
// +optional
DoNotUse_ExternalID string
}
// NodeConfigSource specifies a source of node configuration. Exactly one subfield must be non-nil.
type NodeConfigSource struct {
metav1.TypeMeta
ConfigMapRef *ObjectReference
ConfigMap *ConfigMapNodeConfigSource
}
type ConfigMapNodeConfigSource struct {
// Namespace is the metadata.namespace of the referenced ConfigMap.
// This field is required in all cases.
Namespace string
// Name is the metadata.name of the referenced ConfigMap.
// This field is required in all cases.
Name string
// UID is the metadata.UID of the referenced ConfigMap.
// This field is forbidden in Node.Spec, and required in Node.Status.
// +optional
UID types.UID
// ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap.
// This field is forbidden in Node.Spec, and required in Node.Status.
// +optional
ResourceVersion string
// KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure
// This field is required in all cases.
KubeletConfigKey string
}
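A construction sketch for the reshaped config source, written as if in the same package as the types above; the namespace, name, and key are hypothetical, and UID/ResourceVersion are omitted because the comments above forbid them in Node.Spec:

```go
// exampleNodeConfigSource points the kubelet at a ConfigMap-based config.
func exampleNodeConfigSource() NodeConfigSource {
	return NodeConfigSource{
		ConfigMap: &ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "node-1-kubelet-config", // hypothetical ConfigMap name
			KubeletConfigKey: "kubelet",               // hypothetical data key
		},
	}
}
```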
// DaemonEndpoint contains information about a single Daemon endpoint.
@ -3471,6 +3427,53 @@ type NodeSystemInfo struct {
Architecture string
}
// NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource.
type NodeConfigStatus struct {
// Assigned reports the checkpointed config the node will try to use.
// When Node.Spec.ConfigSource is updated, the node checkpoints the associated
// config payload to local disk, along with a record indicating intended
// config. The node refers to this record to choose its config checkpoint, and
// reports this record in Assigned. Assigned only updates in the status after
// the record has been checkpointed to disk. When the Kubelet is restarted,
// it tries to make the Assigned config the Active config by loading and
// validating the checkpointed payload identified by Assigned.
// +optional
Assigned *NodeConfigSource
// Active reports the checkpointed config the node is actively using.
// Active will represent either the current version of the Assigned config,
// or the current LastKnownGood config, depending on whether attempting to use the
// Assigned config results in an error.
// +optional
Active *NodeConfigSource
// LastKnownGood reports the checkpointed config the node will fall back to
// when it encounters an error attempting to use the Assigned config.
// The Assigned config becomes the LastKnownGood config when the node determines
// that the Assigned config is stable and correct.
// This is currently implemented as a 10-minute soak period starting when the local
// record of Assigned config is updated. If the Assigned config is Active at the end
// of this period, it becomes the LastKnownGood. Note that if Spec.ConfigSource is
// reset to nil (use local defaults), the LastKnownGood is also immediately reset to nil,
// because the local default config is always assumed good.
// You should not make assumptions about the node's method of determining config stability
// and correctness, as this may change or become configurable in the future.
// +optional
LastKnownGood *NodeConfigSource
// Error describes any problems reconciling the Spec.ConfigSource to the Active config.
// Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned
// record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting
// to load or validate the Assigned config, etc.
// Errors may occur at different points while syncing config. Earlier errors (e.g. download or
// checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across
// Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in
// a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error
// by fixing the config assigned in Spec.ConfigSource.
// You can find additional information for debugging by searching the error message in the Kubelet log.
// Error is a human-readable description of the error state; machines can check whether or not Error
// is empty, but should not rely on the stability of the Error text across Kubelet versions.
// +optional
Error string
}
// NodeStatus is information about the current status of a node.
type NodeStatus struct {
// Capacity represents the total resources of a node.
@ -3503,6 +3506,9 @@ type NodeStatus struct {
// List of volumes that are attached to the node.
// +optional
VolumesAttached []AttachedVolume
// Status of the config assigned to the node via the dynamic Kubelet config feature.
// +optional
Config *NodeConfigStatus
}
type UniqueVolumeName string
@ -3587,8 +3593,6 @@ const (
NodeDiskPressure NodeConditionType = "DiskPressure"
// NodeNetworkUnavailable means that network for the node is not correctly configured.
NodeNetworkUnavailable NodeConditionType = "NetworkUnavailable"
// NodeKubeletConfigOk indicates whether the kubelet is correctly configured
NodeKubeletConfigOk NodeConditionType = "KubeletConfigOk"
)
type NodeCondition struct {
@ -3645,8 +3649,6 @@ const (
// Local ephemeral storage, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024)
// The resource name for ResourceEphemeralStorage is alpha and it can change across releases.
ResourceEphemeralStorage ResourceName = "ephemeral-storage"
// NVIDIA GPU, in devices. Alpha, might change: although fractional and allowing values >1, only one whole device per node is assigned.
ResourceNvidiaGPU ResourceName = "alpha.kubernetes.io/nvidia-gpu"
)
const (
@ -3654,6 +3656,8 @@ const (
ResourceDefaultNamespacePrefix = "kubernetes.io/"
// Name prefix for huge page resources (alpha).
ResourceHugePagesPrefix = "hugepages-"
// Name prefix for storage resource limits
ResourceAttachableVolumesPrefix = "attachable-volumes-"
)
// ResourceList is a set of (resource name, quantity) pairs.
@ -3774,86 +3778,6 @@ type Preconditions struct {
UID *types.UID
}
// DeletionPropagation decides whether and how garbage collection will be performed.
type DeletionPropagation string
const (
// Orphans the dependents.
DeletePropagationOrphan DeletionPropagation = "Orphan"
// Deletes the object from the key-value store, the garbage collector will delete the dependents in the background.
DeletePropagationBackground DeletionPropagation = "Background"
// The object exists in the key-value store until the garbage collector deletes all the dependents whose ownerReference.blockOwnerDeletion=true from the key-value store.
// API server will put the "DeletingDependents" finalizer on the object and set its deletionTimestamp.
// This policy is cascading, i.e., the dependents will be deleted with Foreground.
DeletePropagationForeground DeletionPropagation = "Foreground"
)
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// DeleteOptions may be provided when deleting an API object
// DEPRECATED: This type has been moved to meta/v1 and will be removed soon.
type DeleteOptions struct {
metav1.TypeMeta
// Optional duration in seconds before the object should be deleted. Value must be non-negative integer.
// The value zero indicates delete immediately. If this value is nil, the default grace period for the
// specified type will be used.
// +optional
GracePeriodSeconds *int64
// Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be
// returned.
// +optional
Preconditions *Preconditions
// Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7.
// Should the dependent objects be orphaned. If true/false, the "orphan"
// finalizer will be added to/removed from the object's finalizers list.
// Either this field or PropagationPolicy may be set, but not both.
// +optional
OrphanDependents *bool
// Whether and how garbage collection will be performed.
// Either this field or OrphanDependents may be set, but not both.
// The default policy is decided by the existing finalizer set in the
// metadata.finalizers and the resource-specific default policy.
// Acceptable values are: 'Orphan' - orphan the dependents; 'Background' -
// allow the garbage collector to delete the dependents in the background;
// 'Foreground' - a cascading policy that deletes all dependents in the
// foreground.
// +optional
PropagationPolicy *DeletionPropagation
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ListOptions is the query options to a standard REST list call, and has future support for
// watch calls.
// DEPRECATED: This type has been moved to meta/v1 and will be removed soon.
type ListOptions struct {
metav1.TypeMeta
// A selector based on labels
LabelSelector labels.Selector
// A selector based on fields
FieldSelector fields.Selector
// If true, partially initialized resources are included in the response.
IncludeUninitialized bool
// If true, watch for changes to this list
Watch bool
// When specified with a watch call, shows changes that occur after that particular version of a resource.
// Defaults to changes from the beginning of history.
// When specified for list:
// - if unset, then the result is returned from remote storage based on quorum-read flag;
// - if it's 0, then we simply return what we currently have in cache, no guarantee;
// - if set to non zero, then the result is at least as fresh as given rv.
ResourceVersion string
// Timeout for the list/watch call.
TimeoutSeconds *int64
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PodLogOptions is the query options for a Pod's logs REST call
@ -4272,6 +4196,8 @@ const (
ResourceQuotaScopeBestEffort ResourceQuotaScope = "BestEffort"
// Match all pod objects that do not have best effort quality of service
ResourceQuotaScopeNotBestEffort ResourceQuotaScope = "NotBestEffort"
// Match all pod objects that have priority class mentioned
ResourceQuotaScopePriorityClass ResourceQuotaScope = "PriorityClass"
)
// ResourceQuotaSpec defines the desired hard limits to enforce for Quota
@ -4283,8 +4209,47 @@ type ResourceQuotaSpec struct {
// If not specified, the quota matches all objects.
// +optional
Scopes []ResourceQuotaScope
// ScopeSelector is also a collection of filters like Scopes that must match each object tracked by a quota
// but expressed using ScopeSelectorOperator in combination with possible values.
// +optional
ScopeSelector *ScopeSelector
}
// A scope selector represents the AND of the selectors represented
// by the scoped-resource selector terms.
type ScopeSelector struct {
// A list of scope selector requirements by scope of the resources.
// +optional
MatchExpressions []ScopedResourceSelectorRequirement
}
// A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator
// that relates the scope name and values.
type ScopedResourceSelectorRequirement struct {
// The name of the scope that the selector applies to.
ScopeName ResourceQuotaScope
// Represents a scope's relationship to a set of values.
// Valid operators are In, NotIn, Exists, DoesNotExist.
Operator ScopeSelectorOperator
// An array of string values. If the operator is In or NotIn,
// the values array must be non-empty. If the operator is Exists or DoesNotExist,
// the values array must be empty.
// This array is replaced during a strategic merge patch.
// +optional
Values []string
}
// A scope selector operator is the set of operators that can be used in
// a scope selector requirement.
type ScopeSelectorOperator string
const (
ScopeSelectorOpIn ScopeSelectorOperator = "In"
ScopeSelectorOpNotIn ScopeSelectorOperator = "NotIn"
ScopeSelectorOpExists ScopeSelectorOperator = "Exists"
ScopeSelectorOpDoesNotExist ScopeSelectorOperator = "DoesNotExist"
)
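A construction sketch for the new quota scoping, written as if in the same package as the types above; the priority class name is hypothetical:

```go
// exampleScopeSelector matches only pods with one of the listed priority classes.
func exampleScopeSelector() *ScopeSelector {
	return &ScopeSelector{
		MatchExpressions: []ScopedResourceSelectorRequirement{
			{
				ScopeName: ResourceQuotaScopePriorityClass,
				Operator:  ScopeSelectorOpIn,
				Values:    []string{"high"}, // hypothetical priority class name
			},
		},
	}
}
```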
// ResourceQuotaStatus defines the enforced hard limits and observed use
type ResourceQuotaStatus struct {
// Hard is the set of enforced hard limits for each named resource

View File

@ -1,7 +1,7 @@
// +build !ignore_autogenerated
/*
Copyright 2018 The Kubernetes Authors.
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@ -380,9 +380,43 @@ func (in *CephFSVolumeSource) DeepCopy() *CephFSVolumeSource {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CinderPersistentVolumeSource) DeepCopyInto(out *CinderPersistentVolumeSource) {
*out = *in
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(SecretReference)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CinderPersistentVolumeSource.
func (in *CinderPersistentVolumeSource) DeepCopy() *CinderPersistentVolumeSource {
if in == nil {
return nil
}
out := new(CinderPersistentVolumeSource)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CinderVolumeSource) DeepCopyInto(out *CinderVolumeSource) {
*out = *in
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(LocalObjectReference)
**out = **in
}
}
return
}
@ -631,6 +665,22 @@ func (in *ConfigMapList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ConfigMapNodeConfigSource) DeepCopyInto(out *ConfigMapNodeConfigSource) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ConfigMapNodeConfigSource.
func (in *ConfigMapNodeConfigSource) DeepCopy() *ConfigMapNodeConfigSource {
if in == nil {
return nil
}
out := new(ConfigMapNodeConfigSource)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ConfigMapProjection) DeepCopyInto(out *ConfigMapProjection) {
*out = *in
@ -965,67 +1015,6 @@ func (in *DaemonEndpoint) DeepCopy() *DaemonEndpoint {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeleteOptions) DeepCopyInto(out *DeleteOptions) {
*out = *in
out.TypeMeta = in.TypeMeta
if in.GracePeriodSeconds != nil {
in, out := &in.GracePeriodSeconds, &out.GracePeriodSeconds
if *in == nil {
*out = nil
} else {
*out = new(int64)
**out = **in
}
}
if in.Preconditions != nil {
in, out := &in.Preconditions, &out.Preconditions
if *in == nil {
*out = nil
} else {
*out = new(Preconditions)
(*in).DeepCopyInto(*out)
}
}
if in.OrphanDependents != nil {
in, out := &in.OrphanDependents, &out.OrphanDependents
if *in == nil {
*out = nil
} else {
*out = new(bool)
**out = **in
}
}
if in.PropagationPolicy != nil {
in, out := &in.PropagationPolicy, &out.PropagationPolicy
if *in == nil {
*out = nil
} else {
*out = new(DeletionPropagation)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeleteOptions.
func (in *DeleteOptions) DeepCopy() *DeleteOptions {
if in == nil {
return nil
}
out := new(DeleteOptions)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *DeleteOptions) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DownwardAPIProjection) DeepCopyInto(out *DownwardAPIProjection) {
*out = *in
@ -2145,50 +2134,6 @@ func (in *List) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ListOptions) DeepCopyInto(out *ListOptions) {
*out = *in
out.TypeMeta = in.TypeMeta
if in.LabelSelector == nil {
out.LabelSelector = nil
} else {
out.LabelSelector = in.LabelSelector.DeepCopySelector()
}
if in.FieldSelector == nil {
out.FieldSelector = nil
} else {
out.FieldSelector = in.FieldSelector.DeepCopySelector()
}
if in.TimeoutSeconds != nil {
in, out := &in.TimeoutSeconds, &out.TimeoutSeconds
if *in == nil {
*out = nil
} else {
*out = new(int64)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ListOptions.
func (in *ListOptions) DeepCopy() *ListOptions {
if in == nil {
return nil
}
out := new(ListOptions)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ListOptions) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *LoadBalancerIngress) DeepCopyInto(out *LoadBalancerIngress) {
*out = *in
@ -2469,13 +2414,12 @@ func (in *NodeCondition) DeepCopy() *NodeCondition {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NodeConfigSource) DeepCopyInto(out *NodeConfigSource) {
*out = *in
out.TypeMeta = in.TypeMeta
if in.ConfigMapRef != nil {
in, out := &in.ConfigMapRef, &out.ConfigMapRef
if in.ConfigMap != nil {
in, out := &in.ConfigMap, &out.ConfigMap
if *in == nil {
*out = nil
} else {
*out = new(ObjectReference)
*out = new(ConfigMapNodeConfigSource)
**out = **in
}
}
@ -2492,12 +2436,47 @@ func (in *NodeConfigSource) DeepCopy() *NodeConfigSource {
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *NodeConfigSource) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NodeConfigStatus) DeepCopyInto(out *NodeConfigStatus) {
*out = *in
if in.Assigned != nil {
in, out := &in.Assigned, &out.Assigned
if *in == nil {
*out = nil
} else {
*out = new(NodeConfigSource)
(*in).DeepCopyInto(*out)
}
}
if in.Active != nil {
in, out := &in.Active, &out.Active
if *in == nil {
*out = nil
} else {
*out = new(NodeConfigSource)
(*in).DeepCopyInto(*out)
}
}
if in.LastKnownGood != nil {
in, out := &in.LastKnownGood, &out.LastKnownGood
if *in == nil {
*out = nil
} else {
*out = new(NodeConfigSource)
(*in).DeepCopyInto(*out)
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeConfigStatus.
func (in *NodeConfigStatus) DeepCopy() *NodeConfigStatus {
if in == nil {
return nil
}
out := new(NodeConfigStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
@ -2652,6 +2631,13 @@ func (in *NodeSelectorTerm) DeepCopyInto(out *NodeSelectorTerm) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.MatchFields != nil {
in, out := &in.MatchFields, &out.MatchFields
*out = make([]NodeSelectorRequirement, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
@ -2745,6 +2731,15 @@ func (in *NodeStatus) DeepCopyInto(out *NodeStatus) {
*out = make([]AttachedVolume, len(*in))
copy(*out, *in)
}
if in.Config != nil {
in, out := &in.Config, &out.Config
if *in == nil {
*out = nil
} else {
*out = new(NodeConfigStatus)
(*in).DeepCopyInto(*out)
}
}
return
}
@ -2790,75 +2785,6 @@ func (in *ObjectFieldSelector) DeepCopy() *ObjectFieldSelector {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ObjectMeta) DeepCopyInto(out *ObjectMeta) {
*out = *in
in.CreationTimestamp.DeepCopyInto(&out.CreationTimestamp)
if in.DeletionTimestamp != nil {
in, out := &in.DeletionTimestamp, &out.DeletionTimestamp
if *in == nil {
*out = nil
} else {
*out = (*in).DeepCopy()
}
}
if in.DeletionGracePeriodSeconds != nil {
in, out := &in.DeletionGracePeriodSeconds, &out.DeletionGracePeriodSeconds
if *in == nil {
*out = nil
} else {
*out = new(int64)
**out = **in
}
}
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Annotations != nil {
in, out := &in.Annotations, &out.Annotations
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.OwnerReferences != nil {
in, out := &in.OwnerReferences, &out.OwnerReferences
*out = make([]v1.OwnerReference, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Initializers != nil {
in, out := &in.Initializers, &out.Initializers
if *in == nil {
*out = nil
} else {
*out = new(v1.Initializers)
(*in).DeepCopyInto(*out)
}
}
if in.Finalizers != nil {
in, out := &in.Finalizers, &out.Finalizers
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ObjectMeta.
func (in *ObjectMeta) DeepCopy() *ObjectMeta {
if in == nil {
return nil
}
out := new(ObjectMeta)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ObjectReference) DeepCopyInto(out *ObjectReference) {
*out = *in
@ -3212,8 +3138,8 @@ func (in *PersistentVolumeSource) DeepCopyInto(out *PersistentVolumeSource) {
if *in == nil {
*out = nil
} else {
*out = new(CinderVolumeSource)
**out = **in
*out = new(CinderPersistentVolumeSource)
(*in).DeepCopyInto(*out)
}
}
if in.CephFS != nil {
@ -3827,6 +3753,22 @@ func (in *PodProxyOptions) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodReadinessGate) DeepCopyInto(out *PodReadinessGate) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodReadinessGate.
func (in *PodReadinessGate) DeepCopy() *PodReadinessGate {
if in == nil {
return nil
}
out := new(PodReadinessGate)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodSecurityContext) DeepCopyInto(out *PodSecurityContext) {
*out = *in
@ -3889,6 +3831,11 @@ func (in *PodSecurityContext) DeepCopyInto(out *PodSecurityContext) {
**out = **in
}
}
if in.Sysctls != nil {
in, out := &in.Sysctls, &out.Sysctls
*out = make([]Sysctl, len(*in))
copy(*out, *in)
}
return
}
@ -4040,6 +3987,11 @@ func (in *PodSpec) DeepCopyInto(out *PodSpec) {
(*in).DeepCopyInto(*out)
}
}
if in.ReadinessGates != nil {
in, out := &in.ReadinessGates, &out.ReadinessGates
*out = make([]PodReadinessGate, len(*in))
copy(*out, *in)
}
return
}
@ -4683,6 +4635,15 @@ func (in *ResourceQuotaSpec) DeepCopyInto(out *ResourceQuotaSpec) {
*out = make([]ResourceQuotaScope, len(*in))
copy(*out, *in)
}
if in.ScopeSelector != nil {
in, out := &in.ScopeSelector, &out.ScopeSelector
if *in == nil {
*out = nil
} else {
*out = new(ScopeSelector)
(*in).DeepCopyInto(*out)
}
}
return
}
@ -4822,6 +4783,50 @@ func (in *ScaleIOVolumeSource) DeepCopy() *ScaleIOVolumeSource {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScopeSelector) DeepCopyInto(out *ScopeSelector) {
*out = *in
if in.MatchExpressions != nil {
in, out := &in.MatchExpressions, &out.MatchExpressions
*out = make([]ScopedResourceSelectorRequirement, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScopeSelector.
func (in *ScopeSelector) DeepCopy() *ScopeSelector {
if in == nil {
return nil
}
out := new(ScopeSelector)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScopedResourceSelectorRequirement) DeepCopyInto(out *ScopedResourceSelectorRequirement) {
*out = *in
if in.Values != nil {
in, out := &in.Values, &out.Values
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScopedResourceSelectorRequirement.
func (in *ScopedResourceSelectorRequirement) DeepCopy() *ScopedResourceSelectorRequirement {
if in == nil {
return nil
}
out := new(ScopedResourceSelectorRequirement)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Secret) DeepCopyInto(out *Secret) {
*out = *in
@ -5255,6 +5260,22 @@ func (in *ServiceAccountList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ServiceAccountTokenProjection) DeepCopyInto(out *ServiceAccountTokenProjection) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ServiceAccountTokenProjection.
func (in *ServiceAccountTokenProjection) DeepCopy() *ServiceAccountTokenProjection {
if in == nil {
return nil
}
out := new(ServiceAccountTokenProjection)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ServiceList) DeepCopyInto(out *ServiceList) {
*out = *in
@ -5551,6 +5572,50 @@ func (in *Toleration) DeepCopy() *Toleration {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TopologySelectorLabelRequirement) DeepCopyInto(out *TopologySelectorLabelRequirement) {
*out = *in
if in.Values != nil {
in, out := &in.Values, &out.Values
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TopologySelectorLabelRequirement.
func (in *TopologySelectorLabelRequirement) DeepCopy() *TopologySelectorLabelRequirement {
if in == nil {
return nil
}
out := new(TopologySelectorLabelRequirement)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TopologySelectorTerm) DeepCopyInto(out *TopologySelectorTerm) {
*out = *in
if in.MatchLabelExpressions != nil {
in, out := &in.MatchLabelExpressions, &out.MatchLabelExpressions
*out = make([]TopologySelectorLabelRequirement, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TopologySelectorTerm.
func (in *TopologySelectorTerm) DeepCopy() *TopologySelectorTerm {
if in == nil {
return nil
}
out := new(TopologySelectorTerm)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Volume) DeepCopyInto(out *Volume) {
*out = *in
@ -5664,6 +5729,15 @@ func (in *VolumeProjection) DeepCopyInto(out *VolumeProjection) {
(*in).DeepCopyInto(*out)
}
}
if in.ServiceAccountToken != nil {
in, out := &in.ServiceAccountToken, &out.ServiceAccountToken
if *in == nil {
*out = nil
} else {
*out = new(ServiceAccountTokenProjection)
**out = **in
}
}
return
}
@ -5803,7 +5877,7 @@ func (in *VolumeSource) DeepCopyInto(out *VolumeSource) {
*out = nil
} else {
*out = new(CinderVolumeSource)
**out = **in
(*in).DeepCopyInto(*out)
}
}
if in.CephFS != nil {

File diff suppressed because it is too large

View File

@ -171,7 +171,9 @@ enum MountPropagation {
message Mount {
// Path of the mount within the container.
string container_path = 1;
// Path of the mount on the host.
// Path of the mount on the host. If the host_path doesn't exist, then runtimes
// should report an error. If the host_path is a symbolic link, runtimes should
// follow the symlink and mount the real destination into the container.
string host_path = 2;
// If set, the mount is read-only.
bool readonly = 3;
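
The expanded host_path comment asks runtimes to fail on a missing path and to mount the symlink target rather than the link itself. A hedged sketch of how a runtime could implement that check; the helper name and error messages are assumptions, not part of the CRI:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolveHostPath follows the documented host_path semantics: error out if the
// path does not exist, and resolve symlinks so the real target is mounted.
func resolveHostPath(hostPath string) (string, error) {
	if _, err := os.Stat(hostPath); err != nil {
		return "", fmt.Errorf("host_path %q: %v", hostPath, err)
	}
	resolved, err := filepath.EvalSymlinks(hostPath)
	if err != nil {
		return "", fmt.Errorf("resolving symlinks for %q: %v", hostPath, err)
	}
	return resolved, nil
}

func main() {
	if src, err := resolveHostPath("/var/log"); err == nil {
		fmt.Println("mount source:", src) // the symlink target, if /var/log is a link
	}
}
```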
@ -235,7 +237,8 @@ message LinuxSandboxSecurityContext {
SELinuxOption selinux_options = 2;
// UID to run sandbox processes as, when applicable.
Int64Value run_as_user = 3;
// GID to run sandbox processes as, when applicable.
// GID to run sandbox processes as, when applicable. run_as_group should only
// be specified when run_as_user is specified; otherwise, the runtime MUST error.
Int64Value run_as_group = 8;
// If set, the root filesystem of the sandbox is read-only.
bool readonly_rootfs = 4;
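
The new run_as_group wording makes a group without a user an error the runtime must surface. A minimal validation sketch under that reading; the wrapper type and function are stand-ins, not the generated CRI bindings:

```go
package main

import (
	"errors"
	"fmt"
)

// Int64Value is a stand-in for the CRI wrapper used for optional integers.
type Int64Value struct{ Value int64 }

// validateSandboxUserGroup enforces the documented rule: run_as_group may only
// be set when run_as_user is also set; otherwise the runtime must error.
func validateSandboxUserGroup(runAsUser, runAsGroup *Int64Value) error {
	if runAsGroup != nil && runAsUser == nil {
		return errors.New("run_as_group was specified without run_as_user")
	}
	return nil
}

func main() {
	fmt.Println(validateSandboxUserGroup(nil, &Int64Value{Value: 1000}))                      // error
	fmt.Println(validateSandboxUserGroup(&Int64Value{Value: 1000}, &Int64Value{Value: 1000})) // nil
}
```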
@ -249,7 +252,7 @@ message LinuxSandboxSecurityContext {
// privileged containers are expected to be run.
bool privileged = 6;
// Seccomp profile for the sandbox, candidate values are:
// * docker/default: the default profile for the docker container runtime
// * runtime/default: the default profile for the container runtime
// * unconfined: unconfined profile, i.e., no seccomp sandboxing
// * localhost/<full-path-to-profile>: the profile installed on the node.
// <full-path-to-profile> is the full path of the profile.
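
With docker/default replaced by runtime/default, the candidate profile strings are runtime/default, unconfined (or empty), and localhost/&lt;full-path-to-profile&gt;. A hedged sketch of dispatching on those values; the helper and its return strings are illustrative only:

```go
package main

import (
	"fmt"
	"strings"
)

// seccompAction maps a seccomp profile string onto the behavior described by
// the candidate values in the comment above.
func seccompAction(profile string) (string, error) {
	switch {
	case profile == "" || profile == "unconfined":
		return "run without seccomp confinement", nil
	case profile == "runtime/default":
		return "apply the runtime's built-in default profile", nil
	case strings.HasPrefix(profile, "localhost/"):
		return "load the profile at " + strings.TrimPrefix(profile, "localhost/"), nil
	default:
		return "", fmt.Errorf("unknown seccomp profile %q", profile)
	}
}

func main() {
	action, _ := seccompAction("runtime/default")
	fmt.Println(action)
}
```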
@ -304,7 +307,7 @@ message PodSandboxConfig {
// structured logs, systemd-journald journal files, gRPC trace files, etc.
// E.g.,
// PodSandboxConfig.LogDirectory = `/var/log/pods/<podUID>/`
// ContainerConfig.LogPath = `containerName_Instance#.log`
// ContainerConfig.LogPath = `containerName/Instance#.log`
//
// WARNING: Log management and how kubelet should interface with the
// container logs are under active discussion in
@ -553,8 +556,9 @@ message LinuxContainerSecurityContext {
// UID to run the container process as. Only one of run_as_user and
// run_as_username can be specified at a time.
Int64Value run_as_user = 5;
// GID to run the container process as. Only one of run_as_group and
// run_as_groupname can be specified at a time.
// GID to run the container process as. run_as_group should only be specified
// when run_as_user or run_as_username is specified; otherwise, the runtime
// MUST error.
Int64Value run_as_group = 12;
// User name to run the container process as. If specified, the user MUST
// exist in the container image (i.e. in the /etc/passwd inside the image),
@ -573,7 +577,7 @@ message LinuxContainerSecurityContext {
// http://wiki.apparmor.net/index.php/AppArmor_Core_Policy_Reference
string apparmor_profile = 9;
// Seccomp profile for the container, candidate values are:
// * docker/default: the default profile for the docker container runtime
// * runtime/default: the default profile for the container runtime
// * unconfined: unconfined profile, i.e., no seccomp sandboxing
// * localhost/<full-path-to-profile>: the profile installed on the node.
// <full-path-to-profile> is the full path of the profile.
@ -593,11 +597,21 @@ message LinuxContainerConfig {
LinuxContainerSecurityContext security_context = 2;
}
// WindowsContainerSecurityContext holds Windows security configuration that will be applied to a container.
message WindowsContainerSecurityContext {
// User name to run the container process as. If specified, the user MUST
// exist in the container image and be resolved there by the runtime;
// otherwise, the runtime MUST return an error.
string run_as_username = 1;
}
// WindowsContainerConfig contains platform-specific configuration for
// Windows-based containers.
message WindowsContainerConfig {
// Resources specification for the container.
WindowsContainerResources resources = 1;
// WindowsContainerSecurityContext configuration for the container.
WindowsContainerSecurityContext security_context = 2;
}
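
WindowsContainerConfig now carries a security_context alongside resources, whose run_as_username must resolve to a user inside the image. A small sketch of populating the new field, using local stand-ins rather than the generated CRI Go types:

```go
package main

import "fmt"

// Local stand-ins mirroring the new CRI messages; the real generated types
// live in the kubelet's CRI runtime API package.
type WindowsContainerSecurityContext struct {
	RunAsUsername string
}

type WindowsContainerConfig struct {
	SecurityContext *WindowsContainerSecurityContext
}

func main() {
	cfg := WindowsContainerConfig{
		SecurityContext: &WindowsContainerSecurityContext{RunAsUsername: "ContainerUser"},
	}
	// The runtime is expected to verify this user exists in the image and to
	// return an error otherwise.
	fmt.Println("requested Windows user:", cfg.SecurityContext.RunAsUsername)
}
```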
// WindowsContainerResources specifies Windows specific configuration for
@ -682,7 +696,7 @@ message ContainerConfig {
// the log (STDOUT and STDERR) on the host.
// E.g.,
// PodSandboxConfig.LogDirectory = `/var/log/pods/<podUID>/`
// ContainerConfig.LogPath = `containerName_Instance#.log`
// ContainerConfig.LogPath = `containerName/Instance#.log`
//
// WARNING: Log management and how kubelet should interface with the
// container logs are under active discussion in
@ -1043,7 +1057,8 @@ message RemoveImageRequest {
message RemoveImageResponse {}
message NetworkConfig {
// CIDR to use for pod IP addresses.
// CIDR to use for pod IP addresses. If the CIDR is empty, runtimes
// should omit it.
string pod_cidr = 1;
}
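
The pod_cidr comment now tells runtimes to treat an empty CIDR as nothing to apply rather than as a reset. A minimal sketch of an update handler honoring that; the handler shape is an assumption, not the generated CRI service interface:

```go
package main

import "fmt"

// NetworkConfig mirrors the CRI message; an empty PodCidr means there is
// nothing to apply.
type NetworkConfig struct {
	PodCidr string
}

// applyNetworkConfig skips empty CIDRs, as the updated comment requires,
// instead of overwriting the existing pod network range.
func applyNetworkConfig(cfg *NetworkConfig) {
	if cfg == nil || cfg.PodCidr == "" {
		fmt.Println("no pod CIDR provided; keeping current configuration")
		return
	}
	fmt.Println("configuring pod CIDR:", cfg.PodCidr)
}

func main() {
	applyNetworkConfig(&NetworkConfig{})                        // omitted by the kubelet
	applyNetworkConfig(&NetworkConfig{PodCidr: "10.88.0.0/16"}) // applied
}
```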

Some files were not shown because too many files have changed in this diff