updates to devel/*.md files

Signed-off-by: mikebrow <brownwm@us.ibm.com>
mikebrow 2016-04-19 09:41:38 -05:00
parent 6bdc0bfdb7
commit 1053be8e33
6 changed files with 1192 additions and 553 deletions


@@ -35,30 +35,55 @@ Documentation for other releases can be found at

Adding an API Group
===================

This document includes the steps to add an API group. You may also want to take
a look at PR [#16621](https://github.com/kubernetes/kubernetes/pull/16621) and
PR [#13146](https://github.com/kubernetes/kubernetes/pull/13146), which add API
groups.

Please also read about [API conventions](api-conventions.md) and
[API changes](api_changes.md) before adding an API group.

### Your core group package:

We plan on improving the way the types are factored in the future; see
[#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions
in which this might evolve.
1. Create a folder in pkg/apis to hold your group. Create types.go in
pkg/apis/`<group>`/ and pkg/apis/`<group>`/`<version>`/ to define API objects
in your group;
2. Create pkg/apis/`<group>`/{register.go, `<version>`/register.go} to register
this group's API objects to the encoding/decoding scheme (e.g.,
[pkg/apis/extensions/register.go](../../pkg/apis/extensions/register.go) and
[pkg/apis/extensions/v1beta1/register.go](../../pkg/apis/extensions/v1beta1/register.go));
a rough sketch of a `<version>`/register.go appears after this list;
3. Add a pkg/apis/`<group>`/install/install.go, which is responsible for adding
the group to the `latest` package, so that other packages can access the group's
meta through `latest.Group`. You probably only need to change the name of the
group and version in the [example](../../pkg/apis/extensions/install/install.go).
You need to import this `install` package in {pkg/master,
pkg/client/unversioned}/import_known_versions.go if you want to make your group
accessible to other packages in the kube-apiserver binary and in binaries that
use the client package.
Steps 2 and 3 are mechanical; we plan to autogenerate them using the
cmd/libs/go2idl/ tool.
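To make step 2 above a little more concrete, here is a very rough, hypothetical
sketch of what a pkg/apis/`<group>`/`<version>`/register.go can look like. The
group name `mygroup`, the version `v1alpha1`, and the `MyResource` types are
placeholders, and the exact helper names may differ; the linked extensions
files remain the authoritative reference.

```go
// Hypothetical sketch only; see pkg/apis/extensions/v1beta1/register.go for
// the real pattern. "mygroup", "v1alpha1", MyResource, and MyResourceList are
// placeholders (the types would live in the sibling types.go).
package v1alpha1

import (
	"k8s.io/kubernetes/pkg/api/unversioned"
	"k8s.io/kubernetes/pkg/runtime"
)

// SchemeGroupVersion identifies this group/version to the encoding/decoding
// machinery.
var SchemeGroupVersion = unversioned.GroupVersion{Group: "mygroup", Version: "v1alpha1"}

// addKnownTypes registers this group's API objects with the scheme so they can
// be encoded and decoded.
func addKnownTypes(scheme *runtime.Scheme) {
	scheme.AddKnownTypes(SchemeGroupVersion,
		&MyResource{},
		&MyResourceList{},
	)
}
```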
### Script changes and auto-generated code:

1. Generate conversions and deep-copies:
  1. Add your "group/" or "group/version" into
     hack/after-build/{update-generated-conversions.sh,
     update-generated-deep-copies.sh, verify-generated-conversions.sh,
     verify-generated-deep-copies.sh};
  2. Make sure your pkg/apis/`<group>`/`<version>` directory has a doc.go file
     with the comment `// +genconversion=true`, to catch the attention of our
     gen-conversion script;
  3. Run hack/update-all.sh.
2. Generate files for Ugorji codec:
  1. Touch types.generated.go in pkg/apis/`<group>`{/, `<version>`};
@@ -66,19 +91,29 @@ Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go

### Client (optional):

We are overhauling pkg/client, so this section might be outdated; see
[#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client
package might evolve. Currently, to add your group to the client package, you
need to:

1. Create pkg/client/unversioned/`<group>`.go, define a group client interface
and implement the client. You can take pkg/client/unversioned/extensions.go as a
reference; a rough sketch of the shape of such an interface follows this list.
2. Add the group client interface to the `Interface` in
pkg/client/unversioned/client.go and add a method to fetch the interface. Again,
you can take how we add the Extensions group there as an example.
3. If you need to support the group in kubectl, you'll also need to modify
pkg/kubectl/cmd/util/factory.go.
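As a rough illustration of step 1, a group client interface tends to have the
following shape. Every name here is a placeholder; the real reference is
pkg/client/unversioned/extensions.go.

```go
package unversioned

// Hypothetical sketch of a group client interface; all names are placeholders.

// MyResource is a stand-in for one of the group's API objects.
type MyResource struct {
	Name string
}

// MyResourceInterface is the per-resource client surface for the new group.
type MyResourceInterface interface {
	Get(name string) (*MyResource, error)
	Delete(name string) error
}

// MyGroupInterface is what gets added to the top-level client `Interface` in
// pkg/client/unversioned/client.go (step 2 above).
type MyGroupInterface interface {
	MyResources(namespace string) MyResourceInterface
}
```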
### Make the group/version selectable in unit tests (optional):

1. Add your group in pkg/api/testapi/testapi.go, then you can access the group
in tests through testapi.`<group>`;
2. Add your "group/version" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS`
in hack/test-go.sh.

TODO: Add a troubleshooting section.

File diff suppressed because it is too large.


@@ -65,15 +65,14 @@ found at [API Conventions](api-conventions.md).

# So you want to change the API?

Before attempting a change to the API, you should familiarize yourself with a
number of existing API types and with the [API conventions](api-conventions.md).
If creating a new API type/resource, we also recommend that you first send a PR
containing just a proposal for the new API types, and that you initially target
the extensions API (pkg/apis/extensions).

The Kubernetes API has two major components - the internal structures and the
versioned APIs. The versioned APIs are intended to be stable, while the
internal structures are implemented to best reflect the needs of the Kubernetes
code itself.
@@ -88,8 +87,8 @@ It is important to have a high level understanding of the API system used in
Kubernetes in order to navigate the rest of this document.

As mentioned above, the internal representation of an API object is decoupled
from any one API version. This provides a lot of freedom to evolve the code,
but it requires robust infrastructure to convert between representations. There
are multiple steps in processing an API operation - even something as simple as
a GET involves a great deal of machinery.
@@ -97,7 +96,7 @@ The conversion process is logically a "star" with the internal form at the
center. Every versioned API can be converted to the internal form (and
vice-versa), but versioned APIs do not convert to other versioned APIs directly.
This sounds like a heavy process, but in reality we do not intend to keep more
than a small number of versions alive at once. While all of the Kubernetes code
operates on the internal structures, they are always converted to a versioned
form before being written to storage (disk or etcd) or being sent over a wire.
Clients should consume and operate on the versioned APIs exclusively.
@@ -110,11 +109,11 @@ To demonstrate the general process, here is a (hypothetical) example:
4. The `v7beta1.Pod` is converted to an `api.Pod` structure
5. The `api.Pod` is validated, and any errors are returned to the user
6. The `api.Pod` is converted to a `v6.Pod` (because v6 is the latest stable
version)
7. The `v6.Pod` is marshalled into JSON and written to etcd

Now that we have the `Pod` object stored, a user can GET that object in any
supported api version. For example:

1. A user GETs the `Pod` from `/api/v5/...`
2. The JSON is read from etcd and unmarshalled into a `v6.Pod` structure
@@ -132,7 +131,7 @@ Before talking about how to make API changes, it is worthwhile to clarify what
we mean by API compatibility. An API change is considered backward-compatible
if it:

* adds new functionality that is not required for correct behavior (e.g.,
does not add a new required field)
* does not change existing semantics, including:
  * default values and behavior
  * interpretation of existing API types, fields, and values
@@ -141,37 +140,37 @@ if it:

Put another way:

1. Any API call (e.g. a structure POSTed to a REST endpoint) that worked before
your change must work the same after your change.
2. Any API call that uses your change must not cause problems (e.g. crash or
degrade behavior) when issued against servers that do not include your change.
3. It must be possible to round-trip your change (convert to different API
versions and back) with no loss of information.
4. Existing clients need not be aware of your change in order for them to
continue to function as they did previously, even when your change is utilized.

If your change does not meet these criteria, it is not considered strictly
compatible.

Let's consider some examples. In a hypothetical API (assume we're at version
v6), the `Frobber` struct looks something like this:
```go
// API v6.
type Frobber struct {
	Height int    `json:"height"`
	Param  string `json:"param"`
}
```

You want to add a new `Width` field. It is generally safe to add new fields
without changing the API version, so you can simply change it to:
```go
// Still API v6.
type Frobber struct {
	Height int    `json:"height"`
	Width  int    `json:"width"`
	Param  string `json:"param"`
}
```
@@ -179,75 +178,76 @@ The onus is on you to define a sane default value for `Width` such that rule #1
above is true - API calls and stored objects that used to work must continue to
work.

For your next change you want to allow multiple `Param` values. You can not
simply change `Param string` to `Params []string` (without creating a whole new
API version) - that fails rules #1 and #2. You can instead do something like:
```go
// Still API v6, but kind of clumsy.
type Frobber struct {
	Height int    `json:"height"`
	Width  int    `json:"width"`
	Param  string `json:"param"`  // the first param
	ExtraParams []string `json:"extraParams"` // additional params
}
```
Now you can satisfy the rules: API calls that provide the old style `Param`
will still work, while servers that don't understand `ExtraParams` can ignore
it. This is somewhat unsatisfying as an API, but it is strictly compatible.

Part of the reason for versioning APIs and for using internal structs that are
distinct from any one version is to handle growth like this. The internal
representation can be implemented as:
```go
// Internal, soon to be v7beta1.
type Frobber struct {
	Height int
	Width  int
	Params []string
}
```
The code that converts to/from versioned APIs can decode this into the somewhat
uglier (but compatible!) structures. Eventually, a new API version, let's call
it v7beta1, will be forked and it can use the clean internal structure.

We've seen how to satisfy rules #1 and #2. Rule #3 means that you can not
extend one versioned API without also extending the others. For example, an
API call might POST an object in API v7beta1 format, which uses the cleaner
`Params` field, but the API server might store that object in trusty old v6
form (since v7beta1 is "beta"). When the user reads the object back in the
v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This
means that, even though it is ugly, a compatible change must be made to the v6
API.

However, this is very challenging to do correctly. It often requires multiple
representations of the same information in the same API resource, which need to
be kept in sync in the event that either is changed. For example, let's say you
decide to rename a field within the same API version. In this case, you add
units to `height` and `width`. You implement this by adding duplicate fields:
```go
type Frobber struct {
	Height         *int `json:"height"`
	Width          *int `json:"width"`
	HeightInInches *int `json:"heightInInches"`
	WidthInInches  *int `json:"widthInInches"`
}
```
You convert all of the fields to pointers in order to distinguish between unset
and set to 0, and then set each corresponding field from the other in the
defaulting pass (e.g., `heightInInches` from `height`, and vice versa), which
runs just prior to conversion. That works fine when the user creates a resource
from a hand-written configuration -- clients can write either field and read
either field, but what about creation or update from the output of GET, or
update via PATCH (see
[In-place updates](../user-guide/managing-deployments.md#in-place-updates-of-resources))?
In this case, the two fields will conflict, because only one field would be
updated in the case of an old client that was only aware of the old field (e.g.,
`height`).

Say the client creates:
@@ -280,93 +280,101 @@ then PUTs back:

The update should not fail, because it would have worked before `heightInInches`
was added.
Therefore, when there are duplicate fields, the old field MUST take precedence
over the new, and the new field should be set to match by the server upon write.

A new client would be aware of the old field as well as the new, and so can
ensure that the old field is either unset or is set consistently with the new
field. However, older clients would be unaware of the new field. Please avoid
introducing duplicate fields due to the complexity they incur in the API.

A new representation, even in a new API version, that is more expressive than an
old one breaks backward compatibility, since clients that only understood the
old representation would not be aware of the new representation nor its
semantics. Examples of proposals that have run into this challenge include
[generalized label selectors](http://issues.k8s.io/341) and [pod-level security
context](http://prs.k8s.io/12823).
As another interesting example, enumerated values cause similar challenges.
Adding a new value to an enumerated set is *not* a compatible change. Clients
which assume they know how to handle all possible values of a given field will
not be able to handle the new values. However, removing a value from an
enumerated set *can* be a compatible change, if handled properly (treat the
removed value as deprecated but allowed). This is actually a special case of a
new representation, discussed above.
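To make the failure mode concrete, here is a hypothetical sketch of old client
code: the phase values are made up, but the pattern - a switch that assumes it
has seen every possible value - is exactly what breaks when a new enumerated
value appears.

```go
package main

import "fmt"

// handlePhase is hypothetical old client code. It assumes it knows every
// possible value of an enumerated field, so a server that starts returning a
// new value (say "Paused") pushes it straight into the error path.
func handlePhase(phase string) error {
	switch phase {
	case "Active":
		return nil // keep frobbing
	case "Done":
		return nil // clean up and exit
	default:
		return fmt.Errorf("unknown phase %q", phase)
	}
}

func main() {
	if err := handlePhase("Paused"); err != nil {
		fmt.Println(err) // unknown phase "Paused"
	}
}
```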
For [Unions](api-conventions.md#unions), sets of fields where at most one should
be set, it is acceptable to add a new option to the union if the [appropriate
conventions](api-conventions.md#objects) were followed in the original object.
Removing an option requires following the deprecation process.
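As an illustration (with hypothetical types, not taken from the conventions
doc), a union is typically a struct of optional pointer members where callers
set at most one:

```go
// Hypothetical union: FrobberSource says where a Frobber comes from, and at
// most one member should be set. Adding a new member (say GitSource) is an
// acceptable extension if the union conventions were followed originally.
type FrobberSource struct {
	HostPath *HostPathSource `json:"hostPath,omitempty"`
	Inline   *InlineSource   `json:"inline,omitempty"`
}

// HostPathSource and InlineSource are placeholder member types.
type HostPathSource struct {
	Path string `json:"path"`
}

type InlineSource struct {
	Data string `json:"data"`
}
```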
## Incompatible API changes

There are times when this might be OK, but mostly we want changes that meet this
definition. If you think you need to break compatibility, you should talk to the
Kubernetes team first.

Breaking compatibility of a beta or stable API version, such as v1, is
unacceptable. Compatibility for experimental or alpha APIs is not strictly
required, but breaking compatibility should not be done lightly, as it disrupts
all users of the feature. Experimental APIs may be removed. Alpha and beta API
versions may be deprecated and eventually removed wholesale, as described in the
[versioning document](../design/versioning.md). Document incompatible changes
across API versions under the appropriate version conversion tips section
(e.g. "v1 conversion tips") in the [api.md doc](../api.md).

If your change is going to be backward incompatible or might be a breaking
change for API consumers, please send an announcement to
`kubernetes-dev@googlegroups.com` before the change gets in. If you are unsure,
ask. Also make sure that the change gets documented in the release notes for the
next release by labeling the PR with the "release-note" github label.
If you found that your change accidentally broke clients, it should be reverted.

In short, the expected API evolution is as follows:

* `extensions/v1alpha1` ->
* `newapigroup/v1alpha1` -> ... -> `newapigroup/v1alphaN` ->
* `newapigroup/v1beta1` -> ... -> `newapigroup/v1betaN` ->
* `newapigroup/v1` ->
* `newapigroup/v2alpha1` -> ...

While in extensions we have no obligation to move forward with the API at all
and may delete or break it at any time.

While in alpha we expect to move forward with it, but may break it.

Once in beta we will preserve forward compatibility, but may introduce new
versions and delete old ones.

v1 must be backward-compatible for an extended length of time.
## Changing versioned APIs

For most changes, you will probably find it easiest to change the versioned
APIs first. This forces you to think about how to make your change in a
compatible way. Rather than doing each step in every version, it's usually
easier to do each versioned API one at a time, or to do all of one version
before starting "all the rest".

### Edit types.go

The struct definitions for each API are in `pkg/api/<version>/types.go`. Edit
those files to reflect the change you want to make. Note that all types and
non-inline fields in versioned APIs must be preceded by descriptive comments -
these are used to generate documentation. Comments for types should not contain
the type name; API documentation is generated from these comments and end-users
should not be exposed to golang type names.

Optional fields should have the `,omitempty` json tag; fields are interpreted as
being required otherwise.
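For instance, in a hypothetical fragment reusing the `Frobber` example, an
optional field carries `,omitempty` and a descriptive comment, while a required
field does not:

```go
// FrobberSpec is a hypothetical versioned type illustrating the tagging rules.
type FrobberSpec struct {
	// Height of the frobber in millimeters. Required.
	Height int `json:"height"`
	// Width of the frobber in millimeters. Optional; a pointer lets the
	// defaulting pass distinguish "unset" from an explicit 0.
	Width *int `json:"width,omitempty"`
}
```

Whether to use a pointer at all depends on whether you need to tell "unset"
apart from the zero value, as discussed in the defaults section below.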
### Edit defaults.go

If your change includes new fields for which you will need default values, you
need to add cases to `pkg/api/<version>/defaults.go`. Of course, since you
have added code, you have to add a test: `pkg/api/<version>/defaults_test.go`.
Do use pointers to scalars when you need to distinguish between an unset value
@@ -380,19 +388,20 @@ Don't forget to run the tests!

### Edit conversion.go

Given that you have not yet changed the internal structs, this might feel
premature, and that's because it is. You don't yet have anything to convert to
or from. We will revisit this in the "internal" section. If you're doing this
all in a different order (i.e. you started with the internal structs), then you
should jump to that topic below. In the very rare case that you are making an
incompatible change you might or might not want to do this now, but you will
have to do more later. The files you want are
`pkg/api/<version>/conversion.go` and `pkg/api/<version>/conversion_test.go`.

Note that the conversion machinery doesn't generically handle conversion of
values, such as various kinds of field references and API constants. [The client
library](../../pkg/client/unversioned/request.go) has custom conversion code for
field references. You also need to add a call to
api.Scheme.AddFieldLabelConversionFunc with a mapping function that understands
supported translations.
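A rough sketch of what such a call can look like, assuming the shape of
`AddFieldLabelConversionFunc` at the time (a version, a kind, and a label/value
mapping function) and a made-up `Frobber` kind; treat the details as
illustrative rather than authoritative:

```go
package v7beta1

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
)

func addFieldLabelConversions() {
	// Map versioned field selectors to their internal equivalents; unknown
	// labels are rejected. "Frobber" and the accepted labels are made up.
	api.Scheme.AddFieldLabelConversionFunc("v7beta1", "Frobber",
		func(label, value string) (string, string, error) {
			switch label {
			case "metadata.name", "metadata.namespace":
				return label, value, nil
			default:
				return "", "", fmt.Errorf("field label not supported: %s", label)
			}
		})
}
```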
## Changing the internal structures

@@ -402,7 +411,7 @@ used.

### Edit types.go

Similar to the versioned APIs, the definitions for the internal structs are in
`pkg/api/types.go`. Edit those files to reflect the change you want to make.
Keep in mind that the internal structs must be able to express *all* of the
versioned APIs.

@@ -410,10 +419,10 @@ versioned APIs.
Most changes made to the internal structs need some form of input validation.
Validation is currently done on internal objects in
`pkg/api/validation/validation.go`. This validation is one of the first
opportunities we have to make a great user experience - good error messages and
thorough validation help ensure that users are giving you what you expect and,
when they don't, that they know why and how to fix it. Think hard about the
contents of `string` fields, the bounds of `int` fields and the
requiredness/optionalness of fields.
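As a purely illustrative sketch (it deliberately avoids the real helpers in
`pkg/api/validation`), a validation routine for the hypothetical `Frobber`
might check string contents and integer bounds and report every problem it
finds:

```go
package validation

import "fmt"

// Frobber is the hypothetical internal type being validated.
type Frobber struct {
	Height int
	Width  int
	Params []string
}

// validateFrobber is an illustrative stand-in for the real validation helpers:
// collect every problem, with messages that tell the user how to fix it.
func validateFrobber(f *Frobber) []error {
	var errs []error
	if f.Height < 0 {
		errs = append(errs, fmt.Errorf("height: must be non-negative, got %d", f.Height))
	}
	if f.Width < 0 {
		errs = append(errs, fmt.Errorf("width: must be non-negative, got %d", f.Width))
	}
	for i, p := range f.Params {
		if p == "" {
			errs = append(errs, fmt.Errorf("params[%d]: must not be empty", i))
		}
	}
	return errs
}
```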
@@ -433,26 +442,26 @@ than the generic ones (which are based on reflections and thus are highly
inefficient).

The conversion code resides with each versioned API. There are two files:

- `pkg/api/<version>/conversion.go` containing manually written conversion
functions
- `pkg/api/<version>/conversion_generated.go` containing auto-generated
conversion functions
- `pkg/apis/extensions/<version>/conversion.go` containing manually written
conversion functions
- `pkg/apis/extensions/<version>/conversion_generated.go` containing
auto-generated conversion functions
Since the auto-generated conversion functions call the manually written ones,
the manually written ones should follow a defined naming convention: a function
converting type X in pkg a to type Y in pkg b should be named
`convert_a_X_To_b_Y`.
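For illustration only - the in/out types are the hypothetical versioned and
internal `Frobber` structs from earlier, and the `conversion.Scope` plumbing is
an assumption about the signatures in use - a manually written conversion
following that convention could look like:

```go
package v6

import (
	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/conversion"
)

// convert_v6_Frobber_To_api_Frobber folds the clumsy versioned representation
// (Param + ExtraParams) into the cleaner internal one (Params). Hypothetical
// sketch; check the existing conversion.go files for the real signatures and
// the fields your types actually have.
func convert_v6_Frobber_To_api_Frobber(in *Frobber, out *api.Frobber, s conversion.Scope) error {
	out.Height = in.Height
	out.Width = in.Width
	out.Params = append([]string{in.Param}, in.ExtraParams...)
	return nil
}
```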
Also note that you can (and for efficiency reasons should) use auto-generated
conversion functions when writing your conversion functions.

Once all the necessary manually written conversions are added, you need to
regenerate the auto-generated ones. To regenerate them run:

```sh
hack/update-codegen.sh
```
@@ -469,8 +478,9 @@ regenerate it. If the auto-generated conversion methods are not used by the
manually-written ones, it's fine to just remove the whole file and let the
generator create it from scratch.

Unsurprisingly, adding manually written conversions also requires you to add
tests to `pkg/api/<version>/conversion_test.go`.
## Edit json (un)marshaling code

@@ -478,11 +488,11 @@ We are auto-generating code for marshaling and unmarshaling json representation
of api objects - this is to improve the overall system performance.

The auto-generated code resides with each versioned API:

- `pkg/api/<version>/types.generated.go`
- `pkg/apis/extensions/<version>/types.generated.go`

To regenerate them run:

```sh
hack/update-codecgen.sh
```
@@ -492,56 +502,56 @@ hack/update-codecgen.sh

This section is under construction, as we make the tooling completely generic.

At the moment, you'll have to make a new directory under `pkg/apis/`; copy the
directory structure from `pkg/apis/extensions`. Add the new group/version to all
of the `hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh` files
in the appropriate places--it should just require adding your new group/version
to a bash array. You will also need to make sure your new types are imported by
the generation commands (`cmd/gendeepcopy/` & `cmd/genconversion`). These
instructions may not be complete and will be updated as we gain experience.

Adding API groups outside of the `pkg/apis/` directory is not currently
supported, but is clearly desirable. The deep copy & conversion generators need
to work by parsing go files instead of by reflection; then they will be easy to
point at arbitrary directories: see issue [#13775](http://issue.k8s.io/13775).
## Update the fuzzer

Part of our testing regimen for APIs is to "fuzz" (fill with random values) API
objects and then convert them to and from the different API versions. This is
a great way of exposing places where you lost information or made bad
assumptions. If you have added any fields which need very careful formatting
(the test does not run validation) or if you have made assumptions such as
"this slice will always have at least 1 element", you may get an error or even
a panic from the `serialization_test`. If so, look at the diff it produces (or
the backtrace in case of a panic) and figure out what you forgot. Encode that
into the fuzzer's custom fuzz functions. Hint: if you added defaults for a
field, that field will need to have a custom fuzz function that ensures that the
field is fuzzed to a non-empty value.

The fuzzer can be found in `pkg/api/testing/fuzzer.go`.
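A sketch of what such a custom fuzz function can look like, assuming the
gofuzz-style `fuzz.Continue` argument used by the fuzzer and the hypothetical
`Frobber` whose `Width` has a default; the details are illustrative:

```go
package testing

import fuzz "github.com/google/gofuzz"

// Frobber stands in for the real object; Width is the defaulted field.
type Frobber struct {
	Height int
	Width  int
	Params []string
}

// fuzzFrobber is an illustrative custom fuzz function: fuzz everything the
// generic way, then force the defaulted field to a non-empty value so the
// round-trip test does not mistake "defaulted" for "lost".
func fuzzFrobber(obj *Frobber, c fuzz.Continue) {
	c.FuzzNoCustom(obj) // fuzz all fields without recursing into this function
	if obj.Width == 0 {
		obj.Width = 1 // Width has a default, so never leave it empty
	}
}
```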
## Update the semantic comparisons

VERY VERY rarely is this needed, but when it hits, it hurts. In some rare cases
we end up with objects (e.g. resource quantities) that have morally equivalent
values with different bitwise representations (e.g. value 10 with a base-2
formatter is the same as value 0 with a base-10 formatter). The only way Go
knows how to do deep-equality is through field-by-field bitwise comparisons.
This is a problem for us.

The first thing you should do is try not to do that. If you really can't avoid
this, I'd like to introduce you to our semantic DeepEqual routine. It supports
custom overrides for specific types - you can find that in `pkg/api/helpers.go`.

There's one other time when you might have to touch this: unexported fields.
You see, while Go's `reflect` package is allowed to touch unexported fields, us
mere mortals are not - this includes semantic DeepEqual. Fortunately, most of
our API objects are "dumb structs" all the way down - all fields are exported
(start with a capital letter) and there are no unexported fields. But sometimes
you want to include an object in our API that does have unexported fields
somewhere in it (for example, `time.Time` has unexported fields). If this hits
you, you may have to touch the semantic DeepEqual customization functions.
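For a sense of what a type-specific override looks like, here is a rough sketch
modeled on an Equalities-style registry; the helper name
`conversion.EqualitiesOrDie` and the stand-in `Quantity` type are assumptions,
so treat `pkg/api/helpers.go` as the source of truth:

```go
package api

import "k8s.io/kubernetes/pkg/conversion"

// Quantity is a stand-in for a type (like a resource quantity) whose values
// can be equal even when their bit patterns differ.
type Quantity struct {
	MilliUnits int64  // canonical value
	Format     string // formatting hint only; irrelevant to equality
}

// Rough sketch of a semantic-equality registry with one custom override:
// quantities compare by value, not by internal representation. Helper names
// are assumptions; see pkg/api/helpers.go for the real registry.
var semantic = conversion.EqualitiesOrDie(
	func(a, b Quantity) bool {
		return a.MilliUnits == b.MilliUnits // ignore formatting differences
	},
)
```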
## Implement your change

@@ -550,17 +560,17 @@ doing!

## Write end-to-end tests

Check out the [E2E docs](e2e-tests.md) for detailed information about how to
write end-to-end tests for your feature.

## Examples and docs

At last, your change is done, all unit tests pass, e2e passes, you're done,
right? Actually, no. You just changed the API. If you are touching an existing
facet of the API, you have to try *really* hard to make sure that *all* the
examples and docs are updated. There's no easy way to do this, due in part to
JSON and YAML silently dropping unknown fields. You're clever - you'll figure it
out. Put `grep` or `ack` to good use.

If you added functionality, you should consider documenting it and/or writing
an example to illustrate your change.
@@ -575,81 +585,95 @@ The API spec changes should be in a commit separate from your other changes.

## Alpha, Beta, and Stable Versions

New feature development proceeds through a series of stages of increasing
maturity:

- Development level
  - Object Versioning: no convention
  - Availability: not committed to main kubernetes repo, and thus not available
in official releases
  - Audience: other developers closely collaborating on a feature or
proof-of-concept
  - Upgradeability, Reliability, Completeness, and Support: no requirements or
guarantees
- Alpha level
  - Object Versioning: API version name contains `alpha` (e.g. `v1alpha1`)
  - Availability: committed to main kubernetes repo; appears in an official
release; feature is disabled by default, but may be enabled by flag
  - Audience: developers and expert users interested in giving early feedback on
features
  - Completeness: some API operations, CLI commands, or UI support may not be
implemented; the API need not have had an *API review* (an intensive and
targeted review of the API, on top of a normal code review)
  - Upgradeability: the object schema and semantics may change in a later
software release, without any provision for preserving objects in an existing
cluster; removing the upgradability concern allows developers to make rapid
progress; in particular, API versions can increment faster than the minor
release cadence and the developer need not maintain multiple versions;
developers should still increment the API version when object schema or
semantics change in an [incompatible way](#on-compatibility)
  - Cluster Reliability: because the feature is relatively new, and may lack
complete end-to-end tests, enabling the feature via a flag might expose bugs
that destabilize the cluster (e.g. a bug in a control loop might rapidly create
excessive numbers of objects, exhausting API storage).
  - Support: there is *no commitment* from the project to complete the feature;
the feature may be dropped entirely in a later software release
  - Recommended Use Cases: only in short-lived testing clusters, due to
complexity of upgradeability and lack of long-term support
- Beta level:
  - Object Versioning: API version name contains `beta` (e.g. `v2beta3`)
  - Availability: in official Kubernetes releases, and enabled by default
  - Audience: users interested in providing feedback on features
  - Completeness: all API operations, CLI commands, and UI support should be
implemented; end-to-end tests complete; the API has had a thorough API review
and is thought to be complete, though use during beta may frequently turn up API
issues not thought of during review
  - Upgradeability: the object schema and semantics may change in a later
software release; when this happens, an upgrade path will be documented; in some
cases, objects will be automatically converted to the new version; in other
cases, a manual upgrade may be necessary; a manual upgrade may require downtime
for anything relying on the new feature, and may require manual conversion of
objects to the new version; when manual conversion is necessary, the project
will provide documentation on the process (for an example, see [v1 conversion
tips](../api.md#v1-conversion-tips))
  - Cluster Reliability: since the feature has e2e tests, enabling the feature
via a flag should not create new bugs in unrelated features; because the feature
is new, it may have minor bugs
  - Support: the project commits to complete the feature, in some form, in a
subsequent Stable version; typically this will happen within 3 months, but
sometimes longer; releases should simultaneously support two consecutive
versions (e.g. `v1beta1` and `v1beta2`; or `v1beta2` and `v1`) for at least one
minor release cycle (typically 3 months) so that users have enough time to
upgrade and migrate objects
  - Recommended Use Cases: in short-lived testing clusters; in production
clusters as part of a short-lived evaluation of the feature in order to provide
feedback
- Stable level:
  - Object Versioning: API version `vX` where `X` is an integer (e.g. `v1`)
  - Availability: in official Kubernetes releases, and enabled by default
  - Audience: all users
  - Completeness: same as beta
  - Upgradeability: only [strictly compatible](#on-compatibility) changes
allowed in subsequent software releases
  - Cluster Reliability: high
  - Support: API version will continue to be present for many subsequent
software releases
  - Recommended Use Cases: any
### Adding Unstable Features to Stable Versions

When adding a feature to an object which is already Stable, the new fields and
new behaviors need to meet the Stable level requirements. If these cannot be
met, then the new field cannot be added to the object.

For example, consider the following object:

```go
// API v6.
type Frobber struct {
	Height int    `json:"height"`
	Param  string `json:"param"`
}
```
@@ -658,26 +682,29 @@ A developer is considering adding a new `Width` parameter, like this:

```go
// API v6.
type Frobber struct {
	Height int    `json:"height"`
	Width  int    `json:"width"`
	Param  string `json:"param"`
}
```
However, the new feature is not stable enough to be used in a stable version (`v6`). However, the new feature is not stable enough to be used in a stable version
Some reasons for this might include: (`v6`). Some reasons for this might include:
- the final representation is undecided (e.g. should it be called `Width` or `Breadth`?) - the final representation is undecided (e.g. should it be called `Width` or
- the implementation is not stable enough for general use (e.g. the `Area()` routine sometimes overflows.) `Breadth`?)
- the implementation is not stable enough for general use (e.g. the `Area()`
routine sometimes overflows.)
The developer cannot add the new field until stability is met. However,
sometimes stability cannot be met until some users try the new feature, and some
users are only able or willing to accept a released version of Kubernetes. In
that case, the developer has a few options, both of which require staging work
over several releases.
A preferred option is to first make a release where the new value (`Width` in
this example) is specified via an annotation, like this:
```yaml
kind: frobber
@ -690,9 +717,9 @@ height: 4
param: "green and blue" param: "green and blue"
``` ```
This format allows users to specify the new field, but makes it clear that they
are using an Alpha feature when they do, since the word `alpha` is in the
annotation key.
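As a rough illustration only (the annotation key and helper below are
hypothetical, not actual Kubernetes code), a component consuming the object
might surface the annotation-specified value like this:
```go
import "strconv"

// widthFromAnnotations is a hypothetical helper, shown only for illustration:
// it reads the alpha annotation and falls back to a default when the
// annotation is absent or malformed.
func widthFromAnnotations(annotations map[string]string, defaultWidth int) int {
  // Assumed key for this sketch; the real key is whatever the release documents.
  if v, ok := annotations["frobber.alpha.kubernetes.io/width"]; ok {
    if w, err := strconv.Atoi(v); err == nil {
      return w
    }
  }
  return defaultWidth
}
```
Once the field graduates to the stable schema, the helper and annotation can be
dropped in favor of reading `Width` directly.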
Another option is to introduce a new type with a new `alpha` or `beta` version
designator, like this:
@ -700,18 +727,19 @@ designator, like this:
```go
// API v6alpha2
type Frobber struct {
  Height int    `json:"height"`
  Width  int    `json:"width"`
  Param  string `json:"param"`
}
```
The latter requires that all objects in the same API group as `Frobber` be
replicated in the new version, `v6alpha2`. This also requires users to use a new
client which uses the other version. Therefore, this is not a preferred option.
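To make that replication cost concrete, here is a minimal sketch under assumed
type and function names (they are illustrative, not the generated Kubernetes
conversion code) of what carrying the same object in two versions implies:
```go
// FrobberV6 and FrobberV6Alpha2 stand in for the same type defined in the v6
// and v6alpha2 packages; conversions like these must be written (or generated)
// for every object replicated into the new version.
type FrobberV6 struct {
  Height int    `json:"height"`
  Param  string `json:"param"`
}

type FrobberV6Alpha2 struct {
  Height int    `json:"height"`
  Width  int    `json:"width"`
  Param  string `json:"param"`
}

// toAlpha copies the stable fields; Width starts at its zero value until the
// client sets it.
func toAlpha(in FrobberV6) FrobberV6Alpha2 {
  return FrobberV6Alpha2{Height: in.Height, Param: in.Param}
}

// toStable drops Width, because the stable v6 schema has no field to hold it.
func toStable(in FrobberV6Alpha2) FrobberV6 {
  return FrobberV6{Height: in.Height, Param: in.Param}
}
```
Every object in the group needs this kind of round-trip handling, which is part
of why the annotation approach above is usually preferred.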
A related issue is how a cluster manager can roll back from a new version
with a new feature that is already being used by users. See
https://github.com/kubernetes/kubernetes/issues/4855.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]()
View File
@ -36,8 +36,9 @@ Documentation for other releases can be found at
## Overview
Kubernetes uses a variety of automated tools in an attempt to relieve developers
of repetitive, low-brainpower work. This document attempts to describe these
processes.
## Submit Queue
@ -47,8 +48,11 @@ In an effort to
* maintain e2e stability
* load test github's label feature
We have added an automated
[submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go)
to the
[github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub)
for kubernetes.
The submit-queue does the following:
@ -76,59 +80,76 @@ A PR is considered "ready for merging" if it matches the following:
* it has passed the Jenkins e2e test
* it has the `e2e-not-required` label
Note that the combined whitelist/committer list is available at
[submit-queue.k8s.io](http://submit-queue.k8s.io)
### Merge process
Merges _only_ occur when the `critical builds` (Jenkins e2e for gce, gke,
scalability, upgrade) are passing. We're open to including more builds here; let
us know...
Merges are serialized, so only a single PR is merged at a time, to guard
against races.
If the PR has the `e2e-not-required` label, it is simply merged. If the PR does
not have this label, e2e tests are re-run; if these new tests pass, the PR is
merged.
If e2e flakes or is currently buggy, the PR will not be merged, but it will be
re-run on the following pass.
## Github Munger
We also run a
[github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub).
This runs repeatedly over github pulls and issues and runs modular "mungers"
similar to "mungedocs."
Currently this runs:
* blunderbuss - Tries to automatically find an owner for a PR without an
owner, using the mapping file here:
https://github.com/kubernetes/contrib/blob/master/mungegithub/blunderbuss.yml
* needs-rebase - Adds `needs-rebase` to PRs that aren't currently mergeable,
and removes it from those that are.
* size - Adds `size/xs` - `size/xxl` labels to PRs
* ok-to-test - Adds the `ok-to-test` message to PRs that have an `lgtm` but
that the e2e-builder would otherwise not test due to the whitelist
* ping-ci - Attempts to ping the ci systems (Travis) if they are missing from
a PR.
* lgtm-after-commit - Removes the `lgtm` label from PRs where there are
commits that are newer than the `lgtm` label
In the works:
* issue-detector - machine learning for determining whether an issue that has
been filed is a `support` issue, a `bug`, or a `feature`
Please feel free to unleash your creativity on this tool; send us new mungers
that you think will help support the Kubernetes development process.
## PR builder
We also run a robotic PR builder that attempts to run e2e tests for each PR.
Before a PR from an unknown user is run, the PR builder bot (`k8s-bot`) asks for
a message from a contributor confirming that the PR is "ok to test"; the
contributor replies with that message. Contributors can also add users to the
whitelist by replying with the message "add to whitelist" ("please" is optional,
but remember to treat your robots with kindness...)
If a PR is approved for testing, and tests either haven't run or need to be
re-run, you can ask the PR builder to re-run the tests. To do this, reply to the
PR with a message that begins with `@k8s-bot test this`; this should trigger a
re-build/re-test.
## FAQ:
#### How can I ask my PR to be tested again for Jenkins failures?
Right now you have to ask a contributor (this may be you!) to re-run the test
with "@k8s-bot test this".
#### How can I kick Travis to re-test on a failure?
View File
@ -40,46 +40,54 @@ depending on the point in the release cycle.
## Propose a Cherry Pick
1. Cherrypicks are [managed with labels and milestones](pull-requests.md#release-notes)
1. All label/milestone accounting happens on PRs on master. There's nothing to
do on PRs targeted to the release branches.
1. When you want a PR to be merged to the release branch, make the following
label changes to the **master** branch PR:
  * Remove release-note-label-needed
  * Add an appropriate release-note-(!label-needed) label
  * Add an appropriate milestone
  * Add the `cherrypick-candidate` label
  * The PR title is the **release note** you want published at release time.
    Note that PR titles are mutable and should reflect a release-note-friendly
    message for any `release-note-*` labeled PRs.
### How do cherrypick-candidates make it to the release branch?
1. **BATCHING:** After a branch is first created and before the X.Y.0 release
  * Branch owners review the list of `cherrypick-candidate` labeled PRs.
  * PRs batched up and merged to the release branch get a `cherrypick-approved`
    label and lose the `cherrypick-candidate` label.
  * PRs that won't be merged to the release branch lose the
    `cherrypick-candidate` label.
1. **INDIVIDUAL CHERRYPICKS:** After the first X.Y.0 on a branch
  * Run the cherry pick script. This example applies a master branch PR #98765
    to the remote branch `upstream/release-3.14`:
    `hack/cherry_pick_pull.sh upstream/release-3.14 98765`
  * Your cherrypick PR (targeted to the branch) will immediately get the
    `do-not-merge` label. The branch owner will triage PRs targeted to
    the branch and label the ones to be merged by applying the `lgtm`
    label.
There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open
tracking the tool to automate the batching procedure.
#### Cherrypicking a doc change
If you are cherrypicking a change which adds a doc, then you also need to run
`build/versionize-docs.sh` in the release branch to versionize that doc.
Ideally, just running `hack/cherry_pick_pull.sh` should be enough, but we are
not there yet: [#18861](https://github.com/kubernetes/kubernetes/issues/18861)
To cherrypick PR 123456 to release-3.14, run the following commands after
running `hack/cherry_pick_pull.sh` and before merging the PR:
```
$ git checkout -b automated-cherry-pick-of-#123456-upstream-release-3.14 \
  origin/automated-cherry-pick-of-#123456-upstream-release-3.14
$ ./build/versionize-docs.sh release-3.14
$ git commit -a -m "Running versionize docs"
$ git push origin automated-cherry-pick-of-#123456-upstream-release-3.14
@ -97,9 +105,9 @@ requested - this should not be the norm, but it may happen.
See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for the
status of PRs labeled as `cherrypick-candidate`.
[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) are
considered implicit for all code within cherry-pick pull requests, ***unless
there is a large conflict***.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -40,7 +40,8 @@ Documentation for other releases can be found at
### User Contributed
*Note: Libraries provided by outside parties are supported by their authors, not
the core Kubernetes team*
* [Clojure](https://github.com/yanatan16/clj-kubernetes-api)
* [Java (OSGi)](https://bitbucket.org/amdatulabs/amdatu-kubernetes)