Automatic merge from submit-queue
Add readyReplicas to replica sets
@bgrant0607 for the api changes
@bprashanth for the controllers changes
@deads2k fyi
Automatic merge from submit-queue
Kubelet: add --container-runtime-endpoint and --image-service-endpoint
The flag `--container-runtime-endpoint` (which overrides `--container-runtime`) is introduced to identify the unix socket file of the remote runtime service, and the flag `--image-service-endpoint` is introduced to identify the unix socket file of the image service.
This PR is part of #28789 Milestone 0.
CC @yujuhong @Random-Liu
Automatic merge from submit-queue
Basic scaler/reaper for petset
Currently scaling or upgrading a petset is more complicated than it should be. It would be nice if this made code freeze on Friday. I'm planning on a follow-up change with generation number and e2es post-freeze.
Automatic merge from submit-queue
allow group impersonation
Adds an "Impersonate-Group" header that can be used to specify exactly which groups to use on an impersonation request.
This also restructures the code to make it easier to add the scopes header next. This closely parallels the "Impersonate-User" header, so I figured I'd start easy.
@kubernetes/sig-auth
@ericchiang are you comfortable reviewing?
A new flag, --container-runtime-endpoint (overrides --container-runtime),
is introduced to the kubelet to identify the unix socket file of
the remote runtime service. A new flag, --image-service-endpoint, is
introduced to the kubelet to identify the unix socket file of the
image service.
Automatic merge from submit-queue
Revert "Revert "syncNetworkUtil in kubelet and fix loadbalancerSourceRange on GCE
Reverts kubernetes/kubernetes#30729
Automatic merge from submit-queue
Expose flags for new NodeEviction logic in NodeController
Fix #28832
Last PR from the NodeController NodeEviction logic series.
cc @davidopp @lavalamp @mml
Automatic merge from submit-queue
[kubelet] Introduce --protect-kernel-defaults flag to make the tunable behaviour configurable
Let's make the default behaviour of kernel tuning configurable. The default behaviour is kept as "modify", as it has been so far.
Automatic merge from submit-queue
[Kubelet] Rename `--config` to `--pod-manifest-path`. `--config` is deprecated.
This field holds the location of a manifest file or directory of manifest
files for pods the Kubelet is supposed to run. The name of the field
should reflect that purpose. I didn't change the flag name because that
API should remain stable.
Automatic merge from submit-queue
Simplify canonical element term in deepcopy
Replace the old functional canonical element term in deepcopy registration with direct struct instantiation.
The old way was an artifact of non-uniform pointer/non-pointer types in the signature of deepcopy function. Since we changed that to always be a pointer, we can simplify the code.
Also provide a new --pod-manifest-path flag and deprecate the old
--config one.
This field holds the location of a manifest file or directory of manifest
files for pods the Kubelet is supposed to run. The name of the field
should reflect that purpose.
Automatic merge from submit-queue
Rewrite service controller to apply best controller pattern
This PR is a long term solution for #21625:
We apply the same pattern as the replication controller to the service controller to avoid potential processing-order problems in the service controller. The change includes:
1. introduce an informer controller to watch service changes from kube-apiserver, so that all changes to the same service are kept in serviceStore as a single element.
2. put the name of the service to be processed onto a work queue
3. when processing a service, always get its info from serviceStore to ensure the info is up-to-date
4. keep the retry mechanism: sleep for a certain interval and add the service back to the queue.
5. remove the logic that read the last service info from kube-apiserver before processing the LB info, as we trust the info from serviceStore.
The unit tests pass, and a manual test passed after I hardcoded the cloud provider as FakeCloud; however, I am not able to boot a k8s cluster with any available cloud provider, so the e2e test is not done.
Submitting this PR first for review and to trigger an e2e test.
Automatic merge from submit-queue
rbac validation: rules can't combine non-resource URLs and regular resources
This PR updates the validation used for RBAC to prevent rules from mixing non-resource URLs and regular resources.
For example the following is no longer valid
```yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admins
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
```
It must be rewritten as follows.
```yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admins
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
```
It also:
* Mandates non-zero length arrays for required resources.
* Mandates non-resource URLs only be used for ClusterRoles (not namespaced Roles).
* Updates the swagger validation so `verbs` is the only required field in a rule. Further validation is done by the server.
Also, do we need to bump the API version?
Discussed by @erictune and @liggitt in #28304
Updates kubernetes/features#2
cc @kubernetes/sig-auth
Edit:
* Need to update the RBAC docs if this change goes in.
Automatic merge from submit-queue
Add API for StorageClasses
This is the API objects only required for dynamic provisioning picked apart from the controller logic.
Entire feature is here: https://github.com/kubernetes/kubernetes/pull/29006
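For illustration only, a StorageClass object under the new API might look roughly like this; the group/version, provisioner, and parameter names are assumptions, not taken from this PR:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1    # assumed group/version; may differ at merge time
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd     # example provisioner
parameters:
  type: pd-ssd
```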
Automatic merge from submit-queue
add tokenreviews endpoint to implement webhook
Wires up an API resource under `apis/authentication.k8s.io/v1beta1` to expose the webhook token authentication API as an API resource. This allows one API server to use another for authentication, and the "authoritative" API server's existing policy engines control access to the endpoint.
@cjcullen you wrote the initial type
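As a rough sketch (field names here are assumptions for illustration, not confirmed by this PR), the authoritative API server would POST a TokenReview like the one below and read back the authentication decision from its status:
```yaml
kind: TokenReview
apiVersion: authentication.k8s.io/v1beta1
spec:
  token: "<opaque bearer token to verify>"   # placeholder token
```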
Automatic merge from submit-queue
Fix RBAC authorizer of ServiceAccount
The RBAC authorizer assigns a role to the wrong service account.
How to reproduce:
1. Create a Role and RoleBinding to allow the default user in the kube-system namespace to read secrets in the kube-system namespace.
```
# kubectl create -f role.yaml
# kubectl create -f binding.yaml
```
```yaml
# role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: secret-reader
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
  nonResourceURLs: []
```
```yaml
# binding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: read-secrets
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
roleRef:
  kind: Role
  namespace: kube-system
  name: secret-reader
  apiVersion: rbac.authorization.k8s.io/v1alpha1
```
2. Set a credential for the default user
```
$ kubectl config set-credentials default_user --token=<token_of_system:serviceaccount:kube-system:default>
$ kubectl config set-context default_user-context --cluster=test-cluster --user=default_user
$ kubectl config use-context default_user-context
```
3. Try to get secrets as the default user in the kube-system namespace
```
$ kubectl --namespace=kube-system get secrets
the server does not allow access to the requested resource (get secrets)
```
As shown above, the default user could not access the secrets.
But if I have a kube-system user in the default namespace, it is allowed to access the secrets.
4. Create a service account and try to get secrets as the kube-system user in the default namespace
```
# kubectl --namespace=default create serviceaccount kube-system
serviceaccount "kube-system" created
$ kubectl config set-credentials kube-system_user --token=<token_of_system:serviceaccount:default:kube-system>
$ kubectl config set-context kube-system_user-context --cluster=test-cluster --user=kube-system_user
$ kubectl config use-context kube-system_user-context
$ kubectl --namespace=kube-system get secrets
NAME                  TYPE                                  DATA      AGE
default-token-8pyb3   kubernetes.io/service-account-token   3         4d
```
Automatic merge from submit-queue
Undelete generated files
Enough people have been broken by not committing generated code that we
should undo that until we have a proper client that is `go get` compatible.
This is temporary.
Fixes #28920
Enough people have been broken by not committing generated code that we
should undo that until we have a proper client that is `go get` compatible.
This is temporary.
Automatic merge from submit-queue
Generate a better Stringer method for proto types
This replaces the bad string output generated by golang/proto with gogo/protobuf stringer generation. Makes the output similar to %#v and more debuggable. We have to have a String() method to implement proto.Message, so this is strictly better.
@wojtek-t, @thockin for after your PR merges
Fixes #28756
Move SystemReserved and KubeReserved into KubeletConfiguration struct
Convert int64 to int32 for some external type fields so they match internal ones
Rename tLS* to tls* for JSON field names
Fix dependency on removed options.AutoDetectCloudProvider
Change floats in KubeletConfiguration API to ints
Update external KubeletConfiguration type
Add defaults for new KubeletConfiguration fields
Modify some defaults to match upstream settings
Add/rename some conversion functions
Updated codegen
Fixed typos
Mike Danese caught that s.NodeLabels wasn't allocated, fix on line 118
of cmd/kubelet/app/options/options.go.
Provide list of valid sources in comment for HostNetworkSources field
Automatic merge from submit-queue
controller-manager support number of garbage collector workers to be configurable
The number of garbage collector workers in controller-manager is currently fixed at 5; it should be configurable.
This mostly takes the previously checked in files and removes them, and moves
the generation to be on-demand instead of manual. Manually verified no change
in generated output.
Automatic merge from submit-queue
Deepcopy: avoid struct copies and reflection Call
- make signature of generated deepcopy methods symmetric with `in *type, out *type`, avoiding copies of big structs on the stack
- switch to `in interface{}, out interface{}`, which allows us to call them without `reflect.Call`
The first change reduces runtime of BenchmarkPodCopy-4 from `> 3500ns` to around `2300ns`.
The second change reduces runtime to around `1900ns`.
This drives conversion generation from file tags like:
// +conversion-gen=k8s.io/my/internal/version
.. rather than hardcoded lists of packages.
The only net change in generated code can be explained as correct. Previously
it didn't know that conversion was available.
This is the last piece of Clayton's #26179 to be implemented with file tags.
All diffs are accounted for. Followup will use this to streamline some
packages.
Also add some V(5) debugging - it was helpful in diagnosing various issues, it
may be helpful again.
This drives most of the logic of deep-copy generation from tags like:
// +deepcopy-gen=package
..rather than hardcoded lists of packages. This will make it possible to
subsequently generate code ONLY for packages that need it *right now*, rather
than all of them always.
Also remove pkgs that really do not need deep-copies (no symbols used
anywhere).
This is in prep to simplify tag logic. Don't rely on processing commas as new
tag delimiters. Put new tags on new lines. This had zero effect on generated
code (as intended).
In bringing back Clayton's PR piece-by-piece this was almost as easy to
implement as his version, and is much more like what I think we should be
doing.
Specifically, any type which defines a .DeepCopy() method will have that method
called preferentially. Otherwise we generate our own functions for
deep-copying. This affected exactly one type - resource.Quantity. In applying
this heuristic, several places in the generated code were simplified.
To achieve this I had to convert types.Type.Methods from a slice to a map,
which seems correct anyway (to do by-name lookups).
His PR came during the middle of this development cycle, and it was easier to
burn it down and recreate it than try to patch it into an existing series and
re-test every assumption. This behavior will be re-introduced in subsequent
commits.
Automatic merge from submit-queue
TLS bootstrap API group (alpha)
This PR only covers the new types and related client/storage code- the vast majority of the line count is codegen. The implementation differs slightly from the current proposal document based on discussions in design thread (#20439). The controller logic and kubelet support mentioned in the proposal are forthcoming in separate requests.
I submit that #18762 ("Creating a new API group is really hard") is, if anything, understating it. I've tried to structure the commits to illustrate the process.
@mikedanese @erictune @smarterclayton @deads2k
```release-note-experimental
An alpha implementation of the TLS bootstrap API described in docs/proposals/kubelet-tls-bootstrap.md.
```
Automatic merge from submit-queue
Proportionally scale paused and rolling deployments
Enable paused and rolling deployments to be proportionally scaled.
Also have cleanup policy work for paused deployments.
Fixes #20853 Fixes #20966 Fixes #20754
@bgrant0607 @janetkuo @ironcladlou @nikhiljindal
Automatic merge from submit-queue
let dynamic client handle non-registered ListOptions
And register v1.ListOptions in the policy group.
Fix #27622
@lavalamp @smarterclayton @krousey
Automatic merge from submit-queue
add unit and integration tests for rbac authorizer
This PR adds lots of tests for the RBAC authorizer.
The plan over the next couple days is to add a lot more test cases.
Updates #23396
cc @erictune
Automatic merge from submit-queue
ObjectMeta, ListMeta, and TypeMeta should implement their interfaces
Make unversioned.ListMeta implement List. Update all the *List types so they implement GetListMeta.
This helps avoid using reflection to get list information.
Remove all unnecessary boilerplate, move the interfaces to the right
places, and add a test that verifies that objects implement one, the
other, but never both.
@ncdc @lavalamp this supersedes #26964 with the boilerplate removed. Added tests
Make unversioned.ListMeta implement List. Update all the *List types so they implement GetListMeta.
This helps avoid using reflection to get list information.
Remove all unnecessary boilerplate, move the interfaces to the right
places, and add a test that verifies that objects implement one, the
other, but never both.
This PR contains Kubelet changes to enable attach/detach controller control.
* It introduces a new "enable-controller-attach-detach" kubelet flag to
enable control by the controller. Enabled by default.
* It removes all references to the "SafeToDetach" annotation from the controller.
* It adds the new VolumesInUse field to the Node Status API object.
* It modifies the controller to use VolumesInUse instead of SafeToDetach
annotation to gate detachment.
* There is a bug in node-problem-detector that causes VolumesInUse to
get reset every 30 seconds. Issue https://github.com/kubernetes/node-problem-detector/issues/9
opened to fix that.
Automatic merge from submit-queue
Round should avoid clearing s, save a string
Instead of saving bytes, save a string, which makes String() faster
and does not unduly penalize marshal. During parse, save the string
if it is in canonical form.
@wojtek-t @lavalamp this makes quantity.String() faster for a few cases
where it matters. We were also not clearing s properly before on Round()
This allows kube-controller-manager to allocate CIDRs to nodes (with
allocate-node-cidrs=true), but will not try to configure them on the
cloud provider, even if the cloud provider supports Routes.
The default is configure-cloud-routes=true, and it will only try to
configure routes if allocate-node-cidrs is also configured, so the
default behaviour is unchanged.
This is useful because on AWS the cloud provider configures routes by
setting up VPC routing table entries, but there is a limit of 50
entries. So setting configure-cloud-routes on AWS would allow us to
continue to allocate node CIDRs as today, but replace the VPC
route-table mechanism with something not limited to 50 nodes.
We can't just turn off the cloud-provider entirely because it also
controls other things - node discovery, load balancer creation etc.
Fix #25602
Automatic merge from submit-queue
Add release_1_3 clientset in update-codegen
Add release_1_3 clientset in update-codegen to keep it up-to-date; update the generated clientset.
Split controller cache into actual and desired state of world.
Controller will only operate on volumes scheduled to nodes that
have the "volumes.kubernetes.io/controller-managed-attach" annotation.
Automatic merge from submit-queue
vSphere Volume Plugin Implementation
This PR implements vSphere Volume plugin support in Kubernetes (ref. issue #23932).
Automatic merge from submit-queue
Use protobufs by default to communicate with apiserver (still store JSONs in etcd)
@lavalamp @kubernetes/sig-api-machinery
Automatic merge from submit-queue
Cache Webhook Authentication responses
Add a simple LRU cache w/ 2 minute TTL to the webhook authenticator.
Kubectl is a little spammy, w/ >= 4 API requests per command. This also prevents a single unauthenticated user from being able to DoS the remote authenticator.
Automatic merge from submit-queue
Add NetworkPolicy API Resource
API implementation of https://github.com/kubernetes/kubernetes/pull/24154
Still to do:
- [x] Get it working (See comments)
- [x] Make sure user-facing comments are correct.
- [x] Update naming in response to #24154
- [x] kubectl / client support
- [x] Release note.
```release-note
Implement NetworkPolicy v1beta1 API object / client support.
```
Next Steps:
- UTs in separate PR.
- e2e test in separate PR.
- make `Ports` + `From` pointers to slices (TODOs in code - to be done when auto-gen is fixed)
CC @thockin
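A minimal sketch of what a v1beta1 NetworkPolicy object could look like (names, selectors, and port are illustrative assumptions, not taken from this PR):
```yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
```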
Automatic merge from submit-queue
Make name validators return string slices
Part of the larger validation PR, broken out for easier review and merge. Builds on previous PRs in the series.
This patch adds the --exit-on-lock-contention flag, which must be used
in conjunction with the --lock-file flag. When provided, it causes the
kubelet to wait for inotify events for that lock file. When an 'open'
event is received, the kubelet will exit.
Automatic merge from submit-queue
validate third party resources
addresses validation portion of https://github.com/kubernetes/kubernetes/issues/22768
* ThirdPartyResource: validates name (3 segment DNS subdomain) and version names (single segment DNS label)
* ThirdPartyResourceData: validates objectmeta (name is validated as a DNS label)
* removes ability to use GenerateName with thirdpartyresources (kind and api group should not be randomized, in my opinion)
test improvements:
* updates resttest to clean up after create tests (so the same valid object can be used)
* updates resttest to take a name generator (in case "foo1" isn't a valid name for the object under test)
action required for alpha thirdpartyresource users:
* existing thirdpartyresource objects that do not match these validation rules will need to be removed/updated (after removing thirdpartyresourcedata objects stored under the disallowed versions, kind, or group names)
* existing thirdpartyresourcedata objects that do not match the name validation rule will not be able to be updated, but can be removed
Automatic merge from submit-queue
The remaining API changes for PodDisruptionBudget.
It's mostly the boilerplate required for the registry, some extra codegen, and a few tests.
Will squash once we're sure it's good.
Automatic merge from submit-queue
Add eviction-pressure-transition-period flag to kubelet
This PR does the following:
* add the new flag to control how often a node can transition out of memory pressure or disk pressure conditions; see: https://github.com/kubernetes/kubernetes/pull/25282
* pass an `eviction.Config` into `kubelet` so we can group config
/cc @vishh
Automatic merge from submit-queue
WIP v0 NVIDIA GPU support
```release-note
* Alpha support for scheduling pods on machines with NVIDIA GPUs whose kubelets use the `--experimental-nvidia-gpus` flag, using the alpha.kubernetes.io/nvidia-gpu resource
```
Implements part of #24071 for #23587
I am not familiar with the scheduler enough to know what to do with the scores. Mostly punting for now.
Missing items from the implementation plan: limitranger, rkt support, kubectl
support and docs
cc @erictune @davidopp @dchen1107 @vishh @Hui-Zhi @gopinatht
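For illustration, a pod requesting the alpha GPU resource might look like the sketch below (the pod name and image are placeholders, not from this PR):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda
    image: nvidia/cuda        # placeholder image
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1   # only schedulable on kubelets started with --experimental-nvidia-gpus
```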
Automatic merge from submit-queue
pkg/apis/rbac: Add Openshift authorization API types
This PR updates #23396 by adding the Openshift RBAC types to a new API group.
Changes from Openshift:
* Omission of [ResourceGroups](4589987883/pkg/authorization/api/types.go (L32-L104)) as most of these were Openshift specific. Would like to add the concept back in for a later release of the API.
* Omission of IsPersonalSubjectAccessReview as its implementation relied on Openshift capability.
* Omission of SubjectAccessReview and ResourceAccessReview types. These are defined in `authorization.k8s.io`
~~API group is named `rbac.authorization.openshift.com` as we omitted the AccessReview stuff and that seemed to be the least controversial based on conversations in #23396. Would be happy to change it if there's a dislike for the name.~~ Edit: the API group is named `rbac`; sorry, I misread the original thread.
As discussed in #18762, creating a new API group is kind of difficult right now and the documentation is very out of date. Got a little help from @soltysh but I'm sure I'm missing some things. Also still need to add validation and a RESTStorage registry interface. Hence "WIP".
Any initial comments welcome.
cc @erictune @deads2k @sym3tri @philips
Automatic merge from submit-queue
Webhook Token Authenticator
Add a webhook token authenticator plugin to allow a remote service to make authentication decisions.
Automatic merge from submit-queue
PSP admission
```release-note
Update PodSecurityPolicy types and add admission controller that could enforce them
```
Still working on removing the non-relevant parts of the tests but I wanted to get this open to start soliciting feedback.
- [x] bring PSP up to date with any new features we've added to SCC for discussion
- [x] create admission controller that is a pared down version of SCC (no ns based strategies, no user/groups/service account permissioning)
- [x] fix tests
@liggitt @pmorie - this is the simple implementation requested that assumes all PSPs should be checked for each request. It is a slimmed-down version of our SCC admission controller
@erictune @smarterclayton
Automatic merge from submit-queue
Move internal types of hpa from pkg/apis/extensions to pkg/apis/autoscaling
ref #21577
@lavalamp could you please review or delegate to someone from CSI team?
@janetkuo could you please take a look into the kubelet changes?
cc @fgrzadkowski @jszczepkowski @mwielgus @kubernetes/autoscaling
Automatic merge from submit-queue
Added JobTemplate, a preliminary step for ScheduledJob and Workflow
@sdminonne as promised, sorry it took this long 😊
@erictune fyi though it does not have to be in for 1.2
Implements part of #24071
I am not familiar with the scheduler enough to know what to do with the scores. Punting for now.
Missing items from the implementation plan: limitranger, rkt support, kubectl
support and user docs
Automatic merge from submit-queue
Make ThirdPartyResource a root scoped object
ThirdPartyResource (the registration of a third party type) belongs at the cluster scope. It results in resource handlers installed in every namespace, and the same name in two namespaces collides (namespace is ignored when determining group/kind).
ThirdPartyResourceData (an actual instance of that type) is still namespace-scoped.
This PR moves ThirdPartyResource to be a root scope object. Someone previously using ThirdPartyResource definitions in alpha should be able to move them from namespace to root scope like this:
setup (run on 1.2):
```
kubectl create ns ns1
echo '{"kind":"ThirdPartyResource","apiVersion":"extensions/v1beta1","metadata":{"name":"foo.example.com"},"versions":[{"name":"v8"}]}' | kubectl create -f - --namespace=ns1
echo '{"kind":"Foo","apiVersion":"example.com/v8","metadata":{"name":"MyFoo"},"testkey":"testvalue"}' | kubectl create -f - --namespace=ns1
```
export:
```
kubectl get thirdpartyresource --all-namespaces -o yaml > tprs.yaml
```
remove namespaced kind registrations (this shouldn't remove the data of that type, which is another possible issue):
```
kubectl delete -f tprs.yaml
```
... upgrade ...
re-register the custom types at the root scope:
```
kubectl create -f tprs.yaml
```
Additionally, pre-1.3 clients that expect to read/write ThirdPartyResource at a namespace scope will not be compatible with 1.3+ servers, and 1.3+ clients that expect to read/write ThirdPartyResource at a root scope will not be compatible with pre-1.3 servers.
Automatic merge from submit-queue
API changes for Cascading deletion
This PR includes the necessary API changes to implement cascading deletion with finalizers as proposed is in #23656. Comments are welcome.
@lavalamp @derekwaynecarr @bgrant0607 @rata @hongchaodeng
Having internal and external integer types being different hides
potential conversion problems. Propagate that out further (which will
also allow us to better optimize conversion).
Automatic merge from submit-queue
Make all defaulters public
Will allow for generating direct accessors in conversion code instead of using reflection.
@wojtek-t
Automatic merge from submit-queue
Add kubelet flags for eviction threshold configuration
This PR just adds the flags for kubelet eviction and the associated generated code.
I am happy to tweak text, but we can also do that later at this point in the release.
Since this causes codegen, I wanted to stage this first.
/cc @vishh @kubernetes/sig-node
Automatic merge from submit-queue
Implement a streaming serializer for watch
Changeover watch to use streaming serialization. Properly version the
watch objects. Implement simple framing for JSON and Protobuf (but not
YAML).
@wojtek-t @lavalamp
Automatic merge from submit-queue
Additional go vet fixes
Mostly:
- pass lock by value
- bad syntax for struct tag value
- example functions not formatted properly
Automatic merge from submit-queue
rkt: bump rkt version to 1.2.1
Upon bumping the rkt version, `--hostname` is supported. Also, we now get the configs from the rkt API service, so `stage1-image` is deprecated.
cc @yujuhong @Random-Liu
Here are a list of changes along with an explanation of how they work:
1. Add a new string field called TargetSelector to the external version of
extensions Scale type (extensions/v1beta1.Scale). This is a serialized
version of either the map-based selector (in case of ReplicationControllers)
or the unversioned.LabelSelector struct (in case of Deployments and
ReplicaSets).
2. Change the selector field in the internal Scale type (extensions.Scale) to
unversioned.LabelSelector.
3. Add conversion functions to convert from two external selector fields to a
single internal selector field. The rules for conversion are as follows:
i. If the target resource that this scale targets supports LabelSelector
(Deployments and ReplicaSets), then serialize the LabelSelector and
store the string in the TargetSelector field in the external version
and leave the map-based Selector field as nil.
ii. If the target resource only supports a map-based selector
(ReplicationControllers), then still serialize that selector and
store the serialized string in the TargetSelector field. Also,
set the Selector map field in the external Scale type.
iii. When converting from external to internal version, parse the
TargetSelector string into LabelSelector struct if the string isn't
empty. If it is empty, then check if the Selector map is set and just
assign that map to the MatchLabels component of the LabelSelector.
iv. When converting from internal to external version, serialize the
LabelSelector and store it in the TargetSelector field. If only
the MatchLabel component is set, then also copy that value to
the Selector map field in the external version.
4. HPA now just converts the LabelSelector field to a Selector interface
type to list the pods.
5. Scale Get and Update etcd methods for Deployments and ReplicaSets now
return extensions.Scale instead of autoscaling.Scale.
6. Consequently, the SubresourceGroupVersion override and the "is autoscaling
enabled" check are now removed from pkg/master/master.go
7. Other small changes to labels package, fuzzer and LabelSelector
helpers to piece this all together.
8. Add unit tests to HPA targeting Deployments and ReplicaSets.
9. Add an e2e test to HPA targeting ReplicaSets.
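As a rough illustration of the external shape described above (field placement and the selector string are assumptions, not verified against the generated types), an extensions/v1beta1 Scale for a ReplicaSet could serialize as:
```yaml
kind: Scale
apiVersion: extensions/v1beta1
metadata:
  name: frontend
  namespace: default
spec:
  replicas: 3
status:
  replicas: 3
  targetSelector: "app in (frontend)"   # serialized LabelSelector; map-based Selector left unset
```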
In podSecurityPolicy:
1. Rename .seLinuxContext to .seLinux
2. Rename .seLinux.type to .seLinux.rule
3. Rename .runAsUser.type to .runAsUser.rule
4. Rename .seLinux.SELinuxOptions
1,2,3 as suggested by thockin in #22159.
I added 3 for consistency with 2.
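With those renames applied, a PodSecurityPolicy fragment would read roughly as below (the group/version, policy name, and other fields are illustrative assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  seLinux:
    rule: RunAsAny           # was .seLinuxContext.type
  runAsUser:
    rule: MustRunAsNonRoot   # was .runAsUser.type
```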
Added selector generation to Job's
strategy.Validate, right before validation.
Can't do in defaulting since UID is not known.
Added a validation to Job to ensure that the generated
labels and selector are correct when generation was requested.
This happens right after generation, but validation is in a better
place to return an error.
Adds "manualSelector" field to batch/v1 Job to control selector generation.
Adds same field to extensions/__internal. Conversion between those two
is automatic.
Adds "autoSelector" field to extensions/v1beta1 Job. Used for storing batch/v1 Jobs
- Default for v1 is to do generation.
- Default for v1beta1 is to not do it.
- In both cases, unset == false == do the default thing.
Release notes:
Added batch/v1 group, which contains just Job, and which is the next
version of extensions/v1beta1 Job.
The changes from the previous version are:
- Users no longer need to ensure labels on their pod template are unique to the enclosing
job (but may add labels as needed for categorization).
- In v1beta1, job.spec.selector was defaulted from pod labels, with the user responsible for uniqueness.
In v1, a unique label is generated and added to the pod template, and used as the selector (other
labels added by user stay on pod template, but need not be used by selector).
- a new field called "manualSelector" exists to control whether the new behavior is used,
versus a more error-prone but more flexible "manual" (not generated) selector. Most users
will not need to use this field and should leave it unset.
Users who are creating extensions.Job go objects and then posting them using the go client
will see a change in the default behavior. They need to either stop providing a selector (relying on
selector generation) or else specify "spec.manualSelector" until they are ready to do the former.
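For users who do want a manual selector, a batch/v1 Job would look roughly like this sketch (the label key, pod spec, image, and command are illustrative placeholders):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  manualSelector: true        # opt out of selector generation
  selector:
    matchLabels:
      job-name: pi            # user-chosen label; uniqueness is the user's responsibility
  template:
    metadata:
      labels:
        job-name: pi
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl           # placeholder image/command
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```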
This new Scale type supports the more powerful set-based label selector
semantics. The selector, however, is stored in a serialized format, as
a string.
This is needed for Job because it uses PodSpec and PodSpec
does not have auto-generatable conversion for some reason.
Copied pkg/apis/extensions/v1beta1/conversion.go, and removed
non-Job things.
This should allow users to update DaemonSet pods by manually deleting
the corresponding running pods. Users can use this mechanism for
DaemonSet updates until we implement Deployment style rolling update
for DaemonSet.
`--kubelet-cgroups` and `--system-cgroups` respectively.
Updated `--runtime-container` to `--runtime-cgroups`.
Cleaned up most of the kubelet code that consumes these flags to match
the flag name changes.
Signed-off-by: Vishnu kannan <vishnuk@google.com>
Leaving the type fields as comments for reference and reminder. But
deleting the conversion, defaulting and validation code. They can
always be brought back from the previous PR once the types are
introduced. Because builds break without them anyway that serves as a
reminder, so there is no need to leave them commented out.
The message as it is framed right now does not make any sense for the
end users of our system. It might even lead to confusion. So this is an
attempt to make the error message less confusing.
Update the Deployments' API types, defaulting code, conversions, helpers
and validation to use ReplicaSets instead of ReplicationControllers and
LabelSelector instead of map[string]string for selectors.
Also update the Deployment controller, registry, kubectl subcommands,
client listers package and e2e tests to use ReplicaSets and
LabelSelector for Deployments.
I can't revert with github which says "Sorry, this pull request couldn’t be
reverted automatically. It may have already been reverted, or the content may
have changed since it was merged."
Reverts commit: 0c191e787b
* Metrics will not be exposed until they are hooked up to a handler
* Metrics are not cached and expose a DoS vector; this must be fixed before release, or the stats should not be exposed through an API endpoint
In pkg/apis/extensions/v1beta1/conversion.go,
some conversion code was copied from the
legacy api because Pod conversions cannot be
automatically generated because of something
about deprecatedServiceAccount.
This PR fixes two problems due to that copying.
First, the copied code could drift from its source.
To fix that, I replaced the Convert_api_ and Convert_v1
implementations with a call to the original function.
I left a wrapper in case something needed to have
a package-local function name.
Second, the Convert_* functions were copied in a way
that they refer to other conversion functions
that aren't in the current package. This prevented
genconversion from working from a clean start
(no conversion_generated.go). Perhaps the person
who wrote this in the first place had copied
the conversion_generated.go file from legacy,
so it worked. So, I added the v1
package name to calls to Convert_* functions.
So, when someone Cargo-Cult copies the conversion.go
file, like I did, they now will not have to
wonder why genconversion complains about missing
Convert_ functions.
Deleted the conversion_generated.go and reran genconversion
and it worked, no diffs old vs new conversion_generated.go.
Move type LabelSelector and type LabelSelectorRequirement from pkg/apis/extensions to pkg/api/unversioned.
This avoids an import loop when Job (and later DaemonSet, Deployment, ReplicaSet)
are moved out of extensions to new api groups.
Also Move LabelSelectorAsSelector utility from pkg/apis/extensions/ to pkg/api/unversioned/
Also its test.
Also LabelSelectorOp* constants.
Also the pkg/apis/extensions/validation functions ValidateLabelSelectorRequirement and
ValidateLabelSelector move to pkg/api/unversioned
The related type in pkg/apis/extensions/v1beta1/ is staying there. I might move
it in another PR if necessary.
This commit adds support for paused deployments so a user can choose
when to run a deployment that exists in the system instead of having
the deployment controller automatically reconciling it after every
change or sync interval.
When job.spec.completions is nil, only
one task needs to succeed for the job to succeed,
and parallelism can be scaled freely during runtime.
Added tests.
Release Note:
This causes two minor changes to the API.
First, unset parallelism previously was defaulted to be
equal to completions. Now it always defaults to 1 if unset.
Second, having parallelism=N and completions unset would previously
be defaulted to 1 completion and N parallelism.
(this is not something we expect people to do, though)
Now, no defaulting occurs in that case, and the job's
behavior is different (any completion causes success).
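A sketch of that second case (parallelism set, completions unset); the job name, image, and command are placeholders:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: work-queue
spec:
  parallelism: 5
  # completions intentionally unset: any single successful pod
  # marks the whole job as succeeded, and parallelism can be
  # scaled freely while the job runs.
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "process-one-item"]   # placeholder
```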
Pass down into the server initialization the necessary interface for
handling client/server content type negotiation. Add integration tests
for the negotiation.
Remove Codec from versionInterfaces in meta (RESTMapper is now agnostic
to codec and serialization). Register api/latest.Codecs as the codec
factory and use latest.Codecs.LegacyCodec(version) as an equivalent to
the previous codec.
It makes more sense for `ValidatePositiveField` and
`ValidatePositiveQuantity` methods to be named `ValidateNonnegativeField`
and `ValidateNonnegativeQuantity` as that is what is truly being
checked. This commit simply updates the method names everywhere they are
used.
This is part of migrating kubelet configuration to the componentconfig api
group and is preliminary to retrofitting client configuration and
implementing full-fledged API group machinery.
Signed-off-by: Mike Danese <mikedanese@google.com>
I took a hard look at error output and played until I was happier. This now
prints JSON for structs in the error, rather than go's format.
Also made the error message easier to read.
Fixed tests.
The pending codec -> conversion split changes the signature of
Encode and Decode to be more complicated. Create a stub helper
with the exact semantics of today and do the simple mechanical
refactor here to reduce the cost of that change.
This enables use of software or hardware transports viz. be2iscsi,
bnx2i, cxgb3i, cxgb4i, qla4xx, iser and ocs. The default transport
(tcp) happens to be called "default".
Use of non-default transports changes the disk path to the following format:
/dev/disk/by-path/pci-<pci_id>-ip-<portal>-iscsi-<iqn>-lun-<lun_id>
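Assuming the transport is exposed on the iSCSI volume source as an `iscsiInterface` field (an assumption for illustration; portal, IQN, and values are placeholders), a volume using a non-default transport might be declared as:
```yaml
volumes:
- name: iscsi-data
  iscsi:
    targetPortal: 10.0.0.2:3260
    iqn: iqn.2016-01.com.example:storage.disk1
    lun: 0
    fsType: ext4
    iscsiInterface: cxgb4i   # "default" would mean the plain tcp transport
```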
Before this change we have a mish-mash of ways to pass field names around for
error generation. Sometimes string fieldnames, sometimes .Prefix(), sometimes
neither, often wrong names or not indexed when it should be.
Instead of that mess, this is part one of a couple of commits that will make it
more strongly typed and hopefully encourage correct behavior. At least you
will have to think about field names, which is better than nothing.
It turned out to be really hard to do this incrementally.
All external types that are not int64 are now marked as int32,
including
IntOrString. Prober is now int32 (43 years should be enough of an initial
probe time for anyone).
Did not change the metadata fields for now.
This commit adds support for using kubectl scale to scale deployments. Makes use of the
deployments/scale endpoint instead of updating deployment.spec.replicas directly.
This commit introduces a validator for use with Scale updates.
The validator checks that we have > 0 replica count, as well
as the normal ObjectMeta checks (some of which have to be
faked since they don't exist on the Scale object).
- PeriodSeconds - How often to probe
- SuccessThreshold - Number of successful probes to go from failure to success state
- FailureThreshold - Number of failing probes to go from success to failure state
This commit includes two changes in behavior:
1. InitialDelaySeconds now defaults to 10 seconds, rather than the
kubelet sync interval (although that also defaults to 10 seconds).
2. Prober only retries on probe error, not failure. To compensate, the
default FailureThreshold is set to the maxRetries, 3.
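Putting the new knobs together, a container probe could be configured roughly as below (the path and port are placeholders):
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10   # new default
  periodSeconds: 10         # how often to probe
  successThreshold: 1       # failures -> success after this many passes
  failureThreshold: 3       # successes -> failure after this many misses; matches the old maxRetries
```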
Flocker [1] is an open-source container data volume manager for
Dockerized applications.
This PR adds a volume plugin for Flocker.
The plugin interfaces with the Flocker Control Service REST API [2] to
attach the volume to the pod.
Each kubelet host should run Flocker agents (Container Agent and Dataset
Agent).
The kubelet will also require environment variables that contain the
host and port of the Flocker Control Service. (see Flocker architecture
[3] for more).
- `FLOCKER_CONTROL_SERVICE_HOST`
- `FLOCKER_CONTROL_SERVICE_PORT`
The contribution introduces a new 'flocker' volume type to the API with
fields:
- `datasetName`: which indicates the name of the dataset in Flocker
added to metadata;
- `size`: a human-readable number that indicates the maximum size of the
requested dataset.
Full documentation can be found in docs/user-guide/volumes.md and examples
can be found in the examples/ folder.
[1] https://clusterhq.com/flocker/introduction/
[2] https://docs.clusterhq.com/en/1.3.1/reference/api.html
[3] https://docs.clusterhq.com/en/1.3.1/concepts/architecture.html
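A minimal sketch of a pod using the new volume type (the pod, container, and dataset names are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flocker-web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: www-root
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www-root
    flocker:
      datasetName: my-flocker-dataset   # name of the dataset in Flocker
```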
Rather than an "all or nothing" approach to defining a custom conversion
function (which seems destined to cause problems eventually), this is an
attempt to make it possible to call the auto-generated code and then "fix it
up".
Specifically, suppose you have a FooBar struct. If you don't define a
conversion for FooBar, you will get a generated function like:
convert_v1_FooBar_To_api_FooBar()
Before this PR, if you define your own conversion function, you get no
generated function. After this PR you get:
autoconvert_v1_FooBar_To_api_FooBar()
...which you can call yourself in your custom function.
Added status REST storage.
Added validation for Status Updates.
Changed job controller to update status rather than just job
(which ignores status updates).
Currently jobs only default completions and parallelism. This adds
copying the pod template's labels map as the selector, similar to how it's done
in the replication controller.
Add a HostIPC field to the Pod Spec to create containers sharing
the same ipc of the host.
This feature must be explicitly enabled in apiserver using the
option host-ipc-sources.
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
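Assuming the field surfaces in the versioned pod spec as `hostIPC` (illustrative; the apiserver must also permit the pod's source via host-ipc-sources), usage would look like:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ipc-sharing
spec:
  hostIPC: true          # share the host's IPC namespace
  containers:
  - name: shell
    image: busybox       # placeholder
    command: ["sleep", "3600"]
```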