Before https://github.com/kubernetes/kubernetes/pull/83084, `kubectl
apply --prune` could prune resources in all the namespaces specified in
the config files. After that PR was merged, only a single namespace is
considered for pruning. That is fine when the namespace is explicitly
given with the --namespace option, but the PR falls back to the default
namespace (or the one from kubeconfig) when no command line flag
overrides it. This breaks existing usage of `kubectl apply --prune`
without the --namespace option: no error is reported, so nobody notices
the problem unless they actually check that pruning happens. It also
prevents resources in multiple namespaces in the config file from being
pruned.
kubectl 1.16 does not have this bug. To see the difference between
kubectl 1.16 and kubectl 1.17, consider the following config file:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: foo
  namespace: a
  labels:
    pl: foo
data:
  foo: bar
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: bar
  namespace: a
  labels:
    pl: foo
data:
  foo: bar
```
Apply it with `kubectl apply -f file`. Then comment out ConfigMap foo
in this file. kubectl 1.16 prunes ConfigMap foo with the following
command:
```
$ kubectl-1.16 apply -f file -l pl=foo --prune
configmap/bar configured
configmap/foo pruned
```
But kubectl 1.17 does not prune ConfigMap foo with the same command:
```
$ kubectl-1.17 apply -f file -l pl=foo --prune
configmap/bar configured
```
With this patch, kubectl once again can prune the resource as before.
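As a purely illustrative sketch (the names below are hypothetical, not
the actual kubectl source), the restored behavior amounts to collecting
every namespace seen while visiting the config objects and running the
prune pass for each of them, instead of only for the default namespace:
```go
package main

import "fmt"

// object stands in for a parsed config-file resource.
type object struct {
	Kind      string
	Name      string
	Namespace string
}

// namespacesToPrune returns the distinct namespaces of the visited
// objects; pruning must run once per namespace in this set, not just
// in the default namespace.
func namespacesToPrune(objs []object) []string {
	seen := map[string]bool{}
	var namespaces []string
	for _, o := range objs {
		if !seen[o.Namespace] {
			seen[o.Namespace] = true
			namespaces = append(namespaces, o.Namespace)
		}
	}
	return namespaces
}

func main() {
	objs := []object{
		{Kind: "ConfigMap", Name: "foo", Namespace: "a"},
		{Kind: "ConfigMap", Name: "bar", Namespace: "b"},
	}
	fmt.Println(namespacesToPrune(objs)) // [a b]
}
```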
This change adds the generic ability for request handlers that run
before WithAudit to set annotations in the audit.Event.Annotations
map.
Note that this change does not use this capability yet. Determining
which handlers should set audit annotations and what keys and values
should be used requires further discussion (this data will become
part of our public API).
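A minimal, self-contained sketch of the pattern (the type and helper
names below are illustrative, not the actual apiserver API): an early
filter seeds the request context with a mutable map, handlers that run
before the audit filter write into it, and the audit filter later
copies it into audit.Event.Annotations.
```go
package main

import (
	"context"
	"fmt"
)

type annotationsKey struct{}

// withAuditAnnotations seeds the request context with a mutable map
// that handlers running before the audit filter can write into.
func withAuditAnnotations(ctx context.Context) context.Context {
	return context.WithValue(ctx, annotationsKey{}, map[string]string{})
}

// addAuditAnnotation records a key/value pair for the audit event, if
// the request context was seeded.
func addAuditAnnotation(ctx context.Context, key, value string) {
	if m, ok := ctx.Value(annotationsKey{}).(map[string]string); ok {
		m[key] = value
	}
}

// auditAnnotations returns what the audit filter would copy into
// audit.Event.Annotations.
func auditAnnotations(ctx context.Context) map[string]string {
	m, _ := ctx.Value(annotationsKey{}).(map[string]string)
	return m
}

func main() {
	ctx := withAuditAnnotations(context.Background())
	// An earlier handler (e.g. authorization) records its decision.
	addAuditAnnotation(ctx, "example.io/decision", "allow")
	fmt.Println(auditAnnotations(ctx)) // map[example.io/decision:allow]
}
```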
Signed-off-by: Monis Khan <mok@vmware.com>
Watches against etcd in the API server can hang forever if the etcd
cluster loses quorum, e.g. when the majority of nodes crashes. This fix
improves the responsiveness (detection and reaction time) of API server
watches against etcd in some rare (but still possible) edge cases, so
that such watches are terminated with `"etcdserver: no leader"
(ErrNoLeader)`.
Implementation behavior described by jingyih:
```
The etcd server waits until it cannot find a leader for 3 election
timeouts to cancel existing streams. 3 is currently a hard-coded
constant. The election timeout defaults to 1000ms.

If the cluster is healthy, when the leader is stopped, the leadership
transfer should be smooth (the leader transfers its leadership before
stopping). If the leader is hard-killed, the other servers will take an
election timeout to realize the leader is lost and start campaigning.
```
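A hedged sketch using the public etcd clientv3 API (the endpoint and
key prefix below are placeholders): wrapping the watch context with
`clientv3.WithRequireLeader` is what allows the server to cancel the
stream with "etcdserver: no leader" once leadership is lost, instead of
leaving the watch hanging.
```go
package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// WithRequireLeader makes the server fail the stream when it has
	// no leader, instead of letting the watch block indefinitely.
	ctx := clientv3.WithRequireLeader(context.Background())
	for resp := range cli.Watch(ctx, "/registry/", clientv3.WithPrefix()) {
		if resp.Canceled {
			// After ~3 election timeouts without a leader, this
			// reports "etcdserver: no leader".
			log.Fatalf("watch terminated: %v", resp.Err())
		}
		for _, ev := range resp.Events {
			log.Printf("%s %q", ev.Type, ev.Kv.Key)
		}
	}
}
```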
For further details, discussion and validation see
https://github.com/kubernetes/kubernetes/issues/89488#issuecomment-606491110
and https://github.com/etcd-io/etcd/issues/8980.
Closes: https://github.com/kubernetes/kubernetes/issues/89488
Signed-off-by: Michael Gasch <mgasch@vmware.com>
* API doc clarification for the mode value.
Apply suggestions from code review
Co-Authored-By: Robert Kielty <rob.kielty@gmail.com>
* Added a note regarding the JSON and YAML API.
* Ran hack/* scripts.
Co-authored-by: Robert Kielty <rob.kielty@gmail.com>
While rambling again about how unsafe labels.SelectorFromSet is, as it
just returns an empty selector that matches everything when it
encounters a parsing error, I noticed that we do not even have a safe
alternative. This commit fixes that by adding a `ValidatedSelectorFromSet`
func that returns either a Selector or an error.
It also changes SelectorFromSet to use SelectorFromValidatedSet under
the hood, so invalid Sets are sent to the server and rejected there,
rather than silently doing the wrong thing by using an empty Selector.
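A usage sketch of the new helper in k8s.io/apimachinery/pkg/labels (the
label keys and values are examples):
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Valid set: returns a usable selector.
	sel, err := labels.ValidatedSelectorFromSet(labels.Set{"app": "web"})
	fmt.Println(sel, err) // app=web <nil>

	// Invalid label value ("/" is not allowed): now surfaces an
	// explicit error instead of an empty match-everything selector.
	_, err = labels.ValidatedSelectorFromSet(labels.Set{"app": "a/b"})
	fmt.Println(err != nil) // true
}
```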
If the last upper bound is +Inf, the computed quantile is +Inf as well.
Given that there is no restriction on how far apart individual upper
bounds are, cut off the last interval and treat the second-to-last
upper bound as the final one.
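A self-contained sketch of the idea (the names and the simplified
within-bucket handling are illustrative, not the actual metrics code):
when the target rank lands in the final +Inf bucket, report the
second-to-last upper bound instead of +Inf.
```go
package main

import (
	"fmt"
	"math"
)

type bucket struct {
	upperBound float64
	count      float64 // cumulative observations <= upperBound
}

// quantile finds the bucket containing the q-quantile and returns its
// upper bound, clamping the +Inf bucket to the last finite bound.
func quantile(q float64, buckets []bucket) float64 {
	total := buckets[len(buckets)-1].count
	rank := q * total
	i := 0
	for i < len(buckets)-1 && buckets[i].count < rank {
		i++
	}
	if math.IsInf(buckets[i].upperBound, +1) {
		// The fix: cut off the last interval and use the
		// second-to-last upper bound as the final one.
		return buckets[len(buckets)-2].upperBound
	}
	// (linear interpolation within the bucket omitted for brevity)
	return buckets[i].upperBound
}

func main() {
	b := []bucket{{0.5, 10}, {1, 60}, {math.Inf(1), 100}}
	fmt.Println(quantile(0.99, b)) // 1, not +Inf
}
```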
* move well-known kubelet cloud provider annotations to k8s.io/cloud-provider
Signed-off-by: andrewsykim <kim.andrewsy@gmail.com>
* cloud provider: rename AnnotationProvidedIPAddr to AnnotationAlphaProvidedIPAddr to indicate alpha status
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* minor update
Added a missing period.
* add some more missing periods
- add in the missing period in 2 more places
- update generated files with `make update`
The fake clientset used a slice to store each kind of object. This made
initializing the clientset with massive numbers of objects quite slow,
because it checked for the existence of an object by traversing all
objects before adding it, which leads to O(n^2) time complexity. The
Create, Update, Get, and Delete methods also need to traverse all
objects, which skews the timing of any code that calls them.
This patch switches to a map for storing each kind of object, reducing
the time complexity of initializing the clientset to O(n) and of
Create, Update, Get, and Delete to O(1).
For example:
Before this patch, it took ~29s to init a clientset with 30000 Pods,
and 2~4ms to create and get a Pod.
After this patch, it takes ~50ms to init a clientset with 30000 Pods,
and tens of µs to create and get a Pod.
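An illustrative sketch of the data-structure change (the type and
method names are hypothetical, not the actual tracker code): keying
objects by namespace/name makes the existence check and lookups O(1),
where a slice required an O(n) scan.
```go
package main

import "fmt"

type objKey struct{ namespace, name string }

// tracker stores objects of one kind, keyed by namespace/name.
type tracker struct {
	objects map[objKey]interface{}
}

func newTracker() *tracker {
	return &tracker{objects: map[objKey]interface{}{}}
}

// Create adds an object, rejecting duplicates with an O(1) map lookup
// (the slice-backed version scanned all stored objects here).
func (t *tracker) Create(ns, name string, obj interface{}) error {
	k := objKey{ns, name}
	if _, exists := t.objects[k]; exists {
		return fmt.Errorf("%s/%s already exists", ns, name)
	}
	t.objects[k] = obj
	return nil
}

// Get retrieves an object in O(1).
func (t *tracker) Get(ns, name string) (interface{}, bool) {
	obj, ok := t.objects[objKey{ns, name}]
	return obj, ok
}

func main() {
	tr := newTracker()
	_ = tr.Create("default", "pod-1", "Pod")
	fmt.Println(tr.Get("default", "pod-1"))           // Pod true
	fmt.Println(tr.Create("default", "pod-1", "Pod")) // already exists
}
```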