Automatic merge from submit-queue
Adding default StorageClass annotation printout for resource_printer and describer and some refactoring
adding ISDEFAULT for _kubectl get storageclass_ output
```
[root@screeley-sc1 gce]# kubectl get storageclass
NAME            TYPE                   ISDEFAULT
another-class   kubernetes.io/gce-pd   NO
generic1-slow   kubernetes.io/gce-pd   YES
generic2-fast   kubernetes.io/gce-pd   YES
```
```release-note
Add ISDEFAULT to kubectl get storageClass output
```
@kubernetes/sig-storage
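For reference, a minimal sketch of how the ISDEFAULT value can be derived from a StorageClass's annotations; the beta annotation key used here is an assumption, not necessarily the exact constant the printer references:

```go
package main

import "fmt"

// isDefaultAnnotation derives the ISDEFAULT column value from the StorageClass
// annotations. The beta annotation key below is an assumption; the real printer
// may use a shared constant for it instead.
func isDefaultAnnotation(annotations map[string]string) string {
	if annotations["storageclass.beta.kubernetes.io/is-default-class"] == "true" {
		return "YES"
	}
	return "NO"
}

func main() {
	fmt.Println(isDefaultAnnotation(map[string]string{
		"storageclass.beta.kubernetes.io/is-default-class": "true",
	})) // YES
}
```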
Automatic merge from submit-queue
fix more RS controller flakes
I saw another flake:
```
panic: Fail in goroutine after TestUpdatePods has completed
/usr/local/go/src/runtime/panic.go:500 +0x1ae
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x17d
/usr/local/go/src/runtime/panic.go:458 +0x271
/usr/local/go/src/testing/testing.go:412 +0x182
/usr/local/go/src/testing/testing.go:484 +0x95
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set_test.go:619 +0x1d2
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:414 +0x191
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:403 +0x39
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:169 +0x42
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:87 +0x70
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:88 +0xbe
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:49 +0x5b
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set_test.go:625 +0x369
```
This resolves that flake by separating the listers from the watch, the way it used to be in this set of tests. The tests were structured like this before the refactor; I think that limits their utility, but I'm not prepared to rewrite them all.
@kargakis
Automatic merge from submit-queue
+optional tag for OpenAPI spec
OpenAPI relies on the "omitempty" JSON tag to determine whether a field is optional. This change adds the "+optional" tag to all fields that carry the "omitempty" JSON tag and adds support for the tag in the OpenAPI spec generator.
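A small illustration (not taken from the PR itself) of how the two markers line up on an API type; the `Widget` type is hypothetical:

```go
package example

// Widget is a hypothetical API type used only to illustrate the convention.
type Widget struct {
	// Name is required: no omitempty JSON tag, no +optional comment tag.
	Name string `json:"name"`

	// Description is optional: the +optional comment tag mirrors the
	// existing omitempty JSON tag so the OpenAPI spec generator can mark
	// the field as optional.
	// +optional
	Description string `json:"description,omitempty"`
}
```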
Automatic merge from submit-queue
controller: set minReadySeconds in deployment's replica sets
* Estimate available pods for a deployment by using minReadySeconds on
the replica set.
* Stop requeueing deployments on pod events, superseded by following the
replica set status.
* Cleanup redundant deployment utilities
Fixes https://github.com/kubernetes/kubernetes/issues/26079
@kubernetes/deployment ptal
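A rough sketch of the availability rule the first bullet above refers to: a pod counts as available once it has been Ready for at least minReadySeconds. This is a simplified stand-in, not the actual deployment util code:

```go
package example

import "time"

// Pod is a stripped-down stand-in for the real API type, carrying only what
// the availability check needs.
type Pod struct {
	Ready              bool
	LastTransitionTime time.Time // when the Ready condition last became true
}

// isAvailable reports whether the pod has been Ready for at least
// minReadySeconds, counted from the Ready condition's last transition.
func isAvailable(pod Pod, minReadySeconds int32, now time.Time) bool {
	if !pod.Ready {
		return false
	}
	minReady := time.Duration(minReadySeconds) * time.Second
	return !now.Before(pod.LastTransitionTime.Add(minReady))
}
```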
Automatic merge from submit-queue
Pass whole PVC to provisioner plugins
The Gluster provisioner is interested in the namespace of the PVCs being provisioned, and I don't want to add it as a new field in `volume.VolumeOptions` - it would end up containing almost the whole PVC.
Let's rework `VolumeOptions` to carry a direct reference to the PVC instead of a handful of "interesting" fields, and let each provisioner pick the information it needs.
This required a lot of refactoring in the volume plugins (there are too many plugins), but the logic is simple and identical in all of them.
@rootfs @humblec
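A hedged sketch of the reworked options shape: instead of copying individual PVC fields into VolumeOptions, the struct carries a reference to the claim and each provisioner reads what it needs. The types and field names below are illustrative stand-ins, not the real API:

```go
package example

// PersistentVolumeClaim stands in for the real API type.
type PersistentVolumeClaim struct {
	Name      string
	Namespace string
	// ... many more fields on the real object
}

// VolumeOptions is an illustrative version of the provisioner options after
// the rework: the whole claim is referenced instead of selected fields.
type VolumeOptions struct {
	PVC *PersistentVolumeClaim
	// other provisioning parameters omitted
}

// provision shows a plugin picking only the information it cares about,
// e.g. the Gluster provisioner reading the claim's namespace.
func provision(opts VolumeOptions) string {
	return opts.PVC.Namespace
}
```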
Automatic merge from submit-queue
NodeController waits for informer sync before doing anything
cc @lavalamp @davidopp
```release-note
NodeController waits for a full sync of all its informers before taking any action.
```
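The waiting itself typically follows the shared-informer pattern sketched below, using a generic HasSynced-style check; the controller's actual helper names may differ:

```go
package example

import "time"

// hasSynced mirrors an informer's HasSynced method.
type hasSynced func() bool

// waitForCacheSync blocks until every informer reports that its initial list
// has been delivered, or until stopCh is closed. The controller takes no
// action before this returns true.
func waitForCacheSync(stopCh <-chan struct{}, synced ...hasSynced) bool {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-stopCh:
			return false
		case <-ticker.C:
			allSynced := true
			for _, s := range synced {
				if !s() {
					allSynced = false
					break
				}
			}
			if allSynced {
				return true
			}
		}
	}
}
```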
Automatic merge from submit-queue
Run rbac authorizer from cache
RBAC authorization can be run very effectively out of a cache. The cache is a normal reflector backed cache (shared informer).
I've split this into three parts:
1. slim down the authorizer interfaces
1. boilerplate for adding rbac shared informers and associated listers which conform to the new interfaces
1. wiring
@liggitt @ericchiang @kubernetes/sig-auth
Automatic merge from submit-queue
Handle DeletedFinalStateUnknown in NodeController
Fix #34692
```release-note
Fix panic in NodeController caused by receiving DeletedFinalStateUnknown object from the cache.
```
cc @davidopp
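The usual shape of this kind of fix is a tolerant type switch in the delete handler, as sketched below; the types are simplified stand-ins for the client cache's DeletedFinalStateUnknown tombstone and the Node object:

```go
package example

// DeletedFinalStateUnknown mirrors the cache tombstone delivered when a
// delete was only observed via a re-list; Obj holds the last known object.
type DeletedFinalStateUnknown struct {
	Key string
	Obj interface{}
}

// Node stands in for the real API type.
type Node struct{ Name string }

// nodeFromDeleteEvent extracts the Node from a delete notification without
// panicking when the object arrives wrapped in a tombstone.
func nodeFromDeleteEvent(obj interface{}) (*Node, bool) {
	switch t := obj.(type) {
	case *Node:
		return t, true
	case DeletedFinalStateUnknown:
		node, ok := t.Obj.(*Node)
		return node, ok
	default:
		// Unknown type: log and skip instead of panicking.
		return nil, false
	}
}
```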
Automatic merge from submit-queue
Revert "Error out when any RS has more available pods then its spec r…
Reverts https://github.com/kubernetes/kubernetes/pull/29808
The PR is wrong because we can have more available pods than desired every time we scale down.
@kubernetes/deployment ptal
Automatic merge from submit-queue
Fix misleading comment
**What this PR does / why we need it**: It just fixes a misleading comment. It took me some time to figure out the real behavior.
The Gluster provisioner is interested in pvc.Namespace and I don't want to add
it as a new field in VolumeOptions - it would contain almost the whole PVC.
Let's pass a direct reference to the PVC instead and let the provisioner pick
the information it is interested in.
Automatic merge from submit-queue
Copy finalizers from template spec to pod.
**What this PR does / why we need it**: The PodTemplateSpec has a finalizers field whose contents are not copied over to a pod during creation.
Automatic merge from submit-queue
convert deployment controller to shared informers
Converts the deployment controller to shared informers.
@kargakis I think you've been in here. Pretty straightforward swap.
Fixes #27687
Automatic merge from submit-queue
controller: save older revisions for Deployment's replica sets
@jwforres the only usable way I could find to keep multiple old revisions for a single replica set is to stuff them into the annotation as comma-separated values.
@kubernetes/deployment this retains old revisions served by a replica set inside an annotation.
Fixes https://github.com/kubernetes/kubernetes/issues/33844
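A toy illustration of the comma-separated bookkeeping described above: when a replica set picks up a new revision, its previous revision is appended to a history annotation. The annotation key and helper are illustrative, not necessarily the exact ones in the controller:

```go
package example

import "strings"

// revisionHistoryAnnotation is an illustrative annotation key for the list of
// older revisions a replica set has served.
const revisionHistoryAnnotation = "deployment.kubernetes.io/revision-history"

// appendRevisionHistory adds oldRevision to the comma-separated history value,
// skipping it if it is already recorded.
func appendRevisionHistory(annotations map[string]string, oldRevision string) {
	history := annotations[revisionHistoryAnnotation]
	for _, r := range strings.Split(history, ",") {
		if r == oldRevision {
			return // already recorded
		}
	}
	if history == "" {
		annotations[revisionHistoryAnnotation] = oldRevision
		return
	}
	annotations[revisionHistoryAnnotation] = history + "," + oldRevision
}
```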
Automatic merge from submit-queue
update deployment and replicaset listers
Updates the deployment lister to avoid copies and updates the deployment controller to use shared informers.
Pushing WIP to see which tests are broken.
Automatic merge from submit-queue
Delete evicted pet
If a pet is evicted by the kubelet, it will be stuck in that state forever.
By analogy with a regular pod, we need to re-create the pet so that it will
be re-scheduled to another node. In order to re-create the pet
and preserve consistent naming, the petset controller will delete it
and create it again afterwards.
fixes: https://github.com/kubernetes/kubernetes/issues/31098
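A hedged sketch of the delete-then-recreate flow: an evicted pod ends up in a terminal failed state, and the controller removes it so a replacement with the same name can be created and scheduled elsewhere. Types and helpers are simplified stand-ins:

```go
package example

// PodStatus and Pod are stripped-down stand-ins for the real API types.
type PodStatus struct {
	Phase  string // e.g. "Running", "Failed"
	Reason string // e.g. "Evicted"
}

type Pod struct {
	Name   string
	Status PodStatus
}

// syncPet re-creates a pod that is stuck in a terminal failed state, e.g.
// evicted by the kubelet: delete it first so a pod with the same name can be
// created again and scheduled onto another node. deletePod and createPod are
// placeholders for the real client calls.
func syncPet(pod *Pod, deletePod, createPod func(name string) error) error {
	if pod.Status.Phase != "Failed" {
		return nil // nothing to do for healthy pods in this sketch
	}
	if err := deletePod(pod.Name); err != nil {
		return err
	}
	return createPod(pod.Name)
}
```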
Automatic merge from submit-queue
Fix issue in updating device path when volume is attached multiple times
When a volume is attached, it is possible that the actual state
already has this volume object (e.g., the volume is attached to multiple
nodes, or the volume was detached and attached again). We need to update the
device path in this situation; otherwise the device path would be stale
and cause kubelet to mount the wrong device.
This PR partially fixes issue #29324
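A simplified sketch of the core idea: when marking a volume as attached and an entry already exists in the actual state, overwrite its device path rather than keeping the stale value. The names below are illustrative, not the controller's real API:

```go
package example

import "sync"

// attachedVolume is a minimal stand-in for the actual-state entry.
type attachedVolume struct {
	VolumeName string
	DevicePath string
}

// actualState caches which volumes are believed attached and where.
type actualState struct {
	mu      sync.Mutex
	volumes map[string]*attachedVolume
}

// markVolumeAttached records an attachment. If the volume is already present
// (attached to several nodes, or re-attached after a detach), the device path
// is refreshed instead of leaving a stale value behind.
func (s *actualState) markVolumeAttached(volumeName, devicePath string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.volumes == nil {
		s.volumes = map[string]*attachedVolume{}
	}
	if v, ok := s.volumes[volumeName]; ok {
		v.DevicePath = devicePath // update the stale path
		return
	}
	s.volumes[volumeName] = &attachedVolume{VolumeName: volumeName, DevicePath: devicePath}
}
```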
Automatic merge from submit-queue
PetSet replica count status test
**What this PR does / why we need it**: It adds a test for PetSet status replica count. It should fail now, but will pass when https://github.com/kubernetes/kubernetes/pull/32117 is merged.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31965
**Special notes for your reviewer**: It will need to be rebased after #32117 is merged in, don't need detailed review before that.
**Release note**:
```release-note
NONE
```
Added a fakeKubeClient and other fake types needed to test what is sent to
the API when the replica count is updated. These fakes can be extended for
other tests.
Automatic merge from submit-queue
Fix TestCreateWithNonExistentOwner
Fix #30228
As https://github.com/kubernetes/kubernetes/issues/30228#issuecomment-248779567 described, the GC did delete the garbage; it was the test logic that failed.
The test used to rely on `gc.QueuesDrained()`, which could return before the GC finished processing. That seems to be the only possible reason for the test failure. Hence, this PR changes the test to poll for the deletion of the garbage.
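A sketch of the poll-based check that replaces the drained-queue signal, in the spirit of the wait utilities; the object lookup is a placeholder for the real client call:

```go
package example

import (
	"errors"
	"time"
)

// pollForDeletion retries getObject until it reports not-found or the timeout
// elapses. getObject is a placeholder for the client lookup of the garbage
// object; it should return notFound=true once the GC has deleted it.
func pollForDeletion(interval, timeout time.Duration, getObject func() (notFound bool, err error)) error {
	deadline := time.Now().Add(timeout)
	for {
		notFound, err := getObject()
		if err != nil {
			return err
		}
		if notFound {
			return nil // the garbage is gone; the GC finished processing
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the garbage to be deleted")
		}
		time.Sleep(interval)
	}
}
```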
Automatic merge from submit-queue
MinReadySeconds / AvailableReplicas for ReplicaSets
This PR adds minReadySeconds and availableReplicas for replica sets / replication controllers
Partially addresses https://github.com/kubernetes/kubernetes/issues/28381
cc: @mfojtik
@bgrant0607 for the api changes, @janetkuo for controller changes
Automatic merge from submit-queue
PetSet returns valid replica count in status
**What this PR does / why we need it**: It prevents the PetSet replica count from being invalid regardless of pods not being created due to
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31965
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Use strongly-typed types.NodeName for a node name
We had another bug where we confused the hostname with the NodeName.
Also, if we want to use different values for the Node.Name (which is
an important step for making installation easier), we need to keep
better control over this.
A tedious but mechanical commit therefore, to change all uses of the
node name to use types.NodeName
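The strong typing boils down to a named string type, along the lines of the definition below; the example signature in the comment is illustrative of the kind of change the commit makes mechanical:

```go
package types

// NodeName holds a Node's name. Using a distinct type instead of a bare
// string lets the compiler catch places where a hostname or an instance ID
// is accidentally passed where the Node.Name is expected.
type NodeName string

// Illustrative signature change:
//   func (c *Cloud) InstanceID(nodeName NodeName) (string, error)
// rather than
//   func (c *Cloud) InstanceID(nodeName string) (string, error)
```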
Automatic merge from submit-queue
Refactor volume controller parameters into a structure
`persistentvolumecontroller.NewPersistentVolumeController` has 11 arguments now;
put them into a structure.
Also, rename `NewPersistentVolumeController` to `NewController`; `persistentvolume`
is already the name of the package.
Fixes #30219
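The shape of the refactor is the usual parameters-struct pattern, roughly as sketched below; the field names are illustrative, not the exact ones in the package:

```go
package persistentvolume

import "time"

// ControllerParameters groups the many constructor arguments into one struct
// so call sites stay readable and new options can be added without breaking
// every caller. Field names here are illustrative.
type ControllerParameters struct {
	SyncPeriod                time.Duration
	ClusterName               string
	EnableDynamicProvisioning bool
	// ... clients, plugins, event recorder, etc.
}

// Controller is a stub standing in for the real controller type.
type Controller struct {
	clusterName string
	syncPeriod  time.Duration
}

// NewController replaces NewPersistentVolumeController; the package name
// already says "persistentvolume", so the shorter name avoids stuttering.
func NewController(p ControllerParameters) (*Controller, error) {
	return &Controller{clusterName: p.ClusterName, syncPeriod: p.SyncPeriod}, nil
}
```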
We had another bug where we confused the hostname with the NodeName.
To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.
A tedious but mechanical commit therefore, to change all uses of the
node name to use types.NodeName
Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
Automatic merge from submit-queue
controller: don't retry deployments with overlapping selectors
Returning an error will cause the deployment to be requeued. We should
just emit an event for deployments with overlapping selectors and silently
drop them out of the queue. This should be transitioned to a Condition
once we have them.
@kubernetes/deployment ptal
In order to determine whether a node should run its daemon pod, the
DaemonController creates a dummy pod based on the DaemonSet's template and
then uses scheduler predicates (currently GeneralPredicates) to test
whether such a pod could run on the node. The problem was that the
DaemonController was not setting the Namespace on the dummy pod. This does
not affect the currently used GeneralPredicates, but it could bite later
when namespace-dependent predicates are added to GeneralPredicates or
directly to the DaemonController's node checks (e.g. pod affinity).
Stumbled upon this while working on an e2e test for #31136
Automatic merge from submit-queue
Do not report error when deleting an attached volume
Persistent volume controller should not send warning events to a PV and mark the PV as failed when the volume is still attached.
This happens when a user quickly deletes a pod and its associated PVC - the PV is slowly detaching, while the PVC is already deleted, and the PV enters the Failed phase.
`Deleter.Delete` can now return `tryAgainError`, which is reported as an INFO event on the PV to let the user know we did not forget to delete the PV; however, the PV stays in the Released state. The controller tries again in the next sync (15 seconds by default).
Fixes #31511
Automatic merge from submit-queue
simplify RC listers
Make the RC and SVC listers use the common list functions that more closely match client APIs, are consistent with other listers, and avoid unnecessary copies.
Automatic merge from submit-queue
Allow garbage collection to work against different API prefixes
The GC needs to build clients based only on Resource or Kind. Hoist the
restmapper out of the controller and the clientpool, support a new
ClientForGroupVersionKind and ClientForGroupVersionResource, and use the
appropriate one in both places.
Allows OpenShift to use the GC
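A sketch of the interface shape described above, with the two lookups living on the client pool rather than inside the controller; the types are simplified stand-ins for the schema and dynamic-client types:

```go
package example

// GroupVersionKind and GroupVersionResource are simplified stand-ins for the
// schema types.
type GroupVersionKind struct{ Group, Version, Kind string }
type GroupVersionResource struct{ Group, Version, Resource string }

// Client is a placeholder for a dynamic client scoped to one group/version.
type Client interface{}

// RESTMapping is a placeholder for the mapper's answer (resource, scope, ...).
type RESTMapping struct{ Resource string }

// Mapper resolves kinds to resources; hoisted out of the controller so the
// pool can be configured for different API prefixes.
type Mapper interface {
	RESTMapping(gvk GroupVersionKind) (*RESTMapping, error)
}

// ClientPool hands out dynamic clients keyed by kind or by resource, so the
// GC can build a client knowing only one of the two.
type ClientPool interface {
	ClientForGroupVersionKind(kind GroupVersionKind) (Client, error)
	ClientForGroupVersionResource(resource GroupVersionResource) (Client, error)
}
```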
Automatic merge from submit-queue
Remove hacks from ScheduledJobs cron spec parsing
Previously the `github.com/robfig/cron` library did not allow passing a cron spec without seconds. The first commit updates the library, which now has an additional method, ParseStandard, that follows the standard cron spec, i.e. minute, hour, day of month, month, day of week.
@janetkuo @erictune as promised in #30227, I've updated the library and now I'm updating it in k8s.
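With the updated library, the standard five-field spec can be parsed directly. A minimal usage sketch, assuming a library version that includes ParseStandard; the schedule string is just an example:

```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron"
)

func main() {
	// Standard five-field spec: minute, hour, day of month, month, day of week.
	// No seconds field and no hacks to prepend one.
	sched, err := cron.ParseStandard("*/5 * * * *")
	if err != nil {
		panic(err)
	}
	fmt.Println(sched.Next(time.Now())) // next time the schedule fires
}
```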
Automatic merge from submit-queue
Move HighWaterMark to the top of the struct in order to fix arm, second time
ref: #33117
Sorry for not fixing everything at once, but I seriously wasn't prepared for that quick LGTM 😄, so here's the other half.
@lavalamp
> lgtm, but seriously, this is terrible, we probably have this bug all over. And what if someone embeds the etcdWatcher struct in something else not at the top? We need the compiler to enforce things like this, it just can't be done manually. Can you file or link a golang issue for this?
I totally agree! There isn't currently a way of programmatically detecting this unfortunately.
I guess @davecheney or @minux can explain better to you why it's so hard.
This is noted in https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/multi-platform.md as a corner case indeed.
@pwittrock This should be cherry-picked together with #33117
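For context, the constraint comes from sync/atomic: 64-bit atomic operations require 64-bit alignment, and on 32-bit platforms such as arm only the first word of an allocated struct is guaranteed to be aligned, hence the field must sit at the top. A small illustration (not the actual etcdWatcher code):

```go
package example

import "sync/atomic"

// etcdWatcherLike illustrates the layout rule: on 32-bit platforms (arm, 386)
// only the first word of an allocated struct is guaranteed to be 64-bit
// aligned, so the int64 touched by sync/atomic must be the first field.
type etcdWatcherLike struct {
	highWaterMark int64 // must stay first for atomic access on 32-bit arches

	// other fields follow; putting any of them first could break alignment
	name string
}

// bump raises the high-water mark atomically, reporting whether v was higher
// than the previously recorded value.
func (w *etcdWatcherLike) bump(v int64) bool {
	for {
		cur := atomic.LoadInt64(&w.highWaterMark)
		if v <= cur {
			return false
		}
		if atomic.CompareAndSwapInt64(&w.highWaterMark, cur, v) {
			return true
		}
	}
}
```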
Automatic merge from submit-queue
Fix race condition in setting node statusUpdateNeeded flag
This PR fixes a race condition in setting the node statusUpdateNeeded flag
in the master's attachdetach controller. This flag indicates
whether a node's status has been updated by the node_status_updater or
not. When the updater finishes updating a node status, the flag is set to false.
When the node status changes, e.g. a volume is detached or a new volume
is attached to the node, the flag is set to true so that the updater can
update the status again. The previous workflow had a race condition as
follows:
1. The updater gets the currently attached volume list from the node that needs to be
updated.
2. A new volume A is attached to the same node right after step 1 and sets the
flag to TRUE.
3. The updater updates the node's attached volume list (which does not include volume A) and then sets the flag to FALSE.
The result is that volume A is never added to the attached volume
list, so on the node side this volume is never attached.
So in this PR, the flag is set to FALSE in the same step in which the updater gets the
attached volume list (as an atomic operation). In the above
example, after step 2 the flag will be TRUE again; in step 3 the updater
does not touch the flag if the update is successful. So after that the flag is
still TRUE, and in the next round of updates the node status will be updated again.
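A condensed sketch of the fixed ordering: the flag is cleared in the same locked step that reads the volume list, and a successful update leaves the flag alone, so a concurrent attach is never lost. The names are illustrative, not the controller's real API:

```go
package example

import "sync"

// nodeStatus tracks the attached volumes reported for one node and whether
// the node status still needs to be pushed to the API server.
type nodeStatus struct {
	mu                 sync.Mutex
	attachedVolumes    []string
	statusUpdateNeeded bool
}

// markVolumeAttached is called when a new volume is attached; it flags the
// node for another status update.
func (n *nodeStatus) markVolumeAttached(volume string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.attachedVolumes = append(n.attachedVolumes, volume)
	n.statusUpdateNeeded = true
}

// volumesToReport atomically reads the list and clears the flag. If an attach
// happens after this call, markVolumeAttached sets the flag back to true, so
// the next sync picks it up.
func (n *nodeStatus) volumesToReport() []string {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.statusUpdateNeeded = false
	return append([]string(nil), n.attachedVolumes...)
}

// updateNodeStatus pushes the list; on success it does NOT touch the flag,
// on failure it re-sets the flag so the update is retried.
func (n *nodeStatus) updateNodeStatus(push func([]string) error) error {
	volumes := n.volumesToReport()
	if err := push(volumes); err != nil {
		n.mu.Lock()
		n.statusUpdateNeeded = true
		n.mu.Unlock()
		return err
	}
	return nil
}
```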