Commit Graph

1576 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
331eb83585 Merge pull request #33376 from luxas/fix_arm_atomics_2
Automatic merge from submit-queue

Move HighWaterMark to the top of the struct in order to fix arm, second time

ref: #33117

Sorry for not fixing everything at once, but I seriously wasn't prepared for that quick LGTM 😄, so here's the other half.

@lavalamp 

> lgtm, but seriously, this is terrible, we probably have this bug all over. And what if someone embeds the etcdWatcher struct in something else not at the top? We need the compiler to enforce things like this, it just can't be done manually. Can you file or link a golang issue for this?

I totally agree! Unfortunately, there isn't currently a way of detecting this programmatically.
I guess @davecheney or @minux can explain better to you why it's so hard.

This is noted in https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/multi-platform.md as a corner case indeed.

@pwittrock This should be cherry-picked together with #33117
2016-09-23 12:05:09 -07:00
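The underlying bug is Go's alignment rule for 64-bit atomics on 32-bit platforms: `sync/atomic` requires `int64` operands to be 64-bit aligned, but the compiler only guarantees that alignment for the first word of a struct or allocated value. A minimal sketch of the pattern (the `etcdWatcher` field layout here is illustrative, not the actual Kubernetes code):

```go
package watchutil

import "sync/atomic"

// HighWaterMark is a thread-safe, monotonically increasing int64.
type HighWaterMark int64

// Update advances the mark to hwm if it is larger, returning true on success.
func (t *HighWaterMark) Update(hwm int64) bool {
	for {
		old := atomic.LoadInt64((*int64)(t))
		if hwm <= old {
			return false
		}
		if atomic.CompareAndSwapInt64((*int64)(t), old, hwm) {
			return true
		}
	}
}

// etcdWatcher sketch: on 32-bit platforms such as ARM, sync/atomic
// requires 64-bit alignment, which Go only guarantees for the first
// word of a struct. Moving outgoingHWM to the top fixes the runtime
// panic; placing it after the bool could leave it 4-byte aligned.
type etcdWatcher struct {
	outgoingHWM HighWaterMark // must stay first for atomic ops on ARM
	stopped     bool
}
```

This is also why embedding such a struct anywhere but at the top of an enclosing struct silently reintroduces the bug, as @lavalamp notes above.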
Kubernetes Submit Queue
0a4316f11e Merge pull request #32807 from jingxu97/stateupdateNeeded-9-15
Automatic merge from submit-queue

Fix race condition in setting node statusUpdateNeeded flag

This PR fixes a race condition in setting the node statusUpdateNeeded flag
in the master's attachdetach controller. This flag indicates whether a
node status has been updated by the node_status_updater. When the updater
finishes updating a node status, the flag is set to false. When the node
status changes, such as a volume being detached or a new volume being
attached to the node, the flag is set to true so that the updater can
update the status again. The previous workflow had a race condition as
follows:
1. The updater gets the currently attached volume list from the node that needs to be updated.
2. A new volume A is attached to the same node right after step 1, setting the flag to TRUE.
3. The updater updates the node's attached volume list (which does not include volume A) and then sets the flag to FALSE.
The result is that volume A is never added to the attached volume list,
so on the node side the volume is never attached.

So in this PR, the flag is set to FALSE when the updater fetches the
attached volume list (as one atomic operation). In the above example,
after step 2 the flag is TRUE again, and in step 3 the updater does not
touch the flag if its update is successful. The flag therefore stays TRUE,
and in the next round of updates the node status will be updated.
2016-09-23 11:25:16 -07:00
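A minimal sketch of the fixed get-and-clear workflow described above, with hypothetical names rather than the actual attachdetach controller types: the flag is cleared in the same critical section that snapshots the volume list, so a concurrent attach/detach that sets it afterwards is never lost.

```go
package attachdetach

import "sync"

// nodeStatus is a sketch, not the real controller state.
type nodeStatus struct {
	mu                 sync.Mutex
	attachedVolumes    []string
	statusUpdateNeeded bool
}

// snapshotForUpdate atomically snapshots the volume list AND clears the
// flag. An attach/detach that fires after this call sets the flag back
// to true, so its change is picked up on the next round. The updater
// restores the flag itself only if its write to the API server fails.
func (n *nodeStatus) snapshotForUpdate() []string {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.statusUpdateNeeded = false
	return append([]string(nil), n.attachedVolumes...)
}

// markChanged is called whenever a volume is attached or detached.
func (n *nodeStatus) markChanged() {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.statusUpdateNeeded = true
}
```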
Lucas Käldström
06917531b3 Move HighWaterMark to the top of the struct in order to fix arm, second time 2016-09-23 20:58:28 +03:00
Kubernetes Submit Queue
b2aed32578 Merge pull request #33269 from deads2k/client-15-svc-lister
Automatic merge from submit-queue

simplify svc lister

trying to track down what killed the e2e tests.
2016-09-23 03:10:57 -07:00
Jing Xu
14cad206f5 Fix race condition in setting node statusUpdateNeeded flag
This PR fixes a race condition in setting the node statusUpdateNeeded flag
in the master's attachdetach controller. This flag indicates whether a
node status has been updated by the node_status_updater. When the updater
finishes updating a node status, the flag is set to false. When the node
status changes, such as a volume being detached or a new volume being
attached to the node, the flag is set to true so that the updater can
update the status again. The previous workflow had a race condition as
follows:
1. The updater gets the currently attached volume list from the node that needs to be updated.
2. A new volume A is attached to the same node right after step 1, setting the flag to TRUE.
3. The updater updates the node's attached volume list (which does not include volume A) and then sets the flag to FALSE.
The result is that volume A is never added to the attached volume list,
so on the node side the volume is never attached.

So in this PR, the flag is set to FALSE when the updater fetches the
attached volume list (as one atomic operation). In the above example,
after step 2 the flag is TRUE again, and in step 3 the updater does not
touch the flag if its update is successful. The flag therefore stays TRUE,
and in the next round of updates the node status will be updated.

This PR also updates a unit test to reflect the workflow changes.
2016-09-22 14:02:30 -07:00
Kubernetes Submit Queue
e9f4db2748 Merge pull request #27714 from jsafrane/event-recycle
Automatic merge from submit-queue

Send recycle events from pod to pv.

This allows users to diagnose what's wrong with the recycler. Recycler pods are started automatically with a cryptic name, and they are deleted immediately when they finish.

e.g, `kubectl describe pv` could show that NFS cannot be mounted (and how many pods have tried it):

```
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                            -------------   --------        ------          -------
  59m           59m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(5421800e-347b-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  53m           53m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(3c9809e5-347c-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  46m           46m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(250dd2a2-347d-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  40m           40m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(0d84ea33-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  33m           33m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(f5fb63bf-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  27m           27m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(de7128fd-347f-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  1h            3m              75      {persistentvolume-controller }                  Normal          RecyclerPod     Recycler pod: Successfully assigned recycler-for-nfs to 127.0.0.1
  1h            3m              76      {persistentvolume-controller }                  Normal          RecyclerPod     Recycler pod: Pod was active on the node longer than specified deadline
  1h            1m              12      {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  20m           1m              4       {persistentvolume-controller }                  Warning         RecyclerPod     (events with common reason combined)
```

These steps were necessary:

- added event watcher to volume.RecycleVolumeByWatchingPodUntilCompletion
- pass all these events through volume plugins to volume controller
- rework volume.RecycleVolumeByWatchingPodUntilCompletion unit tests to a table (too much copy-paste)
- fix all unit tests along the way
2016-09-22 12:18:53 -07:00
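A hedged sketch of the event-forwarding idea, using current client-go names rather than the 2016 API — `forwardRecyclerEvents` and its wiring are illustrative assumptions, not the actual `RecycleVolumeByWatchingPodUntilCompletion` implementation:

```go
package recycle

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/record"
)

// forwardRecyclerEvents watches events emitted for the recycler pod and
// replays them on the PV, so `kubectl describe pv` explains why
// recycling is failing even after the pod itself is deleted.
func forwardRecyclerEvents(ctx context.Context, c kubernetes.Interface,
	recorder record.EventRecorder, pv *v1.PersistentVolume, pod *v1.Pod) error {
	sel := fields.Set{
		"involvedObject.name":      pod.Name,
		"involvedObject.namespace": pod.Namespace,
	}.AsSelector().String()

	w, err := c.CoreV1().Events(pod.Namespace).Watch(ctx, metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		return err
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		if e, ok := ev.Object.(*v1.Event); ok {
			// Replay the pod's event on the PV under a common reason.
			recorder.Eventf(pv, e.Type, "RecyclerPod", "Recycler pod: %s", e.Message)
		}
	}
	return nil
}
```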
Kubernetes Submit Queue
4ab5a76338 Merge pull request #33103 from deads2k/controller-03-kill-non-generatedclient
Automatic merge from submit-queue

switch controller manager to generated clients

Switches the controller manager to generated clients.

@ncdc ptal
2016-09-22 11:37:01 -07:00
Kubernetes Submit Queue
6e25117891 Merge pull request #32655 from dshulyak/fix_node_fake_update
Automatic merge from submit-queue

Fix FakeNodeHandler Update behaviour

Two problems:
1. Get always reads from the Existing nodes slice, so you will certainly miss any updated data.
2. Each Update adds a duplicate node entry to the UpdatedNodes slice.

For the 1st, we now try to find the node in the UpdatedNodes slice first (same as for List).
For the 2nd, we append only if there is no node with the same name as the updated one; if there is, we replace the object in the UpdatedNodes slice.
2016-09-22 07:43:18 -07:00
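A sketch of the corrected `FakeNodeHandler` semantics under those two fixes (simplified; the real test fake tracks more state):

```go
package testutil

import v1 "k8s.io/api/core/v1"

// FakeNodeHandler sketch with the two fixes applied.
type FakeNodeHandler struct {
	Existing     []*v1.Node
	UpdatedNodes []*v1.Node
}

// Get prefers UpdatedNodes so callers observe the most recent write.
func (m *FakeNodeHandler) Get(name string) *v1.Node {
	for _, n := range m.UpdatedNodes {
		if n.Name == name {
			return n
		}
	}
	for _, n := range m.Existing {
		if n.Name == name {
			return n
		}
	}
	return nil
}

// Update replaces an existing entry instead of appending a duplicate.
func (m *FakeNodeHandler) Update(node *v1.Node) {
	for i, n := range m.UpdatedNodes {
		if n.Name == node.Name {
			m.UpdatedNodes[i] = node
			return
		}
	}
	m.UpdatedNodes = append(m.UpdatedNodes, node)
}
```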
deads2k
7ee5b26ad1 incorrect key determination 2016-09-22 09:55:24 -04:00
deads2k
483af28944 fix up service lister 2016-09-22 09:12:37 -04:00
Kubernetes Submit Queue
5af04d1dd1 Merge pull request #32876 from errordeveloper/more-cert-utils
Automatic merge from submit-queue

Refactor cert utils into one pkg, add funcs from bootkube for kubeadm to use

**What this PR does / why we need it**:

We have ended up with a rather incomplete and fragmented collection of utils for handling certificates. It may be worth considering `cfssl` for doing all of these things, but for now there is some functionality that we need in `kubeadm` that we can borrow from bootkube. It makes sense to move the utils from bootkube into core, as discussed in #31221.

**Special notes for your reviewer**: I've taken the opportunity to review names of existing funcs and tried to make some improvements in that area (with help from @peterbourgon).

**Release note**:

```release-note
NONE
```
2016-09-22 01:29:46 -07:00
Kubernetes Submit Queue
e115a4282d Merge pull request #33169 from deads2k/api-12-move-groups
Automatic merge from submit-queue

move registry packages for all API groups

This continues the pattern of `registry/<group>/resource` for our backing storage.  This entire pull is nothing but moves.  I'll reswizzle the actual storage next, but these are cargo-culted everywhere, so I want to lay this down early.

@sttts @ncdc
2016-09-22 00:51:59 -07:00
Antoine Pelisse
938872582e Revert "simplify RC and SVC listers" 2016-09-21 15:49:38 -07:00
deads2k
561f8d75a5 move core resource registry packages 2016-09-21 10:11:50 -04:00
Kubernetes Submit Queue
2d9d84dc64 Merge pull request #32888 from deads2k/client-10-fixup-remaining-listers
Automatic merge from submit-queue

simplify RC and SVC listers

Make the RC and SVC listers use the common list functions that more closely match client APIs, are consistent with other listers, and avoid unnecessary copies.
2016-09-21 04:13:56 -07:00
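For illustration, today's client-go exposes exactly this kind of common list function as `cache.ListAll`, so each typed lister reduces to a thin wrapper around the shared cache walk — a sketch of the shape, not the code from this PR:

```go
package listers

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/tools/cache"
)

// ListServices is a thin typed wrapper over the shared helper; every
// lister follows the same shape, which is what this family of PRs
// factors out of the per-type copies.
func ListServices(indexer cache.Indexer, selector labels.Selector) ([]*v1.Service, error) {
	var ret []*v1.Service
	err := cache.ListAll(indexer, selector, func(obj interface{}) {
		// Returning pointers from the cache avoids the deep copies the
		// old listers made on every call.
		ret = append(ret, obj.(*v1.Service))
	})
	return ret, err
}
```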
Kubernetes Submit Queue
02605106a6 Merge pull request #29505 from kargakis/debug-recreate-flake
Automatic merge from submit-queue

controller: enhance timeout error message for Recreate deployments

Makes the error message from https://github.com/kubernetes/kubernetes/issues/29197 more obvious

@kubernetes/deployment
2016-09-21 01:45:47 -07:00
Matt Liggett
ce0e7586a8 Only approve evictions when budgets would stay enforced after.
Prior to this, we would approve eviction as long as the current state of
the pods matched the budget.  The new version requires that after the
eviction, the pods would still match the budget.

Also update tests to match.
2016-09-20 18:23:50 -07:00
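The stricter rule can be sketched as follows (hypothetical helper; the real disruption controller works from the `PodDisruptionBudgetStatus` fields):

```go
package disruption

// canEvict sketches the stricter check: the eviction is approved only
// if the budget still holds *after* removing the pod, i.e. the
// remaining healthy pods still meet the desired minimum.
func canEvict(currentHealthy, desiredHealthy int32, podIsHealthy bool) bool {
	after := currentHealthy
	if podIsHealthy {
		after-- // the evicted pod would no longer count as healthy
	}
	return after >= desiredHealthy
}
```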
Kubernetes Submit Queue
ad7ba62b24 Merge pull request #32785 from m1093782566/m109-job-controller-hot-loop
Automatic merge from submit-queue

[Controller Manager] Fix job controller hot loop

**What this PR does / why we need it:**

Fix Job controller hotloop on unexpected API server rejections.

**Which issue this PR fixes**

Related issue is #30629

**Special notes for your reviewer:**

@deads2k @derekwaynecarr PTAL.
2016-09-20 13:52:45 -07:00
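The standard cure for controller hot loops, and the pattern this class of fixes applies, is to requeue failed keys through a rate-limited workqueue instead of re-adding them immediately. A generic sketch (not the exact Job controller code):

```go
package jobcontroller

import (
	"fmt"

	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/client-go/util/workqueue"
)

type jobController struct {
	queue       workqueue.RateLimitingInterface
	syncHandler func(key string) error
}

// processNextItem drains one key; on failure it backs off through the
// rate limiter instead of re-adding the key immediately (the hot loop).
func (c *jobController) processNextItem() bool {
	key, quit := c.queue.Get()
	if quit {
		return false
	}
	defer c.queue.Done(key)

	if err := c.syncHandler(key.(string)); err != nil {
		utilruntime.HandleError(fmt.Errorf("syncing job %q failed: %v", key, err))
		c.queue.AddRateLimited(key) // exponential backoff, not a hot loop
		return true
	}
	c.queue.Forget(key) // reset the backoff counter on success
	return true
}
```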
deads2k
b83a317003 switch controller manager to generated clientset 2016-09-20 12:53:47 -04:00
m1093782566
27cc90cebb fix job controller hot loop
Change-Id: I55ce706381f1494e5cd2064177b938f56d9c356a
2016-09-20 22:25:11 +08:00
Michail Kargakis
59da5385e0 controller: enhance timeout error message for Recreate deployments 2016-09-20 15:53:24 +02:00
deads2k
16fbb47189 fix up service lister 2016-09-20 08:24:33 -04:00
deads2k
185a7adf84 fix RC lister 2016-09-20 08:24:32 -04:00
Kubernetes Submit Queue
4a176600fc Merge pull request #32482 from m1093782566/m109-pet-set-fix-update-bug
Automatic merge from submit-queue

[Pet Set] Fix losing pet updated information between update retries

**What this PR does / why we need it**:

Address #32481

@bprashanth
2016-09-20 05:16:04 -07:00
d00369826
3de4695057 fix petset update(pet) retries bug
Change-Id: I92e2b653ab78fca72ae41cf87945d90fbbc67f44
2016-09-20 11:35:58 +08:00
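The general shape of the fix is to re-read the object between retries rather than resubmitting a stale copy, which would silently drop the changes made by an earlier attempt. A sketch using client-go's `retry.RetryOnConflict` — an assumption here, since the 2016 code predates that helper:

```go
package petset

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePet re-reads the latest pet on every attempt, so a conflict
// retry never resubmits a stale copy that loses the updated information.
func updatePet(ctx context.Context, c kubernetes.Interface, ns, name string, mutate func(*v1.Pod)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pet, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		mutate(pet) // re-apply the desired change to the fresh copy
		_, err = c.CoreV1().Pods(ns).Update(ctx, pet, metav1.UpdateOptions{})
		return err
	})
}
```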
Ilya Dmitrichenko
386fae4592
Refactor utils that deal with certs
- merge `pkg/util/{crypto,certificates}`
- add funcs from `github.com/kubernetes-incubator/bootkube/pkg/tlsutil`
- ensure naming of funcs is fairly consistent
2016-09-19 09:03:42 +01:00
Michail Kargakis
2fd3c490df controller: a couple of fixes for csr
Fixes:
* delete resource handler wasn't taking into account tombstones
* csr would requeue twice on update failure
2016-09-18 22:48:46 +02:00
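The tombstone fix follows the standard client-go delete-handler pattern: a deletion observed only via a relist arrives as a `cache.DeletedFinalStateUnknown` wrapper rather than as the object itself. A sketch (the controller wiring is hypothetical):

```go
package csr

import (
	"fmt"

	certv1 "k8s.io/api/certificates/v1"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/client-go/tools/cache"
)

type csrController struct {
	enqueue func(*certv1.CertificateSigningRequest)
}

// deleteCSR unwraps tombstones: without this, deletions observed via a
// relist would be silently dropped by the delete handler.
func (c *csrController) deleteCSR(obj interface{}) {
	csr, ok := obj.(*certv1.CertificateSigningRequest)
	if !ok {
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			utilruntime.HandleError(fmt.Errorf("unexpected object %T", obj))
			return
		}
		csr, ok = tombstone.Obj.(*certv1.CertificateSigningRequest)
		if !ok {
			utilruntime.HandleError(fmt.Errorf("tombstone contained unexpected object %T", tombstone.Obj))
			return
		}
	}
	c.enqueue(csr)
}
```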
Wojciech Tyczynski
27d75054b3 Avoid unnecessary API calls from GC 2016-09-18 07:09:11 +02:00
Kubernetes Submit Queue
920581d964 Merge pull request #32664 from m1093782566/m109-certificates-hot-loop
Automatic merge from submit-queue

[Controller Manager] Fix certificates controller hotloop and use utilruntime.HandleError to replace glog.Errorf

**What this PR does / why we need it**:

Fix certificates controller hotloop on unexpected API server rejections.

**Which issue this PR fixes** 

Related issue is #30629

**Special notes for your reviewer**:

@deads2k @derekwaynecarr PTAL.

I found there are no unit tests for the certificates controller; I will implement unit tests for it later.
2016-09-17 21:00:59 -07:00
Kubernetes Submit Queue
41fc0a4506 Merge pull request #32776 from m1093782566/m109-fix-endpoint-controller-hotloop
Automatic merge from submit-queue

[Controller Manager] Fix endpoint controller hot loop and use utilruntime.HandleError to replace glog.Errorf

**Why**:

Fix endpoint controller hot loop and use `utilruntime.HandleError` to replace `glog.Errorf`

**What**

1. Fix endpoint controller hot loop in `pkg/controller/endpoint`

2.  Fix endpoint controller hot loop in `contrib/mesos/pkg/service`

3. Sweep cases of `glog.Errorf` and use `utilruntime.HandleError` instead.

**Which issue this PR fixes**

Fixes #32843
Related issue is #30629 

**Special notes for your reviewer**:

@deads2k @derekwaynecarr 

The changes on `pkg/controller/endpoints_controller.go` and `contrib/mesos/pkg/service/endpoints_controller.go` are almost the same except `contrib/mesos/pkg/service/endpoints_controller.go` does not pass `podInformer` as the parameter of `NewEndpointController()`. 

So I didn't wait for `podStoreSynced` before `syncService()` (I just left it as it was). Will that lead to a problem?
2016-09-17 20:01:41 -07:00
Kubernetes Submit Queue
9fc8ebdafa Merge pull request #32891 from wojtek-t/route_controller_fix
Automatic merge from submit-queue

Don't update NodeNetworkUnavailable condition if it's already set correctly

Ref #32571
2016-09-17 00:39:34 -07:00
Kubernetes Submit Queue
cd051703b3 Merge pull request #32877 from deads2k/client-09-fixup-lister
Automatic merge from submit-queue

change factorization of listers to make them easier to add

`Listers` have a tremendous amount of duplicate code.  This factors that out.

@smarterclayton ptal.
2016-09-16 22:39:37 -07:00
Wojciech Tyczynski
d7d6249781 Don't update NodeNetworkUnavailable condition if it's already set correctly 2016-09-16 21:03:20 +02:00
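A sketch of the guard (the helper name is hypothetical): compare the current condition before writing, so an already-correct `NodeNetworkUnavailable` condition never triggers an API call.

```go
package routecontroller

import v1 "k8s.io/api/core/v1"

// networkConditionNeedsUpdate reports whether the NodeNetworkUnavailable
// condition actually has to be written; if it is already set correctly,
// the controller skips the update entirely.
func networkConditionNeedsUpdate(node *v1.Node, unavailable bool) bool {
	want := v1.ConditionFalse
	if unavailable {
		want = v1.ConditionTrue
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeNetworkUnavailable {
			return c.Status != want // only update on a real change
		}
	}
	return true // condition missing entirely: it must be set
}
```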
deads2k
1bf17eb4e9 change factorization of listers to make them easier to add 2016-09-16 14:49:00 -04:00
Kubernetes Submit Queue
2ca15b9f76 Merge pull request #32815 from deads2k/controller-02-daemonset-informer
Automatic merge from submit-queue

convert daemonset controller to shared informers

Convert the daemonset controller completely to `SharedInformers` for its list/watch resources.

@kubernetes/rh-cluster-infra @ncdc
2016-09-16 09:39:57 -07:00
deads2k
234d68be83 convert daemonset controller to shared informers 2016-09-16 10:40:46 -04:00
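For reference, the shared-informer shape such conversions move to looks roughly like this (modern client-go names with apps/v1 rather than the extensions group used in 2016; wiring simplified):

```go
package daemon

import (
	"k8s.io/client-go/informers"
	appslisters "k8s.io/client-go/listers/apps/v1"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

type dsController struct {
	queue    workqueue.RateLimitingInterface
	dsLister appslisters.DaemonSetLister
	dsSynced cache.InformerSynced
}

// newDSController registers handlers on informers shared with other
// controllers instead of running its own private list/watch.
func newDSController(f informers.SharedInformerFactory) *dsController {
	c := &dsController{
		queue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
	}
	dsInformer := f.Apps().V1().DaemonSets()
	dsInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    c.enqueue,
		UpdateFunc: func(old, cur interface{}) { c.enqueue(cur) },
		DeleteFunc: c.enqueue,
	})
	c.dsLister = dsInformer.Lister()
	c.dsSynced = dsInformer.Informer().HasSynced
	return c
}

func (c *dsController) enqueue(obj interface{}) {
	// DeletionHandling... also accepts tombstones from the delete path.
	if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
		c.queue.Add(key)
	}
}

// Run must not start workers until the shared caches have synced.
func (c *dsController) Run(stopCh <-chan struct{}) {
	if !cache.WaitForCacheSync(stopCh, c.dsSynced) {
		return // never sync against a cold cache
	}
	// ... start workers draining c.queue ...
}
```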
d00369826
a3888335f7 fix endpoint controller hot loop
Change-Id: I0f667006f310fdca6abe324f9ea03537679e9163
2016-09-16 21:41:22 +08:00
Kubernetes Submit Queue
e8fbcb1669 Merge pull request #32654 from soltysh/sj_clientset
Automatic merge from submit-queue

Switch ScheduledJob controller to use clientset

**What this PR does / why we need it**:
This is part of #25442. I've applied the same fix here that I applied in the manual client in #29187; see the 1st commit for that (@caesarxuchao, we've talked about it in #29856).

@deads2k as promised 
@janetkuo ptal
2016-09-16 05:03:57 -07:00
Kubernetes Submit Queue
0d9685b0b5 Merge pull request #32805 from caesarxuchao/more-gc-optimization
Automatic merge from submit-queue

Add the uid in a delete event to the absentOwnerCache

This is a small optimization to further reduce the traffic sent by the GC.

In #31167, the GC caches non-existent owners when it processes the dirtyQueue. As discovered in #32571, there is still a small inefficiency: because multiple goroutines process the dirtyQueue, many of them might send a GET to the apiserver before the cache gets populated.

This PR populates the cache when the GC observes an object being deleted, which happens before the processing of the dirtyQueue, so it avoids the simultaneous GETs sent by the GC workers.

cc @lavalamp
2016-09-16 00:40:24 -07:00
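A sketch of the optimization (types and method names hypothetical, standing in for the GC's absentOwnerCache): record the UID as absent at observation time, so workers consult the cache instead of all issuing the same GET.

```go
package garbagecollector

import (
	"sync"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// uidCache is a stand-in for the GC's absentOwnerCache.
type uidCache struct {
	mu   sync.Mutex
	uids map[types.UID]struct{}
}

func (c *uidCache) Add(uid types.UID) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.uids[uid] = struct{}{}
}

func (c *uidCache) Has(uid types.UID) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	_, ok := c.uids[uid]
	return ok
}

type garbageCollector struct {
	absentOwnerCache *uidCache
}

// observeDeletion runs when the GC sees a delete event, before any
// worker processes the dirtyQueue, so concurrent workers don't all GET
// the apiserver to rediscover that the owner is gone.
func (gc *garbageCollector) observeDeletion(uid types.UID) {
	gc.absentOwnerCache.Add(uid)
}

func (gc *garbageCollector) ownerExists(ref metav1.OwnerReference) bool {
	if gc.absentOwnerCache.Has(ref.UID) {
		return false // cached miss: no apiserver round trip needed
	}
	// ... fall back to a live GET, caching a 404 as absent ...
	return true
}
```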
Chao Xu
d122de5371 add the uid in a delete event to the absentOwnerCache 2016-09-15 13:53:47 -07:00
Mike Danese
a765d59932 move informer and controller to pkg/client/cache
Signed-off-by: Mike Danese <mikedanese@google.com>
2016-09-15 12:50:08 -07:00
Chao Xu
21896dac4b add the uid in a delete event to the absentOwnerCache 2016-09-15 11:22:22 -07:00
d00369826
fea0c79054 fix certificates controller hotloop on unexpected API server rejections
Change-Id: Ib7d2e18bcaa498bddfc785f3ff12958dfaaecbc3
2016-09-15 20:10:21 +08:00
Kubernetes Submit Queue
843d7cd24c Merge pull request #32576 from wongma7/revert-30825-pv-controller-informer
Automatic merge from submit-queue

Revert "Use PV shared informer in PV controller"

Fixes #32497 

Reverts kubernetes/kubernetes#30825
2016-09-15 04:37:29 -07:00
Kubernetes Submit Queue
dab16bf8fd Merge pull request #32565 from jsafrane/deleter-plugin
Automatic merge from submit-queue

Do not report warning event when an unknown deleter is requested

When Kubernetes does not have a plugin to delete a PV, it should wait for
either an external deleter or a storage admin to delete the volume instead
of throwing an error.

This is the same approach as in #32077

@kubernetes/sig-storage
2016-09-14 22:20:36 -07:00
Kubernetes Submit Queue
6b1565d275 Merge pull request #30678 from ping035627/ping035627-patch-0816
Automatic merge from submit-queue

Recombine the condition for the "shouldScale" function

This PR recombines the conditions for the `shouldScale` function, abstracting out the common condition (`hpa.Status.LastScaleTime == nil`).
2016-09-14 04:50:49 -07:00
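The recombined condition looks roughly like this (a sketch; the forbidden-window variables are assumptions standing in for the HPA controller's configuration):

```go
package podautoscaler

import (
	"time"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
)

// Assumed configuration knobs; the real controller passes these in.
var (
	upscaleForbiddenWindow   = 3 * time.Minute
	downscaleForbiddenWindow = 5 * time.Minute
)

// shouldScale hoists the shared guard (LastScaleTime == nil) out of the
// scale-up and scale-down branches instead of repeating it in both.
func shouldScale(hpa *autoscalingv1.HorizontalPodAutoscaler, currentReplicas, desiredReplicas int32, now time.Time) bool {
	if desiredReplicas == currentReplicas {
		return false
	}
	if hpa.Status.LastScaleTime == nil {
		return true // never scaled before, so no forbidden window applies
	}
	// Scale down only once the downscale window has passed.
	if desiredReplicas < currentReplicas &&
		hpa.Status.LastScaleTime.Add(downscaleForbiddenWindow).Before(now) {
		return true
	}
	// Scale up only once the upscale window has passed.
	if desiredReplicas > currentReplicas &&
		hpa.Status.LastScaleTime.Add(upscaleForbiddenWindow).Before(now) {
		return true
	}
	return false
}
```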
Dmitry Shulyak
0ddaa20bf1 Fix FakeNodeHandler Update behaviour
Two problems:
1. Get always reads from the Existing nodes slice, so you will certainly
   miss any updated data.
2. Each Update duplicates the node entry in the UpdatedNodes slice.

For the 1st, try to find the node in the UpdatedNodes slice first (same as
for List). For the 2nd, append only if there is no node with the same name
as the updated one; if there is, just replace the object.

Change-Id: I9ef1cca2788ba946eee37fa1b037c124ad76074c
2016-09-14 12:34:37 +03:00
Maciej Szulik
7a34347f7f Move ScheduledJob controller to use generated clientset 2016-09-14 11:27:29 +02:00
Matthew Wong
25e9b9dcf9 Revert "Use PV shared informer in PV controller" 2016-09-13 10:12:34 -04:00
deads2k
8fac64b43f add localSAR 2016-09-13 08:54:23 -04:00