Commit Graph

3498 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
4f0ec7b199
Merge pull request #61487 from gmarek/condition
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix condition for using network unavailable taint in cloud_cidr_allocator

Ref. #61481

The 'networkUnavailable' condition has, in a sense, reverse logic: we should be trying to allocate a CIDR when the condition is "true", i.e. when the taint exists (see the sketch after this entry).

```release-note
NONE
```

@shyamjvs @agabet @bowei
2018-03-21 13:40:33 -07:00
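To illustrate the reversed logic described in the entry above, here is a minimal, hypothetical Go sketch (not the actual cloud_cidr_allocator code; the type and helper names are invented): allocation should be attempted while the NetworkUnavailable condition is still true, i.e. while the taint is present, and skipped only once the condition clears.

```go
package main

import "fmt"

// Node is a simplified stand-in for the Kubernetes Node object,
// used only for this illustration.
type Node struct {
	Name               string
	NetworkUnavailable bool // status of the NetworkUnavailable condition
}

// shouldAllocateCIDR sketches the corrected condition: allocation proceeds
// while NetworkUnavailable is true (the taint is still present), because
// allocating the CIDR is what eventually clears that condition.
func shouldAllocateCIDR(n Node) bool {
	return n.NetworkUnavailable
}

func main() {
	for _, n := range []Node{
		{Name: "node-a", NetworkUnavailable: true},
		{Name: "node-b", NetworkUnavailable: false},
	} {
		fmt.Printf("%s: allocate CIDR? %v\n", n.Name, shouldAllocateCIDR(n))
	}
}
```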
Kubernetes Submit Queue
e40ffd7197
Merge pull request #59172 from fisherxu/removeyear
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove YEAR field of all generated files and fix kubernetes boilerplate checker

**What this PR does / why we need it**:
Remove YEAR field of all generated files and fix kubernetes boilerplate checker
xref: [remove YEAR fields in gengo #91](https://github.com/kubernetes/gengo/pull/91)

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes [#gengo/issues/24](https://github.com/kubernetes/gengo/issues/24)

**Special notes for your reviewer**:
/cc @thockin @lavalamp @sttts 

**Release note**:

```release-note
NONE
```
2018-03-21 12:44:37 -07:00
Marek Grabowski
f93598eea5 Fix condition for using network unavailable taint in cloud_cidr_allocator 2018-03-21 17:55:26 +00:00
Kubernetes Submit Queue
d9d1364657
Merge pull request #61124 from satyasm/avoid-sync-delay
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix Issue #61123, call syncer.Update on add event.

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #61123

**Special notes for your reviewer**:

**Release note**:

```release-note
Fixed #61123 by triggering syncer.Update in all cases, including when a syncer is created on a new add event.
```
2018-03-21 08:25:44 -07:00
Kubernetes Submit Queue
a7d788d91f
Merge pull request #60886 from mattjmcnaughton/mattjmcnaughton/59975-object-metrics-ignore-unready-pods
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Ignore unready pods when calculating desired replicas

**What this PR does / why we need it**:

This PR causes `GetExternalMetricReplicas` and `GetObjectMetricReplicas` to ignore unready pods when computing the number of desired replicas. If we don't ignore unready pods, there is a risk of overscaling. See the commit messages for examples and implementation info.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59975 

**Special notes for your reviewer**:
@MaciekPytel and I consciously chose to save `GetExternalPerPodMetricReplicas` for a separate PR, as we aren't yet certain what the preferred behavior is.

**Release note**:

```release-note
Unready pods will no longer impact the number of desired replicas when using horizontal auto-scaling with external metrics or object metrics.
```
2018-03-21 01:34:23 -07:00
Kubernetes Submit Queue
a64a11d7e4
Merge pull request #61411 from liggitt/remove-ds-scheduling
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

disable DaemonSet scheduling feature for 1.10

The DaemonSet scheduling feature has blocked the alpha CI job from going green and is preventing good CI signal for v1.10.

It still contains pod scheduling races (#61050) and fundamental issues with the affinity terms it creates (#61410)

As such, there is no significant value in having the feature available in 1.10 in its current state.

This PR disables the feature in order to regain green signal on the alpha CI job (reverting commits is likely to be more disruptive at this point)

related to https://github.com/kubernetes/kubernetes/issues/61050

```release-note
DaemonSet scheduling associated with the alpha ScheduleDaemonSetPods feature flag has been removed from the 1.10 release. See https://github.com/kubernetes/features/issues/548 for feature status.
```
2018-03-20 12:34:23 -07:00
Kubernetes Submit Queue
8c00efe653
Merge pull request #60831 from resouer/fix-race
Automatic merge from submit-queue (batch tested with PRs 60574, 60666, 60831, 60877, 60357). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix data race in node lifecycle controller

**What this PR does / why we need it**:
Encountered this bug during fixing: https://github.com/kubernetes/kubernetes/pull/60753

There's a data race for `zoneNoExecuteTainter`.

```
--- PASS: TestTaintNodeByCondition (5.72s)
PASS
==================
WARNING: DATA RACE
Write at 0x00c421a8d2f0 by goroutine 1472:
  runtime.mapassign_faststr()
      /usr/local/go/src/runtime/hashmap_fast.go:598 +0x0
  k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).addPodEvictorForNewZone()
      /root/code/kubernetes/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go:1053 +0x37d
  k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).monitorNodeStatus()
      

Previous read at 0x00c421a8d2f0 by goroutine 1471:
  runtime.mapiterinit()
      /usr/local/go/src/runtime/hashmap.go:709 +0x0
  k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).doNoExecuteTaintingPass()
      /root/code/kubernetes/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go:459 +0xec
  k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).(k8s.io/kubernetes/pkg/controller/nodelifecycle.doNoExecuteTaintingPass)-fm()
```

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Fix data race in node lifecycle controller
```
2018-03-20 08:34:40 -07:00
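The race detector output above shows a concurrent write (addPodEvictorForNewZone) and read (doNoExecuteTaintingPass) on the same map. A hedged sketch of the usual fix, with invented names rather than the actual controller code, is to guard the map with a mutex held by both paths:

```go
package main

import (
	"fmt"
	"sync"
)

// controller is a toy stand-in for the node lifecycle controller; the field
// names are illustrative only.
type controller struct {
	lock              sync.Mutex
	zoneTainterByZone map[string]string // zone -> tainter (simplified)
}

// addZone is the write path (analogous to adding a pod evictor for a new zone).
func (c *controller) addZone(zone string) {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.zoneTainterByZone[zone] = "tainter-for-" + zone
}

// taintingPass is the read path (analogous to doNoExecuteTaintingPass);
// it must take the same lock before ranging over the map.
func (c *controller) taintingPass() {
	c.lock.Lock()
	defer c.lock.Unlock()
	for zone := range c.zoneTainterByZone {
		_ = zone // process this zone's tainter here
	}
}

func main() {
	c := &controller{zoneTainterByZone: map[string]string{}}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(2)
		zone := fmt.Sprintf("zone-%d", i)
		go func() { defer wg.Done(); c.addZone(zone) }()
		go func() { defer wg.Done(); c.taintingPass() }()
	}
	wg.Wait()
	fmt.Println("zones:", len(c.zoneTainterByZone))
}
```

Run with `go run -race` and no data race is reported, since every access to the shared map happens under the lock.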
Jordan Liggitt
05e4ccecb1
disable DaemonSet scheduling feature for 1.10 2018-03-20 10:50:37 -04:00
Kubernetes Submit Queue
854002c3e1
Merge pull request #59208 from augabet/CIDR_Taints
Automatic merge from submit-queue (batch tested with PRs 60363, 59208, 59465, 60581, 60702). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

check taints when allocating CIDR for the cloud

check taint when allocating CIDR for the Cloud (for the shared informer cache).

What this PR does / why we need it:
Following issue #58406, here is a check of the taint when allocating a CIDR for the cloud.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #58406

Special notes for your reviewer:
/assign @yastij @gmarek

```release-note
None
```
2018-03-20 02:37:16 -07:00
Kubernetes Submit Queue
d60394eebd
Merge pull request #59542 from dmathieu/remove-cronjob-label-todo
Automatic merge from submit-queue (batch tested with PRs 60189, 59542, 59931, 60621, 60353). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove todo to consider adding the cronjob name as a label

**What this PR does / why we need it**:

It seems this label shouldn't be added automatically. If so, we should remove the comment.
See #59473

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #41633

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-03-20 00:42:09 -07:00
Kubernetes Submit Queue
1be8e8bb59
Merge pull request #60562 from Pingan2017/cleanupnode
Automatic merge from submit-queue (batch tested with PRs 60457, 60331, 54970, 58731, 60562). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

clean up unused const in node_lifecycle_controller.go

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-03-19 23:42:22 -07:00
Kubernetes Submit Queue
c64f19dd1b
Merge pull request #59728 from wgliang/master.append
Automatic merge from submit-queue (batch tested with PRs 59740, 59728, 60080, 60086, 58714). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Merge the slice more concisely

**What this PR does / why we need it**:
Merges the slice more concisely.

**Special notes for your reviewer**:
2018-03-19 21:34:30 -07:00
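The entry above is about merging slices more concisely. As a generic, hedged illustration (this is not the code the PR actually touched), a manual element-by-element copy can usually be collapsed into a single `append` with the `...` spread:

```go
package main

import "fmt"

func main() {
	a := []string{"node-a", "node-b"}
	b := []string{"node-c"}

	// Verbose form: loop and append one element at a time.
	merged := make([]string, 0, len(a)+len(b))
	for _, s := range a {
		merged = append(merged, s)
	}
	for _, s := range b {
		merged = append(merged, s)
	}

	// Concise form: append the whole second slice in one call.
	concise := append(append([]string{}, a...), b...)

	fmt.Println(merged, concise)
}
```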
Damien Mathieu
e8efc51c1c remove todo suggesting to add the cronjob start time 2018-03-19 19:22:14 +01:00
Damien Mathieu
c669ce440c remove todo to consider adding the cronjob name as a label
See #59473
2018-03-19 19:22:14 +01:00
Kubernetes Submit Queue
f8f67da082
Merge pull request #61201 from jennybuckley/fix-gc-empty-map
Automatic merge from submit-queue (batch tested with PRs 61284, 61119, 61201). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Prevent garbage collector from attempting to sync with 0 resources

**What this PR does / why we need it**:
As of #55259 we enabled garbagecollector.GetDeletableResources to return partial discovery results (including an empty set of discovery results).
This had the unintended consequence of allowing the garbage collector to enter a blocked state that can only be fixed by restarting.

From [this comment](https://github.com/kubernetes/kubernetes/issues/60037#issuecomment-372801088):

> 1. The `Sync` function periodically calls `GetDeletableResources`.
>
> 2. According to the comment above `GetDeletableResources` ("All discovery errors are considered temporary. Upon encountering any error, GetDeletableResources will log and return any discovered resources it was able to process (which may be none)."), an error in discovery causes the discovery client to no longer discover resources in the cluster; but instead of failing and returning an error, it simply logs the error as `garbagecollector.go:601] failed to discover preferred resources: %v` (the server was unable to return a response in the time allotted, but may still be processing the request) and returns an empty list of resources.
>
> 3. The `Sync` function, upon receiving an empty resource list from discovery, detects that the resources have changed and calls `resyncMonitors`, which calls `dependencyGraphBuilder.syncMonitors` with `map[]` as the argument (as shown in the log line `garbagecollector.go:189] syncing garbage collector with updated resources from discovery: map[]`), which sets the list of monitors to an empty list because it thinks there are no resources to monitor.
>
> 4. Lastly, the `Sync` function calls `controller.WaitForCacheSync`, which calls `cache.WaitForCacheSync`, which will continually retry the `garbagecollector.IsSynced` function until it returns true; but it will always return false because `len(gb.monitors)` is 0.

This PR prevents that specific race condition from arising.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60037

**Release note**:
```release-note
Fix bug allowing garbage collector to enter a broken state that could only be fixed by restarting the controller-manager.
```
2018-03-16 16:56:03 -07:00
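A hedged sketch of the kind of guard this PR describes (the names are invented and this is not the real garbage collector API): when discovery returns no deletable resources, skip the resync instead of replacing the monitor list with an empty one.

```go
package main

import (
	"fmt"
	"log"
)

// resource is a simplified stand-in for a discovered GroupVersionResource.
type resource string

// gc is a toy garbage collector holding one monitor per discovered resource.
type gc struct {
	monitors map[resource]bool
}

// resync replaces the monitor set, but refuses to do so when discovery came
// back empty: resyncing to zero resources would leave the cache-sync wait
// spinning forever, which is the blocked state described above.
func (g *gc) resync(discovered map[resource]bool) {
	if len(discovered) == 0 {
		log.Println("discovery returned no resources; keeping existing monitors")
		return
	}
	g.monitors = discovered
}

func main() {
	g := &gc{monitors: map[resource]bool{"pods": true, "deployments": true}}

	// A transient discovery failure yields an empty result; the guard keeps
	// the previous monitors so the collector can sync again later.
	g.resync(map[resource]bool{})
	fmt.Println("monitors after empty discovery:", len(g.monitors))

	g.resync(map[resource]bool{"pods": true})
	fmt.Println("monitors after recovery:", len(g.monitors))
}
```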
jennybuckley
455c6fb049 Prevent garbage collector from attempting to sync with 0 resources 2018-03-16 11:44:09 -07:00
jennybuckley
68e2a96016 Add unit test TestGarbageCollectorSync 2018-03-16 11:28:58 -07:00
Kubernetes Submit Queue
ca02c11887
Merge pull request #61161 from k82cn/k8s_59194_4
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Added unschedulable taint

Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
part of #59194; fixes #61050

**Release note**:

```release-note
When `TaintNodesByCondition` is enabled, the `node.kubernetes.io/unschedulable:NoSchedule` taint is added to a node if `spec.Unschedulable` is true.

When `ScheduleDaemonSetPods` is enabled, the `node.kubernetes.io/unschedulable:NoSchedule` toleration is added automatically to DaemonSet Pods, so the `unschedulable` field of a node is not respected by the DaemonSet controller.
```
2018-03-16 11:22:05 -07:00
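As a rough illustration of the release note above, here is a self-contained sketch (the structs below are trimmed-down local copies, not the actual `k8s.io/api/core/v1` types or the controller's code) of the taint a node would carry when `spec.unschedulable` is true and the matching toleration a DaemonSet Pod would receive:

```go
package main

import "fmt"

// Taint and Toleration are simplified local shapes, defined here only so the
// example is self-contained.
type Taint struct {
	Key    string
	Effect string
}

type Toleration struct {
	Key      string
	Operator string
	Effect   string
}

func main() {
	// Taint added to a node whose spec.unschedulable is true
	// (per the TaintNodesByCondition behavior described above).
	unschedulableTaint := Taint{
		Key:    "node.kubernetes.io/unschedulable",
		Effect: "NoSchedule",
	}

	// Toleration added automatically to DaemonSet Pods when
	// ScheduleDaemonSetPods is enabled, so they still land on such nodes.
	dsToleration := Toleration{
		Key:      "node.kubernetes.io/unschedulable",
		Operator: "Exists",
		Effect:   "NoSchedule",
	}

	fmt.Printf("node taint: %+v\n", unschedulableTaint)
	fmt.Printf("daemonset toleration: %+v\n", dsToleration)
}
```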
Kubernetes Submit Queue
5d67222592
Merge pull request #60985 from soltysh/issue59918
Automatic merge from submit-queue (batch tested with PRs 60978, 60985). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Backoff only when failed pod shows up

**What this PR does / why we need it**:
Upon introducing the backoff policy we started delaying sync runs for a job that had failed several times before. This leads to failed jobs not reporting status right away in cases that are not related to failed pods, e.g. a successful run. This PR ensures the backoff is applied only when `updatePod` receives a failed pod.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59918 #59527

/assign @janetkuo @kow3ns 

**Release note**:
```release-note
None
```
2018-03-15 22:55:02 -07:00
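A minimal sketch of the behavior described above, with invented names rather than the job controller's real code: in the pod-update handler, the job is re-queued with backoff only when the updated pod has actually failed; otherwise it is enqueued immediately, so successful runs report status right away.

```go
package main

import (
	"fmt"
	"time"
)

type podPhase string

const (
	podFailed    podPhase = "Failed"
	podSucceeded podPhase = "Succeeded"
)

// queue is a toy work queue that records how each job key was enqueued.
type queue struct{}

func (q *queue) Add(key string) {
	fmt.Printf("enqueued %s immediately\n", key)
}

func (q *queue) AddAfter(key string, delay time.Duration) {
	fmt.Printf("enqueued %s after backoff of %s\n", key, delay)
}

// updatePod applies backoff only when the observed pod has failed.
func updatePod(q *queue, jobKey string, phase podPhase, backoff time.Duration) {
	if phase == podFailed {
		q.AddAfter(jobKey, backoff)
		return
	}
	q.Add(jobKey)
}

func main() {
	q := &queue{}
	updatePod(q, "default/my-job", podSucceeded, 10*time.Second)
	updatePod(q, "default/my-job", podFailed, 10*time.Second)
}
```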
Da K. Ma
b23db30765 Added unschedulable taint.
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
2018-03-16 09:13:08 +08:00
Kubernetes Submit Queue
02611149c1
Merge pull request #60579 from gmarek/ss_logs
Automatic merge from submit-queue (batch tested with PRs 61118, 60579). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Increase logging verbosity for deleting stateful set pods

We should always log reasons for deleting StatefulSet Pods.
@jdumars - what's the current process for putting such changes into the release? It's literally a 0-risk change that helps with debugging.

cc @ttz21

```release-note
NONE
```
2018-03-14 09:49:52 -07:00
Maciej Szulik
1266252dc2
Backoff only when failed pod shows up 2018-03-14 11:49:13 +01:00
mattjmcnaughton
d33494d459 GetExternalMetricReplicas ignores unready pods
Similar to the change we made for `GetObjectMetricReplicas` in the
previous commit. Ensure that `GetExternalMetricReplicas` does not
include unready pods when it's determining how many replicas it desires.
Including unready pods can lead to over-scaling.

We did not change the behavior of `GetExternalPerPodMetricReplicas`, as
it is slightly less clear what is the desired behavior. We did make some
small naming refactorings to this method, which will make it easier to
ignore unready pods if we decide we want to.
2018-03-13 22:27:28 -04:00
Satyadeep Musuvathy
4b2de75679 Fix Issue #61123, call syncer.Update on add event. 2018-03-13 11:20:50 -07:00
Kubernetes Submit Queue
f7aafaeb40
Merge pull request #59862 from k82cn/k8s_59194_3
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Task 2: Schedule DaemonSet Pods by default scheduler.

Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>


**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
part of #59194
https://github.com/kubernetes/features/issues/548

**Release note**:

```release-note
When ScheduleDaemonSetPods is enabled, the DaemonSet controller will delegate Pod scheduling to the default scheduler.
```
2018-03-11 06:19:27 -07:00
Shyam Jeedigunta
8ff1f05f7c Increase verbosity of frequently printed logline in scheduler_binder 2018-03-08 19:25:01 +01:00
fisherxu
b49ef6531c regenerated all files and remove all YEAR fields 2018-03-08 17:52:48 +08:00
Da K. Ma
5adb2bad45 Task 2: Schedule DaemonSet Pods by default scheduler.
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
2018-03-08 17:36:49 +08:00
mattjmcnaughton
7e3bce7b3e GetObjectMetricReplicas ignores unready pods
Previously, when `GetObjectMetricReplicas` calculated the desired
replica count, it multiplied the usage ratio by the current number of replicas.
This method caused over-scaling when there were pods that were not ready
for a long period of time. For example, if there were pods A, B, and C,
and only pod A was ready, and the usage ratio was 500%, we would
previously specify 15 pods as the desired replicas (even though really
only one pod was handling the load).

After this change, we now multiply the usage
ratio by the number of ready pods for `GetObjectMetricReplicas`.
In the example above, we'd only desire 5 replica pods.

This change gives `GetObjectMetricReplicas` the same behavior as the
other replica calculator methods. Only `GetExternalMetricReplicas` and
`GetExternalPerPodMetricReplicas` still allow unready pods to impact the
number of desired replicas. I will fix this issue in the following
commit.
2018-03-07 08:13:01 -05:00
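Using the numbers from the commit message above, a hedged sketch of the corrected arithmetic (simplified; the real replica calculator handles many more cases): the usage ratio is multiplied by the number of ready pods rather than by all current pods, so 500% utilization with one ready pod out of three yields 5 desired replicas instead of 15.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas multiplies the usage ratio by a pod count and rounds up,
// mirroring the general shape of the HPA replica calculation.
func desiredReplicas(usageRatio float64, podCount int) int {
	return int(math.Ceil(usageRatio * float64(podCount)))
}

func main() {
	usageRatio := 5.0 // 500% of the target usage
	currentPods := 3  // pods A, B, and C
	readyPods := 1    // only pod A is ready

	// Old behavior: scale against every current pod, ready or not.
	fmt.Println("old desired replicas:", desiredReplicas(usageRatio, currentPods)) // 15

	// New behavior: scale against ready pods only.
	fmt.Println("new desired replicas:", desiredReplicas(usageRatio, readyPods)) // 5
}
```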
Harry Zhang
da29bd2cbe Fix data race in node lifecycle controller 2018-03-06 00:18:11 -08:00
Kubernetes Submit Queue
30eb1aa7c5
Merge pull request #60648 from bskiba/hpa-unready
Automatic merge from submit-queue (batch tested with PRs 60732, 60689, 60648, 60704). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Do not count failed pods as unready in HPA controller

**What this PR does / why we need it**:
Currently, when performing a scale up, any failed pods (which can be present for example in case of evictions performed by kubelet) will be treated as unready. Unready pods are treated as if they had 0% utilization which will slow down or even block scale up.

After this change, failed pods are ignored in all calculations. This way they influence neither scale-up nor scale-down replica calculations.

@MaciekPytel @DirectXMan12 

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55630

**Special notes for your reviewer**:

**Release note**:
```
Stop counting failed pods as unready in HPA controller to avoid failed pods incorrectly affecting scale up replica count calculation.
```
2018-03-02 14:25:54 -08:00
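A hedged sketch of the grouping change this PR describes (an invented helper, not the HPA controller's real pod-grouping logic): failed pods are dropped from the calculation entirely, instead of being lumped in with unready pods that count as 0% utilization.

```go
package main

import "fmt"

type pod struct {
	Name   string
	Failed bool
	Ready  bool
}

// groupPods splits pods into those used for the utilization calculation and
// those treated as unready; failed pods are ignored entirely.
func groupPods(pods []pod) (counted, unready, ignored []string) {
	for _, p := range pods {
		switch {
		case p.Failed:
			ignored = append(ignored, p.Name) // no longer drags the average down
		case !p.Ready:
			unready = append(unready, p.Name) // still treated as 0% utilization
		default:
			counted = append(counted, p.Name)
		}
	}
	return counted, unready, ignored
}

func main() {
	pods := []pod{
		{Name: "web-1", Ready: true},
		{Name: "web-2", Ready: false},
		{Name: "web-3", Failed: true}, // e.g. evicted by the kubelet
	}
	counted, unready, ignored := groupPods(pods)
	fmt.Println("counted:", counted, "unready:", unready, "ignored:", ignored)
}
```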
Kubernetes Submit Queue
ae1fc13aee
Merge pull request #60386 from k82cn/k8s_60163
Automatic merge from submit-queue (batch tested with PRs 60683, 60386). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Added unschedulable predicate.

Signed-off-by: Da K. Ma <madaxa@cn.ibm.com>

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60163

**Release note**:
```release-note
None
```
2018-03-02 03:41:50 -08:00
Beata Skiba
e5f8bfa023 Do not count failed pods as unready in HPA controller
Currently, when performing a scale up, any failed pods (which can be present for example in case of evictions performed by kubelet) will be treated as unready. Unready pods are treated as if they had 0% utilization which will slow down or even block scale up.

After this change, failed pods are ignored in all calculations. This way they influence neither scale-up nor scale-down replica calculations.
2018-03-01 16:21:02 +01:00
Marek Grabowski
b27157a271 Increase logging verbosity for deleting stateful set pods 2018-03-01 09:33:18 +00:00
Kubernetes Submit Queue
07240b7166
Merge pull request #60555 from zhangxiaoyu-zidif/add-unit-test-for-nodenames-slice-comparison
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

add unit test case for nodenames comparison

**What this PR does / why we need it**:
ref https://github.com/kubernetes/kubernetes/pull/60486

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:
please merge it after https://github.com/kubernetes/kubernetes/pull/60486

**Release note**:

```release-note
NONE
```
2018-02-28 10:39:18 -08:00
Kubernetes Submit Queue
c4f3102b1f
Merge pull request #60486 from zhangxiaoyu-zidif/fix-nodename-slice-cmp
Automatic merge from submit-queue (batch tested with PRs 60342, 60505, 59218, 52900, 60486). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

fix nodenames slices comparison para.

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-02-28 06:07:34 -08:00
Da K. Ma
f94b7eda83 Added unschedulable node UT for DaemonSet.
Signed-off-by: Da K. Ma <madaxa@cn.ibm.com>
2018-02-28 16:11:01 +08:00
Kubernetes Submit Queue
aa13f3fa2a
Merge pull request #59289 from rmmh/semantic-check
Automatic merge from submit-queue (batch tested with PRs 53689, 56880, 55856, 59289, 60249). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

 Add test/typecheck, a fast typecheck for all build platforms.

Add test/typecheck, a fast typecheck for all build platforms.

Most of the time spent compiling is spent optimizing and linking
binary code. Most errors occur at the syntax or semantic (type) layers.
Go's compiler is importable as a normal package, so we can do fast
syntax and type checking for the 10 platforms we build on.

This currently takes ~6 minutes of CPU time (parallelized).

This makes presubmit cross builds superfluous, since it should catch
most cross-build breaks (generally Unix and 64-bit assumptions).

Example output:

```
$ time go run test/typecheck/main.go
type-checking:  linux/amd64, windows/386, darwin/amd64, linux/arm, 
    linux/386, windows/amd64, linux/arm64, linux/ppc64le, linux/s390x, darwin/386
ERROR(windows/amd64) pkg/proxy/ipvs/proxier.go:1708:27: ENXIO not declared by package unix
ERROR(windows/386) pkg/proxy/ipvs/proxier.go:1708:27: ENXIO not declared by package unix

real    0m45.083s
user    6m15.504s
sys     1m14.000s
```


```release-note
NONE
```
2018-02-28 00:00:36 -08:00
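The approach above leans on the fact that Go's parser and type checker are importable packages. A minimal, self-contained sketch follows (this is not the actual test/typecheck tool; it checks a single in-memory file for the host platform rather than all ten build targets):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
)

const src = `package demo

import "os"

func hello() string {
	_ = os.Getenv("HOME")
	return 42 // type error: an int cannot be returned as a string
}
`

func main() {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		fmt.Println("syntax error:", err)
		return
	}

	conf := types.Config{
		Importer: importer.Default(),
		// Collect every error instead of stopping at the first one, similar
		// to how a batch checker would report errors per platform.
		Error: func(err error) { fmt.Println("type error:", err) },
	}
	_, _ = conf.Check("demo", fset, []*ast.File{file}, nil)
}
```

This catches syntax and type errors without ever invoking the optimizer or linker, which is where most compile time goes.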
Pingan2017
822d21f88a clean up unused const in node_lifecycle_controller.go 2018-02-28 15:34:47 +08:00
zhangxiaoyu-zidif
a0786a2df5 add unit test case for nodenames comparison 2018-02-28 14:02:09 +08:00
Mike Danese
024f57affe implement token authenticator for new id tokens 2018-02-27 17:20:46 -08:00
Mike Danese
1fbf8b8f2a svcacct: move getters to use an external clientset 2018-02-27 17:20:46 -08:00
Ryan Hitchman
e04b91facf Remove unused variables (only assigned to) from test code.
This is revealed by the go/types package, which is stricter than
the Go compiler about unused variables. See also: golang/go#8560
2018-02-27 13:45:31 -08:00
Kubernetes Submit Queue
249ecab74e
Merge pull request #59365 from ayushpateria/patch-sts
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix StatefulSet set-based selector bug

**What this PR does / why we need it**:
ControllerRevisions were using selectors as the labels; in the case of set-based selectors, the helper function that converts selectors to labels would break. This PR uses pod labels for ControllerRevision labels instead of selectors.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59266

**Special notes for your reviewer**:
I'm trying to learn the Kubernetes codebase and would be happy to make changes if anything is off.
**Release note**:

```release-note
Fix StatefulSet to work with set-based selectors.
```
2018-02-27 10:21:00 -08:00
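To illustrate why a set-based selector cannot serve as a label map, here is a hedged, self-contained sketch (not the StatefulSet controller's actual helper; the types below are simplified stand-ins): equality-based requirements map cleanly to key=value labels, but `In`/`NotIn`/`Exists` requirements have no single-value form, so the conversion must fail, which is why the fix labels ControllerRevisions with the Pod labels instead.

```go
package main

import (
	"errors"
	"fmt"
)

// requirement is a trimmed-down stand-in for a label selector requirement.
type requirement struct {
	Key      string
	Operator string // "Equals", "In", "Exists", ...
	Values   []string
}

// selectorToLabels mirrors the general shape of a selector-to-labels helper:
// it only succeeds when every requirement is an equality with exactly one value.
func selectorToLabels(reqs []requirement) (map[string]string, error) {
	labels := map[string]string{}
	for _, r := range reqs {
		if r.Operator != "Equals" || len(r.Values) != 1 {
			return nil, errors.New("selector cannot be converted to labels: " + r.Key)
		}
		labels[r.Key] = r.Values[0]
	}
	return labels, nil
}

func main() {
	equality := []requirement{{Key: "app", Operator: "Equals", Values: []string{"web"}}}
	setBased := []requirement{{Key: "tier", Operator: "In", Values: []string{"frontend", "canary"}}}

	for _, reqs := range [][]requirement{equality, setBased} {
		labels, err := selectorToLabels(reqs)
		fmt.Println(labels, err)
	}
}
```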
Kubernetes Submit Queue
a2ddca76d2
Merge pull request #60243 from MaciekPytel/hpa_api_ext_imp
Automatic merge from submit-queue (batch tested with PRs 60433, 59982, 59128, 60243, 60440). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Implement external metric in HPA

This implements the changes to the HPA introduced in https://github.com/kubernetes/kubernetes/pull/60096
2018-02-27 08:25:47 -08:00
Aleksandra Malinowska
e58411c600 Implement external metrics in HPA 2018-02-27 14:10:29 +01:00
Maciej Pytel
66f4f9080d Add external metrics client to HPA rest client 2018-02-27 14:10:29 +01:00
wackxu
b3ba80b223 update bazel 2018-02-27 20:23:36 +08:00
wackxu
f737ad62ed update import 2018-02-27 20:23:35 +08:00
zhangxiaoyu-zidif
44aeb56eab fix nodenames slices comparison para. 2018-02-27 15:54:45 +08:00