Automatic merge from submit-queue (batch tested with PRs 52452, 52115, 52260, 52290)
Fixes device plugin re-registration handling logic to make sure:
- If a device plugin exits, its exported resource will be removed.
- No capacity change if a new device plugin instance comes up to replace the old instance.
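A minimal sketch of these two rules, with hypothetical names (not the actual device plugin manager code): the endpoint map is keyed by resource name, so a new plugin instance replaces the old one in place, and a resource is removed only when its plugin dies without a replacement.
```go
import "sync"

type endpoint struct{ /* gRPC connection to one plugin instance */ }

func (e *endpoint) stop() { /* close the stream */ }

type manager struct {
	mu        sync.Mutex
	endpoints map[string]*endpoint // keyed by resource name
}

// registerEndpoint replaces any previous plugin instance in place, so the
// advertised capacity for the resource never dips during re-registration.
func (m *manager) registerEndpoint(resourceName string, e *endpoint) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if old, ok := m.endpoints[resourceName]; ok {
		old.stop()
	}
	m.endpoints[resourceName] = e
}

// endpointStopped removes the exported resource only if the dying endpoint
// is still the registered one; if a new instance has already re-registered,
// capacity is left untouched.
func (m *manager) endpointStopped(resourceName string, e *endpoint) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if cur, ok := m.endpoints[resourceName]; ok && cur == e {
		delete(m.endpoints, resourceName)
	}
}
```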
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/52510
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 52452, 52115, 52260, 52290)
fix azure disk mounter issue
**What this PR does / why we need it**:
Fixes an Azure disk mounter issue. It is a P1 bug that exists in the 1.7 and 1.8 releases and should be cherry-picked to both.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52261
Consider the following scenario:
1) A pod mounts an azure disk on a k8s agent node.
2) The kubelet on that agent node is restarted.
3) The pod cannot start up; it always reports errors like the following:
```
4d 1m 3065 kubelet, 14777acs9000 Warning FailedMount MountVolume.SetUp failed for volume "pvc-7a0cdeb9-92c7-11e7-b86b-000d3a36d70c" : azureDisk - Not a mounting point for disk andykubewin175-dynamic-pvc-7a0cdeb9-92c7-11e7-b86b-000d3a36d70c on \var\lib\kubelet\pods\d146c023-92c7-11e7-b86b-000d3a36d70c\volumes\kubernetes.io~azure-disk\pvc-7a0cdeb9-92c7-11e7-b86b-000d3a36d70c
4d 1m 3157 kubelet, 14777acs9000 Warning FailedMount Error syncing pod
```
**Special notes for your reviewer**:
If you take a look at the corresponding implementations for vsphere and gce, they return nil instead of an error:
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/vsphere_volume/vsphere_volume.go#L217-L220
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/gce_pd/gce_pd.go#L273-L275
Given how the returned info is parsed here, it is wrong to return an error:
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/util/operationexecutor/operation_generator.go#L469-L475
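A rough sketch of that pattern (hedged; `cleanupMountPoint` is a hypothetical helper built on the real `mount.Interface` from pkg/util/mount): an already-absent mount point is treated as success instead of an error, so a kubelet restart does not wedge the pod on FailedMount.
```go
import (
	"os"

	"k8s.io/kubernetes/pkg/util/mount"
)

// cleanupMountPoint returns nil when there is nothing mounted at dir,
// mirroring the vsphere/gce behavior linked above.
func cleanupMountPoint(mounter mount.Interface, dir string) error {
	notMnt, err := mounter.IsLikelyNotMountPoint(dir)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // directory is gone; nothing to unmount
		}
		return err
	}
	if notMnt {
		// Nothing is mounted here (e.g. state lost across a restart);
		// return nil so the operation executor can proceed.
		return nil
	}
	return mounter.Unmount(dir)
}
```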
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 52452, 52115, 52260, 52290)
Fix support for updating quota on update
This PR implements support for properly handling quota when resources are updated. When adding up usage, we never take negative values into account.
Fixes https://github.com/kubernetes/kubernetes/issues/51736
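A hedged sketch of that rule (hypothetical helper, not the actual quota evaluator code):
```go
import "k8s.io/apimachinery/pkg/api/resource"

// positiveDelta returns newUsage - oldUsage clamped at zero, so an update
// that shrinks a resource never charges a negative value to quota.
func positiveDelta(oldUsage, newUsage resource.Quantity) resource.Quantity {
	delta := newUsage.DeepCopy()
	delta.Sub(oldUsage)
	if delta.Sign() < 0 {
		return *resource.NewQuantity(0, delta.Format)
	}
	return delta
}
```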
cc @derekwaynecarr
/sig storage
```release-note
Make sure that resources being updated are handled correctly by Quota system
```
Automatic merge from submit-queue (batch tested with PRs 51824, 50476, 52451, 52009, 52237)
fix issue (#47976): Invalid value error when creating service from exported config
**What this PR does / why we need it**:
close issue #47976
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51824, 50476, 52451, 52009, 52237)
Plumbing the proxy dialer to the webhook admission plugin
* Fixes https://github.com/kubernetes/kubernetes/issues/49987: plumbs the `Dial` function through to the `transport.Config`.
* Fixes https://github.com/kubernetes/kubernetes/issues/52366: lets the webhook admission plugin set the `TLSConfig.ServerName`.
I tested it in my GKE setup. I don't have time to implement an e2e test before the 1.8 release. I think it's OK to add the test later, because *i)* the change only affects the alpha webhook admission feature, and *ii)* the webhook feature is unusable without the fix. That said, it's up to my reviewer to decide.
Filed https://github.com/kubernetes/kubernetes/issues/52368 for the missing e2e test.
(The second commit is https://github.com/kubernetes/kubernetes/pull/52372, which is just a cleanup of client configuration in e2e tests. It removed a function that marshalled the client config to JSON and then unmarshalled it. It is a prerequisite of this PR, because this PR adds the `Dial` function to the config, which is not JSON-marshallable.)
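A hedged sketch of the plumbing, using client-go's `transport.Config` fields of that era (illustrative values; `proxyDialer` stands in for the apiserver's tunnel dialer):
```go
import (
	"net"

	"k8s.io/client-go/transport"
)

// buildWebhookTransportConfig shows the two pieces: the proxy dialer is
// plumbed into the config, and ServerName is pinned to the service DNS name
// so TLS verification succeeds even when dialing through the tunnel.
func buildWebhookTransportConfig(proxyDialer func(network, addr string) (net.Conn, error), caBundle []byte) *transport.Config {
	return &transport.Config{
		Dial: proxyDialer, // e.g. the SSH tunnel dialer on GKE masters
		TLS: transport.TLSConfig{
			CAData:     caBundle,
			ServerName: "my-webhook.my-namespace.svc", // <service.Name>.<service.Namespace>.svc
		},
	}
}
```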
```release-note
Fixed the webhook admission plugin so that it works even if the apiserver and the nodes are in two networks (e.g., in GKE).
Fixed the webhook admission plugin so that webhook authors can use the DNS name of the service as the CommonName when generating the server cert for the webhook.
Action required:
Anyone who generated a server cert for an admission webhook needs to regenerate it. Previously, when generating the server cert for the admission webhook, the CN value didn't matter. Now you must set it to the DNS name of the webhook service, i.e., `<service.Name>.<service.Namespace>.svc`.
```
Automatic merge from submit-queue (batch tested with PRs 52442, 52247, 46542, 52363, 51781)
Make CPU manager release CPUs when Pod enters completed phase.
**What this PR does / why we need it**: When the CPU manager is enabled, this PR releases allocated CPUs when a container is not running and is non-restartable (see the sketch after the repro output below).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52351
**Special notes for your reviewer**:
This bug is only reproduced for pods with `restartPolicy` = `Never` or `OnFailure`. The following output is from a 4 CPU node. This bug can be reproduced as long as >= half of the cores are requested.
pod1.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod1
spec:
  containers:
  - image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "sleep 5"]
    name: test-container1
    resources:
      requests:
        cpu: 2
        memory: 100Mi
      limits:
        cpu: 2
        memory: 100Mi
  restartPolicy: "Never"
```
pod2.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod2
spec:
  containers:
  - image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "sleep 5"]
    name: test-container1
    resources:
      requests:
        cpu: 2
        memory: 100Mi
      limits:
        cpu: 2
        memory: 100Mi
  restartPolicy: "Never"
```
Run a local Kubernetes cluster with CPU manager enabled.
```sh
KUBELET_FLAGS='--feature-gates=CPUManager=true --cpu-manager-policy=static --cpu-manager-reconcile-period=1s --kube-reserved=cpu=500m' ./hack/local-up-cluster.sh
```
_Before:_
Create `test-pod1` using pod1.yaml.
```
./cluster/kubectl.sh create -f pod1.yaml
```
Wait for the pod to complete, then wait another 90 seconds (to give GC enough time to kick in).
Create `test-pod2` using pod2.yaml.
```
./cluster/kubectl.sh create -f pod2.yaml
```
Get all pods in the cluster.
```
./cluster/kubectl.sh get pods -a
NAME        READY     STATUS                                         RESTARTS   AGE
test-pod1   0/1       Completed                                      0          1m
test-pod2   0/1       not enough cpus available to satisfy request   0          9s
```
_After:_
Create `test-pod1` using pod1.yaml.
```
./cluster/kubectl.sh create -f pod1.yaml
```
Wait for the pod to complete, then wait another 90 seconds (to give GC enough time to kick in).
Create `test-pod2` using pod2.yaml.
```
./cluster/kubectl.sh create -f pod2.yaml
```
Get all pods in the cluster.
```
./cluster/kubectl.sh get pods -a
NAME        READY     STATUS      RESTARTS   AGE
test-pod1   0/1       Completed   0          1m
test-pod2   0/1       Completed   0          9s
```
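The release condition exercised by this repro ("not running and non-restartable") can be sketched roughly as follows (hypothetical helper, not the actual cpumanager code):
```go
import v1 "k8s.io/api/core/v1"

// shouldReleaseCPUs reports whether a container's exclusively allocated CPUs
// can be reclaimed: it has terminated and the kubelet will not restart it.
func shouldReleaseCPUs(pod *v1.Pod, status v1.ContainerStatus) bool {
	if status.State.Terminated == nil {
		return false // running or waiting; keep its CPUs
	}
	switch pod.Spec.RestartPolicy {
	case v1.RestartPolicyNever:
		return true
	case v1.RestartPolicyOnFailure:
		// Under OnFailure only a successful exit is final.
		return status.State.Terminated.ExitCode == 0
	default:
		// RestartPolicyAlways: the container will come back; keep the CPUs.
		return false
	}
}
```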
Automatic merge from submit-queue (batch tested with PRs 52442, 52247, 46542, 52363, 51781)
Ignore pods for quota marked for deletion whose node is unreachable
**What this PR does / why we need it**:
Traditionally, we charge to quota all pods that are in a non-terminal phase. We have a user report that noted the behavior change in kube 1.5 for the node controller to no longer force delete pods whose nodes have been lost. Instead, the pod is marked for deletion, and the reason is updated to state that the node is unreachable. The user expected the quota to be released. If the user was at their quota limit, their application may not be able to create a new replica given the current behavior. As a result, this PR ignores pods marked for deletion that have exceeded their grace period.
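A hedged sketch of that check (hypothetical helper; the real logic belongs to the pod quota evaluator):
```go
import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// pastDeletionGracePeriod reports whether a pod marked for deletion has
// exceeded its grace period and should therefore stop charging quota.
func pastDeletionGracePeriod(pod *v1.Pod, now time.Time) bool {
	if pod.DeletionTimestamp == nil || pod.DeletionGracePeriodSeconds == nil {
		return false
	}
	grace := time.Duration(*pod.DeletionGracePeriodSeconds) * time.Second
	return now.Sub(pod.DeletionTimestamp.Time) >= grace
}
```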
**Which issue this PR fixes**
xref https://bugzilla.redhat.com/show_bug.cgi?id=1455743
fixes https://github.com/kubernetes/kubernetes/issues/52436
**Release note**:
```release-note
Ignore pods marked for deletion that exceed their grace period in ResourceQuota
```
The Windows kernel now exposes "Internal Load Balancing"
using VFP (Virtual Filtering Platform), part of the virtual switch. A built-in
Windows service, HNS (Host Networking Service), acts as the interface to program
the VFP. VFP is analogous to iptables in functionality. HNS takes JSON-based
data as input.
With the help of the interface available in github.com/Microsoft/hcsshim,
these APIs are exposed publicly on GitHub to program HNS and use
the feature.
*** More info about the changes in this PR ***
(1) For every endpoint available in the system, an HNS Endpoint is added:
(1.a) For local endpoints, a local HNS Endpoint already exists, created as part
of container creation.
(1.b) For all remote endpoints, a remote HNS Endpoint is created via HNS.
(2) For every Service, an HNS ILB load balancer is added referring to the
endpoints created in (1).
Sample Input to HNS:
```json
{
  "Policies": [
    {
      "ExternalPort": 80,
      "InternalPort": 80,
      "Protocol": 6,
      "Type": "ELB",
      "VIPs": [
        "11.0.98.129"
      ]
    }
  ],
  "References": [
    "/endpoints/ca8b877b-ab90-499a-bc0e-7d736c425632",
    "/endpoints/ee0ef08b-8434-4f8b-b748-393884e77465"
  ]
}
```
(2.a) This is done for the cluster IP, LoadBalancer ingress IP, NodePort, and external IPs.
Following the regular service and endpoint updates, HNS is notified of the
updates and the system is kept in sync.
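For illustration, the sample input above maps onto Go structs like these (field names taken directly from the JSON sample; these are not the hcsshim types):
```go
import "encoding/json"

// hnsLoadBalancer mirrors the sample HNS input shown above.
type hnsLoadBalancer struct {
	Policies   []hnsELBPolicy `json:"Policies"`
	References []string       `json:"References"` // HNS endpoint IDs
}

type hnsELBPolicy struct {
	ExternalPort int      `json:"ExternalPort"`
	InternalPort int      `json:"InternalPort"`
	Protocol     int      `json:"Protocol"` // 6 = TCP
	Type         string   `json:"Type"`     // "ELB" for the ILB policy
	VIPs         []string `json:"VIPs"`
}

// Marshalling such a struct yields the JSON handed to HNS.
func sampleHNSInput() ([]byte, error) {
	return json.MarshalIndent(hnsLoadBalancer{
		Policies: []hnsELBPolicy{{
			ExternalPort: 80, InternalPort: 80, Protocol: 6,
			Type: "ELB", VIPs: []string{"11.0.98.129"},
		}},
		References: []string{
			"/endpoints/ca8b877b-ab90-499a-bc0e-7d736c425632",
			"/endpoints/ee0ef08b-8434-4f8b-b748-393884e77465",
		},
	}, "", "  ")
}
```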
Automatic merge from submit-queue (batch tested with PRs 52376, 52439, 52382, 52358, 52372)
Workaround go-junit-report bug for TestApps
**What this PR does / why we need it**: Fix output from pkg/kubectl/apps/TestApps unit test
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51253
**Special notes for your reviewer**: Literally copy-pasta of the approach taken in #45320. Maybe a sign that this should be extracted into something shared. I'm just trying to see if we can make https://k8s-testgrid.appspot.com/kubernetes-presubmits and https://k8s-testgrid.appspot.com/release-master-blocking a little more green for now.
```release-note
NONE
```
Move the negative check for the "not patched" output to test-cmd-util.sh,
as exiting with code 1 was causing patch_test.go to fail when the error
was expected as part of the test.
This implements stats for Windows nodes in a new package, winstats.
WinStats exports methods to get cAdvisor-like data structures, but
with Windows-specific metrics. WinStats only gathers node-level metrics and
information; container stats will go via the CRI. This enables
use of the summary API to get metrics for Windows nodes.
Automatic merge from submit-queue
Fix swallowed errors in various volume packages
**What this PR does / why we need it**: Fixes swallowed errors in various volume packages.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)
Minor fixes to validation test
Some test cases confuse the new object with the old object. This PR fixes that. It also adds a test to verify that a deletionTimestamp cannot be added (via the REST endpoints).
Fixes #49256
When `ScalingLimited = true` for the `hpa`, there is an accompanying
status message describing why scaling is limited. Previously, if the desired
replica count was 0 and `spec.minReplicas` > 0, the status message
indicated "the desired replica count was less than the min replica
count". This was particularly confusing when `spec.minReplicas = 1`. If
there was no `spec.minReplicas`, the status message indicated "the
desired replica count was zero", which is more informative.
Update the calculation of the status message so that if the desired replica
count is 0, we always display the clearer "the desired replica count was
zero" status message, even if `spec.minReplicas` > 0.
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
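A hedged sketch of the updated message selection (simplified; not the exact podautoscaler code):
```go
// chooseLimitMessage implements the rule above: a desired count of zero
// always produces the clearer message, even when minReplicas > 0.
func chooseLimitMessage(desiredReplicas, minReplicas int32) string {
	switch {
	case desiredReplicas == 0:
		return "the desired replica count was zero"
	case desiredReplicas < minReplicas:
		return "the desired replica count was less than the min replica count"
	default:
		return ""
	}
}
```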
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
'*' is valid for allowed seccomp profiles
**What this PR does / why we need it**:
This should be valid on a PodSecurityPolicy, but is currently rejected:
```
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
```
**Which issue this PR fixes**: fixes #52300
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
dockershim: check if f.Sync() returns an error and surface it
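The fix amounts to checking the previously ignored return value; a hedged sketch of the pattern (hypothetical helper, not the dockershim source):
```go
import (
	"fmt"
	"os"
)

// writeCheckpoint surfaces every error on the write path, including Sync,
// so a partially persisted checkpoint is reported instead of swallowed.
func writeCheckpoint(path string, data []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		return err
	}
	if err := f.Sync(); err != nil {
		return fmt.Errorf("failed to sync checkpoint file %q: %v", path, err)
	}
	return nil
}
```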
```release-note
dockershim: check the error when syncing the checkpoint.
```
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
Prevent enabling alpha APIs by default
related to #47691
This is a follow-up to #51839 to add a check that we do not enable alpha APIs by default.
Automatic merge from submit-queue (batch tested with PRs 48226, 52046, 52231, 52344, 52352)
Log at higher verbosity levels some common SyncPod errors
This log message was 90% of all glog.Errorf level statements reported on a production cluster, hiding other more impactful errors. We already log it in start container, but for extra caution we continue to log it at v(3) here (the downside of not logging a start container error is worse than some log spam at higher levels).
HandleError() is intended only for unknown and unexpected errors.
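A hedged sketch of the demotion (the kubelet used glog at the time; names are illustrative):
```go
import "github.com/golang/glog"

// logSyncPodError logs this common, expected failure at V(3) instead of the
// error level, which HandleError() reserves for unknown, unexpected errors.
func logSyncPodError(podName string, err error) {
	if err == nil {
		return
	}
	glog.V(3).Infof("error syncing pod %q, skipping: %v", podName, err)
}
```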
```release-note
NONE
```
@derekwaynecarr @sjenning
Automatic merge from submit-queue (batch tested with PRs 48226, 52046, 52231, 52344, 52352)
[BugFix] Soft Eviction timer works correctly
fixes #51516
thresholdsMet should not exclude previously met thresholds when we do not have new stats for a threshold.
/assign @vishh @derekwaynecarr
cc @kubernetes/sig-node-bugs
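A hedged sketch of the corrected behavior described above (hypothetical types and names; the real logic lives in the kubelet eviction manager):
```go
type Signal string
type Threshold struct{ Signal Signal }

// mergePreviouslyMet keeps a previously met threshold in the result set when
// the latest observations have no fresh stats for its signal, so the soft
// eviction grace-period timer is not reset by a missing sample.
func mergePreviouslyMet(freshlyMet, previouslyMet []Threshold, hasFreshStats map[Signal]bool) []Threshold {
	results := freshlyMet
	for _, t := range previouslyMet {
		if !hasFreshStats[t.Signal] && !contains(results, t) {
			results = append(results, t)
		}
	}
	return results
}

func contains(ts []Threshold, t Threshold) bool {
	for _, x := range ts {
		if x == t {
			return true
		}
	}
	return false
}
```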
Automatic merge from submit-queue
fix kubectl set env --list description
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue
Azuredisk mount on windows node
**What this PR does / why we need it**:
This PR enables Azure disk on Windows nodes; customers can create a pod that mounts an Azure disk on a Windows node.
There are still a few pending items:
1) The fstype is currently forced to NTFS; this will change if such a requirement arises.
2) The GetDeviceNameFromMount function is not implemented (empty): on Linux we can read all mount points in the OS easily via `cat /proc/mounts`, but on Windows there is no such place, and I am still figuring this out. The empty function causes some warning logging, but it does not affect the main logic for now.
**Special notes for your reviewer**:
1. This PR depends on https://github.com/kubernetes/kubernetes/pull/51240, which allows Windows mount paths in config validation.
2. There is a bug in Docker on Windows (https://github.com/moby/moby/issues/34729): the ContainerPath can currently only be a drive letter (e.g. `D:`); a directory path will fail in the end.
The example pod with mount path is like below:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod-uses-shared-hdd-5g
  labels:
    name: storage
spec:
  containers:
  - image: microsoft/iis
    name: az-c-01
    volumeMounts:
    - name: blobdisk01
      mountPath: 'F:'
  nodeSelector:
    beta.kubernetes.io/os: windows
  volumes:
  - name: blobdisk01
    persistentVolumeClaim:
      claimName: pv-dd-shared-hdd-5
```
**Release note**:
```release-note
```
Automatic merge from submit-queue
Update set image description to remove job from resources that can update container image
**What this PR does / why we need it**:
This addresses the comment raised in https://github.com/kubernetes/kubernetes/issues/48388#issuecomment-322500960 by @harrissAvalon.
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 51041, 52297, 52296, 52335, 52338)
Fix pagesize mount option name
**What this PR does / why we need it**:
Fixes #52337.