Automatic merge from submit-queue
Add a short `-n` for `kubectl --namespace`
fixes #24078
`--namespace` is a very common flag for nearly every `kubectl` command we have. We should claim `-n` for it.
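For illustration only, a minimal sketch of how a one-letter shorthand can be registered alongside a long flag with cobra/pflag; the usage string and wiring here are assumptions, not the exact kubectl code.
```
// Hypothetical sketch: registering "-n" as a shorthand for --namespace.
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "kubectl",
		Run: func(cmd *cobra.Command, args []string) {
			ns, _ := cmd.Flags().GetString("namespace")
			fmt.Println("namespace:", ns)
		},
	}
	// StringP takes the long name plus a one-letter shorthand, so both
	// --namespace and -n resolve to the same flag.
	cmd.PersistentFlags().StringP("namespace", "n", "", "If present, the namespace scope for this CLI request")
	cmd.Execute()
}
```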
Automatic merge from submit-queue
Node controller deletePod return true if there are pods pending deletion
Fixes https://github.com/kubernetes/kubernetes/issues/30536
If a node had a single pod in terminating state, and that node no longer reported healthy, the pod was never deleted by the node controller because it believed there were no pods remaining.
@smarterclayton @ncdc
Automatic merge from submit-queue
rkt: Do not error out when there are unrecognized lines in os-release
Also fix the error handling, which would otherwise cause a panic.
cc @kubernetes/sig-rktnetes
```relnote
Moved init-container feature from alpha to beta.
In 1.3, an init container is specified with this annotation key
on the pod or pod template: `pods.alpha.kubernetes.io/init-containers`.
In 1.4, either that key or this key: `pods.beta.kubernetes.io/init-containers`,
can be used.
When you GET an object, you will see both annotation keys with the same values.
You can safely roll back from 1.4 to 1.3, and things with init-containers
will still work (pods, deployments, etc).
If you are running 1.3, only use the alpha annotation, or it may be lost when
rolling forward.
The status has moved from annotation key
`pods.alpha.kubernetes.io/init-container-statuses` to
`pods.beta.kubernetes.io/init-container-statuses`.
Any code that inspects this annotation should be changed to use the new key.
State of Initialization will continue to be reported in both `pods.alpha.kubernetes.io/initialized`
and in `podStatus.conditions.{status: "True", type: Initialized}`
```
Mini-design for this change:
Goals:
1. A user can create an object with the beta annotation
on 1.4, and it works. The fact that the annotation has beta
in it communicates to the user that the feature is beta,
and so the user should have confidence in using it. Preferably,
when the user gets the annotation back, they see the beta
annotation.
2. If someone had an existing alpha object in their apiserver,
such as an RS with a pod template with an init-containers
annotation on it, it should continue to work (init containers
run) when the stack is upgraded to 1.4.
3. If someone is using a chart or blog post that has the alpha
annotation on it and they create it on a 1.4 cluster, it should
work.
4. If someone had something with an init container in 1.4
and they roll back the stack to 1.3, it should not silently stop
working (init containers don't run anymore).
To meet all these, we mirror an absent beta annotation from the alpha
key and vice versa. If they are out of sync, we use the alpha
one. We do this in conversion since there was already logic there.
In 1.3 code, all annotations are preserved across a round trip
(v1 -> api -> v1), and the alpha annotation turns into the internal
field that kubelet uses.
In 1.4 code, the alpha annotation is always preserved across
a round trip, and a beta annotation is always set equal to
the alpha one, after a round trip.
Currently, the kubelet always sees the object after a round trip
when it GETs it. But, we don't want to rely on that behavior,
since it will break when fastpath is implemented.
So, we rely on this:
all objects either are created with an alpha annotation (1.3 or 1.4
code) or are created with a beta annotation under 1.4. In the latter
case, they are round tripped at creation time, and so get both
annotations. So all subsequent GETs see both annotations.
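A minimal sketch of the mirroring rule described above: if only one of the alpha/beta init-container annotations is present, copy it to the other key; if both are present but disagree, the alpha value wins. This is a simplification of the real v1 conversion logic, not the actual code; the annotation keys are the ones named above.
```
package main

import "fmt"

const (
	alphaKey = "pods.alpha.kubernetes.io/init-containers"
	betaKey  = "pods.beta.kubernetes.io/init-containers"
)

func mirrorInitContainerAnnotations(annotations map[string]string) {
	alpha, hasAlpha := annotations[alphaKey]
	beta, hasBeta := annotations[betaKey]
	switch {
	case hasAlpha && !hasBeta:
		annotations[betaKey] = alpha
	case hasBeta && !hasAlpha:
		annotations[alphaKey] = beta
	case hasAlpha && hasBeta && alpha != beta:
		// Out of sync: prefer the alpha value, as described above.
		annotations[betaKey] = alpha
	}
}

func main() {
	ann := map[string]string{alphaKey: `[{"name":"init","image":"busybox"}]`}
	mirrorInitContainerAnnotations(ann)
	fmt.Println(ann[betaKey]) // mirrored from the alpha key
}
```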
Add a --bootstrap-kubeconfig flag to the kubelet. If the flag is non-empty
and --kubeconfig doesn't exist, then the kubelet will use the bootstrap
kubeconfig to create a REST client and generate a certificate signing request
to request a client cert from the API server.
Once that succeeds, the resulting cert will be written to
--cert-dir/kubelet-client.crt, and the kubeconfig will be populated with
certfile and keyfile paths pointing to the resulting certificate and key files.
(The key file is generated before creating the CSR.)
So that at most one volume object will be created for every unique
host path. Also, the volume's name is a randomly generated UUID to avoid
collisions, since the mount point's name passed by the kubelet is not
guaranteed to be unique when 'subpath' is specified.
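An illustrative sketch of that idea (not the plugin's actual code): reuse one volume object per unique host path and give it a randomly generated UUID-style name. The type and helper names are hypothetical.
```
package main

import (
	"crypto/rand"
	"fmt"
)

type volume struct {
	Name     string
	HostPath string
}

var volumesByPath = map[string]*volume{}

func newUUID() string {
	b := make([]byte, 16)
	_, _ = rand.Read(b)
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

// volumeForHostPath returns the existing volume for a host path, creating at
// most one new volume object per unique path.
func volumeForHostPath(path string) *volume {
	if v, ok := volumesByPath[path]; ok {
		return v
	}
	v := &volume{Name: newUUID(), HostPath: path}
	volumesByPath[path] = v
	return v
}

func main() {
	a := volumeForHostPath("/var/data")
	b := volumeForHostPath("/var/data")
	fmt.Println(a.Name == b.Name) // true: the same host path reuses the same volume
}
```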
Automatic merge from submit-queue
kubelet/api: split RuntimeService interface
Splits `RuntimeService` interface into smaller interfaces
to make testing easier and delineate the responsibilities.
It's a non-breaking change for previous users of `api.RuntimeService`.
Automatic merge from submit-queue
prevent RC hotloop on denied pods
If a pod is rejected during creation, the RC controller hot-loops. This happens most frequently due to insufficient quota.
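One common way to avoid this kind of hot-loop is to requeue failed keys through a rate-limited workqueue instead of retrying immediately. The sketch below is generic client-go usage, not the exact RC-controller fix; the sync handler is a stand-in.
```
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// syncReplicationController is a stand-in for the real sync handler.
func syncReplicationController(key string) error {
	return fmt.Errorf("pod creation for %s denied by quota", key)
}

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	queue.Add("default/my-rc")

	key, _ := queue.Get()
	if err := syncReplicationController(key.(string)); err != nil {
		// Back off exponentially instead of re-adding the key right away.
		queue.AddRateLimited(key)
	} else {
		queue.Forget(key)
	}
	queue.Done(key)
	fmt.Println("requeues so far:", queue.NumRequeues(key))
}
```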
Automatic merge from submit-queue
add --raw for kubectl get
Adds a `--raw` option to `kubectl get` that allows you to specify your own URI while using the transport built by `kubectl`. This is especially useful when working with secured environments that require authentication and authorization to hit non-API endpoints. For example, `kubectl get --raw /metrics`, or if you want to debug a watch with a view of the exact data: `kubectl get --raw '/api/v1/namespaces/one/replicationcontrollers?watch=true'`.
@kubernetes/kubectl
@fabianofranz fyi
Automatic merge from submit-queue
Add Events for operation_executor to show status of mounts, failed/successful to show in describe events
Fixes #27590
@saad-ali @pmorie @erinboyd
After talking with @pmorie last week about the above issue, I decided to poke around and see if I could remedy it. The refactoring broke my previously merged UXP PRs that correctly showed failed mount errors in the describe events. I'm not sure I implemented this correctly, but it tested out and seems to be working; let me know what I missed or if this is not the correct approach.
```
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 1 {default-scheduler } Normal Scheduled Successfully assigned nfs-bb-pod1 to 127.0.0.1
44s 44s 1 {kubelet 127.0.0.1} Warning FailedMount Unable to mount volumes for pod "nfs-bb-pod1_default(a94f64f1-37c9-11e6-9aa5-52540073d346)": timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
44s 44s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
38s 38s 1 {kubelet } Warning FailedMount Unable to mount volumes for pod "a94f64f1-37c9-11e6-9aa5-52540073d346": Mount failed: exit status 32
Mounting arguments: nfs1.rhs:/opt/data99 /var/lib/kubelet/pods/a94f64f1-37c9-11e6-9aa5-52540073d346/volumes/kubernetes.io~nfs/nfsvol nfs []
Output: mount.nfs: Connection timed out
Resolution hint: Check and make sure the NFS Server exists (ensure that correct IPAddress/Hostname was given) and is available/reachable.
Also make sure firewall ports are open on both client and NFS Server (2049 v4 and 2049, 20048 and 111 for v3).
Use commands telnet <nfs server> <port> and showmount <nfs server> to help test connectivity.
```
Automatic merge from submit-queue
Fix pvc requests.storage validation
A `PersistentVolumeClaim` should not be able to request a negative amount of storage.
/cc @kubernetes/sig-storage @kubernetes/rh-cluster-infra @deads2k
Automatic merge from submit-queue
OpenAPI / Swagger2 spec generation
This is the alpha version of OpenAPI spec generation. The generated "/swagger.json" file (accessible on the API server) is a valid OpenAPI spec, with some warnings that will be fixed in later versions of the spec generation. It is currently possible to generate a client using this spec, though I did not test the clients.
reference: #13414
**Release note**:
```release-note
Alpha support for OpenAPI (aka. Swagger 2.0) specification serves on /swagger.json
```
Automatic merge from submit-queue
Add cluster health metrics to NodeController
Follow up of #28832
This adds metrics to monitor cluster/zone status.
cc @alex-mohr @fabioy @wojtek-t @Q-Lee
Automatic merge from submit-queue
Fix memory leak in gc
ref #30759
GC had a memory leak: work queue items were never deleted.
I'm still fighting with my kubemark cluster to get statistics after this fix.
@wojtek-t @lavalamp
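A sketch of the leak pattern in question: items taken from a workqueue with Get must be marked Done (and Forget-ed on success), otherwise the queue keeps them in its internal bookkeeping forever. This is generic client-go workqueue usage, not the GC controller's actual code.
```
package main

import "k8s.io/client-go/util/workqueue"

func process(item interface{}) {}

func worker(queue workqueue.RateLimitingInterface) {
	for {
		item, quit := queue.Get()
		if quit {
			return
		}
		process(item)
		queue.Forget(item) // drop rate-limiting history for the item
		queue.Done(item)   // without this, the item is never released
	}
}

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	queue.Add("some-object")
	queue.ShutDown() // drain the queued item, then let the worker exit
	worker(queue)
}
```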
A new flag, --container-runtime-endpoint (overrides --container-runtime),
is introduced to the kubelet; it identifies the unix socket file of
the remote runtime service. A new flag, --image-service-endpoint, is
introduced to the kubelet; it identifies the unix socket file of the
image service.
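A rough sketch of what the new flags imply: the kubelet reaches the remote runtime and image services over local unix sockets. The socket path and helper name below are assumptions for illustration, not the kubelet's actual wiring.
```
package main

import (
	"fmt"
	"net"
	"time"
)

func dialEndpoint(socketPath string) (net.Conn, error) {
	return net.DialTimeout("unix", socketPath, 2*time.Second)
}

func main() {
	runtimeEndpoint := "/var/run/dockershim.sock" // --container-runtime-endpoint
	imageEndpoint := runtimeEndpoint              // --image-service-endpoint (may differ)

	for _, ep := range []string{runtimeEndpoint, imageEndpoint} {
		conn, err := dialEndpoint(ep)
		if err != nil {
			fmt.Println("cannot reach", ep, ":", err)
			continue
		}
		conn.Close()
		fmt.Println("connected to", ep)
	}
}
```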
Automatic merge from submit-queue
Revert "Revert "syncNetworkUtil in kubelet and fix loadbalancerSourceRange on GCE
Reverts kubernetes/kubernetes#30729
When a node is deleted, the attach-detach controller cache may contain stale
information about this node, and updating node status fails in the reconciler
loop. One node update failure should not block updating other nodes.
Also, the warning message easily floods the log file. This PR is just a quick
fix of this issue. A more complete fix, including making sure the controller cache
is up to date, will be addressed in another PR.
Automatic merge from submit-queue
Quobyte Volume plugin
@quofelix and I developed a volume plugin for [Quobyte](http://www.quobyte.com), which is a software-defined storage solution. This PR allows Kubernetes users to mount a Quobyte volume inside their containers through Kubernetes.
Here is some further information about [Quobyte and storage for containers](http://www.quobyte.com/containers)
Convert single GV and lists of GVs into an interface that can handle
more complex scenarios (everything internal, nothing supported). Pass
the interface down into conversion.
Automatic merge from submit-queue
Implement dynamic provisioning (beta) of PersistentVolumes via StorageClass
Implemented according to PR #26908. There are several patches in this PR with one huge code regen inside.
* Please review the API changes (the first patch) carefully, sometimes I don't know what the code is doing...
* `PV.Spec.Class` and `PVC.Spec.Class` is not implemented, use annotation `volume.alpha.kubernetes.io/storage-class`
* See e2e test and integration test changes - Kubernetes won't provision a thing without explicit configuration of at least one `StorageClass` instance!
* Multiple provisioning volume plugins can coexist together, e.g. HostPath and AWS EBS. This is important for Gluster and RBD provisioners in #25026
* Contradicting the proposal, `claim.Selector` and the `volume.alpha.kubernetes.io/storage-class` annotation are **not** mutually exclusive. They're both used for matching existing PVs. However, only `volume.alpha.kubernetes.io/storage-class` is used for provisioning; configuration of provisioning with `Selector` is left for the (near) future.
* Documentation is missing. Can please someone write some while I am out?
For now, AWS volume plugin accepts classes with these parameters:
```
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/aws-ebs
provisionerParameters:
  type: io1
  zone: us-east-1d
  iopsPerGB: 10
```
* parameters are case-insensitive
* `type`: `io1`, `gp2`, `sc1`, `st1`. See AWS docs for details
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. AWS volume plugin multiplies this with size of requested volume to compute IOPS of the volume and caps it at 20 000 IOPS (maximum supported by AWS, see AWS docs).
* of course, the plugin will use some defaults when a parameter is omitted in a `StorageClass` instance (`gp2` in the same zone as in 1.3).
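A worked example of the `iopsPerGB` rule above: requested IOPS are iopsPerGB multiplied by the volume size in GiB, capped at 20,000 (the AWS maximum mentioned above). This is a simplified sketch, not the plugin's actual code.
```
package main

import "fmt"

const maxIOPS = 20000

func ebsIOPS(sizeGiB, iopsPerGB int) int {
	iops := sizeGiB * iopsPerGB
	if iops > maxIOPS {
		return maxIOPS
	}
	return iops
}

func main() {
	fmt.Println(ebsIOPS(100, 10))  // 1000 IOPS for a 100 GiB io1 volume
	fmt.Println(ebsIOPS(3000, 10)) // capped at 20000
}
```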
GCE:
```
apiVersion: extensions/v1beta1
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/gce-pd
provisionerParameters:
  type: pd-standard
  zone: us-central1-a
```
* `type`: `pd-standard` or `pd-ssd`
* `zone`: GCE zone
* of course, the plugin will use some defaults when a parameter is omitted in a `StorageClass` instance (SSD in the same zone as in 1.3 ?).
No OpenStack/Cinder yet
@kubernetes/sig-storage
Automatic merge from submit-queue
kubelet eviction on inode exhaustion
Add support for kubelet to monitor for inode exhaustion of either image or rootfs, and in response, attempt to reclaim node level resources and/or evict pods.
Automatic merge from submit-queue
Add annotations to the PodSecurityPolicy Provider interface
@pweil- is this what you were thinking in terms of API changes? I really like to avoid functions with more than 2 return values, but couldn't think of a cleaner approach in this case.
Automatic merge from submit-queue
Allow setting permission mode bits on secrets, configmaps and downwardAPI files
cc @thockin @pmorie
Here is the first round to implement: https://github.com/kubernetes/kubernetes/pull/28733.
I made two commits: one with the actual change and the other with the auto-generated code. I think it's easier to review this way, but let me know if you prefer in some other way.
I haven't written any tests yet; I wanted to have a first glance and not write them till this (and the API) is closer to "LGTM" :)
There are some things:
* I'm not sure where to do the "AND 0777". I'll try to look better in the code base, but suggestions are always welcome :)
* The write permission on group and others is not set when you do an `ls -l` in the running container. It does work with write permissions for the owner. Debugging seems to show that it is something happening after this is correctly set on creation. Will look closer.
* The default permissions (when the new fields are not specified) are the same as on Kubernetes v1.3.
* I do realize there are conflicts with master, but I think this is good enough to have a look. The conflicts are with the auto-generated code, so the actual code is the same (and it takes ~30 minutes to generate it here).
* I didn't generate the docs (`generated-docs` and `generated-swagger-docs` from `hack/update-all.sh`) because my machine runs out of memory. That's why it isn't in this first PR; will try to investigate and see why it happens.
Other than that, this works fine here with some silly scripts I wrote to create a secret, configmap and downwardAPI, plus a pod, and check the file permissions. Tested "defaultMode" and "mode" for all. But of course, will write tests once this is looking fine :)
Thanks a lot again!
Rodrigo
Automatic merge from submit-queue
Continue on #30774: Change podNamespacer API
continue on #30774, credit to @wojtek-t, Ref #30759
I just fixed a test and converted IsActivePod to operate on *Pod.
Automatic merge from submit-queue
Expose flags for new NodeEviction logic in NodeController
Fix #28832
Last PR from the NodeController NodeEviction logic series.
cc @davidopp @lavalamp @mml
Automatic merge from submit-queue
Nodecontroller doesn't flip readiness on pods if kubeletVersion < 1.2.0
Older versions of the kubelet didn't know how to reconcile pod.Status, so the nodecontroller would mark pods NotReady on netsplit, and if the partition recovered in < 5m, the pods would never get marked Ready resulting in NotReady endpoints indefinitely (till kubelet restart/pod recreate etc).
Automatic merge from submit-queue
Change kubectl create to use dynamic client
https://github.com/kubernetes/kubernetes/issues/16764
https://github.com/kubernetes/kubernetes/issues/3955
This is a series of changes to allow kubectl create to use discovery-based REST mapping and dynamic clients.
cc @kubernetes/sig-api-machinery
**Release note**:
```release-note
kubectl will no longer do client-side defaulting on create and replace.
```
An image string may or may not contain a hostname (e.g., "docker.io"). The same
applies to the RepoTags returned from an image inspection. To determine whether
the image docker pulled matches what the user asked for, we check whether
either string is a suffix of the other.
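A minimal sketch of that matching rule, for illustration only (not the dockertools implementation): an image string with or without a hostname matches an inspected RepoTag if either string is a suffix of the other.
```
package main

import (
	"fmt"
	"strings"
)

func imageMatches(requested, repoTag string) bool {
	return strings.HasSuffix(requested, repoTag) || strings.HasSuffix(repoTag, requested)
}

func main() {
	fmt.Println(imageMatches("busybox:latest", "docker.io/busybox:latest")) // true
	fmt.Println(imageMatches("docker.io/busybox:latest", "busybox:latest")) // true
	fmt.Println(imageMatches("busybox:latest", "alpine:latest"))            // false
}
```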
This implements the proposal in:
docs/proposals/secret-configmap-downwarapi-file-mode.md
Fixes: #28317.
The mounttest image is updated so it returns the permissions of the linked file
and not the symlink itself.
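A small sketch of the distinction mentioned above: os.Stat follows symlinks and reports the permissions of the linked file, while os.Lstat reports the symlink itself. The path is just an example.
```
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/etc/mtab" // often a symlink on Linux; any symlinked file works

	followed, err := os.Stat(path) // target file's mode
	if err != nil {
		fmt.Println(err)
		return
	}
	link, err := os.Lstat(path) // the symlink's own mode
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("target mode: %v, link mode: %v\n", followed.Mode().Perm(), link.Mode())
}
```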
Automatic merge from submit-queue
Fix default resource limits (node allocatable) for downward api volumes and env vars
@kubernetes/rh-cluster-infra @pmorie @derekwaynecarr
Automatic merge from submit-queue
Added warning msg for `kubectl get`
- added warning description regarding terminated pods to `get` long help message
- added printing of warning message in case of `get pods` if there are hidden pods
Fixes #22986 (initial PR and discussion are here: #26417)
## **Output examples:**
### # kubectl get pods
```
NAME READY STATUS RESTARTS AGE
dapi-test-pod1 0/1 Terminating 0 22h
liveness-http 0/1 CrashLoopBackOff 11245 22d
ubuntu1-1206318548-oh9tc 0/1 CrashLoopBackOff 2336 8d
info: 1 completed object(s) was(were) not shown in pods list. Pass --show-all to see all objects.
```
### # kubectl get pods,namespaces
```
NAME READY STATUS RESTARTS AGE
po/dapi-test-pod1 0/1 Terminating 0 22h
po/liveness-http 1/1 Running 11242 22d
po/ubuntu1-1206318548-oh9tc 0/1 CrashLoopBackOff 2335 8d
info: 1 completed object(s) was(were) not shown in pods list. Pass --show-all to see all objects.
NAME STATUS AGE
ns/default Active 89d
ns/kube-system Active 41d
```
### # kubectl get pods -a
```
NAME READY STATUS RESTARTS AGE
busybox 0/1 Error 0 27d
dapi-test-pod1 0/1 Terminating 0 22h
liveness-http 0/1 CrashLoopBackOff 11245 22d
ubuntu1-1206318548-oh9tc 0/1 CrashLoopBackOff 2336 8d
```
### # kubectl get -h
```
Display one or many resources.
Possible resource types include (case insensitive): pods (aka 'po'), services (aka 'svc'), deployments (aka 'deploy'),
replicasets (aka 'rs'), replicationcontrollers (aka 'rc'), nodes (aka 'no'), events (aka 'ev'), limitranges (aka 'limits'),
persistentvolumes (aka 'pv'), persistentvolumeclaims (aka 'pvc'), resourcequotas (aka 'quota'), namespaces (aka 'ns'),
serviceaccounts (aka 'sa'), ingresses (aka 'ing'), horizontalpodautoscalers (aka 'hpa'), daemonsets (aka 'ds'), configmaps (aka 'cm'),
componentstatuses (aka 'cs), endpoints (aka 'ep'), petsets (alpha feature, may be unstable) and secrets.
This command will hide resources that have completed. For instance, pods that are in the Succeeded or Failed phases.
You can see the full results for any resource by providing the '--show-all' flag.
By specifying the output as 'template' and providing a Go template as the value
of the --template flag, you can filter the attributes of the fetched resource(s).
Examples:
.........
```
Automatic merge from submit-queue
use Reader.ReadLine instead of bufio.Scanner to support bigger yaml
@smarterclayton ptal. Also refer to #19603 and #23125 for more details.
Automatic merge from submit-queue
pkg/storage: remove Codec() from interface
What?
Removes Codec() from storage.Interface.
Why?
- storage interface doesn't need to expose Codec().
- Codec() isn't used anywhere.
Automatic merge from submit-queue
Add validation conditions for autoscale
When validating the values of max and min in autoscale.go, we should append all the invalid conditions to errs and print the values.
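A small illustrative sketch of that idea: collect every invalid condition (and the offending value) instead of stopping at the first one. Flag names and messages here are hypothetical, not the actual autoscale.go code.
```
package main

import "fmt"

func validateAutoscaleArgs(min, max int) []error {
	var errs []error
	if max < 1 {
		errs = append(errs, fmt.Errorf("--max=MAXPODS is required and must be at least 1, got %d", max))
	}
	if min > max {
		errs = append(errs, fmt.Errorf("--min=%d must not be greater than --max=%d", min, max))
	}
	return errs
}

func main() {
	// Both conditions fail, and both are reported with their values.
	for _, err := range validateAutoscaleArgs(5, 0) {
		fmt.Println(err)
	}
}
```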
Automatic merge from submit-queue
Add note: kubelet manages only k8s containers.
The kubelet wrote a log message when accessing a container that was not created by Kubernetes, which could confuse users. That's why we added a note about it in the documentation and lowered the log level of the message to 5.
Here is example of the message:
```
> Apr 19 11:50:32 openshift-114.lab.sjc.redhat.com atomic-openshift-node[9551]:
I0419 11:50:32.194020 9600 docker.go:363]
Docker Container: /tiny_babbage is not managed by kubelet.
```
bug 1328441
Bugzilla link https://bugzilla.redhat.com/show_bug.cgi?id=1328441
Automatic merge from submit-queue
Correct the url in comment and optimise the code style
This PR modifies two aspects:
1) Corrects the URL in a comment; the original URL can't be accessed.
2) Optimises the code style according to the Go style guide.
Automatic merge from submit-queue
syncNetworkUtil in kubelet and fix loadbalancerSourceRange on GCE
fixes: #29997 #29039
@yujuhong Can you take a look at the kubelet part?
@girishkalele KUBE-MARK-DROP is the chain for dropping connections. Marked connections will be dropped in the INPUT/OUTPUT chains of the filter table. Let me know if this is good enough for your use case.
resource.Builder should prohibit empty resource names (the error is from
the wrong place) so that commands that work on multiple resources but
not resource types can properly limit errors.
Automatic merge from submit-queue
Add NodeName to EndpointAddress object
Adding a new string field `nodeName` to api.EndpointAddress.
We could also do *ObjectReference to the api.Node object instead, which would be more precise for the future.
```
type ObjectReference struct {
	Kind            string    `json:"kind,omitempty"`
	Namespace       string    `json:"namespace,omitempty"`
	Name            string    `json:"name,omitempty"`
	UID             types.UID `json:"uid,omitempty"`
	APIVersion      string    `json:"apiVersion,omitempty"`
	ResourceVersion string    `json:"resourceVersion,omitempty"`
	// Optional. If referring to a piece of an object instead of an entire object, this string
	// should contain information to identify the sub-object. For example, if the object
	// reference is to a container within a pod, this would take on a value like:
	// "spec.containers{name}" (where "name" refers to the name of the container that triggered
	// the event) or if no container name is specified "spec.containers[2]" (container with
	// index 2 in this pod). This syntax is chosen only to have some well-defined way of
	// referencing a part of an object.
	// TODO: this design is not final and this field is subject to change in the future.
	FieldPath string `json:"fieldPath,omitempty"`
}
```
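For illustration only, a hedged sketch of what an endpoint address carrying the new field might look like; the surrounding fields, pointer type, and json tag are assumptions, not the actual api.EndpointAddress definition.
```
package main

import "fmt"

// EndpointAddress sketch with the new nodeName field, so consumers (such as
// kube-proxy) can tell which node hosts an endpoint.
type EndpointAddress struct {
	IP       string  `json:"ip"`
	Hostname string  `json:"hostname,omitempty"`
	NodeName *string `json:"nodeName,omitempty"`
}

func main() {
	node := "node-1"
	addr := EndpointAddress{IP: "10.0.0.5", NodeName: &node}
	fmt.Println(addr.IP, *addr.NodeName)
}
```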
Automatic merge from submit-queue
Implement AppArmor Kubelet support
Includes PR https://github.com/kubernetes/kubernetes/pull/29812
Implements the Kubelet logic for AppArmor based on the alpha API proposed [here](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/apparmor.md). Also adds an E2E test, and I ran manual tests.
Remaining work: PodSecurityPolicy support, profile loader daemon, documentation, (maybe) beta API.
/cc @jfrazelle @Amey-D @kubernetes/sig-node
*Note on release-note-none: I am implementing AppArmor over multiple PRs. I will submit a single release note once the implementation is done to cover all of them.*
Automatic merge from submit-queue
fix node controller event uid issue
Fix #29289. @smarterclayton ptal. This is not a very elegant fix; if we can use nodeName in the log, maybe we can set timedValue.Value to node.UID.
Automatic merge from submit-queue
update strategic patch test for merge list of maps
Refer to #26418 for more details. @janetkuo the test case is added, ptal.
Automatic merge from submit-queue
Prevent device unmount from deleting dir on failed unmount
This PR cleans up the device unmount code for attachable volumes. Specifically it:
* Prevents deletion of directory via `os.Remove` unless unmount succeeds.
* Moves common shared device unmount logic to a common util file.
This was already handled in most places. I think this is the only
remaining instance of it in the docker package.
This could lead to confusing results. E.g. if `networkPlugin` was cni,
it could lead to error logs about not getting network status for host
pods if eth0 didn't exist on the host.
- added warning description regarding terminated objects to `get` long help message
- added printing of warning message in case of `get pods` if there are hidden pods
Fixes #22986
Automatic merge from submit-queue
Fix image verification when hostname is present in image
Deal better with the situation where an image name contains
a hostname as well.
Fixes #30580
Automatic merge from submit-queue
Add volume reconstruct/cleanup logic in kubelet volume manager
Currently kubelet volume management works on the concept of a desired
and an actual world of states. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When the kubelet restarts, the caches of those two worlds are
gone. Although the desired world can be recovered through the apiserver, the actual
world cannot, which may mean some volumes cannot be cleaned
up if their information is deleted by the apiserver. This change adds
reconstruction of the actual world by reading the pod directories from
disk. The reconstructed volume information is added to both the desired
world and the actual world if it cannot be found in either world. The rest of the
logic is the same as before: the desired world populator may clean up
the volume entry if it is no longer in the apiserver, and then the volume
manager should invoke unmount to clean it up.
Fixes https://github.com/kubernetes/kubernetes/issues/27653
Automatic merge from submit-queue
CRI: remove pod sandbox resources
The pod-level resources need further discussion. Remove them from CRI for now.
See the original discussion in #29871
Currently kubelet volume management works on the concept of a desired
and an actual world of states. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When the kubelet restarts, the caches of those two worlds are
gone. Although the desired world can be recovered through the apiserver, the actual
world cannot, which may mean some volumes cannot be cleaned
up if their information is deleted by the apiserver. This change adds
reconstruction of the actual world by reading the pod directories from
disk. The reconstructed volume information is added to both the desired
world and the actual world if it cannot be found in either world. The rest of the
logic is the same as before: the desired world populator may clean up
the volume entry if it is no longer in the apiserver, and then the volume
manager should invoke unmount to clean it up.
Automatic merge from submit-queue
pkg/apiserver/authenticator: reorder oidc plugin to auth after service accounts
Both plugins verify JWTs, but the OpenID Connect plugin performs
much worse when faced with cache misses. Reorder the plugins so
the service account plugin tries to authenticate a bearer token
first.
I had a fun time with this by writing an OpenID Connect provider that stores its data in third party resources. When it's running in the cluster it uses a service account and caused some interesting behavior when the keys expired.
Our OpenID Connect plugin needs a more sophisticated caching model to avoid continuously re-requesting keys when seeing a lot of tokens it doesn't recognize. However, I feel this reordering is generally useful since service accounts will be more common than OpenID Connect tokens.
cc @kubernetes/sig-auth
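A generic sketch of why ordering matters here: a union authenticator tries each plugin in order, so putting the cheap service-account JWT check before the OIDC check keeps most tokens off the expensive cache-miss path. The interface is deliberately simplified and is not the real authenticator.Request/Token API.
```
package main

import "fmt"

// tokenAuth is a simplified token authenticator; the real interface is richer.
type tokenAuth func(token string) (user string, ok bool)

// union tries each plugin in order; cheaper plugins should come first.
func union(auths ...tokenAuth) tokenAuth {
	return func(token string) (string, bool) {
		for _, a := range auths {
			if user, ok := a(token); ok {
				return user, true
			}
		}
		return "", false
	}
}

func main() {
	serviceAccounts := func(t string) (string, bool) { return "system:serviceaccount:default:sa", t == "sa-jwt" }
	oidc := func(t string) (string, bool) { return "alice@example.com", t == "oidc-jwt" }

	// Service accounts first, so most tokens never hit the slower OIDC path.
	authenticate := union(serviceAccounts, oidc)
	fmt.Println(authenticate("sa-jwt"))
}
```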
Automatic merge from submit-queue
Remove pods along with jobs when Replace ConcurrentPolicy is set
Fixes #30442
This builds on #30327 and needs a bit more love in tests.
@janetkuo @erictune fyi
Automatic merge from submit-queue
Add CloudStack cloud provider (extended and refactored)
This PR supersedes PR #26165, in which some groundwork for this PR has been done. So this PR now fixes #26165 and fixes #26045.
I've been in contact with @ngtuna about this updated version of his earlier work (which is still in this PR as one squashed commit) and he has given his 👍 for this 😉
This PR adds additional logic for allocating and associating a public IP if the `--load-balancer-ip` option is not used. It will do proper management of public IPs that are allocated by this provider (so IPs that are no longer needed/used will also be released again).
Additionally the provider can now also work with CloudStack projects and advanced (VPC) networks. And lastly the Zone interface now returns an actual zone (supplied by the cloud config), a few logical errors are fixed and the first few tests are added.
All the functionality is extensively tested against both basic and advanced (VPC) networks and of course all new and existing (integration) tests are all passing.
Automatic merge from submit-queue
Move new etcd storage (low level storage) into cacher
As part of the effort for #29888, we are pushing this forward:
What?
- It changes creating the etcd storage.Interface impl into creating a config.
- When creating the cacher storage (StorageWithCacher), it passes the config created above and creates the new etcd storage inside.
Why?
- We want to expose the information of the (etcd) kv client to the cacher. The cacher storage uses this information to talk to remote storage.
Automatic merge from submit-queue
move syncNetworkConfig to Init for cni network plugin
Start the syncNetworkConfig routine in `Init` instead of during probing. This fixes a bug where syncNetworkConfig runs periodically even when the `cni` network plugin is not in use.
Automatic merge from submit-queue
add RequiresExactMatch test for empty andterm
What?
Add a test path for empty andterm.
Why?
fields.Everything() returns empty andterm.
fields.SelectorFromSet() returns empty andterm.
Automatic merge from submit-queue
Remove resource specifications from CRI until further notice
See #29871 for the discussion issue.
cc @dchen1107 @vishh @yujuhong @euank @yifan-gu @feiskyer
Automatic merge from submit-queue
Endpoint controller logs errors during normal cluster behavior
The endpoint controller logs an error when it is forbidden from creating new endpoints during namespace termination. This is normal cluster behavior, and therefore should not be logged. It confuses operators administrating the cluster.
Updated to log at a lower level in response to a forbidden message when performing a create operation. In case of an error on the API server side of the house, I continue to requeue the key. It should be ignored in a future syncService call once the service is deleted as part of namespace termination.
See https://bugzilla.redhat.com/show_bug.cgi?id=1347425
/cc @kubernetes/rh-cluster-infra
This commit adds logic for allocating and associating a public IP if the `--load-balancer-ip` option is not used. It will do proper management of IPs that are allocated by this provider, so IPs that are no longer needed/used will also be released again.
Additionally the provider can now also work with CloudStack projects and advanced (VPC) networks.
Lastly the Zone interface now returns an actual zone (supplied by the cloud config), a few logical errors are fixed and the first few tests are added.
All the functionality is extensively tested against both basic and advanced (VPC) networks.
Automatic merge from submit-queue
speed up RC scaler
The RC scaler was waiting before starting the scale and then didn't use a watch to observe the result. That led to longer than expected wait times.
@fabianofranz ptal. You may want to sweep the rest of the file. It could use some tidying with `RetryOnConflict` and `watch.Until`.
Automatic merge from submit-queue
Validate SHA/Tag when checking docker images
The Docker API does not validate the tag/SHA; for example, all of the following
calls work, say, for an alpine image with short SHA "4e38e38c8ce0":
echo -e "GET /images/alpine:4e38e38c8ce0/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
echo -e "GET /images/alpine:4e38e38c/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
echo -e "GET /images/alpine:4/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
So we should check the response from the Docker API and look for the tags or SHA explicitly.
Fixes #30355
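A sketch of the stricter check described above: after inspecting an image, verify the requested tag or digest actually appears in RepoTags/RepoDigests rather than trusting the Docker API's permissive matching. Illustrative only; the real dockertools code differs.
```
package main

import (
	"fmt"
	"strings"
)

func imagePresent(requested string, repoTags, repoDigests []string) bool {
	for _, tag := range repoTags {
		if tag == requested || strings.HasSuffix(tag, "/"+requested) {
			return true
		}
	}
	for _, digest := range repoDigests {
		if digest == requested {
			return true
		}
	}
	return false
}

func main() {
	tags := []string{"alpine:latest", "docker.io/alpine:latest"}
	fmt.Println(imagePresent("alpine:4e38e38c8ce0", tags, nil)) // false: short SHA is not an exact tag
	fmt.Println(imagePresent("alpine:latest", tags, nil))       // true
}
```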
Automatic merge from submit-queue
Make more messages respect --quiet flag
Make following two messages respect `--quiet` in `kubectl run`
- `If you don't see a command prompt, try pressing enter.`
- `Pod "name" deleted`
Ref #28695
Automatic merge from submit-queue
Set pod state as "unknown" when CNI plugin fails
Before this change, a CNI plugin failure didn't change anything in the pod status, so pods with containers that lacked the requested network were reported as "running".
Fixes#29148
Automatic merge from submit-queue
Remove kubelet pkill dependency
Issue #26093 identified pkill as one of the dependencies of the kubelet
which could be worked around. Build on the code introduced for pidof
and use a regexp to find the process(es) we need to send a signal to.
Related to #26093
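A rough sketch of the /proc-based approach mentioned above: find PIDs whose command line matches a pattern without shelling out to pidof or pkill, then signal them. Simplified for illustration; the real kubelet helper differs.
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strconv"
	"syscall"
)

// pidsMatching scans /proc for processes whose cmdline matches the pattern.
func pidsMatching(re *regexp.Regexp) []int {
	var pids []int
	entries, _ := os.ReadDir("/proc")
	for _, e := range entries {
		pid, err := strconv.Atoi(e.Name())
		if err != nil {
			continue // not a process directory
		}
		cmdline, err := os.ReadFile(filepath.Join("/proc", e.Name(), "cmdline"))
		if err != nil {
			continue // process may have exited; keep going rather than bailing out
		}
		if re.Match(cmdline) {
			pids = append(pids, pid)
		}
	}
	return pids
}

func main() {
	for _, pid := range pidsMatching(regexp.MustCompile("dockerd")) {
		// Signal 0 only checks that the process exists; real code would send
		// the intended signal here, e.g. syscall.SIGKILL.
		if err := syscall.Kill(pid, syscall.Signal(0)); err == nil {
			fmt.Println("found matching pid:", pid)
		}
	}
}
```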
Automatic merge from submit-queue
Adding events to federation control plane
Adding events to federation control plane.
Apart from the standard changes to add a resource to `federation/apis/core/v1`, other changes are:
* Adding a new `federationoptions.ServerRunOptions` which includes `genericoptions.ServerRunOptions` and EventsTTL.
* Added a new method in `pkg/api/mapper` to build a RestMapper based on the passed Scheme rather than using `api.Scheme`. Updated `federation/apis/core/install` to use this new method. Without this change, if `federation/apis/core/install.init()` is called before `pkg/api/install.init()` then the registered RESTMapper in `pkg/apimachinery/registered` will have no resources. This second problem will be fixed once we have instances of `pkg/apimachinery/registered` instead of a single global singleton (generated clientset which imports `pkg/api/install` will have a different instance of registered, than federation-apiserver which imports `federation/apis/core/install`).
cc @kubernetes/sig-cluster-federation @lavalamp
Automatic merge from submit-queue
[kubelet] Introduce --protect-kernel-defaults flag to make the tunable behaviour configurable
Let's make the default behaviour of kernel tuning configurable. The default behaviour is kept as 'modify', as it has been so far.
Automatic merge from submit-queue
Fix TestPidOf {procfs} - Take #2
We should not bail out when we get an error; we should continue
processing other files/directories. We were returning the err passed in,
which was causing the processing to stop.
Fixes #30377
Automatic merge from submit-queue
Kubelet: generate sandbox/container config for new runtime API
Generate sandbox/container config for new runtime API. Part of #28789 .
CC @yujuhong @Random-Liu @dchen1107
Automatic merge from submit-queue
Add zsh compatibility note to `completion` cmd help
zsh completions are not supported on zsh versions < 5.2.
This patch advises users on supported versions of zsh when using the `completion`
command, to avoid potential UX failures.
##### After
`$ kubectl completion -h`
```
Output shell completion code for the given shell (bash or zsh).
This command prints shell code which must be evaluation to provide interactive
completion of kubectl commands.
Examples:
$ source <(kubectl completion bash)
will load the kubectl completion code for bash. Note that this depends on the
bash-completion framework. It must be sourced before sourcing the kubectl
completion, e.g. on the Mac:
$ brew install bash-completion
$ source $(brew --prefix)/etc/bash_completion
$ source <(kubectl completion bash)
If you use zsh*, the following will load kubectl zsh completion:
$ source <(kubectl completion zsh)
* zsh completions are only supported in versions of zsh >= 5.2
```
```release-note
release-note-none
```
Automatic merge from submit-queue
openstack: Autodetect LBaaS v1 vs v2
```release-note
* openstack: autodetect LBaaS v1/v2 by querying for available extensions. For most installs, this effectively changes the default from v1 to v2. Existing installs can add "lb-version = v1" to the provider config file to continue to use v1.
```
Automatic merge from submit-queue
Implement 'kubectl top' command
```release-note
Added 'kubectl top' command showing the resource usage metrics.
```
Sample output:
Nodes:
```
$ kubectl top node
NAME CPU MEMORY STORAGE TIMESTAMP
kubernetes-minion-group-xxxx 76m 1468 Mi 0 Mi Tue, 12 Jul 2016 17:37:00 +0200
kubernetes-minion-group-yyyy 73m 1511 Mi 0 Mi Tue, 12 Jul 2016 17:37:00 +0200
kubernetes-minion-group-zzzz 46m 1506 Mi 0 Mi Tue, 12 Jul 2016 17:37:00 +0200
kubernetes-master 76m 2059 Mi 0 Mi Tue, 12 Jul 2016 17:37:00 +0200
```
Pods in all namespaces:
```
$ kubectl top pod --all-namespaces
NAMESPACE NAME CPU MEMORY STORAGE TIMESTAMP
default nginx-1111111111-zzzzz 0m 1 Mi 0 Mi Tue, 12 Jul 2016 17:49:00 +0200
kube-system etcd-server-kubernetes-master 4m 116 Mi 0 Mi Tue, 12 Jul 2016 17:49:00 +0200
kube-system fluentd-cloud-logging-kubernetes-minion-group-xxxx 14m 110 Mi 0 Mi Tue, 12 Jul 2016 17:49:00 +0200
kube-system kube-dns-v18-zzzzz 1m 6 Mi 0 Mi Tue, 12 Jul 2016 17:49:00 +0200
...
```
Pod with containers:
```
$ kubectl top pod heapster-v1.1.0-1111111111-miail --namespace=kube-system --containers
NAMESPACE NAME CPU MEMORY STORAGE TIMESTAMP
kube-system heapster-v1.1.0-1111111111-miail 1m 42 Mi 0 Mi Tue, 12 Jul 2016 17:52:00 +0200
heapster 1m 26 Mi 0 Mi
eventer 0m 3 Mi 0 Mi
heapster-nanny 0m 6 Mi 0 Mi
eventer-nanny 0m 6 Mi 0 Mi
```
ref #11382
Automatic merge from submit-queue
return err on `kubectl run --image` with invalid value
When running `kubectl run <configname> --image="Invalid$$%ImageValue%%__"`, a configuration is successfully created with an image name that is not a valid value for an image reference.
This patch validates that the image name is a valid image reference, and returns an error before creating a config if an invalid value is passed.
`$ kubectl run test --image="Invalid__%imagename"`
```
error: Invalid image name "Invalid__%imagename": invalid reference format
```
Automatic merge from submit-queue
Basic audit log
Fixes #2203 by introducing simple audit logging, including information about impersonation. We currently have something identical in OpenShift, but I'm open to any suggestions. Sample logs look like this:
as `<self>`:
```
AUDIT: id="75114bb5-970a-47d5-a5f1-1e99cea0574c" ip="127.0.0.1" method="GET" user="test-admin" as="<self>" namespace="openshift" uri="/api/v1/namespaces/openshift/pods/python"
AUDIT: id="75114bb5-970a-47d5-a5f1-1e99cea0574c" response=200
```
as user:
```
AUDIT: id="b0a443ae-f7d8-408c-a355-eb9501fd5c59" ip="192.168.121.118" method="GET" user="system:admin" as="test-admin" namespace="openshift" uri="/api/v1/namespaces/openshift/pods/python"
AUDIT: id="b0a443ae-f7d8-408c-a355-eb9501fd5c59" response=200
```
```release-note
* Add basic audit logging
```
@ericchiang @smarterclayton @roberthbailey @erictune @ghodss
Automatic merge from submit-queue
Let kubectl delete rc and rs with DeleteOptions.OrphanDependents=false
so that when the garbage collector is enabled, RC and RS are deleted immediately without waiting for the garbage collector to orphan the pods.
There are no user-visible changes, so we don't need a release note.
cc @fabioy
Automatic merge from submit-queue
Name jobs created by sj deterministically
```release-note
Name the job created by scheduledjob (sj) deterministically with sj's name and a hash of job's scheduled time.
```
@erictune @soltysh
Automatic merge from submit-queue
Fix code generators-- make scheme building composable
I needed to make some changes to make my other refactoring possible and this got rather large.
We now provide a "SchemeBuilder" to help all of the api packages provide their scheme-building functions (addKnownTypes and friends) in a standardized way. This also allows generated deepcopies & conversions to be entirely self contained, the project will now build without them being present (as they can add themselves to the SchemeBuilder). (Although if you actually build without them, you will get reduced performance!)
Previously, there was no way to construct your own runtime.Scheme (e.g., to test), you had to use the api.Scheme object, which has all sorts of non-hermetic cruft in it. Now you can get everything from a package by calling the scheme builder's AddToScheme, including the generated functions, if they are present.
Next steps are to allow for declaring dependencies, and to standardize the registration & install code. (#25434)
Automatic merge from submit-queue
the observed usage should match those that have hard constraints
In the sync process, the quota will be replenished and the new observed usage will be summed from each evaluator. If the previousUsed set is not cleared, the new usage will be dirty, and some unused resources may still be left in it, as in the code below:
newUsage = quota.Mask(newUsage, matchedResources)
for key, value := range newUsage {
	usage.Status.Used[key] = value
}
So I think we should not set the value from previousUsed here.
Automatic merge from submit-queue
Quota admission errors if usage is negative
If quota observes negative usage for an artifact, that artifact could game the quota system.
This adds a global check in the quota system to catch this scenario for all evaluators.
/cc @deads2k
This removes the need to manually specify the version in all but unusual
cases.
For most installs this will effectively flip the default from
v1 (deprecated) to v2 so conservative existing installs may want to
manually configure "lb-version = v1" before upgrading.
Automatic merge from submit-queue
Start verifying golint on a per-package basis as packages are fixed
```release-note
Added `golint` for pkg/security/podsecuritypolicy/capabilities` along with validation.
```
This is a POC to start enabling `golint` checks on a per-package basis. We did this on the docker project and it was a great way for new contributors to help, and it benefits the project overall. All they have to do is add the package they fixed to the bash array in `hack/verify-golint.sh` and fix all the lint errors.
Eventually, when all the packages have been fixed, we can change the function to `find_files`, or something based off which files are changed in a patch set to verify `golint`.
Now, I used this specific package as the POC because I wanted to show the downside of this: it changes the API of the package.
Most of the times this arose in docker/docker, we decided that if someone wasn't importing their deps locally then it was their loss, but I'm not sure if you all will agree.
We should not bail out when we get an error; we should continue
processing other files/directories. We were returning the err passed in,
which was causing the processing to stop.
Fixes #30377
Automatic merge from submit-queue
check validation with no apps client in kubectl util factory
The autoscaling client check already exists:
if c.c.AutoscalingClient == nil {
	return errors.New("unable to validate: no autoscaling client")
}
so the following check for the apps client should refer to an apps client instead of an autoscaling client:
if c.c.AppsClient == nil {
	return errors.New("unable to validate: no autoscaling client")
}
Both plugins verify JWTs, but the OpenID Connect plugin performs
much worse when faced with cache misses. Reorder the plugins so
the service account plugin tries to authenticate a bearer token
first.
Automatic merge from submit-queue
[Kubelet] Rename `--config` to `--pod-manifest-path`. `--config` is deprecated.
This field holds the location of a manifest file or directory of manifest
files for pods the Kubelet is supposed to run. The name of the field
should reflect that purpose. I didn't change the flag name because that
API should remain stable.
The Docker API does not validate the tag/SHA; for example, all of the following
calls work, say, for an alpine image with short SHA "4e38e38c8ce0":
echo -e "GET /images/alpine:4e38e38c8ce0/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
echo -e "GET /images/alpine:4e38e38c/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
echo -e "GET /images/alpine:4/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
So we should check the response from the Docker API and look for the
tags or SHA explicitly.
Fixes #30355
Automatic merge from submit-queue
[GarbageCollector] measure latency
First commit is #27600.
In e2e tests, I measure the average time an item spends in the eventQueue (~1.5 ms), dirtyQueue (~13 ms), and orphanQueue (~37 ms). There is no stress test in e2e yet, so the numbers may not be useful.
Automatic merge from submit-queue
Simplify canonical element term in deepcopy
Replace the old functional canonical element term in deepcopy registration with direct struct instantiation.
The old way was an artifact of non-uniform pointer/non-pointer types in the signature of deepcopy function. Since we changed that to always be a pointer, we can simplify the code.
Issue #26093 identified pkill as one of the dependencies of the kubelet
which could be worked around. Build on the code introduced for pidof
and use a regexp to find the process(es) we need to send a signal to.
Related to #26093
Automatic merge from submit-queue
add metrics for workqueues
Adds prometheus metrics to work queues and enables them for the resourcequota controller. It would be easy to add this to all other workqueue based controllers and gather basic responsiveness metrics.
@kubernetes/rh-cluster-infra helps debug quota controller responsiveness problems.
Also provide a new --pod-manifest-path flag and deprecate the old
--config one.
This field holds the location of a manifest file or directory of manifest
files for pods the Kubelet is supposed to run. The name of the field
should reflect that purpose.
Automatic merge from submit-queue
kube-proxy: Propagate hostname to iptables proxier
Need to propagate the hostname (i.e. NodeName) from kube-proxy to the iptables proxier, to allow kube-proxy to determine local endpoints.
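A sketch of why the hostname is needed: with the node name available, the proxier can mark endpoints as local by comparing each endpoint's nodeName to its own hostname. Types and function names here are illustrative, not the actual proxier code.
```
package main

import "fmt"

type endpoint struct {
	IP       string
	NodeName string
}

// localEndpoints keeps only the endpoints running on this node.
func localEndpoints(hostname string, eps []endpoint) []endpoint {
	var local []endpoint
	for _, ep := range eps {
		if ep.NodeName == hostname {
			local = append(local, ep)
		}
	}
	return local
}

func main() {
	eps := []endpoint{
		{IP: "10.0.1.5", NodeName: "node-a"},
		{IP: "10.0.2.7", NodeName: "node-b"},
	}
	fmt.Println(localEndpoints("node-a", eps)) // only the endpoint on node-a
}
```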
Automatic merge from submit-queue
Remove kubelet dependency on pidof
Issue #26093 identified pidof as one of the dependencies of the kubelet
which could be worked around. In this PR, we just look at /proc
to construct the list of pids we need for a specified process
instead of running the "pidof" executable.
Related to #26093
Automatic merge from submit-queue
pkg/controller/garbagecollector: simplify mutexes.
pkg/controller/garbagecollector: simplified synchronization and made idiomatic.
Similar to #29598, we can rely on the zero-value construction behavior
to embed `sync.Mutex` into parent structs.
Automatic merge from submit-queue
apiserver: fix timeout handler
Protect access to the original writer. Panic if anything has written
into the original writer or the writer is hijacked when the request times out.
Fix #29001
/cc @smarterclayton @lavalamp
The next step would be to respect the request context once 1.7 is out.
Automatic merge from submit-queue
Cut the client repo, staging it in the main repo
Tracking issue: #28559
ref: https://github.com/kubernetes/kubernetes/pull/25978#issuecomment-232710174
This PR implements the plan a few of us came up with last week for cutting client into its own repo:
1. creating "_staging" (name is tentative) directory in the main repo, using a script to copy the client and its dependencies to this directory
2. periodically publishing the contents of this staging client to k8s.io/client-go repo
3. converting k8s components in the main repo to use the staged client. They should import the staged client as if the client were vendored (i.e., the import line should be `import "k8s.io/client-go/<package name>"`). This requirement is to ease step 4.
4. In the future, removing the staging area, and vendoring the real client-go repo.
The advantage of having the staging area is that we can continuously run integration/e2e tests with the latest client repo and the latest main repo, without waiting for the client repo to be vendored back into the main repo. This staging area will exist until our test matrix is vendoring both the client and the server.
In the above plan, the tricky part is step 3. This PR achieves it by creating a symlink under ./vendor, pointing to the staging area, so packages in the main repo can refer to the client repo as if it's vendored. To prevent the godep tool from messing up the staging area, we export the staged client to GOPATH in hack/godep-save.sh so godep will think the client packages are local and won't attempt to manage ./vendor/k8s.io/client-go.
This is a POC. We'll rearrange the directory layout of the client before merge.
@thockin @lavalamp @bgrant0607 @kubernetes/sig-api-machinery
Issue #26093 identified pidof as one of the dependencies of the kubelet
which could be worked around. In this PR, we just look at /proc
to construct the list of pids we need for a specified process
instead of running the "pidof" executable.
Related to #26093
Automatic merge from submit-queue
HPA: ignore scale targets whose replica count is 0
Disable HPA when the user (or another component) explicitly sets the replicas to 0.
Fixes#28603
@kubernetes/autoscaling @fgrzadkowski @kubernetes/rh-cluster-infra @smarterclayton @ncdc
Automatic merge from submit-queue
Modify predicate() interface to return all failed predicates
As stated in the comments below, this is the first step of showing the user all predicates that failed for a given node when scheduling of a given pod failed on every node.
ref #20064
Automatic merge from submit-queue
Remove default etcd validation in generic apiserver
Moving verification of `--etcd-servers` to the concrete apiserver instead of checking during defaulting in generic apiserver.
The context for this change is that heapster (will be another apiserver) doesn't need to have etcd underneath.