Automatic merge from submit-queue
Quota ignores pod compute resources on updates
Scenario:
1. define a quota Q that tracks memory and cpu
2. create pod P that uses memory=100Mi, cpu=100m
3. update pod P to use memory=50Mi, cpu=10m
Expected Results:
Step 3 should fail with validation error.
Quota Q should not have changed.
Actual Results:
Step 3 fails validation, but quota Q is decremented anyway: memory usage drops by 50Mi and cpu usage by 90m. This is because the quota was updated even though the pod update was going to fail validation.
Fix:
Quota should only support modifying pod compute resources when pods themselves support modifying their compute resources.
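A minimal sketch of what that implies for the quota evaluator, using a hypothetical `handlesOperation` hook rather than the evaluator's real API:

```go
package quota

// handlesOperation reports whether the pod quota evaluator should
// recalculate usage for a given admission operation. UPDATE is
// excluded: since pods do not support modifying compute resources in
// place, an update either leaves usage unchanged or fails validation,
// and charging it anyway is what let a doomed update decrement quota.
func handlesOperation(op string) bool {
	switch op {
	case "CREATE", "DELETE":
		return true
	default:
		// Includes UPDATE, until in-place resource resizing exists.
		return false
	}
}
```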
This also fixes https://github.com/kubernetes/kubernetes/issues/24352
/cc @smarterclayton - this is what we discussed.
fyi: @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
add user.Info.GetExtra
I found myself wanting this field (or something like it) when trying to plumb through the information about which scopes a particular token has.
Only the token authenticators have that information and I don't want tokens to leak past the authenticator. I thought about extending the `authenticator.Token` interface to include scopes (`[]string`), but that felt a little too specific for what I wanted to do. I came up with this as an alternative.
It allows the token authenticator to fill in the information and authorizers already get handed the `user.Info`. It means that implementors can choose to tie the layers together if they wish, using whatever data they think is best.
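For illustration, a sketch of the shape this adds (modeled on the `user.Info` interface; exact details may differ):

```go
package user

// Info describes an authenticated user.
type Info interface {
	GetName() string
	GetGroups() []string
	// GetExtra carries arbitrary data attached by the authenticator,
	// e.g. the scopes a token has, so authorizers can consume it
	// without the token itself leaking past the authentication layer.
	GetExtra() map[string][]string
}

// DefaultInfo is a trivial Info implementation.
type DefaultInfo struct {
	Name   string
	Groups []string
	Extra  map[string][]string
}

func (i *DefaultInfo) GetName() string               { return i.Name }
func (i *DefaultInfo) GetGroups() []string           { return i.Groups }
func (i *DefaultInfo) GetExtra() map[string][]string { return i.Extra }
```

A token authenticator could then return something like `&DefaultInfo{Name: "bob", Extra: map[string][]string{"scopes": {"read"}}}`, and an authorizer reads the scopes back via `GetExtra()`.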
@kubernetes/kube-iam
generic pod-per-node functionality for testing - 2 node test only
- update framework to decompose pod vs svc creation for composition.
- remove the hard-coded 2 in favor of a pointer to `--scale`
Automatic merge from submit-queue
Collect and expose runtime's image storage usage via Kubelet's /stats/summary endpoint
This information is useful to users since docker images are typically not stored on the root filesystem.
Kubelet will also consume this feature in the future to decide if evicting images will help with disk usage on the nodes.
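For illustration, a rough sketch of the shape of the exposed stats (hypothetical field layout; the kubelet's actual summary API types may differ):

```go
package stats

// FsStats describes usage of a filesystem.
type FsStats struct {
	// AvailableBytes is the storage space available on the filesystem.
	AvailableBytes *uint64 `json:"availableBytes,omitempty"`
	// CapacityBytes is the total capacity of the filesystem.
	CapacityBytes *uint64 `json:"capacityBytes,omitempty"`
	// UsedBytes is the space consumed, here by the runtime's images.
	UsedBytes *uint64 `json:"usedBytes,omitempty"`
}

// RuntimeStats exposes stats about the container runtime, including
// the filesystem holding its images, which is typically separate from
// the node's root filesystem.
type RuntimeStats struct {
	ImageFs *FsStats `json:"imageFs,omitempty"`
}
```

A consumer would fetch the summary from the Kubelet endpoint and read something like `runtime.imageFs.usedBytes` to see image storage separately from the root filesystem.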
cc @kubernetes/sig-node
Automatic merge from submit-queue
Move internal types of job from pkg/apis/extensions to pkg/apis/batch
This addresses the job part of #23216; it is still WIP. Will notify once finished. I'd like to have it in before starting work on ScheduledJob.
@lavalamp @erictune fyi
Automatic merge from submit-queue
Add node e2e test for mirror pod.
This adds a node e2e test for mirror pods.
After this gets merged, I'll revisit the mirror pod manager PR. #18638
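For context, the core assertion such a test makes is roughly the following (illustrative helper, not the test's actual code): the kubelet should create an API-visible "mirror" of each static pod, identifiable by annotation.

```go
package e2e

// isMirrorPod reports whether a pod is the kubelet-created mirror of
// a static pod, which is marked with the kubernetes.io/config.mirror
// annotation.
func isMirrorPod(annotations map[string]string) bool {
	_, ok := annotations["kubernetes.io/config.mirror"]
	return ok
}
```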
/cc @yujuhong
Automatic merge from submit-queue
Use mCPU as CPU usage unit, add version in PerfData, and fix memory usage bug.
Partially addressed #24436.
This PR:
1) Changes the CPU usage unit to "mCPU".
2) Adds a version to PerfData; perfdash will only support the newest version from now on.
3) Fixes a mistake in calculating the memory usage average.
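For illustration, a sketch of a versioned PerfData layout (field names are assumptions, not necessarily the PR's exact types):

```go
package perftype

// PerfData is a versioned container for performance samples.
type PerfData struct {
	// Version lets perfdash reject data from older builds instead of
	// misinterpreting it; only the newest version is supported.
	Version string `json:"version"`
	// DataItems hold individual samples, e.g. CPU usage in mCPU.
	DataItems []DataItem `json:"dataItems"`
}

// DataItem is a single labeled measurement.
type DataItem struct {
	Data   map[string]float64 `json:"data"`
	Unit   string             `json:"unit"` // e.g. "mCPU" or "MB"
	Labels map[string]string  `json:"labels,omitempty"`
}
```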
/cc @vishh
Add tests for watch behavior in both protocols (HTTP and WebSocket)
against all 3 media types. Adopt the
`application/vnd.kubernetes.protobuf;stream=watch` media type for the
content that comes back from a watch call so that it can be
distinguished from a Status result.
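As a sketch of what a raw client would send to opt into the new framing (URL and transport setup are illustrative):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// URL and server are illustrative; a real client would carry
	// cluster credentials on the transport.
	req, err := http.NewRequest("GET",
		"https://apiserver.example/api/v1/pods?watch=true", nil)
	if err != nil {
		log.Fatal(err)
	}
	// The stream=watch parameter lets the response framing be told
	// apart from a single protobuf-encoded Status object.
	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf;stream=watch")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
}
```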
Automatic merge from submit-queue
Add timeout to e2e network connectivity checks
Some e2e tests use wget to check connectivity, and the default e2e
timeout is 900s. This change allows the timeout to be specified on a
check-by-check basis. This will also make the check useful for negative
checks (like those used by openshift to validate isolation) since a
short timeout is suggested where connectivity is not expected.
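A sketch of what a per-check timeout looks like in practice, assuming a hypothetical `connectivityCheckCmd` helper around the wget probe:

```go
package main

import "fmt"

// connectivityCheckCmd builds a wget probe with a per-check timeout
// (hypothetical helper, not the e2e framework's actual code). A short
// timeout is what makes negative checks practical: when isolation is
// expected to block traffic, the probe should fail in seconds rather
// than after the suite-wide 900s default.
func connectivityCheckCmd(target string, timeoutSeconds int) []string {
	return []string{
		"wget", "-q",
		fmt.Sprintf("--timeout=%d", timeoutSeconds),
		"-O", "-", target,
	}
}

func main() {
	fmt.Println(connectivityCheckCmd("http://svc.example:8080", 5))
}
```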
Automatic merge from submit-queue
make storage enablement, serialization, and location orthogonal
This allows a caller (command-line, config, code) to specify multiple separate pieces of config information regarding storage and have them properly composed at runtime. The information provided is exposed through interfaces to allow alternate implementations, which allows us to change the expression of the config moving forward. I also fixed up the types to be correct as I moved through.
The same options still exist, but they're composed slightly differently:
1. specify target etcd servers per Group or per GroupResource
1. specify storage GroupVersions per Groups or per GroupResource
1. specify etcd prefixes per GroupVersion or per GroupResource
1. specify that multiple GroupResources share the same location in etcd
1. enable GroupResources by GroupVersion or by GroupResource whitelist or GroupResource blacklist
The `storage.Interface` is built per GroupResource by the following steps (a rough sketch follows the list):
1. find the set of possible storage GroupResources based on the priority list of cohabitators
1. choose a GroupResource from the set by looking at which Groups have the resource enabled
1. find the target etcd server, etcd prefix, and storage encoding based on the GroupResource
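A minimal sketch of how such layered resolution might compose, with entirely hypothetical names (`resolver`, `configFor`); the real interfaces in the PR may look quite different:

```go
package storageconfig

// GroupResource identifies a resource within an API group.
type GroupResource struct{ Group, Resource string }

// Config is the resolved per-resource storage information.
type Config struct {
	EtcdServers []string
	EtcdPrefix  string
	Encoding    string // storage GroupVersion, e.g. "batch/v1"
}

// resolver layers GroupResource overrides over per-Group defaults, so
// etcd servers, prefixes, and encodings can each be set at either
// granularity independently.
type resolver struct {
	groupDefaults     map[string]Config
	resourceOverrides map[GroupResource]Config
}

// configFor composes the effective config for one GroupResource.
func (r resolver) configFor(gr GroupResource) Config {
	cfg := r.groupDefaults[gr.Group]
	if o, ok := r.resourceOverrides[gr]; ok {
		if len(o.EtcdServers) > 0 {
			cfg.EtcdServers = o.EtcdServers
		}
		if o.EtcdPrefix != "" {
			cfg.EtcdPrefix = o.EtcdPrefix
		}
		if o.Encoding != "" {
			// e.g. a cohabiting resource stored under another group
			cfg.Encoding = o.Encoding
		}
	}
	return cfg
}
```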
The API server can have its resources separately enabled, but for now I've kept them linked.
@liggitt I think we need this (or something like it) to be able to go from config to these interfaces. Given another round of refactoring, we may be able to reshape these to be more forward driving.
@smarterclayton this is important for rebasing and for a seamless 1.2 to 1.3 migration for us.
Automatic merge from submit-queue
Increase provisioning test timeouts.
We've encountered flakes in our e2e infrastructure when kubelet took more than one minute to detach a volume used by a deleted pod.
Let's increase the wait period from 1 to 3 minutes. This slows down the test by 2 minutes, but it makes the test more stable.
In addition, when kubelet cannot detach a volume for 3 minutes, let the test wait for an additional recycle controller retry interval (10 minutes) and hope the volume is deleted by then. This should not increase the usual test time; it makes the test stable when kubelet is _extremely_ slow to release the volume.
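A sketch of the two-stage budget this describes, with hypothetical names (`waitForPVDeletion` is not the test's actual helper):

```go
package e2e

import (
	"errors"
	"time"
)

// waitForPVDeletion polls until the check reports the volume is gone.
// The budget mirrors the test change: the normal 3-minute wait plus
// one extra recycle-controller retry interval to absorb an extremely
// slow kubelet.
func waitForPVDeletion(check func() (bool, error)) error {
	const (
		pollInterval  = 5 * time.Second
		normalTimeout = 3 * time.Minute
		retryInterval = 10 * time.Minute // recycle controller retry period
	)
	deadline := time.Now().Add(normalTimeout + retryInterval)
	for time.Now().Before(deadline) {
		done, err := check()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(pollInterval)
	}
	return errors.New("volume was not released and deleted in time")
}
```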
Fixes: #24161
Automatic merge from submit-queue
Fix unintended change of Service.spec.ports[].nodePort during kubectl apply
Please refer to #23551 for more detail. @bgrant0607 I think this simple fix should be ok to reuse nodePort. @thockin ptal.
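A sketch of the intended behavior (illustrative, not kubectl's actual merge code):

```go
package apply

// mergeNodePort: when the applied manifest leaves nodePort unset,
// carry over the port already allocated on the live Service instead
// of clearing it and forcing a reallocation.
func mergeNodePort(liveNodePort, appliedNodePort int32) int32 {
	if appliedNodePort == 0 { // 0 means "unset" in the applied config
		return liveNodePort
	}
	return appliedNodePort
}
```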
Release note: Fix unintended change of `Service.spec.ports[].nodePort` during `kubectl apply`.
Automatic merge from submit-queue
Add RC and container ports to scheduler benchmark
Fixes #23263
Ref #24408
However, scheduler throughput is still ~140 initially, whereas in reality we see 35-40. There are still significant differences we should understand.
@hongchaodeng @xiang90
Automatic merge from submit-queue
Cluster Verification Framework
I've spent the last few days looking at the general patterns of verification we have that we tend to reuse in the e2es. Basically, we need
- label filters
- forEach and WaitFor (where forEach doesn't necessarily waitFor anything).
- timeouts
- multiple phases (reusable definition of state)
- an extensible way to define cluster state that can evolve over time in a data object rather than as a set of parameters that have magic semantics
This PR
- implements the abstract functionality above declaratively, and without hidden semantics.
- addresses the sprawling duplicate methods in #23540, so that we can phase out the wrapper methods and replace them with well defined, extensible semantics for cluster state.
- fixes the recently discovered #23730 issue (where kubectl.go is relying on examples.go, which is obviously wacky) by using the new framework to implement forEachPod in just a couple of lines and migrating the wrapper function into framework.go.
There is some cleanup to do here, but this is already working for a couple of important use cases (the spark, cassandra, ..., kubectl tests). I played with a few different ideas, and this wound up seeming to be the most natural implementation from a usability standpoint.
In any case, I just thought I'd push this up as a first iteration; open to feedback.
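As a rough sketch of the shape such a framework might take (all names below are illustrative, not the PR's actual API):

```go
package framework

import (
	"fmt"
	"time"
)

// ClusterState declares desired state as data, so new criteria become
// fields rather than more positional parameters with magic semantics.
type ClusterState struct {
	LabelSelector string
	MinPodsReady  int
}

// Verification binds a state declaration to timeout semantics.
type Verification struct {
	State   ClusterState
	Timeout time.Duration
}

// ForEach visits every matching pod without waiting for anything.
func (v Verification) ForEach(pods []string, fn func(pod string)) {
	for _, p := range pods {
		fn(p)
	}
}

// WaitFor polls until the declared state holds or the timeout expires.
func (v Verification) WaitFor(readyCount func() int) error {
	deadline := time.Now().Add(v.Timeout)
	for time.Now().Before(deadline) {
		if readyCount() >= v.State.MinPodsReady {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %d ready pods matching %q",
		v.State.MinPodsReady, v.State.LabelSelector)
}
```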
@kubernetes/sig-testing @timothysc
Automatic merge from submit-queue
Fix DNS test for larger clusters
On GKE, we scale the number of DNS pods based on the cluster size. For
testing on larger clusters, relax the DNS pod check.