* feat(watch-cache): add benchmarks
* feat(kube-apiserver): faster watch-cache GetList
* refine: test case name
* - refine variable name to make it easier to convey its meaning
- add a comment to explain why we allocate a slice of runtime.Object instead of building a slice of ListObject.Items directly (a rough sketch of the idea follows below).
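As a hypothetical sketch of the idea behind that comment (the package, names, and objects below are illustrative, not the actual watch-cache code): the selected objects are collected into a `[]runtime.Object` and handed to the typed list in one step, rather than appending to the typed `Items` slice directly.

```go
// Hypothetical sketch, not the actual cacher implementation: build the
// result as []runtime.Object and set it on the typed list in one call.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	// Pretend these objects came out of the watch cache after filtering.
	cached := []*v1.Pod{
		{ObjectMeta: metav1.ObjectMeta{Name: "pod-a"}},
		{ObjectMeta: metav1.ObjectMeta{Name: "pod-b"}},
	}

	// Allocate a []runtime.Object with the right capacity up front...
	selected := make([]runtime.Object, 0, len(cached))
	for _, pod := range cached {
		selected = append(selected, pod)
	}

	// ...and let meta.SetList populate listObj.Items in one step instead of
	// building the typed Items slice element by element.
	listObj := &v1.PodList{}
	if err := meta.SetList(listObj, selected); err != nil {
		panic(err)
	}
	fmt.Println(len(listObj.Items)) // 2
}
```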
* migrated pod-security-admission to contextual logging
Signed-off-by: Naman <namanlakhwani@gmail.com>
* updating test files for contextual logging
Signed-off-by: Naman <namanlakhwani@gmail.com>
* small nit
Signed-off-by: Naman <namanlakhwani@gmail.com>
* doing inline if
Signed-off-by: Naman <namanlakhwani@gmail.com>
---------
Signed-off-by: Naman <namanlakhwani@gmail.com>
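For readers unfamiliar with the migration referenced above, here is a minimal sketch of what moving a function to contextual logging typically looks like; evaluatePod is hypothetical and stands in for whatever pod-security-admission code was actually migrated.

```go
// Minimal contextual-logging sketch; evaluatePod is a made-up example.
package main

import (
	"context"

	"k8s.io/klog/v2"
)

// Before such a migration this might be a global call like
// klog.V(2).Infof("evaluating pod %s/%s", ns, name). Afterwards the logger is
// pulled from the context, so callers and tests control where output goes.
func evaluatePod(ctx context.Context, ns, name string) {
	logger := klog.FromContext(ctx)
	logger.V(2).Info("Evaluating pod", "namespace", ns, "name", name)
}

func main() {
	// Production callers pass down a context carrying the logger;
	// klog.Background() provides the global fallback logger.
	ctx := klog.NewContext(context.Background(), klog.Background())
	evaluatePod(ctx, "default", "nginx")
}
```

In unit tests, k8s.io/klog/v2/ktesting's NewTestContext is the usual way to obtain a per-test logger and context, which is typically what test-file updates in this kind of migration involve.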
This moves the hack/ directory and scripts to the examples dir, which is
a distinct module. This avoids some Go unpleasantness around module
boundaries and just makes more sense.
When running this script more than once on Debian and Ubuntu, we fail to
chown -R the CERT_DIR because this file is owned by root while the CERT_DIR
is owned by the unprivileged user running the script.
Let's remove the file (something we can always safely do) before generating
the certs. This fixes the problem on Debian and Ubuntu local setups.
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
By generating the unique name in advance, the label also can be set to a
matching value directly in the Create request. This makes test startup in
test/integration/scheduler_perf a bit faster because the extra patching can be
avoided.
It also leads to a better label because previously, the unique label value
didn't match the node name. This is required for simulating dynamic resource
allocation, which relies on the label to track where an allocated claim is
available.
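A hedged sketch of the approach described above; the package, helper name, and label key are made up for illustration and this is not the actual scheduler_perf code.

```go
// Illustrative only: generate the unique node name in advance so a label
// matching it can be set in the same Create request, with no follow-up Patch.
package perfnode

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/rand"
	"k8s.io/client-go/kubernetes"
)

func createTestNode(ctx context.Context, client kubernetes.Interface) (*v1.Node, error) {
	// Generate the unique suffix client-side instead of relying on
	// metadata.generateName, so the final name is known before the request.
	name := fmt.Sprintf("scheduler-perf-node-%s", rand.String(5))

	node := &v1.Node{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			// The label value already matches the node name (label key is
			// made up here), so no extra patching is needed after Create.
			Labels: map[string]string{"instance/name": name},
		},
	}
	return client.CoreV1().Nodes().Create(ctx, node, metav1.CreateOptions{})
}
```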
The pod worker may receive a new pod which is marked as terminal in
the runtime cache. This can occur if a pod is marked as terminal and the
kubelet is restarted.
The kubelet needs to drive these pods through the termination state
machine. If, upon restart, the kubelet receives a pod which is terminal
based on the runtime cache, it indicates that the pod finished
`SyncTerminatingPod` but did not complete `SyncTerminatedPod`. The
pod worker needs to ensure that `SyncTerminatedPod` will run for these pods.
To accomplish this, set `finished=false` on the pod sync status to
drive the pod through the rest of the state machine.
This ensures that the status manager and other kubelet subcomponents
(e.g. the volume manager) are aware of this pod and properly clean up
all of its resources after the kubelet is restarted.
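A simplified sketch of that intent follows; the types, fields, and function below are pared-down stand-ins for illustration, not the kubelet's real pod worker structures.

```go
// Simplified, illustrative stand-ins; not the kubelet's real pod worker code.
package main

import "fmt"

// runtimePod is a pared-down view of what the runtime cache reports.
type runtimePod struct {
	name     string
	terminal bool // all containers exited, i.e. SyncTerminatingPod already ran
}

// podSyncStatus is a pared-down version of the pod worker's per-pod state.
type podSyncStatus struct {
	finished bool // SyncTerminatedPod completed and resources were cleaned up
}

// syncKnownPod models the behavior described above: when a newly observed pod
// is already terminal in the runtime cache (e.g. after a kubelet restart),
// keep finished=false so SyncTerminatedPod still runs and the status manager,
// volume manager, etc. clean up the pod's resources.
func syncKnownPod(p runtimePod, statuses map[string]*podSyncStatus) {
	st, ok := statuses[p.name]
	if !ok {
		st = &podSyncStatus{}
		statuses[p.name] = st
	}
	if p.terminal {
		// The pod finished SyncTerminatingPod before the restart, but there is
		// no record of SyncTerminatedPod completing, so do not mark it finished.
		st.finished = false
	}
}

func main() {
	statuses := map[string]*podSyncStatus{}
	syncKnownPod(runtimePod{name: "web-0", terminal: true}, statuses)
	fmt.Println("finished:", statuses["web-0"].finished) // false: SyncTerminatedPod still pending
}
```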
While making this change, also update the comments to provide a bit more
background around why the kubelet needs to read the runtime pod cache
for newly synced terminal pods.
Signed-off-by: David Porter <david@porter.me>