Most of these could have been refactored automatically, but it would
have been uglier. The unsophisticated tooling left lots of unnecessary
struct -> pointer -> struct transitions.
This is gross, but because NewDeleteOptions is used by various parts of
storage that still pass around pointers, the return type can't be
changed without significant refactoring within the apiserver. I think
this would be good to clean up, but I want to minimize apiserver-side
changes as much as possible in the client signature refactor.
HPA needs metrics for exited init containers before it will
take action. By setting memory and CPU usage to zero for any
containers that cAdvisor didn't provide statistics for, we
are assured that HPA will be able to correctly calculate
pod resource usage.
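A minimal sketch of the zero-fill idea; the type and function names below are hypothetical stand-ins, not the kubelet's actual stats types:
```
// Sketch only: ContainerUsage and the stats map are hypothetical stand-ins
// for the real summary/stats types used by the kubelet.
package stats

type ContainerUsage struct {
	CPUNanoCores     uint64
	MemoryWorkingSet uint64
}

// fillMissingUsage reports zero usage for any container (for example an
// exited init container) that cAdvisor returned no statistics for, so the
// HPA can still compute usage for the whole pod.
func fillMissingUsage(containerNames []string, cadvisorStats map[string]ContainerUsage) map[string]ContainerUsage {
	out := make(map[string]ContainerUsage, len(containerNames))
	for _, name := range containerNames {
		if usage, ok := cadvisorStats[name]; ok {
			out[name] = usage
			continue
		}
		out[name] = ContainerUsage{} // zero CPU and memory
	}
	return out
}
```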
The status manager syncBatch() method processes the current state
of the cache, which should include all entries in the channel. Flush
the channel before we run syncBatch to avoid unnecessary work and
to unblock pod workers when the node is congested.
Discovered while investigating long shutdown intervals on the node
where the status channel stayed full for tens of seconds.
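A rough sketch of the flush-then-sync pattern; the channel and type names are illustrative, not the status manager's exact code:
```
package statusmanager

// manager is a stand-in for the status manager; names are illustrative.
type manager struct {
	podStatusChannel chan struct{}
}

func (m *manager) syncBatch() { /* processes the full cached state */ }

// flushThenSync drains any queued per-pod status signals before running
// syncBatch; syncBatch already walks the entire cached state, so handling
// the queued entries individually afterwards would be redundant work.
func (m *manager) flushThenSync() {
	for {
		select {
		case <-m.podStatusChannel:
			// Discard: the cache already reflects this update.
		default:
			m.syncBatch()
			return
		}
	}
}
```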
Add a for loop around the select statement to avoid unnecessarily
re-invoking the wait.Forever closure for every event.
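Roughly the shape of the resulting loop (channel and handler names are placeholders); the inner for loop keeps handling events inside a single invocation of the closure:
```
package statusloop

import "k8s.io/apimachinery/pkg/util/wait"

// startLoop shows the shape of the change: the inner for loop keeps the
// select running inside one invocation of the closure, so wait.Forever is
// not re-entered for every event.
func startLoop(updates <-chan string, ticks <-chan struct{}) {
	go wait.Forever(func() {
		for {
			select {
			case u := <-updates:
				handleUpdate(u)
			case <-ticks:
				handleTick()
			}
		}
	}, 0)
}

func handleUpdate(string) {}
func handleTick()         {}
```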
When constructing the API status of a pod, if the pod is marked for
deletion no containers should be started. Previously, if a container
inside a terminating pod failed to start due to a container
runtime error (one that populates the reasonCache), the reasonCache would
remain populated (it is only updated by syncPod for non-terminating
pods) and the delete action on the pod would be delayed until the
reasonCache entry expired due to activity from other pods.
This dramatically reduces the amount of time the Kubelet waits to
delete pods that are terminating and encountered a container runtime
error.
After a pod reaches a terminal state and all containers are complete
we can delete the pod from the API server. The dispatchWork method
needs to wait for all container status to be available before invoking
delete. Even after the worker stops, status updates will continue to
be delivered and the sync handler will continue to sync the pods, so
dispatchWork gets multiple opportunities to see status.
The previous code assumed that a pod in the Failed or Succeeded phase had
no running containers, but an evicted or deleted pod could still have
running containers whose status needs to be reported.
This modifies the earlier test to guarantee that the "fallback" exit
code 137 is never reported, to match the expectation that all pods
exit with a valid status for all containers (unless some exceptional
failure like eviction occurs while the test is running).
The kubelet must not allow a container that was reported failed in a
restartPolicy=Never pod to be reported to the apiserver as success.
If a client deletes a restartPolicy=Never pod, dispatchWork and the
status manager race to update the container status. When dispatchWork
(specifically podIsTerminated) returns true, it means all containers
are stopped, which means the container status is accurate. However,
the TerminatePod method then clears this status. This results in a
pod that has been reported with status.phase=Failed getting reset to
status.phase=Succeeded, which is a violation of the guarantees around
terminal phase.
Ensure the Kubelet never reports that a container succeeded when it
never actually ran, by guarding the TerminatePod loop so that it never
reports exit code 0 in the absence of container status.
When NodeStage times out without preparing the destination device and the
user deletes the corresponding pod, the driver may continue staging the volume in
background. Kubernetes must call NodeUnstage to "cancel" this operation.
Therefore TearDownDevice should be called even when the target directory
does not exist (yet).
When NodePublish times out and the user deletes the corresponding pod, the driver
may continue publishing the volume. In order to "cancel" this operation,
Kubernetes must issue NodeUnpublish and wait until it finishes.
Therefore, NodeUnpublish should be called even if the target directory
(created by the driver) does not exist yet.
Volume mount should be marked as uncertain after NodeStage / NodePublish
timeout or a similar error, when the driver can continue with the operation in
the background.
The scheduler doesn't really need to know in detail which reasons
rendered a node unusable for a pod. All it needs from the volume
binder is a list of reasons that it can then present to the user.
This seems a bit cleaner. But the main reason for the change is that
it simplifies the checking of CSI inline volumes and perhaps later
capacity checking. Both will lead to new failure reasons, which then
can be added without changing the interface.
PodOverhead is now a beta feature and set to true by default. No need to
override to true during testing.
Signed-off-by: Eric Ernst <eric.ernst@intel.com>
GetAllocateResourcesPodAdmitHandler(). It is named as such to reflect its
new function. Also remove the Topology Manager feature gate check at the
higher-level kubelet.go, as it is now done in GetAllocateResourcesPodAdmitHandler().
- allocatePodResources logic altered to allow for container by container
device allocation.
- New type PodReusableDevices
- New field in devicemanager devicesToReuse
- Where previously we called manager.AddContainer(), we now call both
manager.Allocate() and manager.AddContainer().
- Some test cases now have two expected errors, one each
from Allocate() and AddContainer(). Existing outcomes are unchanged.
GetTopologyPodAdmitHandler() now returns a lifecycle.PodAdmitHandler
type instead of the TopologyManager directly. The handler it returns
is generally responsible for attempting to allocate any resources that
require a pod admission check. When the TopologyManager feature gate
is on, this comes directly from the TopologyManager. When it is off,
we simply attempt the allocations ourselves and fail the admission
on an unexpected error. The higher-level kubelet.go feature gate
check will be removed in an upcoming PR.
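A sketch of the handler's general shape; the fallback allocator and the struct below are assumptions, not the exact kubelet code:
```
package kubeletlifecycle

import (
	v1 "k8s.io/api/core/v1"

	"k8s.io/kubernetes/pkg/kubelet/lifecycle"
)

// allocator is a hypothetical stand-in for whatever component performs the
// resource allocations when the TopologyManager feature gate is off.
type allocator interface {
	Allocate(pod *v1.Pod) error
}

type resourceAllocateHandler struct {
	topologyManagerEnabled bool
	topologyManager        lifecycle.PodAdmitHandler
	allocator              allocator
}

// Admit sketches the behaviour described above. With the feature gate on,
// admission is delegated to the TopologyManager; with it off, we attempt
// the allocations directly and only reject the pod on an unexpected error.
func (h *resourceAllocateHandler) Admit(attrs *lifecycle.PodAdmitAttributes) lifecycle.PodAdmitResult {
	if h.topologyManagerEnabled {
		return h.topologyManager.Admit(attrs)
	}
	if err := h.allocator.Allocate(attrs.Pod); err != nil {
		return lifecycle.PodAdmitResult{
			Admit:   false,
			Reason:  "UnexpectedAdmissionError",
			Message: err.Error(),
		}
	}
	return lifecycle.PodAdmitResult{Admit: true}
}
```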
In an e2e run, out of 1857 pod status updates executed by the
Kubelet, 453 (25%) were no-ops - they only contained the UID of
the pod and no status changes. If the patch is a no-op we can
avoid invoking the API server and continue.
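One way the no-op check could look, as a sketch; the actual status manager check may be implemented differently:
```
package statusmanager

import (
	"encoding/json"
	"reflect"
)

// isNoOpStatusPatch reports whether a computed status patch would change
// nothing: a no-op patch contains only the pod UID precondition. Sketch
// only; names and the exact comparison are assumptions.
func isNoOpStatusPatch(patchBytes []byte, uid string) (bool, error) {
	var patch map[string]interface{}
	if err := json.Unmarshal(patchBytes, &patch); err != nil {
		return false, err
	}
	onlyUID := map[string]interface{}{
		"metadata": map[string]interface{}{"uid": uid},
	}
	return reflect.DeepEqual(patch, onlyUID), nil
}
```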
persistent-disks for each instance type correctly.
Tested: Verified the kubernetes node object shows correct limits for
various instance types (n1-, c2-, e2-).
This commit removes AlgorithmSource from the v1alpha2 scheduler ComponentConfig API.
It also creates the necessary manual conversion functions and updates defaulting to accept the change.
In https://github.com/kubernetes/kubernetes/pull/88372, we added the
ability to inject errors to the `FakeImageService`. Use this ability to
test the error paths executed by the `kubeGenericRuntimeManager` when
underlying `ImageService` calls fail.
I don't foresee this change having a huge impact, but it should set a
good precedent for test coverage, and should the failure-case behavior
become more "interesting" or risky in the future, we will already have
the scaffolding in place to expand the tests.
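A sketch of what such a test can look like, using a local stand-in for the fake image service rather than the real CRI testing helper:
```
package kuberuntime_test

import (
	"errors"
	"testing"
)

// fakeImageService is a local stand-in for the real FakeImageService with
// error injection (the real helper lives in the CRI testing package and its
// method names may differ from this sketch).
type fakeImageService struct{ injectedErr error }

func (f *fakeImageService) PullImage(image string) (string, error) {
	if f.injectedErr != nil {
		return "", f.injectedErr
	}
	return "sha256:deadbeef", nil
}

func TestPullImageReturnsInjectedError(t *testing.T) {
	fake := &fakeImageService{injectedErr: errors.New("registry unavailable")}
	if _, err := fake.PullImage("busybox"); err == nil {
		t.Fatal("expected the injected failure to surface, got nil")
	}
}
```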
Extended the GetMountMedium function to check whether a hugetlbfs volume
is mounted with a page size equal to the medium size.
The page size is obtained from the 'pagesize' mount option of the
mounted hugetlbfs volume.
This implementation allows a Pod to request multiple hugepage resources
of different sizes and to mount hugepage volumes using the storage medium
HugePages-<size>, e.g.
spec:
  containers:
  - resources:
      requests:
        hugepages-2Mi: 2Mi
        hugepages-1Gi: 2Gi
    volumeMounts:
    - mountPath: /hugepages-2Mi
      name: hugepage-2mi
    - mountPath: /hugepages-1Gi
      name: hugepage-1gi
    ...
  volumes:
  - name: hugepage-2mi
    emptyDir:
      medium: HugePages-2Mi
  - name: hugepage-1gi
    emptyDir:
      medium: HugePages-1Gi
NOTE: This is an alpha feature.
Feature gate HugePageStorageMediumSize must be enabled for it to work.
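A sketch of how the 'pagesize' mount option could be parsed and compared; names are illustrative, the real change extends GetMountMedium in the kubelet's mount helpers:
```
package hugepages

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/api/resource"
)

// pageSizeFromMountOptions extracts the hugetlbfs page size from mount
// options such as "rw,relatime,pagesize=2M" (already split into a slice),
// so it can be compared against the medium size requested by the volume
// (HugePages-2Mi, HugePages-1Gi, ...).
func pageSizeFromMountOptions(opts []string) (resource.Quantity, error) {
	for _, opt := range opts {
		if !strings.HasPrefix(opt, "pagesize=") {
			continue
		}
		val := strings.TrimPrefix(opt, "pagesize=")
		// hugetlbfs reports binary sizes with single-letter suffixes
		// ("2M" means 2Mi), so normalize before parsing.
		if !strings.HasSuffix(val, "i") {
			val += "i"
		}
		return resource.ParseQuantity(val)
	}
	return resource.Quantity{}, fmt.Errorf("no pagesize option found in hugetlbfs mount options")
}
```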
The pod, container, and emptyDir volumes can all trigger evictions
when their limits are breached. To ensure that administrators can
alert on these types of evictions, update kubelet_evictions to include
the following signal types:
* ephemeralcontainerfs.limit - container ephemeral storage breaches its limit
* ephemeralpodfs.limit - pod ephemeral storage breaches its limit
* emptydirfs.limit - pod emptyDir storage breaches its limit
The StatefulSet controller cleans up ControllerRevisions at the end of
the reconcile loop. If something goes wrong during reconcile, it bails
out without actually performing this step. This commit moves the cleanup
to a deferred function call to guarantee it will be executed.
Fixes issue: https://github.com/kubernetes/kubernetes/issues/85690
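A minimal sketch of the deferred-cleanup shape, with hypothetical method names:
```
package statefulset

import "log"

// controller is a stand-in with hypothetical method names; the point is the
// defer: revision cleanup now runs even when reconcile bails out early.
type controller struct{}

func (c *controller) reconcile(setName string) (err error) {
	revisions := c.listRevisions(setName)
	defer func() {
		// Best-effort cleanup; it must not mask the error from reconcile.
		if cleanupErr := c.truncateHistory(setName, revisions); cleanupErr != nil {
			log.Printf("failed to clean up revisions for %s: %v", setName, cleanupErr)
		}
	}()
	// Any early return below still triggers the deferred cleanup.
	return c.update(setName, revisions)
}

func (c *controller) listRevisions(name string) []string               { return nil }
func (c *controller) truncateHistory(name string, revs []string) error { return nil }
func (c *controller) update(name string, revs []string) error          { return nil }
```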
Filesystem mismatch is a special event. It could indicate
either that the user has asked for an incorrect filesystem or that there is
an error from which the mount operation cannot recover on retry.
Co-Authored-By: Jordan Liggitt <jordan@liggitt.net>
The CSI volume plugin creates a number of files/directories when processing block
volumes. These files must be cleaned up when the plugin is done with the
volume, i.e. at the end of TearDownDevice().
This combines the container names into a single list because spreading them
across a long, variable-length string isn't particularly useful in the
context of a streaming error message.
- Add handlers for service account issuer metadata.
- Add option to manually override JWKS URI.
- Add unit and integration tests.
- Add a separate ServiceAccountIssuerDiscovery feature gate.
Additional notes:
- If not explicitly overridden, the JWKS URI will be based on
the API server's external address and port.
- The metadata server is configured with the validating key set rather
than the signing key set. This allows for key rotation because tokens
can still be validated by the keys exposed at the JWKS URI, even if the
signing key has been rotated (note this may still be a short window if
tokens have short lifetimes).
- The trust model of OIDC discovery requires that the relying party
fetch the issuer metadata via HTTPS; the trust of the issuer metadata
comes from the server presenting a TLS certificate with a trust chain
back to the relying party's root(s) of trust. For tests, we use
a local issuer (https://kubernetes.default.svc) for the certificate
so that workloads within the cluster can authenticate it when fetching
OIDC metadata. An API server cannot validly claim https://kubernetes.io,
but within the cluster, it is the authority for kubernetes.default.svc,
according to the in-cluster config.
Co-authored-by: Michael Taufen <mtaufen@google.com>
This allows the proxier to cache local addresses instead of fetching all
local addresses every time in IsLocalIP.
Signed-off-by: Andrew Sy Kim <kiman@vmware.com>
This avoids fetching all local network interfaces every time we sync an
external IP. For clusters with many external IPs this gets really
expensive. This change caches all local addresses once per sync.
Signed-off-by: Andrew Sy Kim <kiman@vmware.com>
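A sketch of the caching shape (names are placeholders for kube-proxy's real helpers):
```
package proxy

import "net"

// localAddrSet is rebuilt once per proxy sync instead of once per IsLocalIP
// call, so syncing many external IPs no longer enumerates every network
// interface repeatedly.
type localAddrSet map[string]struct{}

func buildLocalAddrSet(listLocalAddrs func() ([]net.IP, error)) (localAddrSet, error) {
	ips, err := listLocalAddrs()
	if err != nil {
		return nil, err
	}
	set := make(localAddrSet, len(ips))
	for _, ip := range ips {
		set[ip.String()] = struct{}{}
	}
	return set, nil
}

func (s localAddrSet) isLocal(ip net.IP) bool {
	_, ok := s[ip.String()]
	return ok
}
```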
kube-proxy, if configured with an IP family, filters out the
incorrect IP version of Service addresses.
This commit fixes a bug caused by not filtering out the IPs in the
LoadBalancer status Ingress field.
During EndpointSlice reconciliation, the EndpointSliceTracker is supposed to
track expected EndpointSlice resource versions so that external changes
to them can be detected. It actually tracked the stale resource
version, with the result that every Service was handled twice: each Service
always received an EndpointSlice update with a different resource version,
even though the slice had been created/updated by the controller itself
during the first processing.
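A sketch of the fix's idea, with illustrative names: track the resource version from the object the apiserver returned, not from the object that was sent:
```
package endpointslice

import discovery "k8s.io/api/discovery/v1beta1"

// tracker stands in for the EndpointSliceTracker; createFn stands in for the
// client call. Names are illustrative, not the controller's exact code.
type tracker interface {
	Update(slice *discovery.EndpointSlice)
}

func createAndTrack(createFn func(*discovery.EndpointSlice) (*discovery.EndpointSlice, error), t tracker, slice *discovery.EndpointSlice) error {
	created, err := createFn(slice)
	if err != nil {
		return err
	}
	// Track the object returned by the apiserver: its ResourceVersion
	// reflects the write we just made, so the subsequent informer update
	// for this slice is recognized as our own rather than an external change.
	t.Update(created)
	return nil
}
```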
This change will not work on its own. Higher-level code needs to make
sure to call Allocate() before AddContainer() is called. This is already
being done in cases when the TopologyManager feature gate is enabled (in
the PodAdmitHandler of the TopologyManager). However, we need to make
sure we add proper logic to call it in cases when the TopologyManager
feature gate is disabled.
Having this interface allows us to perform a tight loop of:
for each container {
    containerHints = {}
    for each provider {
        containerHints[provider] = provider.GatherHints(container)
    }
    containerHints.MergeAndPublish()
    for each provider {
        provider.Allocate(container)
    }
}
With this in place we can now be sure that the hints gathered in one
iteration of the loop always consider the allocations made in the
previous one.
Instead of having a single call for Allocate(), we now split this into two
functions, Allocate() and UpdatePluginResources().
The semantics are split across them as follows:
// Allocate configures and assigns devices to a pod. From the requested
// device resources, Allocate will communicate with the owning device
// plugin to allow setup procedures to take place, and for the device
// plugin to provide runtime settings to use the device (environment
// variables, mount points and device files).
Allocate(pod *v1.Pod) error
// UpdatePluginResources updates node resources based on devices already
// allocated to pods. The node object is provided for the device manager to
// update the node capacity to reflect the currently available devices.
UpdatePluginResources(
node *schedulernodeinfo.NodeInfo,
attrs *lifecycle.PodAdmitAttributes) error
As we move to a model in which the TopologyManager is able to ensure
aligned allocations from the CPUManager, devicemanager, and any
other TopologyManager HintProviders in the same synchronous loop, we will
need to be able to call Allocate() independently from
UpdatePluginResources(). This commit makes that possible.
The types were different, so the diff output is not useful; both
should be pointers:
```
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: I0205 19:44:40.222259 2737 status_manager.go:642] Pod status is inconsistent with cached status for pod "prometheus-k8s-1_openshift-monitoring(0e9137b8-3bd2-4353-b7f5-672749106dc1)", a reconciliation should be triggered:
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: interface{}(
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: - s"&PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 19:13:30 +0000 UTC,Reason:,Message:,},PodCondit>
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: + v1.PodStatus{
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: + Phase: "Running",
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: + Conditions: []v1.PodCondition{
```
This change removes the audience logic from the oidc authenticator
and collapses it onto the same logic used by other audience unaware
authenticators.
oidc is audience unaware in the sense that it does not know or
understand the API server's audience. As before, the authenticator
will continue to check that the token audience matches the
configured client ID.
The reasoning for this simplification is:
1. The previous code tries to make the client ID on the oidc token
a valid audience. But by not returning any audience, the token is
not valid when used via token review on a server that is configured
to honor audiences (the token works against the Kube API because the
audience check is skipped).
2. It is unclear what functionality would be gained by allowing
token review to check the client ID as a valid audience. It could
serve as a proxy to know that the token was honored by the oidc
authenticator, but that does not seem like a valid use case.
3. It has never been possible to use the client ID as an audience
with token review as it would have always failed the audience
intersection check. Thus this change is backwards compatible.
It is strange that the oidc authenticator would be considered
audience unaware when oidc tokens have an audience claim, but from
the perspective of the Kube API (and for backwards compatibility),
these tokens are only valid for the API server's audience.
This change seems to be the least magical and most consistent way to
honor backwards compatibility and to allow oidc tokens to be used
via token review when audience support is enabled.
Signed-off-by: Monis Khan <mok@vmware.com>
Previously, the various Merge() policies of the TopologyManager all
returned their own lifecycle.PodAdmitResult. However, for
consistency in any failed admits, this is better handled in the
top-level TopologyManager, with each policy only returning a boolean
indicating whether or not it would like to admit the pod. This
commit changes the semantics to match this logic.
Previously, we unconditionally removed *all* topology hints from a pod
whenever just one container was being removed. This commit makes it so
we only remove the hints for the single container being removed, and
then conditionally remove the pod from podTopologyHints[podUID] when
no containers are left in it.