The featureGates field in ClusterConfiguration ends up
as a map[interface{}]interface{} in the test suite
and cannot be cast to map[string]bool directly.
Adapt the test to use map[interface{}]interface{}.
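A minimal sketch of the conversion involved, assuming the featureGates value comes out of a YAML decoder as map[interface{}]interface{} (names here are illustrative, not the suite's actual code):

```go
package main

import "fmt"

// toFeatureGates converts the decoded featureGates value entry by entry;
// a direct type assertion to map[string]bool would fail.
func toFeatureGates(raw map[interface{}]interface{}) (map[string]bool, error) {
	gates := make(map[string]bool, len(raw))
	for k, v := range raw {
		name, ok := k.(string)
		if !ok {
			return nil, fmt.Errorf("feature gate key %v is not a string", k)
		}
		enabled, ok := v.(bool)
		if !ok {
			return nil, fmt.Errorf("feature gate %q value %v is not a bool", name, v)
		}
		gates[name] = enabled
	}
	return gates, nil
}

func main() {
	raw := map[interface{}]interface{}{"SomeGate": true}
	fmt.Println(toFeatureGates(raw))
}
```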
On a multi NUMA node environment, the kernel splits hugepages allocated via the
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages file equally between NUMA nodes.
That makes it harder to predict where several pods will start, because the number
of hugepages on each NUMA node will depend on the number of NUMA nodes in the environment.
The memory manager test will allocate hugepages on a specific NUMA node to make
the test more predictable on multi NUMA node environments.
Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
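A minimal sketch of per-node allocation, assuming the standard sysfs layout; the node index and page count are illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// setHugepagesOnNode writes the desired 2Mi hugepage count to a single
// NUMA node, instead of the global knob that the kernel splits evenly.
func setHugepagesOnNode(node, count int) error {
	path := fmt.Sprintf(
		"/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/nr_hugepages",
		node)
	return os.WriteFile(path, []byte(fmt.Sprintf("%d", count)), 0o644)
}

func main() {
	if err := setHugepagesOnNode(0, 128); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```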
Let's wait for the local node (aka the kubelet)
to be ready before querying podresources again,
to avoid false negatives.
Co-authored-by: Artyom Lukianov <alukiano@redhat.com>
Signed-off-by: Francesco Romani <fromani@redhat.com>
DKC is being removed and we don't want it to continue flaking the rest
of our tests. Let's disable them when DKC is disabled rather than hard
failing. This fits more in line with our other E2Es, and reduces the
maintenance load in test-infra.
We need to make sure the system state is completely cleaned up
again, to avoid messing up the shared node state, before
we transition from one test to another.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Since commit 42dd01aa3f the cpuRequest is in millicores, hence
we need to properly handle the translation to exclusive CPUs
when verifying the resource allocation.
Signed-off-by: Francesco Romani <fromani@redhat.com>
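An illustration of the translation (not the test's actual code): with the request in millicores, only integral multiples of 1000 map to exclusive CPUs on a guaranteed pod.

```go
package main

import "fmt"

// exclusiveCPUs returns the number of exclusively allocated CPUs implied
// by a millicore request, or 0 for fractional requests that stay in the
// shared pool.
func exclusiveCPUs(cpuRequestMillis int64) int64 {
	if cpuRequestMillis%1000 != 0 {
		return 0
	}
	return cpuRequestMillis / 1000
}

func main() {
	fmt.Println(exclusiveCPUs(2000)) // 2 exclusive CPUs
	fmt.Println(exclusiveCPUs(1500)) // 0: fractional, shared pool
}
```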
The intent is to make the code more readable, with no intended
changes in behaviour. Now it should be a bit more explicit
why the code is checking some values.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: wangyysde <net_use@bzhy.com>
Generate swagger.json.
Use v2 path for hpa_cpu_field.
Run update-codegen.sh.
Signed-off-by: wangyysde <net_use@bzhy.com>
Some of the networking tests are flaking, and logging the command stdout and stderr
might show us some additional information about the underlying issue when it
occurs.
Some tests have a short timeout for starting the pods (1 minute), but if
those tests happen to be the first ones to run, and the images have to be
pulled, then the test could time out, especially with larger images. This
commit will allow us to prepull commonly used E2E test images, so this issue
can be avoided.
The logic to detect stale endpoints was not taking endpoint
readiness into account.
We can have stale entries on UDP services for 2 reasons:
- an endpoint was receiving traffic and is removed or replaced
- a service was receiving traffic but not forwarding it, and starts
to forward it.
Add an e2e test to cover the regression.
The --log-file parameter will be deprecated as of Kubernetes 1.23 and should be
avoided. The replacement for distroless images is the image with go-runner, a
tool that handles output redirection.
For kubemark to run in that image it must be built as a static binary.
* Bump the pod status and node status update timeouts to avoid flakes
* Add a small delay after dbus restart to ensure dbus has enough time to
start up prior to sending the shutdown signal
* Change check of pod being terminated by graceful shutdown. Previously,
the pod phase was checked to see if it was `Failed` and the pod reason
string matched. This logic needs to change after 1.22 graceful node
shutdown change introduced in PR #102344 which changed behavior to no
longer put the pods into a failed phase. Instead, the test now checks
that containers are not ready, and the pod status message and reason
are set appropriately.
Signed-off-by: David Porter <david@porter.me>
For some test failures, checking the pod logs could potentially
yield some interesting information, which could be used to further
investigate certain failures / flakes. For example, if there are some
networking issues, we could at least see whether requests reach the
containers (agnhost logs the connections / requests), or whether there
were any other issues during the container's startup.
It looks like it tests two pods sharing the same volume, but the goal is
actually the opposite - two pods with the same inline volume definition
should get separate volumes.
This commit forces Kubelet Configuration files to always be generated
and when possible will use the kubeletconfig file that has been provided
by the test orchestrator.
This commit enables the remote runner to provide a KubeletConfiguration
file to the test suite when uploading it to a remote host; the test
runner will then use this configuration to run the Kubelet with the
provided config.
* De-share the Handler struct in core API
An upcoming PR adds a handler that only applies on one of these paths.
Having fields that don't work seems bad.
This never should have been shared. Lifecycle hooks are like a "write"
while probes are more like a "read". HTTPGet and TCPSocket don't really
make sense as lifecycle hooks (but I can't take that back). When we add
gRPC, it is EXPLICITLY a health check (defined by gRPC) not an arbitrary
RPC - so a probe makes sense but a hook does not.
In the future I can also see adding lifecycle hooks that don't make
sense as probes. E.g. 'sleep' is a common lifecycle request. The only
option is `exec`, which requires having a sleep binary in your image
(a sketch of the resulting split follows after this list).
* Run update scripts
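A simplified sketch of the resulting shape (field sets trimmed, types local for illustration): probes and lifecycle hooks get separate handler structs, so probe-only mechanisms never leak onto the hook path.

```go
package main

type ExecAction struct{ Command []string }
type HTTPGetAction struct {
	Path string
	Port int
}
type TCPSocketAction struct{ Port int }
type GRPCAction struct {
	Port    int32
	Service *string
}

// ProbeHandler carries every probe mechanism, including the gRPC health
// check, which is defined by gRPC and only makes sense as a probe.
type ProbeHandler struct {
	Exec      *ExecAction
	HTTPGet   *HTTPGetAction
	TCPSocket *TCPSocketAction
	GRPC      *GRPCAction
}

// LifecycleHandler has no GRPC field; hook-only actions (e.g. a sleep)
// could be added here later without appearing on probes.
type LifecycleHandler struct {
	Exec      *ExecAction
	HTTPGet   *HTTPGetAction
	TCPSocket *TCPSocketAction
}

func main() {}
```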
* fix_dsc_rbac_pod_update
* add test for DaemonSet Controller updates label of the pod after "DedupCurHistories"
* rebase
* update parameter of dsc.Run
Copying from pvcBlock swapped name and namespace (breaking the PVC test case)
and some references to the pvcBlock variable were left unchanged (incorrect
annotations for test failures).
Add an e2e test to exercise the checkpoint recovery flow.
This means we need to actually create an old (V1, pre-1.20) checkpoint,
but if we do it only in the e2e test, it's still fine.
Signed-off-by: Francesco Romani <fromani@redhat.com>
The previous implementation called Update() without changing anything
about the object, so no MODIFIED events were ever generated. This change
ensures that all calls to Update() cause mutations, thereby ensuring
that MODIFIED events happen in the watch stream.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
During PR review it was pointed out that the branches for ephemeral
vs. persistent make the test harder to read. Therefore all code that depends on
if checks gets moved into two different versions of the test, one that runs for
ephemeral volumes and one for persistent volumes, with skip statements at the
beginning.
* Cleanup FeatureGate skippers
* Perform changes requested by review
* some more review related changes
* Rename skipper functions to make code more readable
* add utilfeature back in
Conceptually, snapshots have to be taken while the pod and thus the volume
exist. Snapshotting has an issue where flushing of data is not guaranteed while
the volume is still staged on the node, so the test relied on deleting the pod
and checking for the volume to be unused. That part of the test cannot be done
for ephemeral volumes.
This is a fix for the new test case from
https://github.com/kubernetes/kubernetes/pull/105636 which had to be merged
without prior testing due to not having a cluster to test on and no pull job
which runs these tests. https://testgrid.k8s.io/sig-storage-kubernetes#gce-serial
then showed a failure.
The fix is simple: in the ephemeral case, the PVC name isn't set in advance in
pvc.Name and instead must be computed. The fix now was tested on a kubetest
cluster in GCE.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
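The naming rule behind the fix, as an illustrative helper (upstream has its own helper for this): the controller creates the PVC for a generic ephemeral volume as "<pod name>-<volume name>", so the test must derive the name rather than read pvc.Name.

```go
package main

import "fmt"

// ephemeralClaimName derives the PVC name the ephemeral volume controller
// will create for a given pod volume.
func ephemeralClaimName(podName, volumeName string) string {
	return podName + "-" + volumeName
}

func main() {
	fmt.Println(ephemeralClaimName("test-pod", "my-volume")) // test-pod-my-volume
}
```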
This adds a new test pattern and uses it for the inline volume tests. Because
the kind of volume now varies more, validation of the mount or block device is
always done by the caller of TestEphemeral.
The hostPath volume plugin creates a directory within /tmp on the host machine, to be mounted as a volume.
The inject-pod writes content to the volume, and a client-pod tries to read the contents and verify them.
When SELinux is enabled on the host, the client-pod cannot read the content, failing with permission denied.
Run the client-pod as privileged, so that it can access the volume content even when SELinux is enabled on the host.
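A minimal sketch of the change, with illustrative names and image: the client pod's container gets a privileged security context so SELinux no longer blocks the read.

```go
package main

import v1 "k8s.io/api/core/v1"

// privilegedClientPod returns a client pod that can read the hostPath
// volume content even when SELinux is enforcing on the host.
func privilegedClientPod() *v1.Pod {
	privileged := true
	return &v1.Pod{
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "client",
				Image: "busybox", // illustrative
				SecurityContext: &v1.SecurityContext{
					Privileged: &privileged,
				},
			}},
		},
	}
}

func main() { _ = privilegedClientPod() }
```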
Enable feature by default.
Update integration tests for other features to assume that finalizers are present.
Change-Id: Ie969344f572627dba882c0e862e5700dadaf3026
Besides "subPath should unmount if pod is gracefully deleted while kubelet is
down" we also need a special case for "subPath should unmount if pod is force
deleted while kubelet is down".
This fixes a test failure in https://testgrid.k8s.io/sig-storage-kubernetes#gce-serial
It shouldn't make any difference, but it's better to actually test that
assumption.
All existing tests which create pods get converted by skipping the explicit PVC
creation for the ephemeral case and instead modifying the test pod so that it
has a volume claim template with the same spec as the PVC.
The feature gate gets locked to "true", with the goal to remove it in two
releases.
All code now can assume that the feature is enabled. Tests for "feature
disabled" are no longer needed and get removed.
Some code wasn't using the new helper functions yet. That gets changed while
touching those lines.
Each e2e test knows whether it wants to restart a running kubelet or a
non-running kubelet. The vast majority of times, we want to
restart a running kubelet (e.g. to change config or to check
some properties hold across kubelet crashes/restarts), but sometimes
we stop the kubelet, do some actions and only then restart.
To accommodate both use cases, we just expose the `running` boolean
flag to the e2e tests.
Having `restartKubelet` explicitly restart a running kubelet
helps us troubleshoot e2e failures in which the kubelet
was supposed to be running but was not; attempting a restart
in such cases only muddied the waters further, making the
troubleshooting and the eventual fix harder.
In the happy path, no expected change in behaviour.
Signed-off-by: Francesco Romani <fromani@redhat.com>
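A hedged sketch of the helper's new shape (names illustrative, not the framework's exact code): the caller states whether the kubelet is expected to be running, and a violated expectation surfaces as an error instead of a murky restart.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// restartKubelet restarts the kubelet; when running is true, it first
// checks that the kubelet is actually active, failing loudly otherwise.
func restartKubelet(running bool) error {
	out, _ := exec.Command("systemctl", "is-active", "kubelet").CombinedOutput()
	active := strings.TrimSpace(string(out)) == "active"
	if running && !active {
		return fmt.Errorf("expected kubelet to be running before restart, but it was not")
	}
	return exec.Command("systemctl", "restart", "kubelet").Run()
}

func main() {
	if err := restartKubelet(true); err != nil {
		fmt.Println(err)
	}
}
```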
In the `restartKubelet` helper, we use `exec.Command`, whose
return value is the output of the command, but as `[]byte`.
The way we logged the output of the command was as a value, making
the output, meant to be human readable, unnecessarily hard to read.
We fix this annoying behaviour by converting the output to a string
before logging it, making it obvious to understand the outcome of
the command.
Signed-off-by: Francesco Romani <fromani@redhat.com>
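A minimal illustration of the fix (command chosen for illustration): logging the []byte as a value prints raw bytes, while converting to string keeps the output readable.

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("systemctl", "status", "kubelet").CombinedOutput()
	if err != nil {
		log.Printf("command failed: %v", err)
	}
	// log.Printf("output: %v", out) would print raw bytes like [115 116 97 ...]
	log.Printf("output: %s", string(out))
}
```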
This patch changes cpuCount to cpuRequest in order to cater to cases
where guaranteed pods make non-integral CPU requests.
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
apparmor is no longer found in Alpine edge/testing but in
edge/community, presumably in preparation for full-fledged inclusion in
3.15. If so, once that is released, BASEIMAGE can be updated again and
the explicit --repository flag to 'apk add' dropped.
Fixes: https://github.com/kubernetes/kubernetes/issues/105528
Once the node gets deleted, the nodelifecycle controller
is racing to update pod status and the pod deletion logic
is failing causing tests to flake. This commit moves
the testContext creation to within the test loop and deletes nodes,
namespace within the test loop. We don't explicitly call the node
deletion within the loop, but the `testutils.CleanupTest(t, testCtx)`
call ensures that the namespace and nodes get deleted.
While running tests in parallel, especially those with higher loads
than others, it might take some time for Pods to be Running, even more
so if the image has to be pulled as well.
The test [sig-node] Pods should delete a collection of pods [Conformance]
only waits for the pods to be scheduled before deleting them, and
expects them to be gone in 1 minute, which can flake because of the above
reasons. Note that the operations are in order, and kubelet runs them in
order, which means that the pod first has to enter the Running state
before attempting to delete it.
This commit waits for the Pods to enter the Running state first before
deleting the entire collection.
Co-Authored-By: Antonio Ojea <aojea@redhat.com>
This commit fixes the LocalStorageCapacityIsolationEviction test by
acknowledging that in its default configuration kubelet will no longer
evict memory-backed volume pods, as they cannot use more than their
assigned limit with SizeMemoryBackedVolumes enabled.
To account for the old behaviour, we also add a test that explicitly
disables the feature to test the behaviour of memory backed local
volumes in those scenarios. That test can be removed when/if the feature
gate is removed.
Currently the storage eviction tests fail for a few reasons:
- They re-enter storage exhaustion after pulling the images during
cleanup (increasing test storage requirements, and adding verification for
future diagnosis)
- They were timing out, as in practice it seems that eviction takes just
over 10 minutes on an n1-standard in many cases. I'm raising these to
15 to provide some padding.
This should ideally bring these tests to passing on CI, as they've now
passed locally for me several times with the remote GCE env.
Follow up work involves diagnosing why these take so long, and
restructuring them to be less finicky.
When adding the ephemeral volume feature, the special case for
PersistentVolumeClaim volume sources in kubelet's host path and node
limits checks was overlooked. An ephemeral volume source is another
way of referencing a claim and has to be treated the same way.
The recommendation from #sig-cli was to print usage, then the error. Extra care
is taken to only print the usage instruction when the error really was about
flag parsing.
Taking kube-scheduler as an example:
$ _output/bin/kube-scheduler
I0929 09:42:42.289039 149029 serving.go:348] Generated self-signed cert in-memory
...
W0929 09:42:42.489255 149029 client_config.go:620] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
E0929 09:42:42.489366 149029 run.go:98] "command failed" err="invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable"
$ _output/bin/kube-scheduler --xxx
Usage:
kube-scheduler [flags]
...
--vmodule moduleSpec
comma-separated list of pattern=N settings for file-filtered logging
Error: unknown flag: --xxx
The kubectl behavior doesn't change:
$ _output/bin/kubectl get nodes
Unable to connect to the server: dial tcp: lookup xxxx: No address associated with hostname
$ _output/bin/kubectl --xxx
Error: unknown flag: --xxx
See 'kubectl --help' for usage.
It wasn't documented that InitLogs already uses the log flush frequency, so
some commands have called it before parsing (for example, kubectl in the
original code for logs.go). The flag never had an effect in such commands.
Fixing this turned into a major refactoring of how commands set up flags and
run their Cobra command:
- component-base/logs: implicitly registering flags during package init is an
anti-pattern that makes it impossible to use the package in commands which
want full control over their command line. Logging flags must be added
explicitly now, something that the new cli.Run does automatically.
- component-base/logs: AddFlags would have crashed in kubectl-convert if it
had been called because it relied on the global pflag.CommandLine. This
has been fixed and kubectl-convert now has the same --log-flush-frequency
flag as other commands.
- component-base/logs/testinit: an exception are tests where flag.CommandLine has
to be used. This new package can be imported to add flags to that
flag set once per test program.
- Normalization of the klog command line flags was inconsistent. Some commands
unintentionally didn't normalize to the recommended format with hyphens. This
gets fixed for sample programs, but not for production programs because
it would be a breaking change.
This refactoring has the following user-visible effects:
- The validation error for `go run ./cmd/kube-apiserver --logging-format=json
--add-dir-header` now references `add-dir-header` instead of `add_dir_header`.
- `staging/src/k8s.io/cloud-provider/sample` uses flags with hyphen instead of
underscore.
- `--log-flush-frequency` is not listed anymore in the --logging-format flag's
`non-default formats don't honor these flags` usage text because it will also
work for non-default formats once it is needed.
- `cmd/kubelet`: the description of `--logging-format` uses hyphens instead of
underscores for the flags, which now matches what the command is using.
- `staging/src/k8s.io/component-base/logs/example/cmd`: added logging flags.
- `apiextensions-apiserver` no longer prints a useless stack trace for `main`
when command line parsing raises an error.
We graduate the `CPUManagerPolicyOptions` feature to beta
in the 1.23 cycle, and we add new experimental feature gates
to guard new options which are planned for the 1.23 and
following cycles.
We introduce additional feature gates called `CPUManagerPolicyAlphaOptions` and
`CPUManagerPolicyBetaOptions`. The basic idea is to avoid the
cumbersome process of adding a feature gate for each option, and to have
feature gates which track the maturity level of _groups_ of options.
Besides this change, the graduation process, and the process in general,
for adding new policy options is still unchanged.
The `full-pcpus-only` option added in the 1.22 cycle is intentionally
moved into the beta policy options.
For more details:
- KEP: https://github.com/kubernetes/enhancements/pull/2933
- sig-arch discussion:
https://groups.google.com/u/1/g/kubernetes-sig-architecture/c/Nxsc7pfe5rw
Signed-off-by: Francesco Romani <fromani@redhat.com>
The boolean values for --dry-run have been deprecated for removal since
1.18, more than 2 releases.
The default value for --dry-run with the flag set and unspecified has
been deprecated for removal since 1.18, more than 2 releases.
Both values are now removed in this change. Any kubectl --dry-run
usage no longer accepts --dry-run=(true|false) boolean values and usage
now requires that a value of (client|server|none) is specified.
The boom-server container forges out-of-order TCP packets and injects them into the network. This requires the container to have the CAP_NET_RAW Linux capability, otherwise the test will fail.
Signed-off-by: Riccardo Ravaioli <rravaiol@redhat.com>
* Updates ImpersonationConfig in rest/config.go to include UID
attribute, and pass it through when copying the config
* Updates ImpersonationConfig in transport/config.go to include UID
attribute
* In transport/round_tripper.go, Set the "Impersonate-Uid" header in
requests based on the UID value in the config
* Update auth_test.go integration test to specify a UID through the new
rest.ImpersonationConfig field rather than manually setting the
Impersonate-Uid header
Signed-off-by: Margo Crawford <margaretc@vmware.com>
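A minimal sketch of the new field in use (values illustrative): the transport then sets the Impersonate-Uid header on every request.

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Impersonate a user by name and, now, by UID as well.
	config.Impersonate = rest.ImpersonationConfig{
		UserName: "jane",
		UID:      "06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b",
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	_ = client
}
```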
By parsing flags in the test's main function before starting etcd we bail out
early without ever starting etcd when the test was invoked with -help.
Otherwise etcd must be available, gets started and then hangs because
flag.Parse itself exits when called by testing.go. This bypasses the code in
EtcdMain which normally stops etcd.
Otherwise, the nodeNameToPodList[nodeName] list will have all its references
identical (all pointing to the loop control variable),
thus making all the pods in the list identical.
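A minimal reproduction of the bug (pre-Go-1.22 loop semantics, simplified types): taking the address of the range variable stores the same pointer every iteration; indexing the slice gives distinct addresses.

```go
package main

import "fmt"

type Pod struct{ Name string }

func main() {
	pods := []Pod{{"a"}, {"b"}, {"c"}}

	var broken []*Pod
	for _, pod := range pods {
		broken = append(broken, &pod) // same address each iteration
	}

	var fixed []*Pod
	for i := range pods {
		fixed = append(fixed, &pods[i]) // distinct addresses
	}

	fmt.Println(broken[0].Name, broken[1].Name) // c c
	fmt.Println(fixed[0].Name, fixed[1].Name)   // a b
}
```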
The agnhost pods using netexec bind by default to UDP
port 8081; use a different port for hostNetwork pods to avoid
scheduling conflicts and test failures.
There are some tests that don't need the UDP listener, so they
can disable it.
This is especially needed for tests that use hostNetwork pods: if 2
pods try to bind to the same port, the test will fail because one
of the pods can't be scheduled because of the port conflict.
To keep backwards compatibility, we can add an option to disable
the UDP listener by setting the port number to -1, which is consistent
with the SCTP implementation.
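A sketch of how a hostNetwork test pod can opt out, assuming the agnhost netexec flags described above (image tag illustrative):

```go
package main

import v1 "k8s.io/api/core/v1"

// hostNetAgnhostContainer disables the UDP listener with the -1 sentinel,
// so two hostNetwork pods no longer race for the same UDP port on a node.
func hostNetAgnhostContainer() v1.Container {
	return v1.Container{
		Name:  "netexec",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
		Args:  []string{"netexec", "--http-port=8080", "--udp-port=-1"},
	}
}

func main() { _ = hostNetAgnhostContainer() }
```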
The issue in both tests is that before the refactor we had a method that
created the `StorageClass` manifest only; this manifest was used
later to be created by `TestBindingWaitForFirstConsumerMultiPVC`. After
the refactor we're ensuring that the `StorageClass` exists as a resource
before calling `TestBindingWaitForFirstConsumerMultiPVC`, but this
method still attempts to create it. That's the reason behind the
error: `resourceVersion should not be set on objects to be created`.
This issue wasn't caught before because
`TestBindingWaitForFirstConsumerMultiPVC` creates the StorageClass
without the common utility function; the solution is to remove the
snippet that attempts to create the StorageClass again.
This test case requires special test-handler setup which is only done
for gce clusters created by kube-up scripts. Let's skip the test when
run under other providers.
Use ExtraConfig to configure the repair interval,
and add an integration test for services finalizers, and
possible races with the services repair loop.
- Debian base used was older (v2.1.3) missing multiple fixed CVEs
- Minor update to distroless debian image name to explicitly point
to debian 10
- Debian base image now points to buster-1.9.0
The Topology Manager e2e tests want to run on real multi-NUMA systems
and want to consume real devices supported by device plugins; SRIOV
devices happen to be the most commonly available of such devices.
CI machines aren't multi NUMA nor do they expose SRIOV devices, so the biggest portion
of the tests will just skip, and we need to keep it like this until we
figure out how to enable these features.
However, some organizations can and want to run the testsuite on bare metal;
in this case, the current test will skip (not fail) with misconfigured
boxes, and this reports a misleading result. It will be much better to
fail if the test preconditions aren't met.
To satisfy both needs, we add an option, controlled by an environment
variable, to fail (not skip) if the machine on which the tests run
doesn't meet the expectations (multi-NUMA, 4+ cores per NUMA cell,
expose SRIOV VFs).
We keep the old behaviour as default to keep being CI friendly.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Previously we would try to infer the `ipFamilyPolicy` from `clusterIPs`
and/or `ipFamilies`. That is too tricky. Now you MUST specify
`ipFamilyPolicy` as one of the dual-stack options in order to get a
dual-stack service.
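A minimal sketch of what a Service must now state explicitly to be dual-stack (names and port illustrative):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func dualStackService() *v1.Service {
	// Without an explicit dual-stack ipFamilyPolicy, the Service stays
	// single-stack; nothing is inferred from clusterIPs or ipFamilies.
	policy := v1.IPFamilyPolicyPreferDualStack
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "my-svc"},
		Spec: v1.ServiceSpec{
			Selector:       map[string]string{"app": "my-app"},
			IPFamilyPolicy: &policy,
			Ports:          []v1.ServicePort{{Port: 80}},
		},
	}
}

func main() { _ = dualStackService() }
```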
In older versions of Kubernetes (at least pre-1.19, the earliest version
this test will run unmodified on), Pods that depended on devices could be
restarted after the device plugin had been removed. Currently however,
this isn't possible, as during ContainerManager.GetResources(), we
attempt to DeviceManager.GetDeviceRunContainerOptions() which fails as
there's no cached endpoint information for the plugin type.
This commit therefore breaks apart the existing test into two:
- One active test that validates that assignments are maintained across
restarts
- One skipped test that validates the behaviour after GPUs have been
removed, in case we decide that this is a bug that should be fixed in
the future.
15m is enough for Cluster Autoscaler to remove empty nodes, so we need
to break them sooner than that. Instead, wait 15m after breaking them to
ensure Cluster Autoscaler will consider them as unready instead of still
starting.
The profile gatherer has been removed in
https://github.com/kubernetes/kubernetes/pull/85304, so those options
are unused since then and can therefore be removed.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
The e2e test "should have Endpoints and EndpointSlices pointing to
the API Server Service" was verifying the current endpoints
reconciler implementation on the apiservers, however, users may
disable the endpoint reconciler and create their own.
This e2e test is also a conformance test, so we should test the
behaviour and not the implementation details. The test verifies
that a kubernetes.default service exists, and that endpoint and endpoint
slices objects referencing that service exist and are equivalent.
The Container Images for Windows Server 2022 have been published, and we can
start adding jobs for them.
The ltsc2022-based images have been built and promoted with these image versions.
The PR https://github.com/kubernetes/kubernetes/pull/104575 introduces
some intermediate types which make the 32GiB memory machine kill the
typecheck process. To resolve that issue and make the test more robust,
we now reduce the amount of parallel typechecks to run to `2`.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Prior to this change, the pod was not getting scheduled on the node as
we don't have a running scheduler in e2e_node. PodClient solves this
problem by manually assigning the pod to the node.
The current GPU installer was built in 2017, from source that no longer
exists in Kubernetes ([adding commit][1]). The image was built on 2017-06-13.
Unfortunately, this installer no longer appears to work. When debugging
on the same node type as used by test-infra, it failed to build the
driver as the kernel sha was no longer available.
This led to needing to find a new way to install GPUs. The smallest
logical change was switching to [cos-gpu-installer][2]. There is a
newer version of this available on [googlesource][3] that
I have not yet tested as it's not clear what the state of the project
is, as I couldn't find docs outside of the source itself.
We install things to the same location as previously to avoid needing
extra downstream changes. There are a couple of weird issues here
however, like needing to run the container twice to correctly update the
LD Cache.
[1]: 1e77594958/cluster/gce/gci/nvidia-gpus/Dockerfile
[2]: https://github.com/GoogleCloudPlatform/cos-gpu-installer
[3]: https://cos.googlesource.com/cos/tools/+/refs/heads/master/src/cmd/cos_gpu_installer/
Different CSI drivers have different error messages, making it difficult
to check them accurately. We remove the check for the error message and
only check the failure type instead, since that is all we need.
If the device plugin returns a device without topology, keep it internally
as NUMA node -1; this helps at the podresources level to not export NUMA
topology, since otherwise the topology is exported with NUMA node id 0,
which is not accurate.
It's impossible to reveal this bug just by tracing json.Marshal(resp)
in the podresources client, because the NUMANode ID field has the json
property omitempty; in this case, ID=0 is shown as an empty NUMANode.
To reproduce it, it is better to iterate over the devices and just
trace dev.Topology.Nodes[0].ID.
Signed-off-by: Alexey Perevalov <alexey.perevalov@huawei.com>
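A minimal demonstration of the omitempty pitfall described above (type abbreviated for illustration): json.Marshal silently drops ID=0, so only direct field access shows the reported node.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type NUMANode struct {
	ID int64 `json:"ID,omitempty"`
}

func main() {
	node := NUMANode{ID: 0}
	b, _ := json.Marshal(node)
	fmt.Println(string(b)) // prints {}: ID=0 is hidden by omitempty
	fmt.Println(node.ID)   // prints 0: the actual reported NUMA node
}
```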
The Container Images for Windows Server 2022 have been published, and
we can start building test images using them, so we can start adding
jobs for them.
The image versions for the e2e test images have been bumped in a previous
commit, but haven't been promoted yet. We don't need to bump them here.
httpd-2.4.46-win64-VC15.zip no longer exists, so we have to use
httpd-2.4.48-win64-VC15.zip instead.
Even though DynamicKubeletConfig is deprecated, it is still used in the e2e_node tests.
The bug is hidden by the forcibly enabled option
TEST_ARGS='--feature-gates=DynamicKubeletConfig=true';
if this option is not enabled, setKubeletConfiguration tries to set the
kubelet config via the apiserver interface and fails with a timeout.
Signed-off-by: Alexey Perevalov <alexey.perevalov@huawei.com>
Agnhost's serve-hostname at the /hostname endpoint
returns the hostname. The pod's host node name may
be an FQDN, so the comparison between the two fails.
Signed-off-by: Martin Kennelly <mkennell@redhat.com>
The Container Images for Windows Server 2022 have been published, and
we can start building test images using them, so we can start adding
jobs for them.
The image versions for the e2e test images have been bumped in a previous
commit, but haven't been promoted yet. We don't need to bump them here.
We're starting with windows-servercore-cache and busybox images, since
they are needed for the other images the most.
A previous commit added LD_FLAGS for the go binary compilation, but it's not
defined for all images.
The pods using hostNetwork use the host network namespace, hence
they have to share it with the rest of the processes and pods.
If several pods try to bind to the same port, the test will fail,
so we try to use a non-common port, and run the different scenarios
in the same test, so we only have to bind once and we avoid consuming
ports, reducing the port collision risk.
In the test image build jobs, the image-util.sh script is not being run in a git
repository, which causes git log to fail.
In this case, we can use the PULL_BASE_SHA set in cloudbuild.yaml instead.
Windows Containerd has more features than Windows Docker. One of them is single file
mappings, allowing us to also map individual files into containers, not just folders.
This will set the tag [Excluded:WindowsDocker] for those tests instead of [LinuxOnly].
Co-authored-by: Mark Rossetti <marosset@microsoft.com>
A container without elevated privileges binding to a
host port lower than 1024 causes a bind permission
denied error.
Increase the port number above 1024 to allow
binding.
Signed-off-by: Martin Kennelly <mkennell@redhat.com>
All dependencies of VolumeBinding plugin from
"k8s.io/kubernetes/pkg/controller/volume/scheduling" package moved to
"k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding" package:
- whole file pkg/controller/volume/scheduling/scheduler_assume_cache.go
- whole file pkg/controller/volume/scheduling/scheduler_assume_cache_test.go
- whole file pkg/controller/volume/scheduling/scheduler_binder.go
- whole file pkg/controller/volume/scheduling/scheduler_binder_fake.go
- whole file pkg/controller/volume/scheduling/scheduler_binder_test.go
Package "k8s.io/kubernetes/pkg/controller/volume/scheduling/metrics" moved
to "k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/metrics"
because it is only used in the VolumeBinding plugin and (e2e) tests.
More described in issue #89930 and PR #102953.
Signed-off-by: Konstantin Misyutin <konstantin.misyutin@huawei.com>