Package paths may differ from the group name; see
https://github.com/openshift/api/blob/master/operatorcontrolplane/v1alpha1/doc.go,
where the package name is `operatorcontrolplane` whereas the group is
`controlplane.operator...`. This confuses the generator, which tries to
extrapolate the package name from the group name, whereas the ImportTracker
correctly recognizes the import path. The mismatch leads to cyclical imports
in packages whose group name differs from the actual import path.
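As a minimal, hypothetical Go illustration (the group and package names below are made up), this is the shape of a doc.go where the package name and the API group diverge:
```
// Package examplepkg holds API types for a group whose name does not match
// the package's import path; this is the case the generator mishandles.
//
// The group name below is hypothetical and only illustrates the mismatch.
// +groupName=other.group.example.com
package examplepkg
```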
Currently, bind mounts of filesystems mounted with the nodev, noexec, nosuid,
noatime, relatime or nodiratime options fail when running in a user namespace
if the same options are not set on the bind mount.
Fix this by, when running in a user namespace, searching the mount options of
the source filesystem for nodev, noexec, nosuid, noatime, relatime and
nodiratime and retrying the bind mount with any options found added.
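A minimal Go sketch of the approach (illustrative only; the srcOptions
parameter stands in for options parsed from /proc/self/mountinfo by the
caller, and the actual change may differ in detail):
```
package mountutil

import "golang.org/x/sys/unix"

// remountFlags maps mount options that a user namespace is not allowed to
// clear to the corresponding mount(2) flags.
var remountFlags = map[string]uintptr{
	"nodev":      unix.MS_NODEV,
	"noexec":     unix.MS_NOEXEC,
	"nosuid":     unix.MS_NOSUID,
	"noatime":    unix.MS_NOATIME,
	"relatime":   unix.MS_RELATIME,
	"nodiratime": unix.MS_NODIRATIME,
}

// bindMount bind mounts src onto dst and applies the requested flags. If
// applying the flags fails with EPERM inside a user namespace, it retries
// with the restrictive options of the source filesystem added. srcOptions
// holds the source mount's options as parsed by the caller.
func bindMount(src, dst string, flags uintptr, srcOptions []string) error {
	if err := unix.Mount(src, dst, "", unix.MS_BIND, ""); err != nil {
		return err
	}
	remount := unix.MS_BIND | unix.MS_REMOUNT | flags
	err := unix.Mount("", dst, "", remount, "")
	if err != unix.EPERM {
		return err // nil on success, or a failure we do not handle here
	}
	// Retry with the source filesystem's nodev/noexec/nosuid/*atime options
	// added, since the kernel refuses to drop them in a user namespace.
	for _, opt := range srcOptions {
		if f, ok := remountFlags[opt]; ok {
			remount |= f
		}
	}
	return unix.Mount("", dst, "", remount, "")
}
```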
Signed-off-by: Ruediger Pluem <ruediger.pluem@vodafone.com>
Currently, type references for non-local names are output as relative types,
which are subject to the resolution rules defined at
https://protobuf.com/docs/language-spec#reference-resolution
This works fine within the k8s.io namespace, where no subpackages are named
k8s, but other users of go-to-protobuf may well have k8s in their package
name. That causes conflicts during reference resolution when executing
`go-to-protobuf`:
```
company.example.com/k8s/custom/pkg/apis/custom.k8s.example.com/v1/generated.proto:64:12: "k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta" is resolved to "company.example.com.k8s.custom.pkg.apis.custom.k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta", which is not defined. The innermost scope is searched first in name resolution. Consider using a leading '.'(i.e., ".k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta") to start from the outermost scope.
```
To avoid this we can output fully qualified type references using a leading
dot (.).
This changes the k8s generated.proto files, but the effect is a no-op.
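Sketched in Go (the helper name is made up; the actual generator change
simply emits the leading dot before the package-qualified name):
```
// protoTypeName is a hypothetical helper showing the fix: a leading "."
// makes protobuf resolve the reference from the outermost scope instead of
// searching enclosing scopes, so
// protoTypeName("k8s.io.apimachinery.pkg.apis.meta.v1", "ListMeta")
// yields ".k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta".
func protoTypeName(protoPackage, message string) string {
	return "." + protoPackage + "." + message
}
```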
Fixes kubernetes/code-generator#147
Signed-off-by: Andrew DeMaria <ademaria@cloudflare.com>
The k8s.io in the string and the conventions around finalizers for DRA driver
controllers already implied that this is for use by Kubernetes, but it's
better to be explicit about it.
In contrast to the original HandleError and HandleCrash, the new
HandleErrorWithContext and HandleCrashWithContext functions properly support
contextual logging: if a problem occurs while, e.g., dealing with a certain
request and WithValues was used for that request, the error log entry will
also contain that information.
The output changes from unstructured to structured, which might be a breaking
change for users who grep for panics. Care was taken to format panics as
similarly as possible to the original output.
For errors, a message string gets added. There was none before, which made it
impossible to find all error output coming from HandleError.
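For illustration, a hedged sketch of the new call (assuming the WithContext
variant takes the context, the error, a message, and key/value pairs, as
described above; process is a made-up function):
```
func syncHandler(ctx context.Context, key string) {
	if err := process(ctx, key); err != nil {
		// Old: runtime.HandleError(err) -- no message, no request context.
		// New: the context carries values set via WithValues for this
		// request, and the message makes all such log entries searchable.
		runtime.HandleErrorWithContext(ctx, err, "failed to process item", "key", key)
	}
}
```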
Keeping HandleError and HandleCrash around without deprecating them, while
changing the signature of the callbacks, is a compromise between not breaking
existing code and not adding too many special cases that need to be supported.
There is some code which uses PanicHandlers or ErrorHandlers, but less than
the code that uses the Handle* calls.
In Kubernetes, we want to replace these calls. logcheck warns about them in
code which is supposed to be contextual. The steps towards that are:
- add TODO remarks as a reminder (this commit)
- locally remove " TODO(pohly): " to enable the check via `//logcheck:context`
  and merge fixes for the linter warnings
- once there are none left, remove the TODO to enable the check permanently
The default queue implementation is mostly FIFO and it is not exchangeable
unless we implement the whole `workqueue.Interface`, which is undesirable
since we would have to duplicate a lot of code. There was one attempt in
[kubernetes/kubernetes#109349][1] to implement a priority queue, which is
really useful, and [knative/pkg][2] implemented something called a two-lane
queue. The two-lane queue is great, but it isn't perfect since a full slow
lane can still slow down items in the fast lane.
This change makes the queue implementation swappable without adding extra
maintenance effort for the Kubernetes community; we are happy to maintain our
own queue implementation (similar to the two-lane queue) downstream.
[1]: https://github.com/kubernetes/kubernetes/pull/109349
[2]: https://github.com/knative/pkg/blob/main/controller/two_lane_queue.go
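A hypothetical sketch of what a swappable queue hook could look like (all
names here are made up; the actual API may differ):
```
// Options is a hypothetical controller configuration that lets callers swap
// the work queue: when NewQueue is nil, the default rate-limiting FIFO queue
// is used; otherwise the supplied factory (e.g. one backed by a two-lane or
// priority queue) provides the queue the controller consumes from.
type Options struct {
	NewQueue func(name string, rl workqueue.RateLimiter) workqueue.RateLimitingInterface
}
```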
The path package has a few different functions:
Clean, Split, Join, Ext, Dir, Base, IsAbs. These functions do not take the
OS-specific path separator into account, meaning that they won't behave as
intended on Windows.
For example, Dir is supposed to return all but the last element of the path.
For the path "C:\some\dir\somewhere" it should return "C:\some\dir", but it
returns ".".
The equivalent functions in filepath should be used instead.
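A small Go illustration (the filepath result shown is what you get on
Windows; on other systems filepath behaves like path for this input):
```
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	p := `C:\some\dir\somewhere`
	// path only understands '/' as a separator, so the whole string is seen
	// as a single element and Dir falls back to ".".
	fmt.Println(path.Dir(p)) // "."
	// filepath uses the OS-specific separator, so on Windows this prints
	// "C:\some\dir" as intended.
	fmt.Println(filepath.Dir(p))
}
```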
The project does not recommend using insecure ports. Even
unauthenticated TLS is an improvement since it provides confidentiality.
If you relied on insecure serving, please update to the secure serving options.
before:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 3.792s
after:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 1.783s
before:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 6.775s
after:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 2.781s
Runtime classes are an apiserver concept, while handlers are a kubelet concept.
For NodeStatus, it makes more sense to return the latter here.
This commit modifies the following files:
- pkg/apis/core/types.go
- staging/src/k8s.io/api/core/v1/types.go
- pkg/kubelet/nodestatus/setters.go
- pkg/kubelet/kubelet_node_status.go
- pkg/registry/core/node/strategy.go
- test/e2e_node/mount_rro_linux_test.go
Other changes were auto-generated by running `make update`.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
It turns out that kube has a custom test timeout of 3 minutes.
The tests in the cacher package use nearly the entire budget and get
terminated, resulting in failing jobs.
Before the change, TestWatchSemantics took ~43s to run. With this simple change, it now takes ~18s.
When we created the tests, we didn't measure the running time and assumed that waiting 1 second on a watch channel
to make sure no more events are received was needed.
This PR decreases the waiting time to 300 milliseconds.
Modern computers can perform many tasks within that time.
In addition, the tests are serial in nature, meaning that there is no other
actor that could add items to the database and cause new items to be received.
After the change the total running time decreased by 17%:
before, the tests needed ~176s; after, they need ~146s.
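A minimal sketch of the pattern (not the actual test code; the helper name is
made up):
```
// expectNoEvents fails the test if any event arrives on the watch within the
// given window; for these serial tests a 300ms window is plenty on modern
// machines.
func expectNoEvents(t *testing.T, w watch.Interface, window time.Duration) {
	t.Helper()
	select {
	case ev := <-w.ResultChan():
		t.Fatalf("unexpected event received: %v", ev)
	case <-time.After(window):
		// nothing arrived within the window, consider the stream drained
	}
}
```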
The changes also improved TestWatchSemanticInitialEventsExtended:
the test now waits 300 ms instead of 3s after the watch was established
(otherwise we would be blocking on a call to cache.Watch(...)).
As before, the tests are serial in nature, meaning that there is no other
actor that could add items to the database and cause new items to be received.
Before:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 8.450s
After:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 2.635s