The new zapr release adds support for slog. The new zap release brings various
improvements: it no longer depends on go.uber.org/atomic, which allows dropping
that module from the Kubernetes dependencies, and github.com/pkg/errors is also
no longer needed.
39df946c06 was meant to enable stricter comment checking only for cmd/kubeadm,
because its maintainers asked for it. However, the exclude rule didn't capture
all possible error texts, so some checks were enabled across the entire code
base.
The extended pattern is now based on the golangci-lint source code.
Also, the hints config didn't suppress any of these checks.
We recently had an accident where a 64MB executable got included in a PR and
wasn't caught during manual review. This new verify script would have caught
that file.
The maximum file size is 10MB. This is intentionally low. If a larger file
legitimately needs to be added, an entry in a .ignorefilesize file in the
directory of the large file can exclude that file from the check.
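A minimal sketch of such a check in bash; the traversal, the limit handling,
and the exact .ignorefilesize matching are illustrative assumptions, not the
real script:
```
#!/usr/bin/env bash
# Hypothetical sketch of the size check; names and matching semantics are assumptions.
maxsize=$((10 * 1024 * 1024))  # 10MB limit
while IFS= read -r -d '' file; do
    size=$(wc -c <"${file}")
    if [[ ${size} -gt ${maxsize} ]] &&
       ! grep -qxF "$(basename "${file}")" "$(dirname "${file}")/.ignorefilesize" 2>/dev/null; then
        echo "ERROR: ${file} is ${size} bytes, larger than the 10MB limit" >&2
    fi
done < <(git ls-files -z)
```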
1. automatically add etcd to the current PATH when calling kube::etcd::install
2. call kube::etcd::install from update-openapi-spec
3. don't call kube::golang::setup_env twice
Some code owners might want this for specific packages, like cmd/kubeadm.
This cannot be enabled for everything because:
- a lot of existing code doesn't pass (-> can't be in base config)
- a lot of packages don't need it (-> shouldn't even be a hint)
The in-tree configs use a relative path to find logcheck.so. This is useful
because then the invocation of golangci-lint also works outside of the script.
But when running with a containerized build, GOBIN points somewhere else. For
that case, a temporary copy of the configuration has to be created with an
absolute path.
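A hedged sketch of that fallback; the config file name and the "path:" key are
illustrative assumptions:
```
# Sketch only: rewrite the relative logcheck.so path to an absolute one
# when GOBIN points elsewhere in a containerized build.
if [[ -n "${GOBIN:-}" ]]; then
    tmpcfg="$(mktemp)"
    sed -e "s|path: .*logcheck.so|path: ${GOBIN}/logcheck.so|" \
        hack/golangci.yaml >"${tmpcfg}"
    config="${tmpcfg}"
fi
```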
Contributors have created PRs just because the linter flagged something. Fixing
such findings is often not necessary and should be discussed before working on
a PR.
While at it, output formatting gets updated a bit (extra blank lines, wording).
If something goes wrong during the test registration phase, the only solution
so far was to panic. This is not user-friendly and only allows reporting one
problem at a time.
If initialization can continue, then a better solution is to record a bug,
continue, and then report all bugs together.
This also works when just listing tests. The new verify-e2e-suites.sh uses that
to check all test suites (identified as "packages that call
framework.AfterReadingAllFlags", with some exceptions) as part of
pull-kubernetes-verify.
Example output for a fake
framework.RecordBug(framework.NewBug("fake bug during SIGDescribe", 0))
in test/e2e/storage/volume_metrics.go:
```
$ hack/verify-e2e-suites.sh
go version go1.21.1 linux/amd64
ERROR: E2E test suite invocation failed for test/e2e.
ERROR: E2E suite initialization was faulty, these errors must be fixed:
ERROR: test/e2e/storage/volume_metrics.go:49: fake bug during SIGDescribe
E2E suite test/e2e_kubeadm passed.
E2E suite test/e2e_node passed.
```
Instead of invoking verify-golangci-lint.sh directly from Prow jobs,
those Prow jobs should use "make verify WHAT=...". The advantage is
that the common code for running verify targets will be used, which
includes producing JUnit files.
Providing simple wrappers for strict linting of PRs (=
verify-golangci-lint-pr.sh) and even stricter linting of PRs with hints
enabled (= verify-golangci-lint-pr-hints.sh) enables those WHAT targets.
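A Prow job then only needs an invocation like this (a sketch; the target names
follow the verify-*.sh naming convention):
```
# Runs the wrappers through the common verify machinery, producing JUnit files.
make verify WHAT="golangci-lint-pr golangci-lint-pr-hints"
```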
The spaces are redundant because Ginkgo adds them itself when concatenating the
different test name components. An upcoming change in the framework will
enforce that there are no such redundant spaces.
Using StartRecordingToSinkWithContext instead of StartRecordingToSink and
StartLogging instead of StartStructuredLogging has several advantages:
- Spawned goroutines no longer get stuck for extended periods of
time during shutdown when passing in a context that gets canceled.
- Log output can be directed towards a specific logger instead of the global
default, for example one which writes to a testing.T instance.
- The new methods return an error when something went wrong instead of
merely recording the error.
That last point is the reason for deprecating the old methods instead of merely
adding new alternatives.
Setting a context when constructing an EventBroadcaster makes calling Shutdown
optional. It can also be used to specify the logger.
Both EventRecorder interfaces in tools/events and tools/record now have a
WithLogger helper. Using that method is optional, but recommended to support
contextual logging properly. Without it, errors that occur while emitting an
event are not associated with the caller.
For now:
- shim FORCE_HOST_GO to GOTOOLCHAIN=local
- treat GOTOOLCHAIN set and !=auto like FORCE_HOST_GO
- otherwise set GOTOOLCHAIN=go${GO_VERSION} and fallback to gimme if necessary
TODO: set toolchain statements in go.mod files and keep them in sync
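A sketch of that selection logic (the exact plumbing in the hack/ scripts may
differ):
```
# Sketch: pick the Go toolchain as described above.
if [[ "${FORCE_HOST_GO:-}" == "true" ]]; then
    export GOTOOLCHAIN="local"            # shim FORCE_HOST_GO
elif [[ -n "${GOTOOLCHAIN:-}" && "${GOTOOLCHAIN}" != "auto" ]]; then
    :                                     # explicit GOTOOLCHAIN wins, like FORCE_HOST_GO
else
    export GOTOOLCHAIN="go${GO_VERSION}"  # pin the desired version
    # fall back to gimme if the host go cannot provide this toolchain
fi
```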
This particular warning didn't make it into
https://github.com/kubernetes/kubernetes/issues/117288. Discussion on Slack
concluded that "it's hard to have a universal policy for all functions marked
deprecated" and thus this can only be a hint which must be considered on a
case-by-case basis.
For example, APIs like sets.String are very unlikely to ever go away, therefore
it is entirely up to developers whether they switch to sets.Set even though
sets.String is marked as deprecated.
Ideally, the deprecation message should explain this. It doesn't for sets ("use
generic Set instead"), so a better message in that case would have been
"consider using generic Set instead".
The voting in https://github.com/kubernetes/kubernetes/issues/117288 led to
one check that got rejected ("ifElseChain: rewrite if-else to switch
statement") and several that are "nice to know".
golangci-lint's support for issue "severity" is too limited to identify "nice
to know" issues in the output (filtering is only possible per linter, without
considering the issue text, and severity is not part of the text output).
Therefore a third configuration gets
added which emits all issues (must fix and nits). The intention is to use
the "strict" configuration in pull-kubernetes-verify and the "hints"
configuration in a new non-blocking pull-kubernetes-linter-hints.
That way, "must fix" issues will block merging, while issues that may be useful
show up in a failed optional job. However, that job then also contains
"must fix" issues, partly because filtering those out would make the
configuration a lot larger and is likely to be unreliable (all "must fix"
issues would need to be identified and listed), partly because it may be useful
to have all issues in one place.
The previous approach of manually keeping two configs in sync with special
comments didn't scale to three configs. Now a single golangci.yaml.in with
text/template constructs contains the source for all three configs. A new
simple CLI frontend for text/template (cmd/gotemplate) is used by
hack/update-golangci-lint-config.sh to generate the three flavors.
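Conceptually, generation is a single templating pass per flavor. This
invocation is a hypothetical sketch, including the key=value syntax; see
hack/update-golangci-lint-config.sh for the real arguments:
```
# Hypothetical: render one flavor of the config from the shared template.
go run ./cmd/gotemplate <hack/golangci.yaml.in >hack/golangci-strict.yaml Strict=true Hints=false
```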
Several verify scripts used the same pattern: check for a clean working tree,
regenerate files, check for diffs. The code for that is now in
kube::verify::generated, defined in hack/lib/verify-generated.sh, and those
scripts just source it.
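A verify script then reduces to roughly the following; the argument convention
of kube::verify::generated is an assumption here:
```
#!/usr/bin/env bash
# Sketch: check for a clean tree, regenerate, diff - all inside the helper.
source "$(dirname "${BASH_SOURCE[0]}")/lib/verify-generated.sh"
kube::verify::generated "Generated files are out of date" \
    "Run hack/update-codegen.sh to refresh them" \
    hack/update-codegen.sh
```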
For some reason, go list in go1.21 no longer allows importing main packages,
even if it is done only for the sake of tracking dependencies (which is a valid
use case).
A suggested workaround is to use the -e flag to permit processing of erroneous
packages. However, relying on that doesn't seem prudent.
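For illustration, the rejected workaround would have looked like this (the
package path is a placeholder, and the quoted error is paraphrased):
```
# Without -e, go list fails with something like:
#   package k8s.io/kubernetes/cmd/example is a program, not an importable package
# With -e, such erroneous packages are still processed:
go list -e -deps ./cmd/...
```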
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
That release is the first one with official support for Go 1.21. go-ruleguard
must be >= 0.3.20 because of
https://github.com/quasilyte/go-ruleguard/issues/449 with Go
1.21. golangci-lint itself doesn't depend on a recent enough release yet, so
this was done manually.
Client-side extract calls depend on `managedFields`, which might not be
available. Therefore they should not be used in production code.
They are okay in test files (because the API has to be tested), in generated
code (because the various type-specific APIs still need to be provided), and in
unstructured.go (same reason).
This bump is done because the latest version of staticcheck includes a fix for
a false positive we reported, discovered while bumping to go1.20.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
For example, this is a false positive that currently exists in the code base:
```
test/e2e_node/dra_test.go:129:4: ginkgo-linter: use a function call in Consistently. This actually checks nothing, because Consistently receives the function returned value, instead of function itself, and this value is never changed; consider using `gomega.Consistently(ctx, e2epod.Get).WithArguments(f.ClientSet, pod).WithTimeout(podInPendingStateTimeout).Should(e2epod.BeInPhase(v1.PodPending),
"Pod should be in Pending state as resource preparation time outed")` instead (ginkgolinter)
gomega.Consistently(ctx, e2epod.Get(f.ClientSet, pod)).WithTimeout(podInPendingStateTimeout).Should(e2epod.BeInPhase(v1.PodPending),
^
```
It's a false positive because e2epod.Get returns the function that Consistently
is meant to call.
This could be worked around by assigning e2epod.Get(f.ClientSet, pod) to a
variable and then using that variable, but that is less readable.
In the wait_node_ready function, two steps are performed:
1. Check if the node exists
2. Wait for the node to enter the ready state
If the first step fails, the second step should not run; otherwise it wastes
300 seconds waiting.
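A hedged sketch of the resulting early return (the function body is an
assumption, not the script's actual code):
```
wait_node_ready() {
    local node="$1"
    # Step 1: bail out immediately if the node doesn't exist at all.
    if ! kubectl get node "${node}" >/dev/null 2>&1; then
        echo "ERROR: node ${node} does not exist" >&2
        return 1
    fi
    # Step 2: only now spend up to 300s waiting for readiness.
    kubectl wait --for=condition=Ready "node/${node}" --timeout=300s
}
```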
Container runtimes like CRI-O and containerd reuse the code by copying it from
Kubernetes. To have a single source of truth for the streaming server, we now
move the already isolated implementation to the k8s.io/kubelet staging
repository. This way runtimes can reuse the code without copying it.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Because this doesn't get invoked through verify.sh, we have to call
logJu ourselves to get a JUnit file. errexit must not be set when
calling logJu, otherwise it does no post-processing.
When sh2ju.sh was called to generate the junit_verify.xml, it used to include
the entire output of a failed script twice: once as failure message, once as
log output.
This output can be large and often the actual failure isn't near the top, but
rather at the end or (in the case of the different golangci-lint invocations)
embedded in the log. This makes them hard to see at a glance when looking at
the Prow result page for a job.
Now a verify script can prefix relevant lines with "ERROR: " and then only
those lines are used as failure message in JUnit, without that prefix.
That string was chosen because Prow itself also then picks up those lines when
viewing the entire build log and it is unlikely that some script prints such
lines when they are not meant to be part of the failure.
If some script outputs no such lines, "see stderr for details" is used as
failure message. This is better than before because it avoids the redundancy.
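In a verify script this only requires prefixing the relevant lines, for
example:
```
# Only lines with the "ERROR: " prefix become the JUnit failure message.
echo "ERROR: generated files are out of date, run hack/update-codegen.sh" >&2
```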
Fixed the Ginkgo deprecation warning:
You're using deprecated Ginkgo functionality:
=============================================
--untilItFails is deprecated, use --until-it-fails instead
Used a consistent approach for this flag in the e2e_node and e2e scripts.
The setting for garbagecollector was added 7 years ago in 9ac91e5172 for
"debugging gc". graph_builder was added 6 years ago in a98801c1 when restoring
the -vmodule parameter after some temporary removal, without an explanation.
It seems safe to assume that the garbage collector has been debugged
sufficiently...
These defaults cause performance overhead:
- Enabling -vmodule slows down all log calls because checking verbosity
cannot take a simpler fast path.
- The amount of log output is much higher for those files.
The amount of log data also caused test output to get truncated, removing the
actual test failure explanation.
Previously it would corrupt the log when it ran stuff like:
go mod tidy >> "${LOG_FILE}" 2>&1
because this would reopen the file. Also, if that failed, the `finish`
function would be called, with its output also going to the log.
Now we let &1 and &2 always be the log, and &11 and &22 are the real
stdout/stderr, which means we have to say so explicitly when we want terminal
output.
No, I cannot do `OUT="&11"` - I would have to use `eval` to make that
work.
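Condensed, the new idiom looks like this:
```
# Duplicate the real stdout/stderr, then point &1/&2 at the log for good.
exec 11>&1 22>&2
exec >>"${LOG_FILE}" 2>&1
go mod tidy                        # no per-command redirection, the log can't be corrupted
echo "progress for humans" >&11    # explicit when the terminal is really wanted
```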
podman returns a nonzero exit code for `docker buildx`, because it lacks that
subcommand.
`docker buildx version` should work both in podman and docker. Tested both
with docker-ce-20.10.18 + docker-buildx-plugin-0.10.2 and podman-4.5.0 +
podman-docker.
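The resulting check is straightforward (a sketch; the real script may word it
differently):
```
# "docker buildx version" succeeds under both docker and podman-docker,
# while other buildx invocations make podman exit nonzero.
if ! docker buildx version >/dev/null 2>&1; then
    echo "ERROR: docker buildx support is required" >&2
    exit 1
fi
```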
In the installation script we use the coreos/etcd path, which redirects to
etcd-io/etcd. This commit replaces it with the etcd-io/etcd path.
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
This PR updates references to the legacy release bucket, excluding CHANGELOG
updates.
Signed-off-by: Ricky Sadowski <richard.j.sadowski@gmail.com>
- if binaries are already present, skip building them
- install missing packages like nftables and kmod
- work better when cgroups v2 is present
- update to newer CNI version (v1.2.0)
- Ensure we wait for coredns to stabilize
- Grab docker log as well (this has containerd logs too)
Used tips from:
- https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
Tested locally in an environment as close to CI as possible:
- https://gist.github.com/dims/3c83730c99f61e36b8dd2d61abe68fe7
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
This involves moving the report files, but it allows me to delete the
indirect variable and indirect array code in update-codegen. As proud as
I was of figuring that out, I am also ashamed of myself for doing it.
This is my atonement.
Use the "subprojects" aspect of update-codegen to generate openapi for
the subprojects. Next we can simplify and remove the generic support.
apiextensions-apiserver seems like it was ALWAYS broken:
k8s.io/apiextensions/ doesn't exist, but k8s.io/apiextensions-apiserver
does.
Fixing that causes different openapi results, obviously.
Here's what others in our ecosystem are doing:
https://cs.k8s.io/?q=GINKGO_POLL_PROGRESS_(AFTER%7CINTERVAL)&i=nope&files=&excludeFiles=&repos=
The logs are currently too big, partially because of this incessant output
from the progress reporting.
When someone wants to debug something, they can set these parameters to lower
values to capture the additional logs.
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
k8s_tag_files_matching looks for a slash after its argument, so the current
value doesn't match anything.
Also update codegen. This is required for the apiextensions-apiserver tests:
after fixing the apiextensions server tests to use type-aware SSA (instead of
erroneously using untyped SSA), there were errors because none of the
apiextensions types were actually used in the openapi given to the tests.
In strict mode, stylecheck complains about Convert_* and SetDefaults_*
functions in Kubernetes because they use underscores. We want to allow that to
make the functions more readable.
This moves the hack/ directory and scripts to the examples dir, which is
a distinct module. This avoids some Go unpleasantness around module
boundaries and just makes more sense.
When running this script more than once on Debian and Ubuntu, we fail to
chown -R the CERT_DIR because one file is owned by root while the CERT_DIR is
owned by the unprivileged user running the script.
Let's remove that file before generating the certs; that is something we can
always do. This fixes the problem on Debian and Ubuntu local setups.
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
This commit is the main API piece of KEP-3257 (ClusterTrustBundles).
This commit:
* Adds the certificates.k8s.io/v1alpha1 API group
* Adds the ClusterTrustBundle type.
* Registers the new type in kube-apiserver.
* Implements the type-specific validation specified for
ClusterTrustBundles:
- spec.pemTrustAnchors must always be non-empty.
- spec.signerName must be either empty or a valid signer name.
- Changing spec.signerName is disallowed.
* Implements the "attest" admission check to restrict actions on
ClusterTrustBundles that include a signer name.
Because it wasn't specified in the KEP, I chose to make attempts to
update the signer name validation errors, rather than silently ignoring
them.
I have tested this out by launching these changes in kind and
manipulating ClusterTrustBundle objects in the resulting cluster using
kubectl.
The apiextensions-apiserver itself only depends on the following runtime
libraries when linking dynamically:
```
> ldd _output/bin/apiextensions-apiserver
linux-vdso.so.1 (0x00007ffd1b39f000)
libpthread.so.0 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libpthread.so.0 (0x00007fe836022000)
libc.so.6 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libc.so.6 (0x00007fe835e00000)
/nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/ld-linux-x86-64.so.2 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib64/ld-linux-x86-64.so.2 (0x00007fe836029000)
```
We now move the apiextensions-apiserver to become a static binary as
well to achieve maximum portability.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
If a CRI error occurs during the terminating phase after a pod is
force deleted (API or static) then the housekeeping loop will not
deliver updates to the pod worker which prevents the pod's state
machine from progressing. The pod will remain in the terminating
phase but no further attempts to terminate or cleanup will occur
until the kubelet is restarted.
The pod worker now maintains a store of the state of the pods it is
attempting to reconcile, and uses that to resync unknown pods when
SyncKnownPods() is invoked, so that failures in sync methods for
unknown pods no longer hang forever.
The pod worker's store tracks desired updates and the last update
applied on podSyncStatuses. Each goroutine now synchronizes to
acquire the next work item, context, and whether the pod can start.
This synchronization moves the pending update to the stored last
update, which will ensure third parties accessing pod worker state
don't see updates before the pod worker begins synchronizing them.
As a consequence, the update channel becomes a simple notifier
(struct{}) so that SyncKnownPods can coordinate with the pod worker
to create a synthetic pending update for unknown pods (i.e. no one
besides the pod worker has data about those pods). Otherwise the
pending update info would be hidden inside the channel.
In order to properly track pending updates, we have to be very
careful not to mix RunningPods (which are calculated from the
container runtime and are missing all spec info) and config-
sourced pods. Update the pod worker to avoid using ToAPIPod()
and instead require the pod worker to directly use
update.Options.Pod or update.Options.RunningPod for the
correct methods. Add a new SyncTerminatingRuntimePod to prevent
accidental invocations with runtime-only pod data.
Finally, fix SyncKnownPods to replay the last valid update for
undesired pods which drives the pod state machine towards
termination, and alter HandlePodCleanups to:
- terminate runtime pods that aren't known to the pod worker
- launch admitted pods that aren't known to the pod worker
Any started pods receive a replay until they reach the finished
state, and then are removed from the pod worker. When a desired
pod is detected as not being in the worker, the usual cause is
that the pod was deleted and recreated with the same UID (almost
always a static pod since API UID reuse is statistically
unlikely). This simplifies the previous restartable pod support.
We are careful to filter for active pods (those not already
terminal or those which have been previously rejected by
admission). We also force a refresh of the runtime cache to
ensure we don't see an older version of the state.
Future changes will allow other components that need to view the
pod worker's actual state (not the desired state the podManager
represents) to retrieve that info from the pod worker.
Several bugs in pod lifecycle have been undetectable at runtime
because the kubelet does not clearly describe the number of pods
in use. To better report, add the following metrics:
kubelet_desired_pods: Pods the pod manager sees
kubelet_active_pods: "Admitted" pods that gate new pods
kubelet_mirror_pods: Mirror pods the kubelet is tracking
kubelet_working_pods: Breakdown of pods from the last sync in
each phase, orphaned state, and static or not
kubelet_restarted_pods_total: A counter for pods that saw a
CREATE before the previous pod with the same UID was finished
kubelet_orphaned_runtime_pods_total: A counter for pods detected
at runtime that were not known to the kubelet. Will be
populated at Kubelet startup and should never be incremented
after.
Add a metric check to our e2e tests that verifies the values are
captured correctly during a serial test, and then verify them in
detail in unit tests.
Adds 23 series to the kubelet /metrics endpoint.
Currently we only clean up on exit. Let's trap SIGINT (ctrl-c) too, so we
always clean up everything.
Otherwise, if we ctrl-c it is easy to leave something running, especially if
we ctrl-c while the cleanup function is running. When something is left
running and we don't reuse the certs ($REUSE_CERTS), which is the default,
subsequent runs fail in weird ways because we can't auth with the new certs.
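The trap itself is a one-liner; cleanup stands for the script's existing
teardown function, which should tolerate being invoked twice:
```
# Run cleanup both on normal exit and on ctrl-c (SIGINT).
trap cleanup EXIT INT
```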
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
https://github.com/kubernetes/kubernetes/pull/109728 added a
golangci-strict.yaml where ginkgolinter and stylecheck (some recent additions
to golangci.yaml) were missing.
To prevent such mistakes in the future, lines that are intentionally different
get annotated with a comment about golangci-strict.yaml or golangci.yaml.
Then a suitable diff command in the new verify-golangci-lint-config.sh checks
that only such lines, comments and blank lines are different.
The long-term goal is that when "make verify" is invoked in a pull job, it will
also run golangci-lint with the strict configuration and write an
$ARTIFACTS/golangci-lint-githubactions.log file with GitHub Actions
annotations. How to get those published for the GitHub PR is still open.
When "make verify" is invoked manually or in any other job, the stricter check
will be skipped. That works because "PR_NUMBER" is only set for pre-merge
jobs (https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md#job-environment-variables).
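The gate can then be as simple as this sketch:
```
# PR_NUMBER is only set by Prow for pre-merge jobs; skip strict linting elsewhere.
if [[ -z "${PR_NUMBER:-}" ]]; then
    echo "Not a pre-merge job, skipping strict golangci-lint."
    exit 0
fi
```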
Because strict linting is still new and might not be useful without a better
UI (= GitHub annotations), this PR does not fully enable the integration into
"make verify". Instead, the new verify-golangci-lint-pr.sh is excluded from it
and needs to be run in a separate job.
It is useful to check new code with a stricter configuration because we want it
to be of higher quality. Code reviews also become easier when reviewers don't
need to point out inefficient code manually.
What exactly should be enabled is up for debate. The current config uses the
golangci-lint defaults plus everything that is enabled explicitly by the normal
golangci.yaml, just to be on the safe side.
go list -find takes ~60% of the time:
```
$ time go list -e ./... | grep -E -v "/(build|third_party|vendor|staging|clientset_generated|hack)/" | md5sum
b5593b3f51f3b3cd08c33bbff9627d10 -
real 0m2.687s
user 0m3.624s
sys 0m1.552s
$ time go list -find -e ./... | grep -E -v "/(build|third_party|vendor|staging|clientset_generated|hack)/" | md5sum
b5593b3f51f3b3cd08c33bbff9627d10 -
real 0m1.721s
user 0m1.675s
sys 0m1.197s
```
https://github.com/kubernetes/kubernetes/pull/116166#discussion_r1123924871
To run just a specific linter based on command line flags, the default
configuration needs to be disabled (because it would enable additional ones)
and then command line flags must be passed through to "golangci-lint run".
For example, to lint with just "go vet" in verbose mode, use:
verify-golangci-lint.sh -c none -- --disable-all --enable=govet -v
LC_ALL is always wanted and GREP_OPTIONS is never wanted. The `grep
--color=never` dates back to 2016, an issue with old grep on Macs, which
was hard to deal with when this was all Makefile magic. Now that it's a
script, we can do it more simply.
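Which now amounts to two plain lines (a sketch; LC_ALL=C is the conventional
choice assumed here):
```
export LC_ALL=C      # always wanted: predictable locale behavior
unset GREP_OPTIONS   # never wanted: may inject options like --color
```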
Because the script now explicitly selects the configuration file, the files no
longer have to be in the root directory. Having them in hack without the
leading dot is better because they then have the same owners as the script and
are more visible.
The downside is that manual invocations of golangci-lint without the parameter
no longer work.
All wrappers except for ExpectNoError are identical to their gomega
counterparts. The only advantage that they have is that their invocations are
shorter.
That advantage does not outweigh their disadvantages:
- cannot be used in combination with gomega.Eventually/Consistently
- not a full replacement for gomega, so we just end up using both
- don't support passing a stack offset and thus cannot be used in helper
functions
- ginkgolinter does not work for them, so sub-optimal calls like this one
are not reported:
framework.ExpectEqual(len(items), 0)
->
gomega.Expect(items).To(gomega.BeEmpty())
- developers try to make do with what's available in the framework, leading
to sub-optimal checks like this:
framework.ExpectEqual(true, strings.Contains(event.Message, expectedEventError), "Event error should indicate non-root policy caused container to not start")
->
gomega.Expect(event.Message).To(gomega.ContainSubstring(expectedEventError), "Event error should indicate non-root policy caused container to not start")
So let's remove these wrappers. As a first step they get marked as deprecated.
This lets stricter
linting (https://github.com/kubernetes/kubernetes/pull/109728), once enabled,
report new code which still uses them.