The kubelet's CodecFactory is now constructed with EnableStrict, so deserializing
a malformed Kubelet component config (e.g. one with unknown or duplicate keys) returns an error.
When strict decoding of a v1beta1 config fails, non-strict decoding is used as a fallback and a warning is emitted.
To support this, NewSchemeAndCodecs is now a variadic function whose
arguments augment the returned codec factory. Strict decoding is
then explicitly enabled when decoding a kubelet config.
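A sketch of what such a variadic constructor might look like. The option types here follow k8s.io/apimachinery's serializer package, but the exact shape of the kubelet's function is an assumption:
```go
package scheme

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"

	kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
	kubeletconfigv1beta1 "k8s.io/kubernetes/pkg/kubelet/apis/config/v1beta1"
)

// NewSchemeAndCodecs forwards codec-factory option mutators (such as
// serializer.EnableStrict) to NewCodecFactory, so each caller decides
// how strict the returned codecs are. Sketch only.
func NewSchemeAndCodecs(mutators ...serializer.CodecFactoryOptionsMutator) (*runtime.Scheme, *serializer.CodecFactory, error) {
	scheme := runtime.NewScheme()
	if err := kubeletconfig.AddToScheme(scheme); err != nil {
		return nil, nil, err
	}
	if err := kubeletconfigv1beta1.AddToScheme(scheme); err != nil {
		return nil, nil, err
	}
	codecs := serializer.NewCodecFactory(scheme, mutators...)
	return scheme, &codecs, nil
}
```
A caller that wants strict decoding would then pass `serializer.EnableStrict` as an argument.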
Additionally, decoding a RemoteConfigSource needs to stay non-strict
so that newer API fields that are not yet known to the Kubelet do not
cause a spurious error.
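To illustrate the strict/lenient distinction in isolation, here is a small stand-alone example using sigs.k8s.io/yaml (this is not the kubelet's actual decoding path):
```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// config stands in for a versioned API type; "newer" plays the role
// of a field added in a later API version.
type config struct {
	Known string `json:"known"`
}

func main() {
	data := []byte("known: a\nnewer: b\n")

	var c config
	// Strict decoding rejects the unknown field...
	if err := yaml.UnmarshalStrict(data, &c); err != nil {
		fmt.Println("strict:", err)
	}
	// ...while lenient decoding ignores it, which is what a
	// RemoteConfigSource carrying newer fields needs.
	if err := yaml.Unmarshal(data, &c); err == nil {
		fmt.Println("lenient: ok, known =", c.Known)
	}
}
```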
DecodeKubeletConfiguration returns a wrapped error instead of a simple string
so its type can be tested.
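A minimal sketch of the wrapping pattern, with a hypothetical error type standing in for the codec's strict-decoding error:
```go
package main

import (
	"errors"
	"fmt"
)

// strictDecodingError is a hypothetical stand-in for the error type
// the strict codec returns; the real type lives in apimachinery.
type strictDecodingError struct{ msg string }

func (e *strictDecodingError) Error() string { return e.msg }

// decodeKubeletConfiguration sketches the change: wrapping with %w
// preserves the underlying error so callers can test its type.
func decodeKubeletConfiguration(data []byte) error {
	err := &strictDecodingError{msg: `unknown field "foo"`}
	return fmt.Errorf("failed to decode kubelet config: %w", err)
}

func main() {
	err := decodeKubeletConfiguration([]byte(`foo: bar`))
	var strictErr *strictDecodingError
	if errors.As(err, &strictErr) {
		fmt.Println("strict decoding failed:", strictErr.msg)
	}
}
```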
Add unit tests for unhappy paths when loading a component config
Add keys for test cases struct fields, remove nil field initialization
Co-Authored-By: Jordan Liggitt <jordan@liggitt.net>
The `Get` method of the fake clientset returns an object that the real clientset would never return. Reproducer:
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Namespace: metav1.NamespaceSystem, Name: "cm"},
	}
	client := fake.NewSimpleClientset(cm)
	obj, err := client.CoreV1().ConfigMaps("").Get("", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("obj: %#v\n", obj)
}
```
The file was stored as `test.go` in the root directory of `github.com/kubernetes/kubernetes` (master HEAD) and run with:
```sh
$ go run test.go
obj: &v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cm", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}
```
As you can see, a fake clientset seeded with a "cm" ConfigMap is created. When getting the object back through the clientset, I intentionally set the object name to an empty string. I would expect to get an error saying ConfigMap "" was not found. However, I get the "cm" ConfigMap instead.
The reason is the implementation of the private `filterByNamespaceAndName` function [1]:
```go
func filterByNamespaceAndName(objs []runtime.Object, ns, name string) ([]runtime.Object, error) {
	var res []runtime.Object

	for _, obj := range objs {
		acc, err := meta.Accessor(obj)
		if err != nil {
			return nil, err
		}
		if ns != "" && acc.GetNamespace() != ns {
			continue
		}
		if name != "" && acc.GetName() != name {
			continue
		}
		res = append(res, obj)
	}

	return res, nil
}
```
When `name` is empty, the condition `name != "" && acc.GetName() != name` short-circuits to false, so `obj` is never skipped and is considered a match.
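To see the short-circuit in isolation (stand-alone snippet, not the fixture code itself):
```go
package main

import "fmt"

func main() {
	requested, actual := "", "cm" // requested name is empty
	// Mirrors the guard above: the left operand is false, the &&
	// short-circuits, and the object is never filtered out.
	skip := requested != "" && actual != requested
	fmt.Println(skip) // false: the "cm" object is kept
}
```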
[1] https://github.com/kubernetes/client-go/blob/master/testing/fixture.go#L481-L493
A preemption is a disruption event that should have a metric so that
the rate of preemption can be assessed. Nodes that are under heavy
preemption may have conflicting workloads or otherwise need attention.
A sudden burst of preemption on a cluster in steady state could
indicate pathological conditions within the scheduler or workload
controllers.
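A hedged sketch of what such a counter could look like with Prometheus client_golang; the metric name and placement here are assumptions, not necessarily the scheduler's actual metric:
```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// preemptionAttempts is a hypothetical counter for illustration.
var preemptionAttempts = prometheus.NewCounter(prometheus.CounterOpts{
	Subsystem: "scheduler",
	Name:      "total_preemption_attempts",
	Help:      "Total preemption attempts in the cluster so far.",
})

func main() {
	prometheus.MustRegister(preemptionAttempts)
	// Incremented wherever the scheduler decides to preempt pods.
	preemptionAttempts.Inc()
	fmt.Println("recorded one preemption attempt")
}
```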
This patch fixes an issue where best-effort pods were not admitted
to the node if the single-numa-node policy was set.
This was because the single-numa-node policy's Admit logic rejects any
pod whose hint is anything but a single NUMA node. The 'best hint' in this
case is {<mask with a bit set for each NUMA node on the machine>, true}.
So on a machine with 2 NUMA nodes, the best hint for a best-effort pod is {11, true}, as best-effort pods have no topology preferences.
The single-numa-node policy fails any pod whose hint is either not preferred or has more than one bit set, so the above example results in terminated pods with a Topology Affinity Error.
This is a short term fix for the single-numa-node policy, as there will be code refactoring for the 1.17 release.
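To make the failure mode concrete, here is a minimal stand-alone sketch of the admit check described above (simplified types; not the TopologyManager's actual code):
```go
package main

import (
	"fmt"
	"math/bits"
)

// hint mirrors the TopologyManager's {mask, preferred} pair in
// simplified form; types and names here are assumptions.
type hint struct {
	mask      uint64 // one bit per NUMA node; 0b11 is written "11" above
	preferred bool
}

// admitSingleNUMANode mimics the policy described above: reject any
// hint that is not preferred or that spans more than one NUMA node.
func admitSingleNUMANode(h hint) bool {
	return h.preferred && bits.OnesCount64(h.mask) <= 1
}

func main() {
	// Best hint for a best-effort pod on a 2-NUMA-node machine: {11, true}.
	bestEffort := hint{mask: 0b11, preferred: true}
	fmt.Println(admitSingleNUMANode(bestEffort)) // false: the pod is rejected
}
```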
This patch fixes an issue in the TopologyManager that wouldn't allow
pods to be admitted if pods were launched with the SingleNUMANode policy
and any of the hint providers had no NUMA preferences.
This is due to 2 factors:
1) Any hint provider that passes back `nil` as its hints has those hints
automatically transformed into a single {11 true} hint before merging.
2) We added special casing for the SingleNumaNodePolicy() in the
TopologyManager that essentially turns these hints into
{11 false} anytime a {11 true} is seen.
The current patch reworks this logic so that the TopologyManager can
tell the difference between a "don't care" hint and a true "{11 true}"
hint returned by the hint provider. Only true "{11 true}" hints will be
converted by the special casing for the SingleNumaNodePolicy(), while
"don't care" hints will not.
This is a short term fix for this issue until we do a larger refactoring
of this code for the 1.17 release.
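A minimal sketch of the reworked distinction (names and shapes are assumptions for illustration only, not the actual TopologyManager code):
```go
package main

import "fmt"

type hint struct {
	mask      uint64
	preferred bool
}

// normalize models the reworked merge step: a nil hint slice is a
// "don't care" and stays preferred, while an explicit {11 true} from a
// provider is downgraded by the single-numa-node special case.
func normalize(providerHints []hint, allNUMAs uint64) []hint {
	if providerHints == nil {
		// "don't care": keep it preferred so the pod can still be admitted.
		return []hint{{mask: allNUMAs, preferred: true}}
	}
	out := make([]hint, 0, len(providerHints))
	for _, h := range providerHints {
		if h.mask == allNUMAs && h.preferred {
			// Explicit cross-NUMA hint: not acceptable under single-numa-node.
			h.preferred = false
		}
		out = append(out, h)
	}
	return out
}

func main() {
	const allNUMAs = 0b11 // machine with 2 NUMA nodes
	fmt.Println(normalize(nil, allNUMAs))                      // [{3 true}]: "don't care" stays admissible
	fmt.Println(normalize([]hint{{allNUMAs, true}}, allNUMAs)) // [{3 false}]: explicit {11 true} is downgraded
}
```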