The repair loop controller watches the configured ServiceCIDRs
and uses them to repair the IPAddresses assigned
by the kube-apiserver.
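A minimal sketch of the kind of consistency check such a repair loop
performs, using plain CIDR and IP strings and hypothetical helper names
rather than the real ServiceCIDR and IPAddress objects:

```go
package main

import (
	"fmt"
	"net/netip"
)

// orphanedIPs returns the allocated addresses that fall outside every
// configured ServiceCIDR; these are the inconsistencies a repair loop
// would have to clean up. (Illustrative helper, not the controller code.)
func orphanedIPs(serviceCIDRs, allocated []string) []string {
	var prefixes []netip.Prefix
	for _, c := range serviceCIDRs {
		if p, err := netip.ParsePrefix(c); err == nil {
			prefixes = append(prefixes, p)
		}
	}
	var orphans []string
	for _, a := range allocated {
		ip, err := netip.ParseAddr(a)
		if err != nil {
			continue
		}
		covered := false
		for _, p := range prefixes {
			if p.Contains(ip) {
				covered = true
				break
			}
		}
		if !covered {
			orphans = append(orphans, a)
		}
	}
	return orphans
}

func main() {
	fmt.Println(orphanedIPs([]string{"10.96.0.0/16"}, []string{"10.96.0.10", "192.168.1.5"}))
}
```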
Change-Id: I8cfe8fd6285ea91192fc4ec72eaeea1eb004a235
Change-Id: If4be12e2c67b340d86c4efa2f9fb3672f0661636
Create a new allocator that uses the ServiceCIDRs configured in the
system to create IPAllocators.
The CIDRAllocator creates one IPAllocator per parent ServiceCIDR;
since overlapping ServiceCIDRs are allowed, there is no need to have
an allocator per individual ServiceCIDR.
The benefit of this IPAllocator is that it uses the informer cache as
storage, so it does not need to keep its own cache and acts only as a
logical abstraction. This allows IPAllocators to be created and deleted
without any penalty.
IPAllocators can allocate IP addresses only if they are ready (not
being deleted).
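A rough sketch of the idea, with hypothetical type and field names (not
the actual allocator code): one allocator per parent ServiceCIDR prefix,
reading state from a shared cache instead of its own bookkeeping, and
refusing to allocate while the ServiceCIDR is being deleted.

```go
package allocator

import (
	"errors"
	"net/netip"
)

// ipCache stands in for the informer cache: the allocator does not keep
// its own bookkeeping, it only asks the cache whether an address is in use.
type ipCache interface {
	Has(ip netip.Addr) bool
}

type IPAllocator struct {
	prefix   netip.Prefix // the parent ServiceCIDR this allocator serves
	cache    ipCache
	deleting bool // set when the parent ServiceCIDR is being deleted
}

// Allocate reserves ip if the allocator is ready, the address belongs to
// the parent ServiceCIDR, and the cache does not already contain it.
func (a *IPAllocator) Allocate(ip netip.Addr) error {
	if a.deleting {
		return errors.New("allocator is not ready: ServiceCIDR is being deleted")
	}
	if !a.prefix.Contains(ip) {
		return errors.New("address is outside the parent ServiceCIDR")
	}
	if a.cache.Has(ip) {
		return errors.New("address is already allocated")
	}
	// Creating the IPAddress object in the apiserver is what actually
	// records the allocation; the informer cache will reflect it.
	return nil
}
```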
Change-Id: I3fdda69991907c39cca3120fe2d850f14dcccec2
After enabling the PersistentVolumeLastPhaseTransitionTime feature, any
test that compares PV objects that transitioned phase needs to handle
timestamp values correctly.
Either the test should avoid phase transitions when they are not needed,
or it should set the same timestamp on the new PV object so the field
does not change and the objects can later be checked for equality; the
latter approach is used in this commit.
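A simplified illustration of the second approach, using a stand-in
struct and plain time.Time rather than the real API types:

```go
package persistentvolume_test

import (
	"reflect"
	"testing"
	"time"
)

// pvStatus is a stand-in for the PV fields the test cares about; the real
// object stores the timestamp as a metav1.Time.
type pvStatus struct {
	Phase                   string
	LastPhaseTransitionTime time.Time
}

func TestComparePVAfterPhaseTransition(t *testing.T) {
	oldPV := pvStatus{Phase: "Bound", LastPhaseTransitionTime: time.Now()}

	// Build the expected object, then copy the timestamp from the original
	// so reflect.DeepEqual is not tripped up by a value the test does not
	// care about.
	expectedPV := pvStatus{Phase: "Bound"}
	expectedPV.LastPhaseTransitionTime = oldPV.LastPhaseTransitionTime

	if !reflect.DeepEqual(oldPV, expectedPV) {
		t.Errorf("unexpected PV: got %v, want %v", oldPV, expectedPV)
	}
}
```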
This field is not needed: IPAddresses are unique and
the name is canonicalized to avoid duplicates.
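For illustration, parsing and re-serializing an address yields the
canonical form used as the object name (a hedged example, not the actual
canonicalization code):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Different textual spellings of the same IPv6 address collapse to one
	// canonical name, so a separate address field would be redundant.
	addr := netip.MustParseAddr("2001:DB8:0:0:0:0:0:1")
	fmt.Println(addr.String()) // 2001:db8::1
}
```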
Change-Id: Iccaaf5d55e2af61fea7af9abd39584a80ed4054e
The repair loops are great for saving us from leaks, but the side effect
is that bugs can go unnoticed for a long time, so we need some
signal to be able to identify those errors proactively.
Add two new metrics to identify:
- errors in the reconcile loop
- errors per ClusterIP
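A minimal sketch of what such counters could look like with the
Prometheus client library; the metric and package names here are made up
for illustration:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	// Total number of reconcile (repair) loop runs that ended in an error.
	reconcileErrors = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "clusterip_repair_reconcile_errors_total", // hypothetical name
		Help: "Number of reconcile loop failures in the ClusterIP repair controller.",
	})

	// Errors detected for individual ClusterIPs, e.g. leaked or duplicate IPs.
	ipErrors = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "clusterip_repair_ip_errors_total", // hypothetical name
		Help: "Number of ClusterIPs that could not be repaired, by error type.",
	}, []string{"type"})
)

func init() {
	prometheus.MustRegister(reconcileErrors, ipErrors)
}
```

The NodePort repair loop in the next commit follows the same pattern.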
The repair loops are great for saving us from leaks, but the side effect
is that bugs can go unnoticed for a long time, so we need some
signal to be able to identify those errors proactively.
Add two new metrics to identify:
- errors in the reconcile loop
- errors per nodePort
The Service API REST implementation is complex and has to use different
hooks on the REST storage. The status store was making a shallow copy of
the storage before the hooks were added, so it was not inheriting them.
The status store must have the same hooks as the REST store to be able
to correctly handle the allocation and deallocation of ClusterIPs and
nodePorts.
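A schematic illustration of the bug with hypothetical, simplified types
(not the actual registry code): taking the shallow copy before the hook
fields are set leaves the status store without them, so the copy has to
be taken after the hooks are attached.

```go
package registry

// store is a hypothetical, simplified stand-in; the real REST storage is
// far more involved.
type store struct {
	name        string
	beginCreate func() error // allocation hook, e.g. reserve ClusterIPs/nodePorts
	afterDelete func()       // deallocation hook, e.g. release ClusterIPs/nodePorts
}

func newServiceStorage() (rest, status *store) {
	rest = &store{name: "services"}

	// BUG pattern: copying before the hooks are attached means the status
	// store never sees them.
	//   statusCopy := *rest            // shallow copy, hooks still nil
	//   rest.beginCreate = allocateIPs // only the rest store gets the hooks

	// Fixed pattern: attach the hooks first, then take the shallow copy so
	// both stores share the same allocation/deallocation behaviour.
	rest.beginCreate = allocateIPs
	rest.afterDelete = releaseIPs
	statusCopy := *rest
	statusCopy.name = "services/status"
	return rest, &statusCopy
}

func allocateIPs() error { return nil } // placeholder
func releaseIPs()        {}             // placeholder
```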
Change-Id: I44be21468d36017f0ec41a8f912b8490f8f13f55
Signed-off-by: Antonio Ojea <aojea@google.com>
When defining a ClusterIP Service, we can specify externalIPs, and the
traffic policy for those external IPs is subject to externalTrafficPolicy.
However, the policy can't be set when the type is neither NodePort nor
LoadBalancer, and it defaults to Cluster when kube-proxy processes the
Service.
This commit updates the defaulting and validation of Service to allow
specifying ExternalTrafficPolicy for ClusterIP Services with
ExternalIPs.
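For illustration, a Service that this change makes expressible, sketched
with the core/v1 Go types; the names and addresses are examples, not
taken from the commit:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleService builds a ClusterIP Service with externalIPs whose
// externalTrafficPolicy is set to Local, which validation previously rejected.
func exampleService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:                  corev1.ServiceTypeClusterIP,
			ExternalIPs:           []string{"192.0.2.10"},
			ExternalTrafficPolicy: "Local",
			Ports:                 []corev1.ServicePort{{Port: 80}},
		},
	}
}
```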
Signed-off-by: Quan Tian <qtian@vmware.com>
PVCs and containers shared the same ResourceRequirements struct to define their
API. When resource claims were added, that struct got extended, which
accidentally also changed the PVC API. To prevent such a mistake from happening
again, PVC now uses its own VolumeResourceRequirements struct.
The `Claims` field gets removed because the risk of breaking someone is low:
theoretically, YAML files which have a claims field for volumes now
get rejected when validating against the OpenAPI. Such files
have never made sense and should be fixed.
Code that uses the struct definitions needs to be updated.
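A simplified sketch of the split; these are illustrative definitions
with abbreviated field sets, not the exact API types:

```go
package api

// ResourceList maps resource names to quantities (simplified to strings here).
type ResourceList map[string]string

// ResourceRequirements stays the container-facing type and keeps Claims.
type ResourceRequirements struct {
	Limits   ResourceList
	Requests ResourceList
	Claims   []string // resource claims only make sense for containers
}

// VolumeResourceRequirements is the new PVC-facing type: same requests and
// limits, but deliberately no Claims field, so extending the container type
// can no longer leak into the PVC API.
type VolumeResourceRequirements struct {
	Limits   ResourceList
	Requests ResourceList
}
```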
The fact that the .status.loadBalancer field can be set while .spec.type
is not "LoadBalancer" is a flub. Any spec update will already clear
.status.ingress, so it's hard to really rely on this. After this
change, updates which try to set this combination will fail validation.
Existing cases of this will not be broken. Any spec/metadata update
will clear it (no error) and this is the only stanza of status.
New gate "AllowServiceLBStatusOnNonLB" is off by default, but can be
enabled if this change actually breaks someone, which seems exceedingly
unlikely.
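A hedged sketch of the validation rule, with simplified types and a
plain bool standing in for the feature gate (not the actual validation
code):

```go
package validation

import "errors"

type serviceForValidation struct {
	Type                string   // "ClusterIP", "NodePort", "LoadBalancer", ...
	LoadBalancerIngress []string // entries in status.loadBalancer.ingress
}

// validateLBStatus rejects a status.loadBalancer stanza on a Service whose
// spec.type is not LoadBalancer, unless the compatibility gate is enabled.
func validateLBStatus(svc serviceForValidation, allowServiceLBStatusOnNonLB bool) error {
	if allowServiceLBStatusOnNonLB {
		return nil
	}
	if svc.Type != "LoadBalancer" && len(svc.LoadBalancerIngress) > 0 {
		return errors.New("status.loadBalancer may only be set when spec.type is LoadBalancer")
	}
	return nil
}
```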
A PDB with an empty selector `{}` selects all the pods in a namespace.
However, during `drain`, all the pods were getting evicted, which is not expected.
This change fixes the issue and honors the PDB before evicting the pods.
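The behaviour the fix relies on, shown with the apimachinery helper: a
non-nil empty label selector converts to a selector that matches every
pod (the pod labels below are just an example):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// An empty (but non-nil) selector, as in a PDB with `selector: {}`.
	empty := &metav1.LabelSelector{}

	sel, err := metav1.LabelSelectorAsSelector(empty)
	if err != nil {
		panic(err)
	}

	// Matches any pod, regardless of its labels, so drain must count every
	// pod in the namespace against this PDB before evicting it.
	fmt.Println(sel.Matches(labels.Set{"app": "web"})) // true
	fmt.Println(sel.Empty())                           // true
}
```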
Signed-off-by: Sai Ramesh Vanka <svanka@redhat.com>