The iptables rule that matches kubeNodePortLocalSetSCTP must be inserted
before the one that matches kubeNodePortSetSCTP; otherwise all SCTP
NodePort traffic would be masqueraded regardless of whether the Service's
ExternalTrafficPolicy is Local.
To cover the case in tests, the patch adds rule order validation to
checkIptables.
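For illustration only, a rule-order check along these lines can verify that the Local rule precedes the generic one. The helper and rule strings below are simplified stand-ins, not kube-proxy's actual code:
```
package main

import (
	"fmt"
	"strings"
)

// indexOfRule returns the position of the first rule that references the
// given ipset name, or -1 if no such rule exists.
func indexOfRule(rules []string, ipsetName string) int {
	for i, r := range rules {
		if strings.Contains(r, ipsetName) {
			return i
		}
	}
	return -1
}

func main() {
	// Simplified stand-in for the rendered KUBE-NODE-PORT chain; the real
	// rules generated by kube-proxy may differ in detail.
	rules := []string{
		"-A KUBE-NODE-PORT -p sctp -m set --match-set KUBE-NODE-PORT-LOCAL-SCTP-HASH dst,dst -j RETURN",
		"-A KUBE-NODE-PORT -p sctp -m set --match-set KUBE-NODE-PORT-SCTP-HASH dst,dst -j KUBE-MARK-MASQ",
	}

	local := indexOfRule(rules, "KUBE-NODE-PORT-LOCAL-SCTP-HASH")
	all := indexOfRule(rules, "KUBE-NODE-PORT-SCTP-HASH")

	// The Local rule must come first so that externalTrafficPolicy=Local
	// traffic is handled before the generic rule can masquerade it.
	if local == -1 || all == -1 || local > all {
		fmt.Println("rule order violation: Local SCTP rule must precede the generic SCTP rule")
		return
	}
	fmt.Println("rule order OK")
}
```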
When doing partial updates for uncountedTerminatedPods, the controller
might have removed UIDs for Pods that still had finalizers. Also free up
more space by removing, at the beginning of the sync, UIDs whose Pods no
longer have finalizers.
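A rough sketch of the pruning idea, with hypothetical types and names that do not match the real Job controller code:
```
package main

import "fmt"

type pod struct {
	uid          string
	hasFinalizer bool
}

// pruneUncounted drops UIDs whose pods no longer carry the tracking
// finalizer (or no longer exist), freeing space in the uncounted list
// before new terminated pods are appended during the sync.
func pruneUncounted(uncounted []string, pods map[string]pod) []string {
	kept := uncounted[:0]
	for _, uid := range uncounted {
		if p, ok := pods[uid]; ok && p.hasFinalizer {
			// Keep the UID: its finalizer has not been removed yet, so
			// dropping it here could lose the count.
			kept = append(kept, uid)
		}
	}
	return kept
}

func main() {
	pods := map[string]pod{
		"a": {uid: "a", hasFinalizer: true},
		"b": {uid: "b", hasFinalizer: false},
	}
	fmt.Println(pruneUncounted([]string{"a", "b", "c"}, pods)) // [a]
}
```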
This change fixes the order in which the PodSecurity and
PodSecurityPolicy admission plugins are run. The old code intended
for PSA to run before PSP, but attempted to enforce that via
registration order (which is irrelevant). Now PSA is correctly
executed before PSP to allow for audit and warning modes to be
exercised even in the presence of a deny PSP policy.
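Schematically, the point is that execution order comes from an explicit ordered list rather than from registration order. The real ordering is defined in the apiserver's ordered admission plugin list, not in a standalone slice like this:
```
package main

import "fmt"

func main() {
	// Admission plugins run in the order of this list, not in the order
	// they were registered. Placing PodSecurity before PodSecurityPolicy
	// lets PSA emit audit/warn output even when a PSP policy denies the pod.
	orderedPlugins := []string{
		"PodSecurity",       // PSA: enforce/audit/warn runs first
		"PodSecurityPolicy", // PSP: a deny happens only after PSA has run
	}
	fmt.Println(orderedPlugins)
}
```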
Signed-off-by: Monis Khan <mok@vmware.com>
The configuration is deprecated and targeted for removal in v1.23. Test
cases have been updated as well.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Prior to 1.22 a user could change NodePort values within a service
during an update, and the apiserver would allocate values for any that
were not specified.
Consider a YAML like:
```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  ports:
  - name: p
    port: 80
  - name: q
    port: 81
  selector:
    app: foo
```
When this is created, nodePort values will be allocated for each port.
Something like:
```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  clusterIP: 10.0.149.11
  type: NodePort
  ports:
  - name: p
    nodePort: 30872
    port: 80
    protocol: TCP
    targetPort: 9376
  - name: q
    nodePort: 31310
    port: 81
    protocol: TCP
    targetPort: 81
  selector:
    app: foo
```
If the user PUTs (kubectl replace) the original YAML, we would see
`.nodePort = 0` for each port and allocate new values. This was ugly at
best.
In 1.22 we fixed this to not allocate new values if we still had the old
values, but instead re-assign them. Net new ports would still be seen
as `.nodePort = 0` and so new allocations would be made.
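As a hypothetical sketch of the 1.22 reuse behavior (names and types here are illustrative; the real logic lives in the Service REST strategy and is more involved):
```
package main

import "fmt"

type port struct {
	name     string
	nodePort int32
}

// reuseNodePorts copies a previously allocated nodePort onto any updated
// port that left .nodePort unset (0); genuinely new ports keep 0 and get
// a fresh allocation later.
func reuseNodePorts(oldPorts, newPorts []port) []port {
	byName := map[string]int32{}
	for _, p := range oldPorts {
		byName[p.name] = p.nodePort
	}
	for i := range newPorts {
		if newPorts[i].nodePort == 0 {
			newPorts[i].nodePort = byName[newPorts[i].name] // stays 0 if unknown
		}
	}
	return newPorts
}

func main() {
	old := []port{{"p", 30872}, {"q", 31310}}
	updated := []port{{"p", 0}, {"q", 0}}
	fmt.Println(reuseNodePorts(old, updated)) // [{p 30872} {q 31310}]
}
```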
This broke a corner case as follows:
Prior to 1.22, the user could PUT this YAML:
```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  ports:
  - name: p
    nodePort: 31310  # note this is the `q` value
    port: 80
  - name: q
    # note this nodePort is not specified
    port: 81
  selector:
    app: foo
```
The `p` port would take the `q` port's value. The `q` port would be
seen as `.nodePort = 0` and a new value allocated. In 1.22 this results
in an error (duplicate value in `p` and `q`).
This is VERY minor but it is an API regression, which we try to avoid,
and the fix is not too horrible.
This commit adds more robust testing of this logic.
These events are currently emitted for a pod using a generic ephemeral volume:
```
Type     Reason            Age   From               Message
----     ------            ----  ----               -------
Warning  FailedScheduling  3s    default-scheduler  0/1 nodes are available: 1 persistentvolumeclaim "my-csi-app-inline-volume-my-csi-volume" not found.
Warning  FailedScheduling  2s    default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
```
The one about "persistentvolumeclaim not found" is potentially confusing. It
occurs because the scheduler typically checks the pod before the ephemeral
volume controller has had a chance to create the PVC.
This is a bit easier to understand:
```
Type     Reason            Age   From               Message
----     ------            ----  ----               -------
Warning  FailedScheduling  4s    default-scheduler  0/1 nodes are available: 1 waiting for ephemeral volume controller to create the persistentvolumeclaim "my-csi-app-inline-volume-my-csi-volume".
Warning  FailedScheduling  2s    default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
```
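A hypothetical sketch of the distinction being made; the real check is part of the scheduler's volume binding code and uses the Kubernetes API types:
```
package main

import "fmt"

type volume struct {
	name        string
	isEphemeral bool // generic ephemeral volume?
}

// pvcNotFoundError builds the FailedScheduling detail for a missing PVC.
// For a generic ephemeral volume the PVC is created on demand by the
// ephemeral volume controller, so "not found" is expected at first and a
// "waiting for ..." message is less confusing.
func pvcNotFoundError(vol volume, pvcName string) error {
	if vol.isEphemeral {
		return fmt.Errorf("waiting for ephemeral volume controller to create the persistentvolumeclaim %q", pvcName)
	}
	return fmt.Errorf("persistentvolumeclaim %q not found", pvcName)
}

func main() {
	vol := volume{name: "my-csi-volume", isEphemeral: true}
	fmt.Println(pvcNotFoundError(vol, "my-csi-app-inline-volume-my-csi-volume"))
}
```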
This PR adds the GA AnnStorageProvisioner annotation to a PVC if the PVC
requires dynamic provisioning. It also deprecates the beta
AnnStorageProvisioner annotation, which will be removed in a later
release.
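An illustrative sketch of what writing both annotations could look like; the constant names are hypothetical and the keys should be checked against the release this lands in:
```
package main

import "fmt"

const (
	// GA annotation set when a PVC needs dynamic provisioning.
	annStorageProvisioner = "volume.kubernetes.io/storage-provisioner"
	// Deprecated beta annotation, still written for backward compatibility
	// until it is removed in a later release.
	annBetaStorageProvisioner = "volume.beta.kubernetes.io/storage-provisioner"
)

// setProvisionerAnnotations marks a PVC that requires dynamic provisioning.
func setProvisionerAnnotations(annotations map[string]string, provisioner string) {
	annotations[annStorageProvisioner] = provisioner
	annotations[annBetaStorageProvisioner] = provisioner
}

func main() {
	ann := map[string]string{}
	setProvisionerAnnotations(ann, "example.csi.driver")
	fmt.Println(ann)
}
```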