AWS Network Load Balancer recently got support for cross-zone load balancing.
Use the existing `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled`
annotation to configure it.
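For illustration, a minimal sketch of a Service built in Go with the cross-zone annotation applied next to the NLB type annotation; the Service name, port, and selector are placeholders, not values from this change:
```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nlbService builds a LoadBalancer Service that requests an AWS NLB and
// enables cross-zone load balancing via the existing annotation.
func nlbService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example", // placeholder name
			Annotations: map[string]string{
				"service.beta.kubernetes.io/aws-load-balancer-type":                              "nlb",
				"service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": "true",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Ports:    []corev1.ServicePort{{Port: 80}},
			Selector: map[string]string{"app": "example"}, // placeholder selector
		},
	}
}

func main() { _ = nlbService() }
```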
Solves "Allow to override default AWS endpoint #70588"
Add several new properties to AWS CloudConfig to support custom endpoints.
Initialize/Parse on aws.go init() method which gets called when aws is loaded.
Allows overridden endpoints per servce and region. This allows functionality on air gapped networks.
This change is benign if services are not overridden in CloudConfig
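A minimal sketch of what per-service, per-region overrides could look like and how they might be parsed with gcfg, the INI-style reader the cloud config already uses; the `ServiceOverride` section and its `Service`/`Region`/`URL`/`SigningRegion` fields are illustrative assumptions, not a guarantee of the final schema:
```go
package main

import (
	"fmt"
	"strings"

	gcfg "gopkg.in/gcfg.v1"
)

// serviceOverride mirrors the shape of a per-service endpoint override.
// Field names here are illustrative, not necessarily the provider's.
type serviceOverride struct {
	Service       string
	Region        string
	URL           string
	SigningRegion string
}

type endpointConfig struct {
	ServiceOverride map[string]*serviceOverride
}

const sampleCfg = `
[ServiceOverride "1"]
Service       = ec2
Region        = us-gov-west-1
URL           = https://ec2.internal.example
SigningRegion = us-gov-west-1
`

func main() {
	var cfg endpointConfig
	if err := gcfg.ReadStringInto(&cfg, sampleCfg); err != nil {
		panic(err)
	}
	// Index overrides by "service_region" so session setup can look up a
	// custom endpoint before falling back to the default resolver.
	overrides := map[string]string{}
	for _, o := range cfg.ServiceOverride {
		overrides[strings.ToLower(o.Service)+"_"+o.Region] = o.URL
	}
	fmt.Println(overrides["ec2_us-gov-west-1"])
}
```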
**What type of PR is this?**
/kind cleanup
**What this PR does / why we need it**:
$ hack/verify-golint.sh
Errors from golint:
pkg/cloudprovider/providers/aws/aws_fakes.go:357:9: if block ends with a return statement, so drop this else and outdent its block
pkg/volume/util/util.go:204:9: if block ends with a return statement, so drop this else and outdent its block
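Both flagged spots follow the same pattern; a generic before/after sketch (not the actual flagged code) of what golint is asking for:
```go
package main

import "fmt"

// Before: the else block is indented even though the if branch returns.
func beforeFix(ok bool) string {
	if ok {
		return "yes"
	} else { // golint: drop this else and outdent its block
		return "no"
	}
}

// After: the early return makes the else unnecessary.
func afterFix(ok bool) string {
	if ok {
		return "yes"
	}
	return "no"
}

func main() {
	fmt.Println(beforeFix(true), afterFix(false))
}
```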
**Which issue(s) this PR fixes** *(optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
This corrects a problem where valid security group ports were removed
unintentionally when updating a service or when node changes occur.
Fixes #60825, #64148
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add the flags where necessary (see the sketch below)
- we update the other repositories that we vendor that made the same glog-to-klog change:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
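A minimal sketch of the explicit initialization klog requires, since it does not register its flags implicitly the way glog's init() did:
```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// glog registered its flags automatically; klog exposes InitFlags()
	// instead, so callers (and tests) register the flags themselves.
	klog.InitFlags(nil) // nil registers on the default flag.CommandLine set
	flag.Parse()

	klog.Info("logging via klog instead of glog")
	klog.Flush()
}
```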
When the cloud-controller-manager is running with the PV label initializing controller
and an NFS volume is created, it causes a nil reference error.
Fixes #68996
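The class of fix is a guard before asking the cloud provider for labels; a simplified, hypothetical stand-in (not the controller's actual code) showing why an NFS-backed PV must be skipped rather than dereferenced:
```go
package main

import (
	"fmt"

	"k8s.io/api/core/v1"
)

// labelsForPV illustrates the guard that avoids the nil dereference: only
// consult the cloud provider when the PV actually has an AWS EBS source; an
// NFS (or any non-EBS) volume is skipped instead of dereferencing nil.
func labelsForPV(pv *v1.PersistentVolume) (map[string]string, error) {
	if pv.Spec.AWSElasticBlockStore == nil {
		return nil, nil // not an EBS-backed volume; nothing to label
	}
	// A real implementation would call the cloud provider here, e.g. to look
	// up the zone of pv.Spec.AWSElasticBlockStore.VolumeID.
	return map[string]string{"failure-domain.beta.kubernetes.io/zone": "us-east-1a"}, nil
}

func main() {
	nfsPV := &v1.PersistentVolume{
		Spec: v1.PersistentVolumeSpec{
			PersistentVolumeSource: v1.PersistentVolumeSource{
				NFS: &v1.NFSVolumeSource{Server: "10.0.0.1", Path: "/export"},
			},
		},
	}
	labels, _ := labelsForPV(nfsPV)
	fmt.Println(labels) // map[] -- the NFS volume is safely skipped
}
```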
The previous version forced us to create AWS IAM Policies that are too
permissive when dealing with volumes. That's because:
1. Volumes were created without the tags that identify the new resource as
managed by the cluster. So technically the resource, at creation time,
was not owned by the cluster.
2. Tags were then added to the volume, making the resource managed by the
cluster. The problem is that this could mark ANY volume as managed by the
cluster, allowing resources that aren't really part of the cluster,
or part of no cluster at all, to become resources managed by the cluster.
By combining the two operations we both make the code simpler, since we no
longer need to delete a volume when we fail to apply tags to it, and improve
the security model.
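A hedged sketch of the combined operation using the AWS SDK's CreateVolume with TagSpecifications, so the ownership tag rides along in the create call instead of being applied afterwards; the cluster name, zone, and size below are placeholders:
```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := ec2.New(sess)

	// Tag the volume in the same CreateVolume call instead of creating it
	// untagged and tagging it later. The cluster name is a placeholder; the
	// real controller derives the tag key/value from its own identity.
	out, err := svc.CreateVolume(&ec2.CreateVolumeInput{
		AvailabilityZone: aws.String("us-east-1a"),
		Size:             aws.Int64(10),
		VolumeType:       aws.String("gp2"),
		TagSpecifications: []*ec2.TagSpecification{{
			ResourceType: aws.String(ec2.ResourceTypeVolume),
			Tags: []*ec2.Tag{{
				Key:   aws.String("kubernetes.io/cluster/my-cluster"),
				Value: aws.String("owned"),
			}},
		}},
	})
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	fmt.Println("created tagged volume:", aws.StringValue(out.VolumeId))
}
```
Because the volume is never visible without its ownership tag, there is no window where an untagged resource has to be cleaned up, which is the security-model improvement described above.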