Fix #51406
If cloud providers (like AWS, GCE, etc.) implement ExternalID()
and support getting an instance by ProviderID, they also implement
InstanceExistsByProviderID().
Automatic merge from submit-queue (batch tested with PRs 52240, 48145, 52220, 51698, 51777)
Avoid printing node list for LoadBalancer in log file
**What this PR does / why we need it**: Production log files get saturated with EnsureLoadBalancer messages, which is problematic for sysadmins.
This patch avoids printing the node list on the AWS logs so the log file is more readable.
We should be able to build a cloud-controller-manager without having to
pull in code specific to GCE and AWS clouds. Note that this is a tactical
fix for now; we should allow a PVLabeler to be passed into the
PersistentVolumeController, and perhaps come up with better interfaces. Since
it is too late to do all that for 1.8, we just move cloud specific code
to where they belong and we check for PVLabeler method and use it where
needed.
Fixes #51629
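For illustration, a minimal sketch of the PVLabeler check; the interface and helper names here are assumptions based on the description above, not the exact 1.8 code:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// PVLabeler mirrors the optional interface described above: a cloud
// provider that can label PersistentVolumes implements it. Sketch only;
// the upstream signature may differ.
type PVLabeler interface {
	GetLabelsForVolume(pv *v1.PersistentVolume) (map[string]string, error)
}

// labelsForPV type-asserts the cloud provider instead of importing
// GCE- or AWS-specific code into the controller.
func labelsForPV(cloud interface{}, pv *v1.PersistentVolume) (map[string]string, error) {
	l, ok := cloud.(PVLabeler)
	if !ok {
		return nil, fmt.Errorf("cloud provider does not support PV labeling")
	}
	return l.GetLabelsForVolume(pv)
}

type fakeCloud struct{}

func (fakeCloud) GetLabelsForVolume(pv *v1.PersistentVolume) (map[string]string, error) {
	return map[string]string{"failure-domain.beta.kubernetes.io/zone": "us-east-1a"}, nil
}

func main() {
	labels, err := labelsForPV(fakeCloud{}, &v1.PersistentVolume{})
	fmt.Println(labels, err)
}
```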
Modifies the VolumeZonePredicate to handle a PV that belongs to more
than one zone or region. This is indicated by the zone or region label
value containing a comma-separated list.
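For illustration, a minimal sketch (hypothetical helper name) of splitting such a comma-separated label value:

```go
package main

import (
	"fmt"
	"strings"
)

// zonesFromLabel splits a multi-zone label value such as
// "us-east-1a,us-east-1b" into individual zone names.
func zonesFromLabel(value string) []string {
	var zones []string
	for _, z := range strings.Split(value, ",") {
		if z = strings.TrimSpace(z); z != "" {
			zones = append(zones, z)
		}
	}
	return zones
}

func main() {
	fmt.Println(zonesFromLabel("us-east-1a,us-east-1b")) // [us-east-1a us-east-1b]
}
```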
Automatic merge from submit-queue (batch tested with PRs 51301, 50497, 50112, 48184, 50993)
Replace the deprecated functions with the suggested functions in the aws module
**What this PR does / why we need it**:
There are some deprecated functions, and I replaced them with the suggested functions in the aws module.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51301, 50497, 50112, 48184, 50993)
AWS: handle multiple IPs when using more than 1 network interface per ec2 instance
**What this PR does / why we need it**:
Adds support for kubelets running with the AWS cloud provider on ec2 instances with multiple network interfaces. If the active interface is not eth0, the AWS cloud provider currently reports the wrong node IP.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44686
**Special notes for your reviewer**:
There is also some work needed to handle multiple DNS names and the like, but that is not addressed in this PR.
**Release note**:
```release-note
Fixed bug in AWS provider to handle multiple IPs when using more than 1 network interface per ec2 instance.
```
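For context, a rough sketch of the idea, assuming the EC2 instance metadata layout under `network/interfaces/macs/`; the interface and helper here are illustrative, not the upstream code:

```go
package main

import (
	"fmt"
	"strings"
)

// metadataClient abstracts the EC2 instance metadata service
// (http://169.254.169.254/latest/meta-data/); hypothetical interface.
type metadataClient interface {
	GetMetadata(path string) (string, error)
}

// nodeIPs gathers local-ipv4s from every interface listed under
// network/interfaces/macs/, not just eth0. Each listed MAC entry
// already ends with "/".
func nodeIPs(md metadataClient) ([]string, error) {
	macs, err := md.GetMetadata("network/interfaces/macs/")
	if err != nil {
		return nil, err
	}
	var ips []string
	for _, mac := range strings.Fields(macs) {
		addrs, err := md.GetMetadata("network/interfaces/macs/" + mac + "local-ipv4s")
		if err != nil {
			return nil, err
		}
		ips = append(ips, strings.Fields(addrs)...)
	}
	return ips, nil
}

type fakeMD map[string]string

func (f fakeMD) GetMetadata(p string) (string, error) { return f[p], nil }

func main() {
	md := fakeMD{
		"network/interfaces/macs/": "0a:bb:cc:dd:ee:01/\n0a:bb:cc:dd:ee:02/",
		"network/interfaces/macs/0a:bb:cc:dd:ee:01/local-ipv4s": "10.0.0.10",
		"network/interfaces/macs/0a:bb:cc:dd:ee:02/local-ipv4s": "10.0.1.20",
	}
	ips, _ := nodeIPs(md)
	fmt.Println(ips) // [10.0.0.10 10.0.1.20]
}
```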
Automatic merge from submit-queue (batch tested with PRs 51513, 51515, 50570, 51482, 51448)
implementation of GetZoneByProviderID and GetZoneByNodeName for AWS
This is part of the #50926 effort
cc @luxas
**Release note**:
```release-note
None
```
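A minimal sketch of the zone lookup, assuming the `aws:///<zone>/<instance-id>` provider ID format (helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// zoneFromProviderID extracts the availability zone from an AWS
// provider ID such as "aws:///us-east-1a/i-0123456789abcdef0".
func zoneFromProviderID(providerID string) (string, error) {
	parts := strings.Split(strings.TrimPrefix(providerID, "aws:///"), "/")
	if len(parts) != 2 || parts[0] == "" {
		return "", fmt.Errorf("unexpected provider ID format: %q", providerID)
	}
	return parts[0], nil // e.g. "us-east-1a"; the region is the zone minus its suffix
}

func main() {
	zone, _ := zoneFromProviderID("aws:///us-east-1a/i-0123456789abcdef0")
	fmt.Println(zone) // us-east-1a
}
```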
The AWS CreateVolume call does not check whether the referenced encryption
key actually exists; it returns a valid new AWS EBS volume even though an
invalid key was specified, and later silently removes the EBS volume when
its encryption fails. To work around this buggy behavior we manually check
that the key exists before calling CreateVolume.
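One way to perform such a pre-check with the AWS SDK (a sketch of the approach, not the exact upstream code):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

// keyExists verifies that the KMS key referenced by keyID is real
// before we hand it to CreateVolume.
func keyExists(svc *kms.KMS, keyID string) (bool, error) {
	_, err := svc.DescribeKey(&kms.DescribeKeyInput{KeyId: aws.String(keyID)})
	if err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	svc := kms.New(session.Must(session.NewSession()))
	ok, err := keyExists(svc, "arn:aws:kms:us-east-1:123456789012:key/example")
	fmt.Println(ok, err)
}
```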
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)
Add InstanceExistsByProviderID to cloud provider interface for CCM
**What this PR does / why we need it**:
Currently, `MonitorNode()` in the node controller (`pkg/controller/cloud/nodecontroller.go`) checks with the CCM whether a node still exists by calling `ExternalID(nodeName)`. `ExternalID` is supposed to return the provider ID of a node, which is not supported on every cloud. This means that any cloud that cannot infer the provider ID from the node name at a remote location will never remove nodes that no longer exist.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50985
**Special notes for your reviewer**:
We'll want to create a subsequent issue to track the implementation of these two new methods in the cloud providers.
**Release note**:
```release-note
Adds `InstanceExists` and `InstanceExistsByProviderID` to cloud provider interface for the cloud controller manager
```
/cc @wlan0 @thockin @andrewsykim @luxas @jhorwit2
/area cloudprovider
/sig cluster-lifecycle
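For reference, the new method looks roughly like this on the cloud provider `Instances` interface (a sketch; the exact upstream definition may differ):

```go
package cloudprovider

// Instances is the cloud provider instances interface, abridged to the
// new method; a sketch, not the exact upstream definition.
type Instances interface {
	// InstanceExistsByProviderID returns true if the instance with the
	// given provider ID still exists in the cloud, letting the node
	// controller detect deleted nodes without relying on node names.
	InstanceExistsByProviderID(providerID string) (bool, error)
}
```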
Automatic merge from submit-queue (batch tested with PRs 50255, 50885)
AWS: Arbitrarily choose first (lexicographically) subnet in AZ
When there is more than one subnet for an AZ on AWS, arbitrarily
choose the first one lexicographically, for consistency.
**What this PR does / why we need it**:
If two usable subnets appear in the same AWS AZ, which one is chosen is currently not consistent. This could make issues difficult to diagnose.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45983
**Special notes for your reviewer**:
**Release note**:
```release-note
```
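A minimal sketch of the deterministic choice (helper name is illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// firstSubnetID picks the lexicographically first subnet ID so the
// choice is deterministic when an AZ has several subnets.
func firstSubnetID(subnetIDs []string) string {
	if len(subnetIDs) == 0 {
		return ""
	}
	sorted := append([]string(nil), subnetIDs...)
	sort.Strings(sorted)
	return sorted[0]
}

func main() {
	fmt.Println(firstSubnetID([]string{"subnet-0b2", "subnet-0a1"})) // subnet-0a1
}
```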
Automatic merge from submit-queue (batch tested with PRs 49107, 47177, 49234, 49224, 49227)
Added logging to AWS API calls. #46969
Additionally, this logs when AWS API calls start and end, to help diagnose problems with the kubelet on cloud provider nodes not reporting node status periodically. There's some inconsistency in logging around this PR that we should discuss.
IMO, the API logging should be at a higher verbosity level than most other types of logging, as you would probably only want it in limited instances. For most cases that is easy enough to do, but there are some calls which already have some logging around them, namely in the instance groups. My preference would be to keep the existing logging as is and just add the new API logs around the API calls.
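For illustration, one way to hook such start/end logging into the AWS SDK's handler lists (a sketch, not necessarily the mechanism this PR uses):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Log just before each request is sent and once it completes;
	// handler lists are the SDK's extension point for this.
	svc.Handlers.Send.PushFront(func(r *request.Request) {
		log.Printf("AWS request: %s/%s start", r.ClientInfo.ServiceName, r.Operation.Name)
	})
	svc.Handlers.Complete.PushBack(func(r *request.Request) {
		log.Printf("AWS request: %s/%s end", r.ClientInfo.ServiceName, r.Operation.Name)
	})
}
```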
Automatic merge from submit-queue (batch tested with PRs 47915, 47856, 44086, 47575, 47475)
AWS: Fix suspicious loop comparing permissions
Because we only ever call it with a single UserId/GroupId, this would
not have been a problem in practice, but this fixes the code.
Fix #36902
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47922, 47195, 47241, 47095, 47401)
AWS: Set CredentialsChainVerboseErrors
This avoids a rather confusing error message.
Fix #39374
```release-note
NONE
```
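For reference, the flag is set on the SDK config; a minimal sketch:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// With verbose errors enabled, a failed credentials lookup reports
	// every provider in the chain that was tried, instead of a single
	// generic "NoCredentialProviders" message.
	sess := session.Must(session.NewSession(&aws.Config{
		CredentialsChainVerboseErrors: aws.Bool(true),
	}))
	_ = sess
}
```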
Automatic merge from submit-queue
New annotation to add existing Security Groups to ELBs created by AWS cloudprovider
**What this PR does / why we need it**:
When a K8S cluster is deployed in an existing VPC, there might be a need to attach extra Security Groups to the ELB created by the AWS cloudprovider. An example can be cases where such Security Groups are maintained by another team.
**Special notes for your reviewer**:
For tests to pass, this depends on https://github.com/kubernetes/kubernetes/pull/45168 and therefore includes it.
**Release note**:
```release-note
New 'service.beta.kubernetes.io/aws-load-balancer-extra-security-groups' Service annotation to specify extra Security Groups to be added to ELB created by AWS cloudprovider
```
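A rough sketch of reading the annotation off the Service object (the constant and helper names are illustrative):

```go
package main

import (
	"fmt"
	"strings"

	v1 "k8s.io/api/core/v1"
)

// Illustrative constant; matches the annotation introduced by this PR.
const annExtraSecurityGroups = "service.beta.kubernetes.io/aws-load-balancer-extra-security-groups"

// extraSecurityGroups returns the extra security group IDs requested
// on the Service, parsed from a comma-separated annotation value.
func extraSecurityGroups(svc *v1.Service) []string {
	var groups []string
	for _, sg := range strings.Split(svc.Annotations[annExtraSecurityGroups], ",") {
		if sg = strings.TrimSpace(sg); sg != "" {
			groups = append(groups, sg)
		}
	}
	return groups
}

func main() {
	svc := &v1.Service{}
	svc.Annotations = map[string]string{annExtraSecurityGroups: "sg-1234, sg-5678"}
	fmt.Println(extraSecurityGroups(svc)) // [sg-1234 sg-5678]
}
```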
We maintain a cache of all instances, and we invalidate the cache
whenever we see a new instance. For ELBs that should be sufficient,
because our usage is limited to instance ids and security groups, which
should not change.
Fix #45050
Automatic merge from submit-queue (batch tested with PRs 47510, 47516, 47482, 47521, 47537)
Batch AWS getInstancesByNodeNames calls with FilterNodeLimit
We are going to limit the getInstancesByNodeNames call to a batch
size of 150.
Fixes #47271
```release-note
AWS: Batch DescribeInstance calls with nodeNames to 150 limit, to stay within AWS filter limits.
```
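A minimal sketch of the batching (constant and helper names are illustrative):

```go
package main

import "fmt"

const filterNodeLimit = 150

// batchNodeNames splits a node-name list into chunks that stay within
// the AWS DescribeInstances filter limit.
func batchNodeNames(names []string) [][]string {
	var batches [][]string
	for len(names) > filterNodeLimit {
		batches = append(batches, names[:filterNodeLimit])
		names = names[filterNodeLimit:]
	}
	if len(names) > 0 {
		batches = append(batches, names)
	}
	return batches
}

func main() {
	names := make([]string, 340)
	fmt.Println(len(batchNodeNames(names))) // 3 batches: 150 + 150 + 40
}
```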
Automatic merge from submit-queue
AWS: Process disk attachments even with duplicate NodeNames
Fix #47404
```release-note
AWS: Process disk attachments even with duplicate NodeNames
```
Automatic merge from submit-queue (batch tested with PRs 46929, 47391, 47399, 47428, 47274)
AWS: Richer log message when metadata fails
Not a resolution, but should at least help determine the issue.
Issue #41904
```release-note
NONE
```
Service objects can be annotated with
`service.beta.kubernetes.io/aws-load-balancer-extra-security-groups`
to specify existing security groups to be added to the ELB
created by the AWS cloudprovider
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)
AWS: Allow configuration of a single security group for ELBs
**What this PR does / why we need it**:
AWS has a hard limit on the number of Security Groups (500). Right now, every time an ELB is created, Kubernetes creates a new Security Group. This allows for specifying a Security Group to use for all ELBs.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
For some reason the diff tool makes this look like far more changes than there really are.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)
move labels to components which own the APIs
During the apimachinery split in 1.6, we accidentally moved several label APIs into apimachinery. They don't belong there, since the individual APIs are not general machinery concerns, but instead are the concern of particular components: most commonly the kubelet. This pull moves the labels into their owning components and out of API machinery.
@kubernetes/sig-api-machinery-misc @kubernetes/api-reviewers @kubernetes/api-approvers
@derekwaynecarr since most of these are related to the kubelet
Automatic merge from submit-queue (batch tested with PRs 46686, 45049, 46323, 45708, 46487)
Log an EBS vol's instance when attaching fails because of VolumeInUse
Messages now look something like this:
```
E0427 15:44:37.617134 16932 attacher.go:73] Error attaching volume "vol-00095ddceae1a96ed": Error attaching EBS volume "vol-00095ddceae1a96ed" to instance "i-245203b7": VolumeInUse: vol-00095ddceae1a96ed is already attached to an instance
status code: 400, request id: f510c439-64fe-43ea-b3ef-f496a5cd0577. The volume is currently attached to instance "i-072d9328131bcd9cd"
```
It's odd that AWS doesn't include that information for us (it does when you try to delete a volume that's in use).
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46489, 46281, 46463, 46114, 43946)
AWS: consider instances of all states in DisksAreAttached, not just "running"
Require callers of `getInstancesByNodeNames(Cached)` to specify the states they want to filter instances by, if any. DisksAreAttached cannot consider only "running" instances, because of the following attach/detach bug we discovered:
1. Node A stops (or reboots) and stays down for x amount of time
2. Kube reschedules all pods to different nodes; the ones using ebs volumes cannot run because their volumes are still attached to node A
3. Verify volumes are attached check happens while node A is down
4. Since the AWS EBS bulk verify filters by running nodes, it assumes the volumes attached to node A are detached and removes them all from ASW (the actual state of the world)
5. Node A comes back; its volumes are still attached to it, but the attach/detach controller has removed them all from ASW and so will never detach them, even though they are no longer desired on this node and are in fact desired elsewhere
6. Pods cannot run because their volumes are still attached to node A
So the idea here is to remove the wrong assumption that callers of `getInstancesByNodeNames(Cached)` only want "running" nodes.
I hope this isn't too confusing; I'm open to alternative ways of fixing the bug and making the code nicer.
ping @gnufied @kubernetes/sig-storage-bugs
```release-note
Fix AWS EBS volumes not getting detached from node if routine to verify volumes are attached runs while the node is down
```
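A sketch of the explicit state-filter contract this implies (helper name is illustrative):

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// stateFilter builds an EC2 instance-state filter; passing no states
// means callers get instances in every state, which is the new,
// explicit contract for getInstancesByNodeNames-style helpers.
func stateFilter(states ...string) []*ec2.Filter {
	if len(states) == 0 {
		return nil // no filter: all states
	}
	return []*ec2.Filter{{
		Name:   aws.String("instance-state-name"),
		Values: aws.StringSlice(states),
	}}
}

func main() {
	_ = stateFilter("running") // old behavior, now opt-in
	_ = stateFilter()          // new default for DisksAreAttached
}
```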
Automatic merge from submit-queue
AWS: support node port health check
**What this PR does / why we need it**:
If a custom health check is set via the beta annotation on a service, it
should be used for the ELB health check. This patch adds support for
that.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Let me know if any tests need to be added.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 45518, 46127, 46146, 45932, 45003)
aws: Support for ELB tagging by users
This PR provides support for tagging AWS ELBs using information in an
annotation, provided as a list of comma-separated key-value pairs.
Closes https://github.com/kubernetes/community/pull/404
An admin wants to specify in which AWS availability zone(s) users may create persistent volumes using dynamic provisioning.
That's why the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically created PVs for PVCs that use the StorageClass are created in one of the configured zones.
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)
AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0.
fixes #38633
Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally it should be a boolean, but for backward compatibility it is left as any non-empty value.
I changed the function signature to contain protocol, port, and path.
When the service has a health check path and port set, it will create an
HTTP health check that corresponds to that port and path. If those are
not set, it will create a standard TCP health check on the first port
from the listeners that is not nil. As far as I know, there is no way to
tell whether a health check should be HTTP vs. HTTPS.
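For illustration, a sketch of building such a classic-ELB health check (helper name and threshold values are illustrative):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elb"
)

// healthCheckFor builds the classic-ELB health check target string: an
// HTTP check on the given port/path when both are set, otherwise a
// plain TCP check on the port.
func healthCheckFor(protocol string, port int64, path string) *elb.HealthCheck {
	target := fmt.Sprintf("TCP:%d", port)
	if protocol == "HTTP" && path != "" {
		target = fmt.Sprintf("HTTP:%d%s", port, path)
	}
	return &elb.HealthCheck{
		Target:             aws.String(target),
		Interval:           aws.Int64(10),
		Timeout:            aws.Int64(5),
		HealthyThreshold:   aws.Int64(2),
		UnhealthyThreshold: aws.Int64(6),
	}
}

func main() {
	hc := healthCheckFor("HTTP", 30313, "/healthz")
	fmt.Println(aws.StringValue(hc.Target)) // HTTP:30313/healthz
}
```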
Automatic merge from submit-queue (batch tested with PRs 43925, 42512)
AWS: add KubernetesClusterID as additional option when VPC is set
This is a small enhancement after the PRs https://github.com/kubernetes/kubernetes/pull/41695 and https://github.com/kubernetes/kubernetes/pull/39996
**Release note**:
```release-note
AWS cloud provider: allow setting KubernetesClusterID or KubernetesClusterTag in combination with VPC.
```
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary (by vendors) called
cloud-controller-manager. The kubelet relies on the cloudprovider to detect
information about the node that it is running on. Some of the cloudproviders
worked by querying local information to obtain it. In the new world of things,
local information cannot be relied on, since cloud-controller-manager will not
run on every node; only one active instance of it will run in the cluster.
Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on ProviderID. This ID is
a unique identifier for a node in an external database (such as
the instanceID in the AWS cloud).
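For example, on AWS the provider ID encodes the instance ID, so a node can be identified in the cloud without querying it locally; a sketch assuming the `aws:///<zone>/<instance-id>` format:

```go
package main

import (
	"fmt"
	"strings"
)

// instanceIDFromProviderID maps a provider ID back to the cloud's own
// instance identifier, assuming the AWS format
// "aws:///<zone>/<instance-id>" (a sketch, not the upstream parser).
func instanceIDFromProviderID(providerID string) (string, error) {
	parts := strings.Split(strings.TrimPrefix(providerID, "aws:///"), "/")
	if len(parts) != 2 || !strings.HasPrefix(parts[1], "i-") {
		return "", fmt.Errorf("unexpected provider ID: %q", providerID)
	}
	return parts[1], nil
}

func main() {
	id, _ := instanceIDFromProviderID("aws:///us-east-1a/i-0123456789abcdef0")
	fmt.Println(id) // i-0123456789abcdef0
}
```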
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)
Implement bulk polling of volumes
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564,
but it changes the implementation to use an interface
and doesn't affect other implementations.
cc @justinsb
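The optional interface could look roughly like this (a sketch; upstream names and types may differ):

```go
package volume

// Spec and NodeName stand in for the real Kubernetes types here.
type Spec struct{ Name string }
type NodeName string

// BulkVolumeVerifier is an optional interface: a volume plugin that
// implements it can verify the attachment status of many volumes
// across many nodes in a single cloud API call, instead of polling
// each volume individually.
type BulkVolumeVerifier interface {
	BulkVerifyVolumes(volumesByNode map[NodeName][]*Spec) (map[NodeName]map[*Spec]bool, error)
}
```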
Squashed commits:
- Set the vpcID when the dummy is created
- [0b1ac6e83e] Use the VPC flag and KubernetesClusterTag as identifier
- [962bc56e38] Remove availabilityZone again and fix naming
- [e3d1b41807] Use the VPCID flag as identifier
- [5b99fe6243] Add flag for external master
Automatic merge from submit-queue (batch tested with PRs 41921, 41695, 42139, 42090, 41949)
AWS: Support shared tag `kubernetes.io/cluster/<clusterid>`
We recognize an additional cluster tag:
kubernetes.io/cluster/<clusterid>
This now allows us to share resources, in particular subnets.
In addition, the value is used to track ownership/lifecycle. When we
create objects, we record the value as "owned".
We also refactor out tags into its own file & class, as we are touching
most of these functions anyway.
```release-note
AWS: Support shared tag `kubernetes.io/cluster/<clusterid>`
```
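A minimal sketch of recognizing both the legacy and the shared tag (constant and helper names are illustrative):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// Tag keys as described above; the legacy name is kept for
// compatibility.
const (
	tagNameClusterLegacy = "KubernetesCluster"
	tagNameClusterPrefix = "kubernetes.io/cluster/"
)

// clusterRecognizesResource reports whether a resource's tags mark it
// as belonging to (or shared with) the given cluster.
func clusterRecognizesResource(tags []*ec2.Tag, clusterID string) bool {
	for _, t := range tags {
		key, value := aws.StringValue(t.Key), aws.StringValue(t.Value)
		if key == tagNameClusterPrefix+clusterID {
			return true // value records lifecycle, e.g. "owned" or "shared"
		}
		if key == tagNameClusterLegacy && value == clusterID {
			return true
		}
	}
	return false
}

func main() {
	tags := []*ec2.Tag{{Key: aws.String("kubernetes.io/cluster/mycluster"), Value: aws.String("owned")}}
	fmt.Println(clusterRecognizesResource(tags, "mycluster")) // true
}
```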
Automatic merge from submit-queue (batch tested with PRs 38676, 41765, 42103, 41833, 41702)
AWS: Skip instances that are tagged as a master
We recognize a few AWS tags, and skip over masters when finding zones
for dynamic volumes. This will fix #34583.
This is not perfect, in that really the scheduler is the only component
that can correctly choose the zone, but should address the common
problem.
```release-note
AWS: Do not consider master instance zones for dynamic volume creation
```
Automatic merge from submit-queue (batch tested with PRs 41756, 36344, 34259, 40843, 41526)
add InternalDNS/ExternalDNS node address types
This PR adds internal/external DNS names to the types of NodeAddresses that can be reported by the kubelet.
Will spawn follow-up issues for cloud provider owners to include these when possible.
```release-note
Nodes can now report two additional address types in their status: InternalDNS and ExternalDNS. The apiserver can use `--kubelet-preferred-address-types` to give priority to the type of address it uses to reach nodes.
```
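For illustration, reporting the new address types (shown with today's `k8s.io/api` import path):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The two new address types, alongside the existing ones; a node
	// can now report e.g. an InternalDNS name in status.addresses.
	addrs := []v1.NodeAddress{
		{Type: v1.NodeInternalDNS, Address: "ip-10-0-0-1.ec2.internal"},
		{Type: v1.NodeExternalDNS, Address: "ec2-54-0-0-1.compute-1.amazonaws.com"},
	}
	fmt.Println(addrs)
}
```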
Automatic merge from submit-queue
AWS: trust region if found from AWS metadata
```release-note
AWS: trust region if found from AWS metadata
```
Means we can run in newly announced regions without a code change.
We don't register the ECR provider in new regions, so we will still need
a code change for now.
This also means we do trust config / instance metadata, and don't reject
incorrectly configured zones.
Fix #35014
Automatic merge from submit-queue
AWS: Add exponential backoff to waitForAttachmentStatus() and createTags()
We should use exponential backoff while waiting for a volume to get attached/detached to/from a node. This will lower AWS load and reduce API call throttling.
This partly fixes #33088
@justinsb, can you please take a look?
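For illustration, a sketch of such polling with the apimachinery backoff helper (the condition helper is hypothetical; intervals are illustrative):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func checkAttachmentStatus() bool { return true } // hypothetical helper

func main() {
	// Exponential backoff while polling volume attachment status:
	// 10s, 12s, 14.4s, ... instead of a fixed short interval, which
	// lowers AWS load and reduces API call throttling.
	backoff := wait.Backoff{
		Duration: 10 * time.Second,
		Factor:   1.2,
		Steps:    10,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		return checkAttachmentStatus(), nil // true stops the polling
	})
	fmt.Println(err)
}
```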
On AWS, we should avoid reusing device names for as long as possible, see
https://aws.amazon.com/premiumsupport/knowledge-center/ebs-stuck-attaching/
"If you specify a device name that is not in use by EC2, but is being used by
the block device driver within the EC2 instance, the attachment of the EBS
volume does not succeed and the EBS volume is stuck in the attaching state."
This patch adds a device name allocator that tries to find a name that's next
to the last used device name instead of using the first available one.
This way we will loop through all device names ("xvdba" .. "xvdzz") before
a device name is reused.
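A minimal sketch of the allocator idea (types and names are illustrative, not the upstream code):

```go
package main

import "fmt"

// deviceAllocator hands out two-letter device suffixes ("ba".."zz"),
// continuing after the last one used instead of always taking the
// first free suffix, so every name is cycled through before reuse.
type deviceAllocator struct {
	last string // last suffix handed out, "" initially
}

func (d *deviceAllocator) allocate(inUse map[string]bool) (string, error) {
	suffixes := allSuffixes()
	start := 0
	for i, s := range suffixes {
		if s == d.last {
			start = i + 1
			break
		}
	}
	for i := 0; i < len(suffixes); i++ {
		s := suffixes[(start+i)%len(suffixes)]
		if !inUse[s] {
			d.last = s
			return s, nil
		}
	}
	return "", fmt.Errorf("no device suffix available")
}

func allSuffixes() []string {
	var out []string
	for c1 := 'b'; c1 <= 'z'; c1++ {
		for c2 := 'a'; c2 <= 'z'; c2++ {
			out = append(out, string(c1)+string(c2))
		}
	}
	return out
}

func main() {
	d := &deviceAllocator{}
	inUse := map[string]bool{"ba": true}
	s1, _ := d.allocate(inUse) // "bb": skips the in-use "ba"
	s2, _ := d.allocate(inUse) // "bc": continues after "bb"
	fmt.Println(s1, s2)
}
```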