When we convert the OpenStack cloud provider to run in an external
process, we should be able to use the Kubernetes Secrets capability to
inject the OS_* variables. This way we can specify the cloud
configuration as a ConfigMap and specify Secrets for the user
ID/password information. The ConfigMap can be mounted as a file, and
the Secrets can be made available as environment variables. The
external controller itself can run as a pod/DaemonSet.
For backward compatibility, we preload all the OS_* variables; if
anything is set in the config file, it overrides the environment
variables.
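A minimal Go sketch of that precedence, assuming gophercloud's openstack.AuthOptionsFromEnv helper; the config-file value shown is a hypothetical stand-in for the mounted cloud config:
```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud/openstack"
)

func main() {
	// Preload credentials from the OS_* environment (e.g. injected from a
	// Secret as described above). AuthOptionsFromEnv reads OS_AUTH_URL,
	// OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, and friends.
	opts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		fmt.Println("incomplete OS_* environment:", err)
	}

	// Hypothetical value parsed from the mounted config file; anything set
	// there overrides the environment.
	fileAuthURL := "" // e.g. auth-url from the [Global] section
	if fileAuthURL != "" {
		opts.IdentityEndpoint = fileAuthURL
	}
	fmt.Println("using auth URL:", opts.IdentityEndpoint)
}
```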
Automatic merge from submit-queue (batch tested with PRs 55557, 55504, 56269, 55604, 56202).
Change wording in OpenStack Provider
**What this PR does / why we need it**:
Changes the word "dealy" to "delay" in the OpenStack provider.
**Release note**:
```release-note
NONE
```
The OpenStack cloud provider retrieves mounted Cinder volume paths
by looking in /dev/disk/by-id, expecting the disk serial IDs (e.g.
the SCSI ID) to include the volume ID.
The issue is that not all hypervisors are able to expose this.
For example, Hyper-V just preserves the original Cinder volume
LUN SCSI ID (without setting the volume ID), so disk path lookups
fail.
In order to be able to leverage Hyper-V based OpenStack providers,
we fall back to querying the metadata service, searching for disk
device metadata. Note that starting with Nova Queens, the Hyper-V
driver always provides disk address information through the instance
metadata.
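A minimal sketch of that lookup order; getDevicePath and devicePathFromMetadata are illustrative names, not the provider's actual functions:
```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// getDevicePath scans /dev/disk/by-id for a serial-based name that embeds
// the Cinder volume ID, then falls back to the metadata service.
func getDevicePath(volumeID string) (string, error) {
	entries, err := os.ReadDir("/dev/disk/by-id")
	if err == nil {
		for _, e := range entries {
			if strings.Contains(e.Name(), volumeID) {
				return filepath.Join("/dev/disk/by-id", e.Name()), nil
			}
		}
	}
	// Fallback for hypervisors (e.g. Hyper-V) that do not expose the
	// volume ID in the serial: consult the instance metadata.
	return devicePathFromMetadata(volumeID)
}

// devicePathFromMetadata is a stub; the real code parses the disk address
// information served by the Nova metadata service.
func devicePathFromMetadata(volumeID string) (string, error) {
	return "", errors.New("device for " + volumeID + " not found in metadata")
}

func main() {
	fmt.Println(getDevicePath("volume-id"))
}
```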
Automatic merge from submit-queue.
Make OpenStack LBaaS v2 Provider configurable
Add option 'lb-provider' to the Loadbalancer section of the OpenStack
cloudprovider configuration to allow using a different LBaaS v2
provider than the default.
**What this PR does / why we need it**:
This PR makes it possible to use an OpenStack LBaaS v2 provider other than the default of the OpenStack cloud.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Added option lb-provider to OpenStack cloud provider config
```
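A minimal sketch of where the new option lives, assuming the provider's gcfg-based cloud.conf parsing; "octavia" is just an example provider name:
```go
package main

import (
	"fmt"

	gcfg "gopkg.in/gcfg.v1"
)

// Config sketches the [LoadBalancer] section of cloud.conf; the gcfg field
// tags mirror the option names used in the configuration file.
type Config struct {
	LoadBalancer struct {
		LBVersion  string `gcfg:"lb-version"`
		LBProvider string `gcfg:"lb-provider"` // empty means the cloud's default
	}
}

func main() {
	example := "[LoadBalancer]\nlb-version = v2\nlb-provider = octavia\n"
	var cfg Config
	if err := gcfg.ReadStringInto(&cfg, example); err != nil {
		panic(err)
	}
	fmt.Println(cfg.LoadBalancer.LBProvider) // octavia
}
```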
1. Support autoprobing node-security-group
2. Support multiple security groups for the cluster's nodes
3. Fix recreating the security group for the cluster's nodes
This is part of #50726
Automatic merge from submit-queue.
Add possibility to ignore volume label in dynamic provisioning
**What this PR does / why we need it**: this is needed when the OpenStack Cinder zone names do not match the compute zone names, for instance when there is only one Cinder zone but many compute zones.
**Which issue this PR fixes**: fixes #53488
**Special notes for your reviewer**:
```release-note
NONE
```
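A sketch of the effect, with illustrative names; the zone label key is the standard failure-domain label, and skipping it leaves the PV claimable from any compute zone:
```go
package main

import "fmt"

// pvLabels returns the labels to put on a dynamically provisioned PV.
// When the volume AZ is ignored, the zone label that would normally pin
// the PV to the Cinder AZ is simply not set.
func pvLabels(volumeAZ string, ignoreVolumeAZ bool) map[string]string {
	labels := map[string]string{}
	if !ignoreVolumeAZ {
		labels["failure-domain.beta.kubernetes.io/zone"] = volumeAZ
	}
	return labels
}

func main() {
	fmt.Println(pvLabels("nova", true))  // map[] — usable from any zone
	fmt.Println(pvLabels("nova", false)) // pinned to the "nova" zone
}
```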
Automatic merge from submit-queue (batch tested with PRs 52768, 51898, 53510, 53097, 53058).
Ability to run the openstack tests against DevStack
**What this PR does / why we need it**:
Some of the environment variables have changed as the DevStack
defaults have changed, so we look for the older env variables first
and try the newer ones later.
At a minimum you need the following for v3 authentication, which is
the default with the latest DevStack. If you omit the tenant
information, the token issued will be an unscoped token (and will not
have any service catalog information).
OS_AUTH_URL=http://192.168.0.42/identity
OS_REGION_NAME=RegionOne
OS_USERNAME=demo
OS_PASSWORD=supersecret
OS_TENANT_NAME=demo
OS_USER_DOMAIN_ID=default
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
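A minimal sketch of that fallback; envOr is an illustrative helper, and OS_PROJECT_NAME stands in for the newer variable names that keystone v3 era DevStack exports:
```go
package main

import (
	"fmt"
	"os"
)

// envOr returns the first environment variable that is set, letting the
// tests check the older name before trying the newer one.
func envOr(names ...string) string {
	for _, n := range names {
		if v := os.Getenv(n); v != "" {
			return v
		}
	}
	return ""
}

func main() {
	tenant := envOr("OS_TENANT_NAME", "OS_PROJECT_NAME")
	fmt.Println("tenant/project:", tenant)
}
```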
Automatic merge from submit-queue (batch tested with PRs 53418, 53366, 53115, 53402, 53130).
Fix the version detection of OpenStack Cinder
**What this PR does / why we need it**:
When running Kubernetes against an installation of DevStack that
deploys the Cinder service at a path rather than a port (e.g.
http://foo.bar/volume rather than http://foo.bar:xxx), the version
detection fails. It is better to use the OpenStack service catalog:
when initializing the Cinder client, Kubernetes already checks the
endpoint against the OpenStack service catalog, so we can perform the
version detection through it as well.
Two cases should be fixed in other PRs:
1. Revisit the version detection after Cinder v3 API support is added.
2. Add code to support microversions once gophercloud supports them.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50461
**Special notes for your reviewer**:
/assign @dims
/assign @xsgordon
**Release note**:
```release-note
Using OpenStack service catalog to do version detection
```
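A sketch of catalog-based selection, assuming gophercloud's block-storage client constructors, which resolve the endpoint from the service catalog rather than probing the URL:
```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

// newBlockStorageClient lets the service catalog pick the Cinder endpoint:
// try the v2 block-storage service first, then fall back to v1.
func newBlockStorageClient(provider *gophercloud.ProviderClient, region string) (*gophercloud.ServiceClient, error) {
	eo := gophercloud.EndpointOpts{Region: region}
	if client, err := openstack.NewBlockStorageV2(provider, eo); err == nil {
		return client, nil
	}
	return openstack.NewBlockStorageV1(provider, eo)
}

func main() {
	fmt.Println("see newBlockStorageClient; requires an authenticated ProviderClient")
}
```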
Automatic merge from submit-queue (batch tested with PRs 49420, 49296, 49299, 49371, 46514)
Avoid looking up instance id until we need it
**What this PR does / why we need it**:
Currently kube-controller-manager cannot run outside of a VM started
by OpenStack (with the --cloud-provider=openstack parameter). With
this change, we read the instance ID from the metadata provider, the
config drive, or the file location only when we really need it; in the
normal scenario, the controller-manager uses the node name to get the
instance ID.
41541910e1/pkg/volume/cinder/attacher.go (L149)
The localInstanceID is currently used only in a test case, so let us
not read it until it is really needed.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
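A minimal sketch of the deferral, with illustrative names, using sync.Once to resolve the ID on first use rather than at start-up:
```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// instanceIDResolver resolves the local instance ID lazily, so a
// controller-manager on a host OpenStack did not boot still starts up.
type instanceIDResolver struct {
	once sync.Once
	id   string
	err  error
}

func (r *instanceIDResolver) InstanceID() (string, error) {
	r.once.Do(func() {
		// Placeholder for reading the metadata service, config drive,
		// or file location only when the ID is actually requested.
		r.id, r.err = "", errors.New("metadata unavailable on this host")
	})
	return r.id, r.err
}

func main() {
	var r instanceIDResolver
	fmt.Println(r.InstanceID())
}
```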
We should allow scenarios where Cinder can be used even if the
operator does not want to use the OpenStack load balancer. So let's
warn at startup if subnet-id is missing, but fail only if someone
tries to use the load balancer.
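A sketch of that warn-early/fail-late ordering, with illustrative names:
```go
package main

import (
	"errors"
	"log"
)

type cloud struct{ subnetID string }

// newCloud warns when the option is missing, but keeps the rest of the
// provider (e.g. Cinder) fully usable.
func newCloud(subnetID string) *cloud {
	if subnetID == "" {
		log.Println("warning: no subnet-id in cloud config; load balancers will not work")
	}
	return &cloud{subnetID: subnetID}
}

// Only an actual attempt to use the load balancer turns the missing
// option into an error.
func (c *cloud) loadBalancer() (string, error) {
	if c.subnetID == "" {
		return "", errors.New("load balancer unavailable: subnet-id not set")
	}
	return "lbaas-v2:" + c.subnetID, nil
}

func main() {
	c := newCloud("")
	if _, err := c.loadBalancer(); err != nil {
		log.Println(err)
	}
}
```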
Automatic merge from submit-queue (batch tested with PRs 47948, 48631, 48693, 48549, 47593)
OpenStack for cloud-controller-manager
**What this PR does / why we need it**:
This implements, in the OpenStack provider, the `NodeAddressesByProviderID` and `InstanceTypeByProviderID` methods used by the cloud-controller-manager. The instance type returned is the flavor name; for consistency, `InstanceType` has been implemented too and returns the same value.
```release-note
NONE
```
This is part of #47257 cc @wlan0
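Roughly the method shapes involved, simplified; the real signatures use the Kubernetes cloudprovider and API types (e.g. v1.NodeAddress) rather than this local stand-in:
```go
package cloudprovider

// NodeAddress is a simplified stand-in for the Kubernetes API type.
type NodeAddress struct {
	Type    string // e.g. "InternalIP", "ExternalIP"
	Address string
}

type Instances interface {
	// NodeAddressesByProviderID returns addresses for the node identified
	// by a provider ID such as "openstack:///<instance-uuid>".
	NodeAddressesByProviderID(providerID string) ([]NodeAddress, error)
	// InstanceTypeByProviderID returns the flavor name of the instance.
	InstanceTypeByProviderID(providerID string) (string, error)
}
```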
Automatic merge from submit-queue
Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones
**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the Cinder StorageClass, the volume is provisioned to a zone called nova. However, as mentioned in the issue, we have situations where we want to spread a StatefulSet across 3 different zones, which is currently not possible with StatefulSets and a Cinder StorageClass. With this new solution, if we leave the zone empty, an algorithm chooses the zone for the Cinder volume, in a similar style to the AWS and GCE StorageClass solutions.
**Which issue this PR fixes** fixes #44735
**Special notes for your reviewer**:
example:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```
If this example is deployed, it will automatically create one replica per AZ. This helps us a lot in making HA databases.
The current StorageClass for Cinder is not perfect for StatefulSets. Let's assume the Cinder StorageClass is defined to be in a zone called nova; because labels are not added to the PV, pods can still be started in any zone. The problem is that, at least in our OpenStack, it is not possible to use a Cinder volume located in zone x from zone y. However, should we have the possibility to choose between cross-zone Cinder mounts or not? In my opinion it is not a good way of doing things to mount a volume from a zone other than the one where the pod is located (it means more network traffic between zones). What do you think? The current solution does not allow it anymore (should we make it possible? That would mean removing the labels from the PV).
There might be some things that still need to be fixed in this release, and I need help with that. Some parts of the code are not perfect.
Issues I am thinking about (I need some help with these):
1) Can everybody see in OpenStack which AZ their servers are in? Could there be an access policy that hides it? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really necessary to create a new service client using openstack.NewComputeV2, or could I somehow use the existing one?
3) This fetches all servers from an OpenStack tenant (project). However, in some cases Kubernetes may be deployed only to a specific zone. If the kube servers are located, for instance, in zone 1, and there are other servers in the same tenant in zone 2, a Cinder volume might be provisioned to zone 2 even though Kubernetes has no nodes there, so the pod cannot start. Could we have a better way to fetch the zones of the Kubernetes nodes? Currently that information is not added to Kubernetes node labels automatically in OpenStack (which I think it should be); I have added those labels manually to the nodes. If the zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot place them.
cc @rootfs @anguslees @jsafrane
```release-note
The default behaviour of the Cinder StorageClass has changed: if availability is not specified, the zone is chosen by an algorithm. This makes it possible to spread stateful pods across many zones.
```
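A simplified sketch of the spreading idea, in the spirit of the helper the AWS/GCE provisioners use (the real code also parses the StatefulSet ordinal out of the claim name to guarantee round-robin placement; here we only hash the claim name over the sorted zone list):
```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// chooseZone deterministically maps a claim name onto one of the available
// zones, so different claims tend to land in different zones.
func chooseZone(zones []string, pvcName string) string {
	sort.Strings(zones)
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return zones[int(h.Sum32())%len(zones)]
}

func main() {
	zones := []string{"zone-1", "zone-2", "zone-3"}
	for _, pvc := range []string{"storage-mysql-0", "storage-mysql-1", "storage-mysql-2"} {
		fmt.Println(pvc, "->", chooseZone(zones, pvc))
	}
}
```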
This change migrates the 'openstack' provider and 'keystone'
authenticator plugin to the newer gophercloud/gophercloud library.
Note the 'rackspace' provider still uses rackspace/gophercloud.
Fixes #30404
This method has been unused by k8s for some time, and yet is the last
piece of the cloud provider API that encourages provider names to be
human-friendly strings (this method applies a regex to instance names).
Actually removing this deprecated method is part of a long effort to
migrate from instance names to instance IDs in at least the OpenStack
provider plugin.
This change implements the Routes API using Neutron's "extraroute"
extension.
Using this requires all the nodes to be on the same Neutron network,
plus the UUID of the Neutron router on that network.
Required cloud provider config section:
[Route]
router-id = <UUID of Neutron router>
Ensure kube-controller-manager is started with (non-default)
`--allocate-node-cidrs=true` and set `--cluster-cidr` to the pod
super-subnet (a private /16 would be reasonable).
Based on an earlier version by @timbyr (#19473)
See issue #33128
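A hypothetical sketch of what the extraroute-based implementation does per node route; it assumes gophercloud's layer3/routers extension, whose exact field shapes vary by library version:
```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
)

// addRoute appends a static route on the configured Neutron router that
// points a node's pod CIDR at that node's address on the shared network.
func addRoute(network *gophercloud.ServiceClient, routerID string, existing []routers.Route, podCIDR, nodeAddr string) error {
	opts := routers.UpdateOpts{
		Routes: append(existing, routers.Route{
			DestinationCIDR: podCIDR,  // e.g. "10.244.1.0/24"
			NextHop:         nodeAddr, // the node's IP on the shared network
		}),
	}
	_, err := routers.Update(network, routerID, opts).Extract()
	return err
}

func main() {
	fmt.Println("see addRoute; requires an authenticated Neutron client")
}
```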
We can't rely on the device name provided by Cinder, and thus must
perform detection based on the drive serial number (a.k.a. its Cinder
ID) on the kubelet itself.
This patch reworks the Cinder volume attacher to ignore the supplied
deviceName and instead defer to the pre-existing GetDevicePath method
to discover the device path based on its serial number and the
/dev/disk/by-id mapping.
This new behavior is controlled by a config option, as always falling
back to the Cinder value when we can't discover a device would risk
devices not showing up, falling back to Cinder's guess, and detecting
the wrong disk as attached.
This removes the need to manually specify the version in all but
unusual cases.
For most installs this will effectively flip the default from
v1 (deprecated) to v2, so conservative existing installs may want to
manually configure "lb-version = v1" before upgrading.
Automatic merge from submit-queue
Rackspace improvements (OpenStack Cinder)
This adds PV support via Cinder on Rackspace clusters. Rackspace Cloud Block Storage is pretty much vanilla OpenStack Cinder, so there is no need for a separate volume plugin. Instead, I refactored the Cinder/OpenStack interaction a bit (by introducing a CinderProvider interface and moving the device path detection logic into the OpenStack part).
Right now this is limited to `AttachDisk` and `DetachDisk`. Creation and deletion of Block Storage is not in scope of this PR.
Also the `ExternalID` and `InstanceID` cloud provider methods have been implemented for Rackspace.
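A sketch of the CinderProvider idea; the method names are illustrative, patterned on the attach/detach entry points mentioned above. Any cloud exposing Cinder-compatible block storage can satisfy the interface, so the volume plugin stays provider-agnostic:
```go
package openstack

// CinderProvider abstracts the block-storage operations the Cinder volume
// plugin needs, so both the OpenStack and Rackspace providers can back it.
type CinderProvider interface {
	// AttachDisk attaches the volume to the instance and returns the
	// attachment's volume ID.
	AttachDisk(instanceID, volumeID string) (string, error)
	// DetachDisk detaches the volume identified by a full or partial ID.
	DetachDisk(instanceID, partialDiskID string) error
	// GetDevicePath resolves the local device path for an attached volume.
	GetDevicePath(volumeID string) string
	// InstanceID returns the ID of the local instance.
	InstanceID() (string, error)
}
```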