Automatic merge from submit-queue
Ignore the available volume when calling DetachDisk
Fixes #50207
If the user detaches the volume via nova in an OpenStack environment, the volume
becomes available. If the nova instance has been deleted, nova detaches the volume
automatically and it also becomes available. So the "available" status is fine, since it means the
volume is already detached from the instance.
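Roughly, the intended check looks like this (a minimal sketch, not the exact provider code; the gophercloud blockstorage v2 `volumes.Get` call is real, the function shape and the literal status string are assumptions):
```go
// Sketch only: treat an already-"available" volume as successfully detached
// instead of returning an error from DetachDisk.
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes"
)

func detachDisk(client *gophercloud.ServiceClient, instanceID, volumeID string) error {
	vol, err := volumes.Get(client, volumeID).Extract()
	if err != nil {
		return err
	}
	if vol.Status == "available" {
		// Nova already detached the volume (instance deleted, or detached
		// out of band) -- nothing left to do.
		fmt.Printf("volume %s is already detached, skipping detach call\n", volumeID)
		return nil
	}
	// ... otherwise proceed with the normal nova volume-detach call ...
	return nil
}

func main() {}
```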
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49524, 46760, 50206, 50166, 49603)
[OpenStack] Add more detailed error message
I keep getting the same terse error message "Unable to initialize cinder client
for region: RegionOne" from controller-manager, but I cannot find the
reason. We should add the underlying "err" detail to the glog.Errorf call.
Currently NewBlockStorageV2() returns an err when it fails to get the cinder endpoint, but nothing outputs the message of that err.
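The shape of the fix (sketch only; the surrounding function name is illustrative, `openstack.NewBlockStorageV2` and `glog.Errorf` are the real calls):
```go
// Sketch only: include the underlying error when cinder client creation
// fails, instead of logging just the region name.
package main

import (
	"github.com/golang/glog"
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

func newCinderClient(provider *gophercloud.ProviderClient, region string) *gophercloud.ServiceClient {
	client, err := openstack.NewBlockStorageV2(provider, gophercloud.EndpointOpts{Region: region})
	if err != nil {
		// Before: glog.Errorf("Unable to initialize cinder client for region: %s", region)
		// After: surface err so the operator can see why the endpoint lookup failed.
		glog.Errorf("Unable to initialize cinder client for region: %s, err: %v", region, err)
		return nil
	}
	return client
}

func main() {}
```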
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50087, 39587, 50042, 50241, 49914)
AttachDisk should not call detach inside of Cinder volume provider
If the user detaches the volume via nova in an OpenStack environment, the volume
becomes available. If the nova instance has been deleted, nova detaches it
automatically. So the "available" status is fine, since it means the
volume is already detached from the instance.
I keep getting the same terse error message "Unable to initialize cinder client
for region: RegionOne" from controller-manager, but I cannot find the
reason. We should add the underlying "err" detail to the glog.Errorf call.
Automatic merge from submit-queue (batch tested with PRs 47416, 47408, 49697, 49860, 50162)
add possibility to use multiple floatingip pools in openstack loadbalancer
**What this PR does / why we need it**: Currently only one floating IP pool is supported in the Kubernetes OpenStack cloud provider. This is quite a big issue for us, because we want to run only a single Kubernetes cluster while still exposing both external and internal services. That means we need the ability to create services backed by either an internal or an external pool.
**Which issue this PR fixes**: fixes #49147
**Special notes for your reviewer**: Service labels may not be the correct place to define this floating pool ID. However, I did not find a better place easily, and I do not want to start modifying the service API structure.
**Release note**:
```release-note
Add possibility to use multiple floatingip pools in openstack loadbalancer
```
Example of how it works:
```
cat /etc/kubernetes/cloud-config
[Global]
auth-url=https://xxxx
username=xxxx
password=xxxx
region=yyy
tenant-id=b23efb65b1d44b5abd561511f40c565d
domain-name=foobar
[LoadBalancer]
lb-version=v2
subnet-id=aed26269-cd01-4d4e-b0d8-9ec726c4c2ba
lb-method=ROUND_ROBIN
floating-network-id=56e523e7-76cb-477f-80e4-2dc8cf32e3b4
create-monitor=yes
monitor-delay=10s
monitor-timeout=2000s
monitor-max-retries=3
```
```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: web-ext
  name: web-ext
  namespace: default
spec:
  selector:
    run: web
  ports:
  - port: 80
    name: https
    protocol: TCP
    targetPort: 80
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: web-int
    floatingPool: a2a84887-4915-42bf-aaff-2b76688a4ec7
  name: web-int
  namespace: default
spec:
  selector:
    run: web
  ports:
  - port: 80
    name: https
    protocol: TCP
    targetPort: 80
  type: LoadBalancer
```
```
% kubectl create -f example.yaml
deployment "nginx-deployment" created
service "web-ext" created
service "web-int" created
% kubectl get svc -o wide
NAME         CLUSTER-IP       EXTERNAL-IP                   PORT(S)        AGE   SELECTOR
kubernetes   10.254.0.1       <none>                        443/TCP        2m    <none>
web-ext      10.254.23.153    192.168.1.57,193.xx.xxx.xxx   80:30151/TCP   52s   run=web
web-int      10.254.128.141   192.168.1.58,10.222.130.80    80:32431/TCP   52s   run=web
```
cc @anguslees @k8s-sig-openstack-feature-requests @dims
if not needed here
load network ids from gophercloud api
fix to getnetworkbyname
update godeps, add networks library
fix gofmt and boilerplate
gofmt
use annotations
fix
remove enableflag
add comment to annotationvalue
Automatic merge from submit-queue (batch tested with PRs 49081, 49318, 49219, 48989, 48486)
Better message if we don't find an appropriate BlockStorage API
**What this PR does / why we need it**:
With the latest devstack, v1 and v2 are DEPRECATED and v3 is marked
as CURRENT. So we fail to attach the disk; the error message is
shown when one does "kubectl describe pod", but the operator has
to dig in to find the problem.
So if we can't find an appropriate version of the API that we
support, log an explicit error message from which the operator
can see how to fix the situation.
Note that support for the v3 block storage API is being added to gophercloud
and it will take a bit of time before we can support it.
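A sketch of the idea (illustrative only; `openstack.NewBlockStorageV2`/`V1` are real gophercloud calls, the helper name and the exact error wording are assumptions):
```go
// Sketch only: try the supported block storage API versions in order and
// return an explicit, actionable error if none is available.
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

func volumeService(provider *gophercloud.ProviderClient, region string) (*gophercloud.ServiceClient, error) {
	eo := gophercloud.EndpointOpts{Region: region}
	if client, err := openstack.NewBlockStorageV2(provider, eo); err == nil {
		return client, nil
	}
	if client, err := openstack.NewBlockStorageV1(provider, eo); err == nil {
		return client, nil
	}
	// Neither v1 nor v2 was found (e.g. only v3 is CURRENT on this cloud);
	// tell the operator what to do instead of a bare endpoint error.
	return nil, fmt.Errorf("BlockStorage API version v1 or v2 not found in region %q; "+
		"v3 is not supported yet, please enable a v1/v2 endpoint", region)
}

func main() {}
```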
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
With the latest devstack, v1 and v2 are DEPRECATED and v3 is marked
as CURRENT. So we fail to attach the disk; the error message is
shown when one does "kubectl describe pod", but the operator has
to dig in to find the problem.
So if we can't find an appropriate version of the API that we
support, log an explicit error message from which the operator
can see how to fix the situation.
Note that support for the v3 block storage API is being added to gophercloud
and it will take a bit of time before we can support it.
Automatic merge from submit-queue (batch tested with PRs 49420, 49296, 49299, 49371, 46514)
Avoid looking up instance id until we need it
**What this PR does / why we need it**:
Currently kube-controller-manager cannot run outside of a VM started
by OpenStack (with the --cloud-provider=openstack parameter). With this
change we read the instance ID from the metadata provider, the config
drive, or the file location only when we really need it. In the normal
scenario, the controller-manager uses the node name to get the instance ID.
41541910e1/pkg/volume/cinder/attacher.go (L149)
The localInstanceID is currently used only in the test case, so let
us not read it until it is really needed.
So let's look up the instance ID only when we need it.
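The lazy-lookup pattern, as a sketch (the struct and `getLocalInstanceID` helper are simplified stand-ins for the real provider code, not its actual names):
```go
// Sketch only: defer reading the local instance ID (metadata service /
// config drive / file) until it is actually needed, so controller-manager
// can run off-cloud.
package main

import "sync"

type OpenStack struct {
	localInstanceID string
	once            sync.Once
	err             error
}

// getLocalInstanceID is a hypothetical wrapper around the metadata /
// config-drive lookup the real provider performs.
func getLocalInstanceID() (string, error) { return "fake-instance-id", nil }

// LocalInstanceID resolves the ID lazily and caches the result; callers
// that only use node names never trigger the metadata lookup at all.
func (os *OpenStack) LocalInstanceID() (string, error) {
	os.once.Do(func() {
		os.localInstanceID, os.err = getLocalInstanceID()
	})
	return os.localInstanceID, os.err
}

func main() {}
```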
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Currently kube-controller-manager cannot run outside of a VM started
by OpenStack (with the --cloud-provider=openstack parameter). With this
change we read the instance ID from the metadata provider, the config
drive, or the file location only when we really need it. In the normal
scenario, the controller-manager uses the node name to get the instance ID.
41541910e1/pkg/volume/cinder/attacher.go (L149)
The localInstanceID is currently used only in the test case, so let
us not read it until it is really needed.
Automatic merge from submit-queue (batch tested with PRs 49276, 49235)
Don't fail fast if LoadBalancer section is missing
**What this PR does / why we need it**:
We should allow scenarios where cinder can be used even if the
operator does not want to use the OpenStack load balancer. So
let's warn at startup if subnet-id is missing, but fail only
if someone tries to use the load balancer.
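In outline (sketch only; `Config`, `newOpenStack`, and `ensureLoadBalancer` are simplified stand-ins for the provider's real types and entry points):
```go
// Sketch only: warn at startup if the [LoadBalancer] section is incomplete,
// but only return an error from the load balancer entry point.
package main

import (
	"fmt"

	"github.com/golang/glog"
)

type Config struct{ SubnetID string }

func newOpenStack(cfg Config) error {
	if cfg.SubnetID == "" {
		// Do not fail here -- cinder-only users never touch the LB code.
		glog.Warningf("LoadBalancer subnet-id not set in cloud config; " +
			"the OpenStack load balancer will be unavailable")
	}
	return nil
}

func ensureLoadBalancer(cfg Config) error {
	if cfg.SubnetID == "" {
		return fmt.Errorf("subnet-id not set in cloud provider config")
	}
	// ... proceed with LBaaS creation ...
	return nil
}

func main() {}
```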
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
We should allow scenarios where cinder can be used even if the
operator does not want to use the openstack load balancer. So
let's warn in the beginning if subnet-id is missing but fail only
if they try to use the load balancer
Current devstack seems to return "id", and an upcoming change using
nova's microversion will be returning "original_name":
https://blueprints.launchpad.net/nova/+spec/instance-flavor-api
So let's just inspect what is present and use that to figure out
the instance type.
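A sketch of that inspection (the flavor map mirrors the `map[string]interface{}` gophercloud exposes on a server; `flavorNameFromID` is a hypothetical lookup helper, not the provider's actual function):
```go
// Sketch only: determine the instance type from whichever flavor field the
// nova API returned ("original_name" on newer microversions, "id" otherwise).
package main

import "fmt"

func flavorNameFromID(id string) (string, error) { return "m1.small", nil }

func instanceType(flavor map[string]interface{}) (string, error) {
	// Newer microversions embed the flavor name directly.
	if name, ok := flavor["original_name"].(string); ok {
		return name, nil
	}
	// Older APIs (current devstack) only return the flavor id; resolve it.
	if id, ok := flavor["id"].(string); ok {
		return flavorNameFromID(id)
	}
	return "", fmt.Errorf("flavor has neither original_name nor id: %v", flavor)
}

func main() {
	fmt.Println(instanceType(map[string]interface{}{"id": "42"}))
}
```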
Automatic merge from submit-queue (batch tested with PRs 48594, 47042, 48801, 48641, 48243)
Fix panic of DeleteRoute()
Fixes #48800
It should be 'addr_pairs', not 'routes'.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47948, 48631, 48693, 48549, 47593)
OpenStack for cloud-controller-manager
**What this PR does / why we need it**:
This adds the `NodeAddressesByProviderID` and `InstanceTypeByProviderID` methods used by the cloud-controller-manager to the OpenStack provider. The instance type returned is the flavor name; for consistency, `InstanceType` has been implemented too, returning the same value.
```release-note
NONE
```
This is part of #47257 cc @wlan0
Automatic merge from submit-queue
Fix deleting empty monitors
Fixes #48094
When create-monitor in cloud-config is false, the pool has no monitor,
so deleting the nonexistent (empty) monitor fails.
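Roughly (sketch only; the gophercloud lbaas_v2 `monitors`/`pools` packages are real, the function shape is illustrative):
```go
// Sketch only: only delete a pool's health monitor if one exists; with
// create-monitor=false the MonitorID is empty and the delete would fail.
package main

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/monitors"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/pools"
)

func deletePoolMonitor(network *gophercloud.ServiceClient, pool pools.Pool) error {
	if pool.MonitorID == "" {
		// No monitor was ever created (create-monitor: false); nothing to delete.
		return nil
	}
	return monitors.Delete(network, pool.MonitorID).ExtractErr()
}

func main() {}
```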
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47234, 48410, 48514, 48529, 48348)
Check opts of cloud config file
Fixes #48347
Check the opts when registering the OpenStack cloud provider, rather than
returning an error later when the opts are used to create or use a cloud resource.
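A sketch of checking options up front (the helper name and the fields shown are illustrative, not the provider's exact option set):
```go
// Sketch only: validate required cloud config options when the provider is
// constructed, instead of surfacing confusing errors when a resource is
// first used.
package main

import "fmt"

type LoadBalancerOpts struct {
	CreateMonitor     bool
	MonitorDelay      string
	MonitorTimeout    string
	MonitorMaxRetries int
}

func checkOpenStackOpts(lb LoadBalancerOpts) error {
	if lb.CreateMonitor {
		if lb.MonitorDelay == "" || lb.MonitorTimeout == "" || lb.MonitorMaxRetries <= 0 {
			return fmt.Errorf("monitor-delay, monitor-timeout and monitor-max-retries " +
				"must be set when create-monitor is true")
		}
	}
	return nil
}

func main() {
	fmt.Println(checkOpenStackOpts(LoadBalancerOpts{CreateMonitor: true}))
}
```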
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47776, 46220, 46878, 47942, 47947)
fix comment mistake
fix comment mistake
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 47776, 46220, 46878, 47942, 47947)
update openstack metadata-service url
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Ignore ErrNotFound when deleting LB resources
An IsNotFound error is fine since it means the object is
already deleted, so let's check for it before returning an error.
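For example (sketch only; `gophercloud.ErrDefault404` and the layer3 `floatingips` package are real, the function names are illustrative):
```go
// Sketch only: treat 404s as success when tearing down load balancer
// resources, since the object being gone is the desired end state.
package main

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips"
)

func isNotFound(err error) bool {
	_, ok := err.(gophercloud.ErrDefault404)
	return ok
}

func deleteFloatingIP(network *gophercloud.ServiceClient, id string) error {
	err := floatingips.Delete(network, id).ExtractErr()
	if err != nil && !isNotFound(err) {
		return err
	}
	// Either deleted now or already gone -- both are fine.
	return nil
}

func main() {}
```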
Automatic merge from submit-queue
Adapt loadbalancer deleting/updating when using cloudprovider openstack in openstack/liberty
**What this PR does / why we need it**:
Make an extra verification on the returned listeners and pools, because the gophercloud query doesn't filter the results by loadbalancerID / listenerID respectively when using **openstack/liberty**.
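A sketch of the client-side filtering for listeners (the gophercloud lbaas_v2 listeners types are used here as I recall them and should be double-checked; `getListenersByLoadBalancerID` is an illustrative name):
```go
// Sketch only: on Liberty the LBaaS v2 API may ignore the loadbalancer_id
// query parameter, so filter the returned listeners client-side.
package main

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/listeners"
)

func getListenersByLoadBalancerID(network *gophercloud.ServiceClient, lbID string) ([]listeners.Listener, error) {
	pages, err := listeners.List(network, listeners.ListOpts{LoadbalancerID: lbID}).AllPages()
	if err != nil {
		return nil, err
	}
	all, err := listeners.ExtractListeners(pages)
	if err != nil {
		return nil, err
	}
	// Extra verification: keep only listeners actually attached to lbID,
	// since Liberty may return listeners of every load balancer.
	var result []listeners.Listener
	for _, l := range all {
		for _, lb := range l.Loadbalancers {
			if lb.ID == lbID {
				result = append(result, l)
				break
			}
		}
	}
	return result, nil
}

func main() {}
```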
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
#33759
**Special notes for your reviewer**:
#33759 is supposed to have a pull request which fixes this problem, but in release 1.5 the load balancer code does not use that patched code.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Initialize cloud providers with a K8s clientBuilder
**What this PR does / why we need it**:
This PR gives each cloud provider the ability to generate Kubernetes clients. Either the full-access or the service-account client builder is passed in from the controller manager. Cloud providers may need to retrieve information from the cluster that isn't provided through the defined interfaces, and this seems preferable to adding parameters.
Please leave your thoughts/comments.
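The general shape, as a sketch (the `ClientBuilder` interface and provider struct here are simplified stand-ins for the real controller-manager types, not their actual definitions):
```go
// Sketch only: a cloud provider receives a client builder at initialization
// time and keeps a Kubernetes client for lookups its defined interfaces do
// not cover.
package main

import "k8s.io/client-go/kubernetes"

// ClientBuilder is a simplified stand-in for the builder the controller
// manager passes in (full-access or service-account scoped).
type ClientBuilder interface {
	ClientOrDie(name string) kubernetes.Interface
}

type myCloudProvider struct {
	kubeClient kubernetes.Interface
}

// Initialize is called once by the controller manager.
func (p *myCloudProvider) Initialize(builder ClientBuilder) {
	p.kubeClient = builder.ClientOrDie("cloud-provider")
}

func main() {}
```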
**Release note**:
```release-note
NONE
```
When a volume's status is 'attaching', its attachments list is empty,
so the controller-manager can't get the device path and emits failure events.
But this state is normal, so let's handle it.
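A sketch of distinguishing the in-progress state in the device-path lookup (the gophercloud `volumes.Get` call and `Attachments` field are real; the function name, status string, and error handling are assumptions):
```go
// Sketch only: treat a volume still in "attaching" state as a transient
// condition rather than a hard attach failure.
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes"
)

func getAttachmentDiskPath(client *gophercloud.ServiceClient, volumeID string) (string, error) {
	vol, err := volumes.Get(client, volumeID).Extract()
	if err != nil {
		return "", err
	}
	if vol.Status == "attaching" {
		// The attachment list is still empty at this point; report the
		// in-progress state instead of a spurious attach failure.
		return "", fmt.Errorf("volume %s is still being attached", volumeID)
	}
	if len(vol.Attachments) == 0 {
		return "", fmt.Errorf("volume %s has no attachments", volumeID)
	}
	return vol.Attachments[0].Device, nil
}

func main() {}
```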
Automatic merge from submit-queue
Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones
**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the cinder StorageClass, the cinder volume is provisioned to the zone called nova. However, as mentioned in the issue, we have a situation where we want to spread a StatefulSet across 3 different zones, which is currently not possible with StatefulSets and the cinder StorageClass. In this new solution, if we leave the zone empty, the algorithm chooses the zone for the cinder volume in a similar style to the AWS and GCE StorageClass solutions.
**Which issue this PR fixes**: fixes #44735
**Special notes for your reviewer**:
example:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```
If this example is deployed, it automatically creates one replica per AZ. This helps us a lot in making HA databases.
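Roughly, the zone choice works like this (illustrative re-implementation only, under the assumption of a deterministic name-based choice like the AWS/GCE StorageClasses use; the actual helper and its hashing details may differ):
```go
// Sketch only: when no availability zone is given, pick one
// deterministically from the PVC name so claims of a StatefulSet tend to
// spread across zones.
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

func chooseZone(zones []string, pvcName string) string {
	sort.Strings(zones)
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return zones[int(h.Sum32())%len(zones)]
}

func main() {
	zones := []string{"zone-1", "zone-2", "zone-3"}
	// Each claim name maps to a deterministic zone; the real Kubernetes
	// helper additionally uses the StatefulSet ordinal so that claim-0,
	// claim-1, ... land in different zones.
	for i := 0; i < 3; i++ {
		pvc := fmt.Sprintf("storage-mysql-%d", i)
		fmt.Println(pvc, "->", chooseZone(zones, pvc))
	}
}
```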
The current StorageClass for cinder is not perfect for StatefulSets. Let's assume the cinder StorageClass is defined to be in the zone called nova; because zone labels are not added to the PV, pods can be started in any zone. The problem is that, at least in our OpenStack, it is not possible to use a cinder volume located in zone x from zone y. However, should we offer the possibility to choose between cross-zone cinder mounts or not? In my opinion, mounting a volume from a different zone than the one the pod is located in is not a good way of doing things (it means more network traffic between zones). What do you think? The current new solution does not allow that anymore (should we allow it? That would mean removing the labels from the PV).
There might be some things that still need to be fixed in this release, and I need help with that. Some parts of the code are not perfect.
Issues I am thinking about (I need some help with these):
1) Can everybody see in OpenStack what AZ their servers are in? Could there be an access policy that hides that? If the AZ is not found in the server spec, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really needed to create a new service client using openstack.NewComputeV2, or could I somehow use the existing one?
3) This fetches all servers from an OpenStack tenant (project). However, in some cases Kubernetes may be deployed only to a specific zone. If the kube servers are located, for instance, in zone 1, and there are other servers in the same tenant in zone 2, a cinder volume might be provisioned in zone-2 but the pod cannot start, because Kubernetes does not have any nodes in zone-2. Could we have a better way to fetch the Kubernetes nodes' zones? Currently that information is not added to Kubernetes node labels automatically in OpenStack (which I think it should be); I have added those labels manually to the nodes. If that zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot place them.
cc @rootfs @anguslees @jsafrane
```release-note
The default behaviour of the cinder StorageClass is changed: if availability is not specified, the zone is chosen by an algorithm. This makes it possible to spread stateful pods across many zones.
```