Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
openstack: remove orphaned routes from terminated instances
**What this PR does / why we need it**:
At the moment the OpenStack cloudprovider only returns routes where the `NextHop` address points to an existing OpenStack instance. This is a problem when an instance is terminated before the corresponding node is removed from k8s. The existing route is no longer returned by the cloudprovider and is therefore never considered for deletion by the route controller. When the route's `DestinationCIDR` is reassigned to a new node, the router ends up with two routes for the same CIDR pointing at different next hops, leading to broken networking.
This PR stops skipping routes that point to unknown next hops when listing routes. This should cause [this conditional](93dc3763b0/pkg/controller/route/route_controller.go (L208)) in the route controller to succeed and have the route removed, provided the route controller [feels responsible](93dc3763b0/pkg/controller/route/route_controller.go (L206)).
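For illustration, a minimal sketch of a `ListRoutes` along these lines, assuming gophercloud's layer3 `routers` package; the `Routes` receiver and `addrToNodeName` are hypothetical stand-ins for the provider's actual types, not the real code:

```go
import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/kubernetes/pkg/cloudprovider"
)

// Routes is a hypothetical receiver holding the Neutron client and router ID.
type Routes struct {
	network  *gophercloud.ServiceClient
	routerID string
}

// addrToNodeName is a hypothetical helper mapping a next-hop address to a
// node name; it returns an empty name when no instance owns the address.
func addrToNodeName(addr string) types.NodeName { return "" }

func (r *Routes) ListRoutes(clusterName string) ([]*cloudprovider.Route, error) {
	router, err := routers.Get(r.network, r.routerID).Extract()
	if err != nil {
		return nil, err
	}
	routes := make([]*cloudprovider.Route, 0, len(router.Routes))
	for _, item := range router.Routes {
		// Key change: routes whose NextHop no longer maps to a known
		// instance are returned (with an empty TargetNode) instead of
		// being skipped, so the route controller can delete them.
		routes = append(routes, &cloudprovider.Route{
			Name:            item.DestinationCIDR,
			TargetNode:      addrToNodeName(item.NextHop),
			DestinationCIDR: item.DestinationCIDR,
		})
	}
	return routes, nil
}
```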
```release-note
OpenStack cloudprovider: Ensure orphaned routes are removed.
```
add getNodeNameByID and use volume.AttachedDevice as devicepath
use uppercase function name
do not automatically delete nodes if the node is shut down in OpenStack
do not delete node
fix gofmt
fix Cinder detach if the instance is not in the ACTIVE state
fix gofmt
Currently getPortByIp() gets an instance's port based only on its IP. If
two instances sit on different networks whose subnets share the same
CIDR, getPortByIp() can match the wrong port.
This PR gets the port based on both the IP and the name of the instance.
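A hedged sketch of the idea using gophercloud's Neutron `ports` package; the helper name and the strategy of scoping the lookup by the port's `DeviceID` (after resolving the instance by name elsewhere) are illustrative assumptions, not necessarily the PR's exact code:

```go
import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
)

// getPortByIPAndServerID is a hypothetical helper: it restricts the port
// lookup to one instance (via the port's DeviceID), so two instances with
// the same fixed IP on different networks can no longer be confused.
func getPortByIPAndServerID(client *gophercloud.ServiceClient, ip, serverID string) (*ports.Port, error) {
	pages, err := ports.List(client, ports.ListOpts{DeviceID: serverID}).AllPages()
	if err != nil {
		return nil, err
	}
	all, err := ports.ExtractPorts(pages)
	if err != nil {
		return nil, err
	}
	for i := range all {
		for _, fixed := range all[i].FixedIPs {
			if fixed.IPAddress == ip {
				return &all[i], nil
			}
		}
	}
	return nil, fmt.Errorf("no port with IP %s attached to server %s", ip, serverID)
}
```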
This change migrates the 'openstack' provider and 'keystone'
authenticator plugin to the newer gophercloud/gophercloud library.
Note the 'rackspace' provider still uses rackspace/gophercloud.
Fixes #30404
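As a rough illustration of the newer library's entry points (endpoint, credentials, and region below are placeholders, and real deployments read them from cloud.conf):

```go
import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

// newNetworkClient authenticates against Keystone and returns a Neutron
// service client using the gophercloud/gophercloud API.
func newNetworkClient() (*gophercloud.ServiceClient, error) {
	opts := gophercloud.AuthOptions{
		IdentityEndpoint: "https://keystone.example.com:5000/v3",
		Username:         "demo",
		Password:         "secret",
		DomainName:       "Default",
		TenantName:       "demo",
	}
	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		return nil, err
	}
	return openstack.NewNetworkV2(provider, gophercloud.EndpointOpts{
		Region: "RegionOne",
	})
}
```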
This change implements the Routes API using Neutron's "extraroute"
extension.
To use it, all nodes must be on the same Neutron network, and the UUID
of the Neutron router on that network must be supplied.
Required cloud provider config section:
```
[Route]
router-id = <UUID of Neutron router>
```
Ensure kube-controller-manager is started with the (non-default)
`--allocate-node-cidrs=true` and set `--cluster-cidr` to the pod
super-subnet (a private /16 would be reasonable).
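Putting the pieces together, a hedged sketch of route creation on top of the extraroute extension, reusing the hypothetical `Routes` receiver from the earlier sketch; `nodeAddr` is an assumed helper, and gophercloud field shapes vary across versions:

```go
import (
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/kubernetes/pkg/cloudprovider"
)

// nodeAddr is a hypothetical helper resolving a node's next-hop address.
func nodeAddr(node types.NodeName) (string, error) { return "", nil }

// CreateRoute sketch: the extraroute extension only supports replacing the
// router's whole route list, so we read it, append, and write it back.
func (r *Routes) CreateRoute(clusterName, nameHint string, route *cloudprovider.Route) error {
	nextHop, err := nodeAddr(route.TargetNode)
	if err != nil {
		return err
	}
	router, err := routers.Get(r.network, r.routerID).Extract()
	if err != nil {
		return err
	}
	newRoutes := append(router.Routes, routers.Route{
		DestinationCIDR: route.DestinationCIDR,
		NextHop:         nextHop,
	})
	// Note: newer gophercloud releases expect Routes as *[]routers.Route.
	_, err = routers.Update(r.network, r.routerID, routers.UpdateOpts{
		Routes: newRoutes,
	}).Extract()
	return err
}
```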
Based on an earlier version by @timbyr (#19473)