Remove references to openstack and cinder

Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Davanum Srinivas 2022-08-08 16:01:59 -04:00
parent d206d7f0a6
commit 9bbf01bae9
78 changed files with 22 additions and 9287 deletions


@@ -1,195 +0,0 @@
= vendor/github.com/gophercloud/gophercloud licensed under: =
Copyright 2012-2013 Rackspace, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
= vendor/github.com/gophercloud/gophercloud/LICENSE dd19699707373c2ca31531a659130416


@@ -1,9 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
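
With this manifest gone, the addon tree no longer ships a StorageClass for the in-tree kubernetes.io/cinder provisioner. Clusters that still want Cinder-backed storage are expected to move to the out-of-tree Cinder CSI driver; a minimal replacement sketch, assuming that driver (provisioner name cinder.csi.openstack.org) is installed, is shown below. It is illustrative only and not part of this commit.

# Hypothetical replacement for the deleted addon manifest; assumes the
# external Cinder CSI driver is deployed in the cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org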


@@ -28,6 +28,5 @@ import (
_ "k8s.io/legacy-cloud-providers/aws"
_ "k8s.io/legacy-cloud-providers/azure"
_ "k8s.io/legacy-cloud-providers/gce"
_ "k8s.io/legacy-cloud-providers/openstack"
_ "k8s.io/legacy-cloud-providers/vsphere"
)


@@ -28,7 +28,6 @@ import (
"k8s.io/kubernetes/pkg/volume/awsebs"
"k8s.io/kubernetes/pkg/volume/azure_file"
"k8s.io/kubernetes/pkg/volume/azuredd"
"k8s.io/kubernetes/pkg/volume/cinder"
"k8s.io/kubernetes/pkg/volume/csimigration"
"k8s.io/kubernetes/pkg/volume/gcepd"
"k8s.io/kubernetes/pkg/volume/portworx"
@@ -66,7 +65,6 @@ func appendAttachableLegacyProviderVolumes(allPlugins []volume.VolumePlugin, fea
pluginMigrationStatus := make(map[string]pluginInfo)
pluginMigrationStatus[plugins.AWSEBSInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationAWS, pluginUnregisterFeature: features.InTreePluginAWSUnregister, pluginProbeFunction: awsebs.ProbeVolumePlugins}
pluginMigrationStatus[plugins.GCEPDInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationGCE, pluginUnregisterFeature: features.InTreePluginGCEUnregister, pluginProbeFunction: gcepd.ProbeVolumePlugins}
pluginMigrationStatus[plugins.CinderInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationOpenStack, pluginUnregisterFeature: features.InTreePluginOpenStackUnregister, pluginProbeFunction: cinder.ProbeVolumePlugins}
pluginMigrationStatus[plugins.AzureDiskInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationAzureDisk, pluginUnregisterFeature: features.InTreePluginAzureDiskUnregister, pluginProbeFunction: azuredd.ProbeVolumePlugins}
pluginMigrationStatus[plugins.VSphereInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationvSphere, pluginUnregisterFeature: features.InTreePluginvSphereUnregister, pluginProbeFunction: vsphere_volume.ProbeVolumePlugins}
pluginMigrationStatus[plugins.PortworxVolumePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationPortworx, pluginUnregisterFeature: features.InTreePluginPortworxUnregister, pluginProbeFunction: portworx.ProbeVolumePlugins}


@@ -33,7 +33,6 @@ import (
"k8s.io/kubernetes/pkg/volume/awsebs"
"k8s.io/kubernetes/pkg/volume/azure_file"
"k8s.io/kubernetes/pkg/volume/azuredd"
"k8s.io/kubernetes/pkg/volume/cinder"
"k8s.io/kubernetes/pkg/volume/csimigration"
"k8s.io/kubernetes/pkg/volume/gcepd"
"k8s.io/kubernetes/pkg/volume/portworx"
@@ -72,7 +71,6 @@ func appendLegacyProviderVolumes(allPlugins []volume.VolumePlugin, featureGate f
pluginMigrationStatus := make(map[string]pluginInfo)
pluginMigrationStatus[plugins.AWSEBSInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationAWS, pluginUnregisterFeature: features.InTreePluginAWSUnregister, pluginProbeFunction: awsebs.ProbeVolumePlugins}
pluginMigrationStatus[plugins.GCEPDInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationGCE, pluginUnregisterFeature: features.InTreePluginGCEUnregister, pluginProbeFunction: gcepd.ProbeVolumePlugins}
pluginMigrationStatus[plugins.CinderInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationOpenStack, pluginUnregisterFeature: features.InTreePluginOpenStackUnregister, pluginProbeFunction: cinder.ProbeVolumePlugins}
pluginMigrationStatus[plugins.AzureDiskInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationAzureDisk, pluginUnregisterFeature: features.InTreePluginAzureDiskUnregister, pluginProbeFunction: azuredd.ProbeVolumePlugins}
pluginMigrationStatus[plugins.AzureFileInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationAzureFile, pluginUnregisterFeature: features.InTreePluginAzureFileUnregister, pluginProbeFunction: azure_file.ProbeVolumePlugins}
pluginMigrationStatus[plugins.VSphereInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationvSphere, pluginUnregisterFeature: features.InTreePluginvSphereUnregister, pluginProbeFunction: vsphere_volume.ProbeVolumePlugins}

go.mod

@@ -180,7 +180,6 @@ require (
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/googleapis/gax-go/v2 v2.1.1 // indirect
github.com/gophercloud/gophercloud v0.1.0 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/gorilla/websocket v1.4.2 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
@@ -398,7 +397,6 @@ replace (
github.com/google/shlex => github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
github.com/google/uuid => github.com/google/uuid v1.1.2
github.com/googleapis/gax-go/v2 => github.com/googleapis/gax-go/v2 v2.1.1
github.com/gophercloud/gophercloud => github.com/gophercloud/gophercloud v0.1.0
github.com/gopherjs/gopherjs => github.com/gopherjs/gopherjs v0.0.0-20200217142428-fce0ec30dd00
github.com/gorilla/mux => github.com/gorilla/mux v1.8.0
github.com/gorilla/websocket => github.com/gorilla/websocket v1.4.2

go.sum

@@ -233,8 +233,6 @@ github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.1.1 h1:dp3bWCh+PPO1zjRRiCSczJav13sBvG4UhNyVTa1KqdU=
github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
github.com/gophercloud/gophercloud v0.1.0 h1:P/nh25+rzXouhytV2pUHBb65fnds26Ghl8/391+sT5o=
github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
github.com/gopherjs/gopherjs v0.0.0-20200217142428-fce0ec30dd00/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=


@@ -133,18 +133,6 @@ KUBE_CONTROLLERS="${KUBE_CONTROLLERS:-"*"}"
# Audit policy
AUDIT_POLICY_FILE=${AUDIT_POLICY_FILE:-""}
# sanity check for OpenStack provider
if [ "${CLOUD_PROVIDER}" == "openstack" ]; then
if [ "${CLOUD_CONFIG}" == "" ]; then
echo "Missing CLOUD_CONFIG env for OpenStack provider!"
exit 1
fi
if [ ! -f "${CLOUD_CONFIG}" ]; then
echo "Cloud config ${CLOUD_CONFIG} doesn't exist"
exit 1
fi
fi
# Stop right away if the build fails
set -e


@@ -4092,20 +4092,6 @@ func TestValidateVolumes(t *testing.T) {
field: "rbd.image",
}},
},
// Cinder
{
name: "valid Cinder",
vol: core.Volume{
Name: "cinder",
VolumeSource: core.VolumeSource{
Cinder: &core.CinderVolumeSource{
VolumeID: "29ea5088-4f60-4757-962e-dba678767887",
FSType: "ext4",
ReadOnly: false,
},
},
},
},
// CephFS
{
name: "valid CephFS",


@@ -24,6 +24,5 @@ import (
_ "k8s.io/legacy-cloud-providers/aws"
_ "k8s.io/legacy-cloud-providers/azure"
_ "k8s.io/legacy-cloud-providers/gce"
_ "k8s.io/legacy-cloud-providers/openstack"
_ "k8s.io/legacy-cloud-providers/vsphere"
)


@@ -140,13 +140,6 @@
// Enables the GCE PD in-tree driver to GCE CSI Driver migration feature.
CSIMigrationGCE featuregate.Feature = "CSIMigrationGCE"
// owner: @adisky
// alpha: v1.14
// beta: v1.18
//
// Enables the OpenStack Cinder in-tree driver to OpenStack Cinder CSI Driver migration feature.
CSIMigrationOpenStack featuregate.Feature = "CSIMigrationOpenStack"
// owner: @trierra
// alpha: v1.23
//
@@ -410,12 +403,6 @@
// Disables the GCE PD in-tree driver.
InTreePluginGCEUnregister featuregate.Feature = "InTreePluginGCEUnregister"
// owner: @adisky
// alpha: v1.21
//
// Disables the OpenStack Cinder in-tree driver.
InTreePluginOpenStackUnregister featuregate.Feature = "InTreePluginOpenStackUnregister"
// owner: @trierra
// alpha: v1.23
//
@@ -923,8 +910,6 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS
CSIMigrationGCE: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // GA in 1.25 (requires GCE PD CSI Driver)
CSIMigrationOpenStack: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.26
CSIMigrationPortworx: {Default: false, PreRelease: featuregate.Beta}, // Off by default (requires Portworx CSI driver)
CSIMigrationRBD: {Default: false, PreRelease: featuregate.Alpha}, // Off by default (requires RBD CSI driver)
@@ -999,8 +984,6 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS
InTreePluginGCEUnregister: {Default: false, PreRelease: featuregate.Alpha},
InTreePluginOpenStackUnregister: {Default: false, PreRelease: featuregate.Alpha},
InTreePluginPortworxUnregister: {Default: false, PreRelease: featuregate.Alpha},
InTreePluginRBDUnregister: {Default: false, PreRelease: featuregate.Alpha},
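
Note: CSIMigrationOpenStack had already gone GA and been locked to its default (the entry removed above was flagged "remove in 1.26"), so dropping the gate changes no behavior; only its definition and map entries disappear. The migration gates that remain in this map can still be toggled through the kubelet configuration. A minimal sketch, assuming the kubelet.config.k8s.io/v1beta1 API and using one of the gates still present above as an example:

# Sketch: enabling a remaining (beta, off-by-default) migration gate via
# KubeletConfiguration rather than a command-line flag.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigrationPortworx: true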


@@ -30,7 +30,6 @@ const (
NodeUnschedulable = "NodeUnschedulable"
NodeVolumeLimits = "NodeVolumeLimits"
AzureDiskLimits = "AzureDiskLimits"
CinderLimits = "CinderLimits"
EBSLimits = "EBSLimits"
GCEPDLimits = "GCEPDLimits"
PodTopologySpread = "PodTopologySpread"


@@ -58,8 +58,6 @@ func getVolumeLimitKey(filterType string) v1.ResourceName {
return v1.ResourceName(volumeutil.GCEVolumeLimitKey)
case azureDiskVolumeFilterType:
return v1.ResourceName(volumeutil.AzureVolumeLimitKey)
case cinderVolumeFilterType:
return v1.ResourceName(volumeutil.CinderVolumeLimitKey)
default:
return v1.ResourceName(volumeutil.GetCSIAttachLimitKey(filterType))
}


@@ -56,8 +56,6 @@ const (
gcePDVolumeFilterType = "GCE"
// azureDiskVolumeFilterType defines the filter name for azureDiskVolumeFilter.
azureDiskVolumeFilterType = "AzureDisk"
// cinderVolumeFilterType defines the filter name for cinderVolumeFilter.
cinderVolumeFilterType = "Cinder"
// ErrReasonMaxVolumeCountExceeded is used for MaxVolumeCount predicate error.
ErrReasonMaxVolumeCountExceeded = "node(s) exceed max volume count"
@@ -75,15 +73,6 @@ func NewAzureDisk(_ runtime.Object, handle framework.Handle, fts feature.Feature
return newNonCSILimitsWithInformerFactory(azureDiskVolumeFilterType, informerFactory, fts), nil
}
// CinderName is the name of the plugin used in the plugin registry and configurations.
const CinderName = names.CinderLimits
// NewCinder returns function that initializes a new plugin and returns it.
func NewCinder(_ runtime.Object, handle framework.Handle, fts feature.Features) (framework.Plugin, error) {
informerFactory := handle.SharedInformerFactory()
return newNonCSILimitsWithInformerFactory(cinderVolumeFilterType, informerFactory, fts), nil
}
// EBSName is the name of the plugin used in the plugin registry and configurations.
const EBSName = names.EBSLimits
@@ -171,10 +160,6 @@ func newNonCSILimits(
name = AzureDiskName
filter = azureDiskVolumeFilter
volumeLimitKey = v1.ResourceName(volumeutil.AzureVolumeLimitKey)
case cinderVolumeFilterType:
name = CinderName
filter = cinderVolumeFilter
volumeLimitKey = v1.ResourceName(volumeutil.CinderVolumeLimitKey)
default:
klog.ErrorS(errors.New("wrong filterName"), "Cannot create nonCSILimits plugin")
return nil
@@ -475,32 +460,6 @@ var azureDiskVolumeFilter = VolumeFilter{
},
}
// cinderVolumeFilter is a VolumeFilter for filtering cinder Volumes.
// It will be deprecated once the OpenStack cloud provider has been removed from the in-tree codebase.
var cinderVolumeFilter = VolumeFilter{
FilterVolume: func(vol *v1.Volume) (string, bool) {
if vol.Cinder != nil {
return vol.Cinder.VolumeID, true
}
return "", false
},
FilterPersistentVolume: func(pv *v1.PersistentVolume) (string, bool) {
if pv.Spec.Cinder != nil {
return pv.Spec.Cinder.VolumeID, true
}
return "", false
},
MatchProvisioner: func(sc *storage.StorageClass) bool {
return sc.Provisioner == csilibplugins.CinderInTreePluginName
},
IsMigrated: func(csiNode *storage.CSINode) bool {
return isCSIMigrationOn(csiNode, csilibplugins.CinderInTreePluginName)
},
}
func getMaxVolumeFunc(filterName string) func(node *v1.Node) int {
return func(node *v1.Node) int {
maxVolumesFromEnv := getMaxVolLimitFromEnv()
@@ -522,8 +481,6 @@ func getMaxVolumeFunc(filterName string) func(node *v1.Node) int {
return defaultMaxGCEPDVolumes
case azureDiskVolumeFilterType:
return defaultMaxAzureDiskVolumes
case cinderVolumeFilterType:
return volumeutil.DefaultMaxCinderVolumes
default:
return -1
}


@@ -55,8 +55,6 @@ func isCSIMigrationOn(csiNode *storagev1.CSINode, pluginName string) bool {
if !utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationAzureDisk) {
return false
}
case csilibplugins.CinderInTreePluginName:
return true
case csilibplugins.RBDVolumePluginName:
if !utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationRBD) {
return false


@@ -70,7 +70,6 @@ func NewInTreeRegistry() runtime.Registry {
nodevolumelimits.EBSName: runtime.FactoryAdapter(fts, nodevolumelimits.NewEBS),
nodevolumelimits.GCEPDName: runtime.FactoryAdapter(fts, nodevolumelimits.NewGCEPD),
nodevolumelimits.AzureDiskName: runtime.FactoryAdapter(fts, nodevolumelimits.NewAzureDisk),
nodevolumelimits.CinderName: runtime.FactoryAdapter(fts, nodevolumelimits.NewCinder),
interpodaffinity.Name: interpodaffinity.New,
queuesort.Name: queuesort.New,
defaultbinder.Name: defaultbinder.New,


@@ -1007,8 +1007,6 @@ func isCSIMigrationOnForPlugin(pluginName string) bool {
return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationGCE)
case csiplugins.AzureDiskInTreePluginName:
return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationAzureDisk)
case csiplugins.CinderInTreePluginName:
return true
case csiplugins.PortworxVolumePluginName:
return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationPortworx)
case csiplugins.RBDVolumePluginName:


@@ -1,14 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- jsafrane
- anguslees
- dims
reviewers:
- anguslees
- saad-ali
- jsafrane
- jingxu97
- msau42
emeritus_approvers:
- FengyunPan2


@@ -1,434 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cinder
import (
"context"
"fmt"
"os"
"path"
"strings"
"time"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/klog/v2"
"k8s.io/mount-utils"
"k8s.io/kubernetes/pkg/volume"
volumeutil "k8s.io/kubernetes/pkg/volume/util"
)
type cinderDiskAttacher struct {
host volume.VolumeHost
cinderProvider BlockStorageProvider
}
var _ volume.Attacher = &cinderDiskAttacher{}
var _ volume.DeviceMounter = &cinderDiskAttacher{}
var _ volume.AttachableVolumePlugin = &cinderPlugin{}
var _ volume.DeviceMountableVolumePlugin = &cinderPlugin{}
const (
probeVolumeInitDelay = 1 * time.Second
probeVolumeFactor = 2.0
operationFinishInitDelay = 1 * time.Second
operationFinishFactor = 1.1
operationFinishSteps = 10
diskAttachInitDelay = 1 * time.Second
diskAttachFactor = 1.2
diskAttachSteps = 15
diskDetachInitDelay = 1 * time.Second
diskDetachFactor = 1.2
diskDetachSteps = 13
)
func (plugin *cinderPlugin) NewAttacher() (volume.Attacher, error) {
cinder, err := plugin.getCloudProvider()
if err != nil {
return nil, err
}
return &cinderDiskAttacher{
host: plugin.host,
cinderProvider: cinder,
}, nil
}
func (plugin *cinderPlugin) NewDeviceMounter() (volume.DeviceMounter, error) {
return plugin.NewAttacher()
}
func (plugin *cinderPlugin) GetDeviceMountRefs(deviceMountPath string) ([]string, error) {
mounter := plugin.host.GetMounter(plugin.GetPluginName())
return mounter.GetMountRefs(deviceMountPath)
}
func (attacher *cinderDiskAttacher) waitOperationFinished(volumeID string) error {
backoff := wait.Backoff{
Duration: operationFinishInitDelay,
Factor: operationFinishFactor,
Steps: operationFinishSteps,
}
var volumeStatus string
err := wait.ExponentialBackoff(backoff, func() (bool, error) {
var pending bool
var err error
pending, volumeStatus, err = attacher.cinderProvider.OperationPending(volumeID)
if err != nil {
return false, err
}
return !pending, nil
})
if err == wait.ErrWaitTimeout {
err = fmt.Errorf("volume %q is %s, can't finish within the alloted time", volumeID, volumeStatus)
}
return err
}
func (attacher *cinderDiskAttacher) waitDiskAttached(instanceID, volumeID string) error {
backoff := wait.Backoff{
Duration: diskAttachInitDelay,
Factor: diskAttachFactor,
Steps: diskAttachSteps,
}
err := wait.ExponentialBackoff(backoff, func() (bool, error) {
attached, err := attacher.cinderProvider.DiskIsAttached(instanceID, volumeID)
if err != nil {
return false, err
}
return attached, nil
})
if err == wait.ErrWaitTimeout {
err = fmt.Errorf("volume %q failed to be attached within the alloted time", volumeID)
}
return err
}
func (attacher *cinderDiskAttacher) Attach(spec *volume.Spec, nodeName types.NodeName) (string, error) {
volumeID, _, _, err := getVolumeInfo(spec)
if err != nil {
return "", err
}
instanceID, err := attacher.nodeInstanceID(nodeName)
if err != nil {
return "", err
}
if err := attacher.waitOperationFinished(volumeID); err != nil {
return "", err
}
attached, err := attacher.cinderProvider.DiskIsAttached(instanceID, volumeID)
if err != nil {
// Log error and continue with attach
klog.Warningf(
"Error checking if volume (%q) is already attached to current instance (%q). Will continue and try attach anyway. err=%v",
volumeID, instanceID, err)
}
if err == nil && attached {
// Volume is already attached to instance.
klog.Infof("Attach operation is successful. volume %q is already attached to instance %q.", volumeID, instanceID)
} else {
_, err = attacher.cinderProvider.AttachDisk(instanceID, volumeID)
if err == nil {
if err = attacher.waitDiskAttached(instanceID, volumeID); err != nil {
klog.Errorf("Error waiting for volume %q to be attached from node %q: %v", volumeID, nodeName, err)
return "", err
}
klog.Infof("Attach operation successful: volume %q attached to instance %q.", volumeID, instanceID)
} else {
klog.Infof("Attach volume %q to instance %q failed with: %v", volumeID, instanceID, err)
return "", err
}
}
devicePath, err := attacher.cinderProvider.GetAttachmentDiskPath(instanceID, volumeID)
if err != nil {
klog.Infof("Can not get device path of volume %q which be attached to instance %q, failed with: %v", volumeID, instanceID, err)
return "", err
}
return devicePath, nil
}
func (attacher *cinderDiskAttacher) VolumesAreAttached(specs []*volume.Spec, nodeName types.NodeName) (map[*volume.Spec]bool, error) {
volumesAttachedCheck := make(map[*volume.Spec]bool)
volumeSpecMap := make(map[string]*volume.Spec)
volumeIDList := []string{}
for _, spec := range specs {
volumeID, _, _, err := getVolumeInfo(spec)
if err != nil {
klog.Errorf("Error getting volume (%q) source : %v", spec.Name(), err)
continue
}
volumeIDList = append(volumeIDList, volumeID)
volumesAttachedCheck[spec] = true
volumeSpecMap[volumeID] = spec
}
attachedResult, err := attacher.cinderProvider.DisksAreAttachedByName(nodeName, volumeIDList)
if err != nil {
// Log error and continue with attach
klog.Errorf(
"Error checking if Volumes (%v) are already attached to current node (%q). Will continue and try attach anyway. err=%v",
volumeIDList, nodeName, err)
return volumesAttachedCheck, err
}
for volumeID, attached := range attachedResult {
if !attached {
spec := volumeSpecMap[volumeID]
volumesAttachedCheck[spec] = false
klog.V(2).Infof("VolumesAreAttached: check volume %q (specName: %q) is no longer attached", volumeID, spec.Name())
}
}
return volumesAttachedCheck, nil
}
func (attacher *cinderDiskAttacher) WaitForAttach(spec *volume.Spec, devicePath string, _ *v1.Pod, timeout time.Duration) (string, error) {
// NOTE: devicePath is the path as reported by Cinder, which may be incorrect and should not be used. See Issue #33128
volumeID, _, _, err := getVolumeInfo(spec)
if err != nil {
return "", err
}
if devicePath == "" {
return "", fmt.Errorf("WaitForAttach failed for Cinder disk %q: devicePath is empty", volumeID)
}
ticker := time.NewTicker(probeVolumeInitDelay)
defer ticker.Stop()
timer := time.NewTimer(timeout)
defer timer.Stop()
duration := probeVolumeInitDelay
for {
select {
case <-ticker.C:
klog.V(5).Infof("Checking Cinder disk %q is attached.", volumeID)
probeAttachedVolume()
if !attacher.cinderProvider.ShouldTrustDevicePath() {
// Using the Cinder volume ID, find the real device path (See Issue #33128)
devicePath = attacher.cinderProvider.GetDevicePath(volumeID)
}
exists, err := mount.PathExists(devicePath)
if exists && err == nil {
klog.Infof("Successfully found attached Cinder disk %q at %v.", volumeID, devicePath)
return devicePath, nil
}
// Log an error, and continue checking periodically
klog.Errorf("Error: could not find attached Cinder disk %q (path: %q): %v", volumeID, devicePath, err)
// Using exponential backoff instead of linear
ticker.Stop()
duration = time.Duration(float64(duration) * probeVolumeFactor)
ticker = time.NewTicker(duration)
case <-timer.C:
return "", fmt.Errorf("could not find attached Cinder disk %q. Timeout waiting for mount paths to be created", volumeID)
}
}
}
func (attacher *cinderDiskAttacher) GetDeviceMountPath(
spec *volume.Spec) (string, error) {
volumeID, _, _, err := getVolumeInfo(spec)
if err != nil {
return "", err
}
return makeGlobalPDName(attacher.host, volumeID), nil
}
// FIXME: this method can be further pruned.
func (attacher *cinderDiskAttacher) MountDevice(spec *volume.Spec, devicePath string, deviceMountPath string, _ volume.DeviceMounterArgs) error {
mounter := attacher.host.GetMounter(cinderVolumePluginName)
notMnt, err := mounter.IsLikelyNotMountPoint(deviceMountPath)
if err != nil {
if os.IsNotExist(err) {
if err := os.MkdirAll(deviceMountPath, 0750); err != nil {
return err
}
notMnt = true
} else {
return err
}
}
_, volumeFSType, readOnly, err := getVolumeInfo(spec)
if err != nil {
return err
}
options := []string{}
if readOnly {
options = append(options, "ro")
}
if notMnt {
diskMounter := volumeutil.NewSafeFormatAndMountFromHost(cinderVolumePluginName, attacher.host)
mountOptions := volumeutil.MountOptionFromSpec(spec, options...)
err = diskMounter.FormatAndMount(devicePath, deviceMountPath, volumeFSType, mountOptions)
if err != nil {
os.Remove(deviceMountPath)
return err
}
}
return nil
}
type cinderDiskDetacher struct {
mounter mount.Interface
cinderProvider BlockStorageProvider
}
var _ volume.Detacher = &cinderDiskDetacher{}
var _ volume.DeviceUnmounter = &cinderDiskDetacher{}
func (plugin *cinderPlugin) NewDetacher() (volume.Detacher, error) {
cinder, err := plugin.getCloudProvider()
if err != nil {
return nil, err
}
return &cinderDiskDetacher{
mounter: plugin.host.GetMounter(plugin.GetPluginName()),
cinderProvider: cinder,
}, nil
}
func (plugin *cinderPlugin) NewDeviceUnmounter() (volume.DeviceUnmounter, error) {
return plugin.NewDetacher()
}
func (detacher *cinderDiskDetacher) waitOperationFinished(volumeID string) error {
backoff := wait.Backoff{
Duration: operationFinishInitDelay,
Factor: operationFinishFactor,
Steps: operationFinishSteps,
}
var volumeStatus string
err := wait.ExponentialBackoff(backoff, func() (bool, error) {
var pending bool
var err error
pending, volumeStatus, err = detacher.cinderProvider.OperationPending(volumeID)
if err != nil {
return false, err
}
return !pending, nil
})
if err == wait.ErrWaitTimeout {
err = fmt.Errorf("volume %q is %s, can't finish within the alloted time", volumeID, volumeStatus)
}
return err
}
func (detacher *cinderDiskDetacher) waitDiskDetached(instanceID, volumeID string) error {
backoff := wait.Backoff{
Duration: diskDetachInitDelay,
Factor: diskDetachFactor,
Steps: diskDetachSteps,
}
err := wait.ExponentialBackoff(backoff, func() (bool, error) {
attached, err := detacher.cinderProvider.DiskIsAttached(instanceID, volumeID)
if err != nil {
return false, err
}
return !attached, nil
})
if err == wait.ErrWaitTimeout {
err = fmt.Errorf("volume %q failed to detach within the alloted time", volumeID)
}
return err
}
func (detacher *cinderDiskDetacher) Detach(volumeName string, nodeName types.NodeName) error {
volumeID := path.Base(volumeName)
if err := detacher.waitOperationFinished(volumeID); err != nil {
return err
}
attached, instanceID, err := detacher.cinderProvider.DiskIsAttachedByName(nodeName, volumeID)
if err != nil {
// Log error and continue with detach
klog.Errorf(
"Error checking if volume (%q) is already attached to current node (%q). Will continue and try detach anyway. err=%v",
volumeID, nodeName, err)
}
if err == nil && !attached {
// Volume is already detached from node.
klog.Infof("detach operation was successful. volume %q is already detached from node %q.", volumeID, nodeName)
return nil
}
if err = detacher.cinderProvider.DetachDisk(instanceID, volumeID); err != nil {
klog.Errorf("Error detaching volume %q from node %q: %v", volumeID, nodeName, err)
return err
}
if err = detacher.waitDiskDetached(instanceID, volumeID); err != nil {
klog.Errorf("Error waiting for volume %q to detach from node %q: %v", volumeID, nodeName, err)
return err
}
klog.Infof("detached volume %q from node %q", volumeID, nodeName)
return nil
}
func (detacher *cinderDiskDetacher) UnmountDevice(deviceMountPath string) error {
return mount.CleanupMountPoint(deviceMountPath, detacher.mounter, false)
}
func (plugin *cinderPlugin) CanAttach(spec *volume.Spec) (bool, error) {
return true, nil
}
func (plugin *cinderPlugin) CanDeviceMount(spec *volume.Spec) (bool, error) {
return true, nil
}
func (attacher *cinderDiskAttacher) nodeInstanceID(nodeName types.NodeName) (string, error) {
instances, res := attacher.cinderProvider.Instances()
if !res {
return "", fmt.Errorf("failed to list openstack instances")
}
instanceID, err := instances.InstanceID(context.TODO(), nodeName)
if err != nil {
return "", err
}
if ind := strings.LastIndex(instanceID, "/"); ind >= 0 {
instanceID = instanceID[(ind + 1):]
}
return instanceID, nil
}


@@ -1,758 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cinder
import (
"context"
"errors"
"os"
"path/filepath"
"reflect"
"testing"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
cloudprovider "k8s.io/cloud-provider"
"k8s.io/kubernetes/pkg/volume"
volumetest "k8s.io/kubernetes/pkg/volume/testing"
"fmt"
"sort"
"k8s.io/apimachinery/pkg/types"
"k8s.io/klog/v2"
)
const (
VolumeStatusPending = "pending"
VolumeStatusDone = "done"
)
var attachStatus = "Attach"
var detachStatus = "Detach"
func TestGetDeviceName_Volume(t *testing.T) {
plugin := newPlugin(t)
name := "my-cinder-volume"
spec := createVolSpec(name, false)
deviceName, err := plugin.GetVolumeName(spec)
if err != nil {
t.Errorf("GetDeviceName error: %v", err)
}
if deviceName != name {
t.Errorf("GetDeviceName error: expected %s, got %s", name, deviceName)
}
}
func TestGetDeviceName_PersistentVolume(t *testing.T) {
plugin := newPlugin(t)
name := "my-cinder-pv"
spec := createPVSpec(name, true)
deviceName, err := plugin.GetVolumeName(spec)
if err != nil {
t.Errorf("GetDeviceName error: %v", err)
}
if deviceName != name {
t.Errorf("GetDeviceName error: expected %s, got %s", name, deviceName)
}
}
func TestGetDeviceMountPath(t *testing.T) {
name := "cinder-volume-id"
spec := createVolSpec(name, false)
rootDir := "/var/lib/kubelet/"
host := volumetest.NewFakeVolumeHost(t, rootDir, nil, nil)
attacher := &cinderDiskAttacher{
host: host,
}
// test the path
path, err := attacher.GetDeviceMountPath(spec)
if err != nil {
t.Errorf("Get device mount path error")
}
expectedPath := filepath.Join(rootDir, "plugins/kubernetes.io/cinder/mounts", name)
if path != expectedPath {
t.Errorf("Device mount path error: expected %s, got %s ", expectedPath, path)
}
}
// One testcase for TestAttachDetach table test below
type testcase struct {
name string
// For the fake cinder provider:
attach attachCall
detach detachCall
operationPending operationPendingCall
diskIsAttached diskIsAttachedCall
disksAreAttached disksAreAttachedCall
diskPath diskPathCall
t *testing.T
attachOrDetach *string
instanceID string
// Actual test to run
test func(test *testcase) (string, error)
// Expected return of the test
expectedResult string
expectedError error
}
func TestAttachDetach(t *testing.T) {
volumeID := "disk"
instanceID := "instance"
pending := VolumeStatusPending
done := VolumeStatusDone
nodeName := types.NodeName("nodeName")
readOnly := false
spec := createVolSpec(volumeID, readOnly)
attachError := errors.New("fake attach error")
detachError := errors.New("fake detach error")
diskCheckError := errors.New("fake DiskIsAttached error")
diskPathError := errors.New("fake GetAttachmentDiskPath error")
disksCheckError := errors.New("fake DisksAreAttached error")
operationFinishTimeout := errors.New("fake waitOperationFinished error")
tests := []testcase{
// Successful Attach call
{
name: "Attach_Positive",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, false, done, nil},
diskIsAttached: diskIsAttachedCall{instanceID, nodeName, volumeID, false, nil},
attach: attachCall{instanceID, volumeID, "", nil},
diskPath: diskPathCall{instanceID, volumeID, "/dev/sda", nil},
test: func(testcase *testcase) (string, error) {
attacher := newAttacher(testcase)
return attacher.Attach(spec, nodeName)
},
expectedResult: "/dev/sda",
},
// Disk is already attached
{
name: "Attach_Positive_AlreadyAttached",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, false, done, nil},
diskIsAttached: diskIsAttachedCall{instanceID, nodeName, volumeID, true, nil},
diskPath: diskPathCall{instanceID, volumeID, "/dev/sda", nil},
test: func(testcase *testcase) (string, error) {
attacher := newAttacher(testcase)
return attacher.Attach(spec, nodeName)
},
expectedResult: "/dev/sda",
},
// Disk is attaching
{
name: "Attach_is_attaching",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, true, pending, operationFinishTimeout},
test: func(testcase *testcase) (string, error) {
attacher := newAttacher(testcase)
return attacher.Attach(spec, nodeName)
},
expectedError: operationFinishTimeout,
},
// Attach call fails
{
name: "Attach_Negative",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, false, done, nil},
diskIsAttached: diskIsAttachedCall{instanceID, nodeName, volumeID, false, diskCheckError},
attach: attachCall{instanceID, volumeID, "/dev/sda", attachError},
test: func(testcase *testcase) (string, error) {
attacher := newAttacher(testcase)
return attacher.Attach(spec, nodeName)
},
expectedError: attachError,
},
// GetAttachmentDiskPath call fails
{
name:             "Attach_Negative_DiskPathFails",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, false, done, nil},
diskIsAttached: diskIsAttachedCall{instanceID, nodeName, volumeID, false, nil},
attach: attachCall{instanceID, volumeID, "", nil},
diskPath: diskPathCall{instanceID, volumeID, "", diskPathError},
test: func(testcase *testcase) (string, error) {
attacher := newAttacher(testcase)
return attacher.Attach(spec, nodeName)
},
expectedError: diskPathError,
},
// Successful VolumesAreAttached call, attached
{
name: "VolumesAreAttached_Positive",
instanceID: instanceID,
disksAreAttached: disksAreAttachedCall{instanceID, nodeName, []string{volumeID}, map[string]bool{volumeID: true}, nil},
test: func(testcase *testcase) (string, error) {
attacher := newAttacher(testcase)
attachments, err := attacher.VolumesAreAttached([]*volume.Spec{spec}, nodeName)
return serializeAttachments(attachments), err
},
expectedResult: serializeAttachments(map[*volume.Spec]bool{spec: true}),
},
// Successful VolumesAreAttached call, not attached
{
name: "VolumesAreAttached_Negative",
instanceID: instanceID,
disksAreAttached: disksAreAttachedCall{instanceID, nodeName, []string{volumeID}, map[string]bool{volumeID: false}, nil},
test: func(testcase *testcase) (string, error) {
attacher := newAttacher(testcase)
attachments, err := attacher.VolumesAreAttached([]*volume.Spec{spec}, nodeName)
return serializeAttachments(attachments), err
},
expectedResult: serializeAttachments(map[*volume.Spec]bool{spec: false}),
},
// Treat as attached when DisksAreAttached call fails
{
name: "VolumesAreAttached_CinderFailed",
instanceID: instanceID,
disksAreAttached: disksAreAttachedCall{instanceID, nodeName, []string{volumeID}, nil, disksCheckError},
test: func(testcase *testcase) (string, error) {
attacher := newAttacher(testcase)
attachments, err := attacher.VolumesAreAttached([]*volume.Spec{spec}, nodeName)
return serializeAttachments(attachments), err
},
expectedResult: serializeAttachments(map[*volume.Spec]bool{spec: true}),
expectedError: disksCheckError,
},
// Detach succeeds
{
name: "Detach_Positive",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, false, done, nil},
diskIsAttached: diskIsAttachedCall{instanceID, nodeName, volumeID, true, nil},
detach: detachCall{instanceID, volumeID, nil},
test: func(testcase *testcase) (string, error) {
detacher := newDetacher(testcase)
return "", detacher.Detach(volumeID, nodeName)
},
},
// Disk is already detached
{
name: "Detach_Positive_AlreadyDetached",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, false, done, nil},
diskIsAttached: diskIsAttachedCall{instanceID, nodeName, volumeID, false, nil},
test: func(testcase *testcase) (string, error) {
detacher := newDetacher(testcase)
return "", detacher.Detach(volumeID, nodeName)
},
},
// Detach succeeds when DiskIsAttached fails
{
name: "Detach_Positive_CheckFails",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, false, done, nil},
diskIsAttached: diskIsAttachedCall{instanceID, nodeName, volumeID, false, diskCheckError},
detach: detachCall{instanceID, volumeID, nil},
test: func(testcase *testcase) (string, error) {
detacher := newDetacher(testcase)
return "", detacher.Detach(volumeID, nodeName)
},
},
// Detach fails
{
name: "Detach_Negative",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, false, done, nil},
diskIsAttached: diskIsAttachedCall{instanceID, nodeName, volumeID, false, diskCheckError},
detach: detachCall{instanceID, volumeID, detachError},
test: func(testcase *testcase) (string, error) {
detacher := newDetacher(testcase)
return "", detacher.Detach(volumeID, nodeName)
},
expectedError: detachError,
},
// Disk is detaching
{
name: "Detach_Is_Detaching",
instanceID: instanceID,
operationPending: operationPendingCall{volumeID, true, pending, operationFinishTimeout},
test: func(testcase *testcase) (string, error) {
detacher := newDetacher(testcase)
return "", detacher.Detach(volumeID, nodeName)
},
expectedError: operationFinishTimeout,
},
}
for _, testcase := range tests {
testcase.t = t
attachOrDetach := ""
testcase.attachOrDetach = &attachOrDetach
result, err := testcase.test(&testcase)
if err != testcase.expectedError {
t.Errorf("%s failed: expected err=%q, got %q", testcase.name, testcase.expectedError, err)
}
if result != testcase.expectedResult {
t.Errorf("%s failed: expected result=%q, got %q", testcase.name, testcase.expectedResult, result)
}
}
}
type volumeAttachmentFlag struct {
volumeID string
attached bool
}
type volumeAttachmentFlags []volumeAttachmentFlag
func (va volumeAttachmentFlags) Len() int {
return len(va)
}
func (va volumeAttachmentFlags) Swap(i, j int) {
va[i], va[j] = va[j], va[i]
}
func (va volumeAttachmentFlags) Less(i, j int) bool {
if va[i].volumeID < va[j].volumeID {
return true
}
if va[i].volumeID > va[j].volumeID {
return false
}
return va[j].attached
}
func serializeAttachments(attachments map[*volume.Spec]bool) string {
var attachmentFlags volumeAttachmentFlags
for spec, attached := range attachments {
attachmentFlags = append(attachmentFlags, volumeAttachmentFlag{spec.Name(), attached})
}
sort.Sort(attachmentFlags)
return fmt.Sprint(attachmentFlags)
}
// newPlugin creates a new cinderPlugin with a fake cloud; NewAttacher
// and NewDetacher won't work.
func newPlugin(t *testing.T) *cinderPlugin {
host := volumetest.NewFakeVolumeHost(t, os.TempDir(), nil, nil)
plugins := ProbeVolumePlugins()
plugin := plugins[0]
plugin.Init(host)
return plugin.(*cinderPlugin)
}
func newAttacher(testcase *testcase) *cinderDiskAttacher {
return &cinderDiskAttacher{
host: nil,
cinderProvider: testcase,
}
}
func newDetacher(testcase *testcase) *cinderDiskDetacher {
return &cinderDiskDetacher{
cinderProvider: testcase,
}
}
func createVolSpec(name string, readOnly bool) *volume.Spec {
return &volume.Spec{
Volume: &v1.Volume{
Name: name,
VolumeSource: v1.VolumeSource{
Cinder: &v1.CinderVolumeSource{
VolumeID: name,
ReadOnly: readOnly,
},
},
},
}
}
func createPVSpec(name string, readOnly bool) *volume.Spec {
return &volume.Spec{
PersistentVolume: &v1.PersistentVolume{
Spec: v1.PersistentVolumeSpec{
PersistentVolumeSource: v1.PersistentVolumeSource{
Cinder: &v1.CinderPersistentVolumeSource{
VolumeID: name,
ReadOnly: readOnly,
},
},
},
},
}
}
// Fake cinder provider implementation
type attachCall struct {
instanceID string
volumeID string
retDeviceName string
ret error
}
type detachCall struct {
instanceID string
devicePath string
ret error
}
type operationPendingCall struct {
diskName string
pending bool
volumeStatus string
ret error
}
type diskIsAttachedCall struct {
instanceID string
nodeName types.NodeName
volumeID string
isAttached bool
ret error
}
type diskPathCall struct {
instanceID string
volumeID string
retPath string
ret error
}
type disksAreAttachedCall struct {
instanceID string
nodeName types.NodeName
volumeIDs []string
areAttached map[string]bool
ret error
}
func (testcase *testcase) AttachDisk(instanceID, volumeID string) (string, error) {
expected := &testcase.attach
if expected.volumeID == "" && expected.instanceID == "" {
// testcase.attach looks uninitialized, test did not expect to call
// AttachDisk
testcase.t.Errorf("unexpected AttachDisk call")
return "", errors.New("unexpected AttachDisk call")
}
if expected.volumeID != volumeID {
testcase.t.Errorf("unexpected AttachDisk call: expected volumeID %s, got %s", expected.volumeID, volumeID)
return "", errors.New("unexpected AttachDisk call: wrong volumeID")
}
if expected.instanceID != instanceID {
testcase.t.Errorf("unexpected AttachDisk call: expected instanceID %s, got %s", expected.instanceID, instanceID)
return "", errors.New("unexpected AttachDisk call: wrong instanceID")
}
klog.V(4).Infof("AttachDisk call: %s, %s, returning %q, %v", volumeID, instanceID, expected.retDeviceName, expected.ret)
testcase.attachOrDetach = &attachStatus
return expected.retDeviceName, expected.ret
}
func (testcase *testcase) DetachDisk(instanceID, volumeID string) error {
expected := &testcase.detach
if expected.devicePath == "" && expected.instanceID == "" {
// testcase.detach looks uninitialized, test did not expect to call
// DetachDisk
testcase.t.Errorf("unexpected DetachDisk call")
return errors.New("unexpected DetachDisk call")
}
if expected.devicePath != volumeID {
testcase.t.Errorf("unexpected DetachDisk call: expected volumeID %s, got %s", expected.devicePath, volumeID)
return errors.New("unexpected DetachDisk call: wrong volumeID")
}
if expected.instanceID != instanceID {
testcase.t.Errorf("unexpected DetachDisk call: expected instanceID %s, got %s", expected.instanceID, instanceID)
return errors.New("unexpected DetachDisk call: wrong instanceID")
}
klog.V(4).Infof("DetachDisk call: %s, %s, returning %v", volumeID, instanceID, expected.ret)
testcase.attachOrDetach = &detachStatus
return expected.ret
}
func (testcase *testcase) OperationPending(diskName string) (bool, string, error) {
expected := &testcase.operationPending
if expected.volumeStatus == VolumeStatusPending {
klog.V(4).Infof("OperationPending call: %s, returning %v, %v, %v", diskName, expected.pending, expected.volumeStatus, expected.ret)
return true, expected.volumeStatus, expected.ret
}
klog.V(4).Infof("OperationPending call: %s, returning %v, %v, %v", diskName, expected.pending, expected.volumeStatus, expected.ret)
return false, expected.volumeStatus, expected.ret
}
func (testcase *testcase) DiskIsAttached(instanceID, volumeID string) (bool, error) {
expected := &testcase.diskIsAttached
// If the testcase already called DetachDisk*, return false
if *testcase.attachOrDetach == detachStatus {
return false, nil
}
// If the testcase already called AttachDisk*, return true
if *testcase.attachOrDetach == attachStatus {
return true, nil
}
if expected.volumeID == "" && expected.instanceID == "" {
// testcase.diskIsAttached looks uninitialized, test did not expect to
// call DiskIsAttached
testcase.t.Errorf("unexpected DiskIsAttached call")
return false, errors.New("unexpected DiskIsAttached call")
}
if expected.volumeID != volumeID {
testcase.t.Errorf("unexpected DiskIsAttached call: expected volumeID %s, got %s", expected.volumeID, volumeID)
return false, errors.New("unexpected DiskIsAttached call: wrong volumeID")
}
if expected.instanceID != instanceID {
testcase.t.Errorf("unexpected DiskIsAttached call: expected instanceID %s, got %s", expected.instanceID, instanceID)
return false, errors.New("unexpected DiskIsAttached call: wrong instanceID")
}
klog.V(4).Infof("DiskIsAttached call: %s, %s, returning %v, %v", volumeID, instanceID, expected.isAttached, expected.ret)
return expected.isAttached, expected.ret
}
func (testcase *testcase) GetAttachmentDiskPath(instanceID, volumeID string) (string, error) {
expected := &testcase.diskPath
if expected.volumeID == "" && expected.instanceID == "" {
// testcase.diskPath looks uninitialized, test did not expect to
// call GetAttachmentDiskPath
testcase.t.Errorf("unexpected GetAttachmentDiskPath call")
return "", errors.New("unexpected GetAttachmentDiskPath call")
}
if expected.volumeID != volumeID {
testcase.t.Errorf("unexpected GetAttachmentDiskPath call: expected volumeID %s, got %s", expected.volumeID, volumeID)
return "", errors.New("unexpected GetAttachmentDiskPath call: wrong volumeID")
}
if expected.instanceID != instanceID {
testcase.t.Errorf("unexpected GetAttachmentDiskPath call: expected instanceID %s, got %s", expected.instanceID, instanceID)
return "", errors.New("unexpected GetAttachmentDiskPath call: wrong instanceID")
}
klog.V(4).Infof("GetAttachmentDiskPath call: %s, %s, returning %v, %v", volumeID, instanceID, expected.retPath, expected.ret)
return expected.retPath, expected.ret
}
func (testcase *testcase) ShouldTrustDevicePath() bool {
return true
}
func (testcase *testcase) DiskIsAttachedByName(nodeName types.NodeName, volumeID string) (bool, string, error) {
expected := &testcase.diskIsAttached
instanceID := expected.instanceID
// If the testcase already called DetachDisk*, return false
if *testcase.attachOrDetach == detachStatus {
return false, instanceID, nil
}
// If the testcase already called AttachDisk*, return true
if *testcase.attachOrDetach == attachStatus {
return true, instanceID, nil
}
if expected.nodeName != nodeName {
testcase.t.Errorf("unexpected DiskIsAttachedByName call: expected nodename %s, got %s", expected.nodeName, nodeName)
return false, instanceID, errors.New("unexpected DiskIsAttachedByName call: wrong nodename")
}
if expected.volumeID == "" && expected.instanceID == "" {
// testcase.diskIsAttached looks uninitialized, test did not expect to
// call DiskIsAttached
testcase.t.Errorf("unexpected DiskIsAttachedByName call")
return false, instanceID, errors.New("unexpected DiskIsAttachedByName call")
}
if expected.volumeID != volumeID {
testcase.t.Errorf("unexpected DiskIsAttachedByName call: expected volumeID %s, got %s", expected.volumeID, volumeID)
return false, instanceID, errors.New("unexpected DiskIsAttachedByName call: wrong volumeID")
}
if expected.instanceID != instanceID {
testcase.t.Errorf("unexpected DiskIsAttachedByName call: expected instanceID %s, got %s", expected.instanceID, instanceID)
return false, instanceID, errors.New("unexpected DiskIsAttachedByName call: wrong instanceID")
}
klog.V(4).Infof("DiskIsAttachedByName call: %s, %s, returning %v, %v, %v", volumeID, nodeName, expected.isAttached, expected.instanceID, expected.ret)
return expected.isAttached, expected.instanceID, expected.ret
}
func (testcase *testcase) CreateVolume(name string, size int, vtype, availability string, tags *map[string]string) (string, string, string, bool, error) {
return "", "", "", false, errors.New("not implemented")
}
func (testcase *testcase) GetDevicePath(volumeID string) string {
return ""
}
func (testcase *testcase) InstanceID() (string, error) {
return testcase.instanceID, nil
}
func (testcase *testcase) ExpandVolume(volumeID string, oldSize resource.Quantity, newSize resource.Quantity) (resource.Quantity, error) {
return resource.Quantity{}, nil
}
func (testcase *testcase) DeleteVolume(volumeID string) error {
return errors.New("not implemented")
}
func (testcase *testcase) GetAutoLabelsForPD(name string) (map[string]string, error) {
return map[string]string{}, errors.New("not implemented")
}
func (testcase *testcase) Instances() (cloudprovider.Instances, bool) {
return &instances{testcase.instanceID}, true
}
func (testcase *testcase) InstancesV2() (cloudprovider.InstancesV2, bool) {
return nil, false
}
func (testcase *testcase) DisksAreAttached(instanceID string, volumeIDs []string) (map[string]bool, error) {
expected := &testcase.disksAreAttached
areAttached := make(map[string]bool)
if len(expected.volumeIDs) == 0 && expected.instanceID == "" {
// testcase.volumeIDs looks uninitialized, test did not expect to call DisksAreAttached
testcase.t.Errorf("Unexpected DisksAreAttached call!")
return areAttached, errors.New("unexpected DisksAreAttached call")
}
if !reflect.DeepEqual(expected.volumeIDs, volumeIDs) {
testcase.t.Errorf("Unexpected DisksAreAttached call: expected volumeIDs %v, got %v", expected.volumeIDs, volumeIDs)
return areAttached, errors.New("unexpected DisksAreAttached call: wrong volumeID")
}
if expected.instanceID != instanceID {
testcase.t.Errorf("Unexpected DisksAreAttached call: expected instanceID %s, got %s", expected.instanceID, instanceID)
return areAttached, errors.New("unexpected DisksAreAttached call: wrong instanceID")
}
klog.V(4).Infof("DisksAreAttached call: %v, %s, returning %v, %v", volumeIDs, instanceID, expected.areAttached, expected.ret)
return expected.areAttached, expected.ret
}
func (testcase *testcase) DisksAreAttachedByName(nodeName types.NodeName, volumeIDs []string) (map[string]bool, error) {
expected := &testcase.disksAreAttached
areAttached := make(map[string]bool)
instanceID := expected.instanceID
if expected.nodeName != nodeName {
testcase.t.Errorf("Unexpected DisksAreAttachedByName call: expected nodeName %s, got %s", expected.nodeName, nodeName)
return areAttached, errors.New("unexpected DisksAreAttachedByName call: wrong nodename")
}
if len(expected.volumeIDs) == 0 && expected.instanceID == "" {
// testcase.disksAreAttached looks uninitialized; the test did not expect DisksAreAttachedByName to be called
testcase.t.Errorf("Unexpected DisksAreAttachedByName call!")
return areAttached, errors.New("unexpected DisksAreAttachedByName call")
}
if !reflect.DeepEqual(expected.volumeIDs, volumeIDs) {
testcase.t.Errorf("Unexpected DisksAreAttachedByName call: expected volumeIDs %v, got %v", expected.volumeIDs, volumeIDs)
return areAttached, errors.New("unexpected DisksAreAttachedByName call: wrong volumeID")
}
if expected.instanceID != instanceID {
testcase.t.Errorf("Unexpected DisksAreAttachedByName call: expected instanceID %s, got %s", expected.instanceID, instanceID)
return areAttached, errors.New("unexpected DisksAreAttachedByName call: wrong instanceID")
}
klog.V(4).Infof("DisksAreAttachedByName call: %v, %s, returning %v, %v", volumeIDs, nodeName, expected.areAttached, expected.ret)
return expected.areAttached, expected.ret
}
// Implementation of fake cloudprovider.Instances
type instances struct {
instanceID string
}
func (instances *instances) NodeAddresses(ctx context.Context, name types.NodeName) ([]v1.NodeAddress, error) {
return []v1.NodeAddress{}, errors.New("not implemented")
}
func (instances *instances) NodeAddressesByProviderID(ctx context.Context, providerID string) ([]v1.NodeAddress, error) {
return []v1.NodeAddress{}, errors.New("not implemented")
}
func (instances *instances) InstanceID(ctx context.Context, name types.NodeName) (string, error) {
return instances.instanceID, nil
}
func (instances *instances) InstanceType(ctx context.Context, name types.NodeName) (string, error) {
return "", errors.New("not implemented")
}
func (instances *instances) InstanceTypeByProviderID(ctx context.Context, providerID string) (string, error) {
return "", errors.New("not implemented")
}
func (instances *instances) InstanceExistsByProviderID(ctx context.Context, providerID string) (bool, error) {
return false, errors.New("unimplemented")
}
func (instances *instances) InstanceShutdownByProviderID(ctx context.Context, providerID string) (bool, error) {
return false, errors.New("unimplemented")
}
func (instances *instances) InstanceMetadataByProviderID(ctx context.Context, providerID string) (*cloudprovider.InstanceMetadata, error) {
return nil, errors.New("unimplemented")
}
func (instances *instances) List(filter string) ([]types.NodeName, error) {
return []types.NodeName{}, errors.New("not implemented")
}
func (instances *instances) AddSSHKeyToAllInstances(ctx context.Context, user string, keyData []byte) error {
return cloudprovider.NotImplemented
}
func (instances *instances) CurrentNodeName(ctx context.Context, hostname string) (types.NodeName, error) {
return "", errors.New("not implemented")
}

View File

@ -1,635 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cinder
import (
"errors"
"fmt"
"os"
"path"
"path/filepath"
"k8s.io/klog/v2"
"k8s.io/mount-utils"
"k8s.io/utils/keymutex"
utilstrings "k8s.io/utils/strings"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
cloudprovider "k8s.io/cloud-provider"
"k8s.io/kubernetes/pkg/volume"
"k8s.io/kubernetes/pkg/volume/util"
"k8s.io/legacy-cloud-providers/openstack"
)
const (
// DefaultCloudConfigPath is the default path for cloud configuration
DefaultCloudConfigPath = "/etc/kubernetes/cloud-config"
)
// ProbeVolumePlugins is the primary entrypoint for volume plugins.
func ProbeVolumePlugins() []volume.VolumePlugin {
return []volume.VolumePlugin{&cinderPlugin{}}
}
// BlockStorageProvider is the interface for accessing cinder functionality.
type BlockStorageProvider interface {
AttachDisk(instanceID, volumeID string) (string, error)
DetachDisk(instanceID, volumeID string) error
DeleteVolume(volumeID string) error
CreateVolume(name string, size int, vtype, availability string, tags *map[string]string) (string, string, string, bool, error)
GetDevicePath(volumeID string) string
InstanceID() (string, error)
GetAttachmentDiskPath(instanceID, volumeID string) (string, error)
OperationPending(diskName string) (bool, string, error)
DiskIsAttached(instanceID, volumeID string) (bool, error)
DiskIsAttachedByName(nodeName types.NodeName, volumeID string) (bool, string, error)
DisksAreAttachedByName(nodeName types.NodeName, volumeIDs []string) (map[string]bool, error)
ShouldTrustDevicePath() bool
Instances() (cloudprovider.Instances, bool)
ExpandVolume(volumeID string, oldSize resource.Quantity, newSize resource.Quantity) (resource.Quantity, error)
}
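// partialFake below is a hypothetical sketch, not part of the original source:
// embedding the interface lets a partial fake satisfy BlockStorageProvider at
// compile time while overriding only the methods a test actually exercises.
type partialFake struct{ BlockStorageProvider }

var _ BlockStorageProvider = partialFake{}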
type cinderPlugin struct {
host volume.VolumeHost
// Guarding SetUp and TearDown operations
volumeLocks keymutex.KeyMutex
}
var _ volume.VolumePlugin = &cinderPlugin{}
var _ volume.PersistentVolumePlugin = &cinderPlugin{}
var _ volume.DeletableVolumePlugin = &cinderPlugin{}
var _ volume.ProvisionableVolumePlugin = &cinderPlugin{}
const (
cinderVolumePluginName = "kubernetes.io/cinder"
)
func getPath(uid types.UID, volName string, host volume.VolumeHost) string {
return host.GetPodVolumeDir(uid, utilstrings.EscapeQualifiedName(cinderVolumePluginName), volName)
}
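// Example (illustrative values): for pod UID "poduid" and volume "vol1" this
// resolves to <kubelet-root>/pods/poduid/volumes/kubernetes.io~cinder/vol1,
// the same layout the plugin tests assert.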
func (plugin *cinderPlugin) Init(host volume.VolumeHost) error {
plugin.host = host
plugin.volumeLocks = keymutex.NewHashed(0)
return nil
}
func (plugin *cinderPlugin) GetPluginName() string {
return cinderVolumePluginName
}
func (plugin *cinderPlugin) GetVolumeName(spec *volume.Spec) (string, error) {
volumeID, _, _, err := getVolumeInfo(spec)
if err != nil {
return "", err
}
return volumeID, nil
}
func (plugin *cinderPlugin) CanSupport(spec *volume.Spec) bool {
return (spec.Volume != nil && spec.Volume.Cinder != nil) || (spec.PersistentVolume != nil && spec.PersistentVolume.Spec.Cinder != nil)
}
func (plugin *cinderPlugin) RequiresRemount(spec *volume.Spec) bool {
return false
}
func (plugin *cinderPlugin) SupportsMountOption() bool {
return true
}
func (plugin *cinderPlugin) SupportsBulkVolumeVerification() bool {
return false
}
func (plugin *cinderPlugin) SupportsSELinuxContextMount(spec *volume.Spec) (bool, error) {
return false, nil
}
var _ volume.VolumePluginWithAttachLimits = &cinderPlugin{}
func (plugin *cinderPlugin) GetVolumeLimits() (map[string]int64, error) {
volumeLimits := map[string]int64{
util.CinderVolumeLimitKey: util.DefaultMaxCinderVolumes,
}
cloud := plugin.host.GetCloudProvider()
// If we cannot fetch the cloud provider we return an error, hoping an
// external CCM or the admin can set the limit. Returning default values
// from here would mean nobody could override them.
if cloud == nil {
return nil, fmt.Errorf("no cloudprovider present")
}
if cloud.ProviderName() != openstack.ProviderName {
return nil, fmt.Errorf("expected Openstack cloud, found %s", cloud.ProviderName())
}
openstackCloud, ok := cloud.(*openstack.OpenStack)
if ok && openstackCloud.NodeVolumeAttachLimit() > 0 {
volumeLimits[util.CinderVolumeLimitKey] = int64(openstackCloud.NodeVolumeAttachLimit())
}
return volumeLimits, nil
}
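// Sketch of a caller (hypothetical; attachablePlugin stands for any
// volume.VolumePluginWithAttachLimits wrapping this plugin):
//
//	limits, err := attachablePlugin.GetVolumeLimits()
//	if err == nil {
//		max := limits[util.CinderVolumeLimitKey] // 256 unless the cloud overrides it
//	}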
func (plugin *cinderPlugin) VolumeLimitKey(spec *volume.Spec) string {
return util.CinderVolumeLimitKey
}
func (plugin *cinderPlugin) GetAccessModes() []v1.PersistentVolumeAccessMode {
return []v1.PersistentVolumeAccessMode{
v1.ReadWriteOnce,
}
}
func (plugin *cinderPlugin) NewMounter(spec *volume.Spec, pod *v1.Pod, _ volume.VolumeOptions) (volume.Mounter, error) {
return plugin.newMounterInternal(spec, pod.UID, &DiskUtil{}, plugin.host.GetMounter(plugin.GetPluginName()))
}
func (plugin *cinderPlugin) newMounterInternal(spec *volume.Spec, podUID types.UID, manager cdManager, mounter mount.Interface) (volume.Mounter, error) {
pdName, fsType, readOnly, err := getVolumeInfo(spec)
if err != nil {
return nil, err
}
return &cinderVolumeMounter{
cinderVolume: &cinderVolume{
podUID: podUID,
volName: spec.Name(),
pdName: pdName,
mounter: mounter,
manager: manager,
plugin: plugin,
MetricsProvider: volume.NewMetricsStatFS(getPath(podUID, spec.Name(), plugin.host)),
},
fsType: fsType,
readOnly: readOnly,
blockDeviceMounter: util.NewSafeFormatAndMountFromHost(plugin.GetPluginName(), plugin.host),
mountOptions: util.MountOptionFromSpec(spec),
}, nil
}
func (plugin *cinderPlugin) NewUnmounter(volName string, podUID types.UID) (volume.Unmounter, error) {
return plugin.newUnmounterInternal(volName, podUID, &DiskUtil{}, plugin.host.GetMounter(plugin.GetPluginName()))
}
func (plugin *cinderPlugin) newUnmounterInternal(volName string, podUID types.UID, manager cdManager, mounter mount.Interface) (volume.Unmounter, error) {
return &cinderVolumeUnmounter{
&cinderVolume{
podUID: podUID,
volName: volName,
manager: manager,
mounter: mounter,
plugin: plugin,
MetricsProvider: volume.NewMetricsStatFS(getPath(podUID, volName, plugin.host)),
}}, nil
}
func (plugin *cinderPlugin) NewDeleter(spec *volume.Spec) (volume.Deleter, error) {
return plugin.newDeleterInternal(spec, &DiskUtil{})
}
func (plugin *cinderPlugin) newDeleterInternal(spec *volume.Spec, manager cdManager) (volume.Deleter, error) {
if spec.PersistentVolume != nil && spec.PersistentVolume.Spec.Cinder == nil {
return nil, fmt.Errorf("spec.PersistentVolumeSource.Cinder is nil")
}
return &cinderVolumeDeleter{
&cinderVolume{
volName: spec.Name(),
pdName: spec.PersistentVolume.Spec.Cinder.VolumeID,
manager: manager,
plugin: plugin,
}}, nil
}
func (plugin *cinderPlugin) NewProvisioner(options volume.VolumeOptions) (volume.Provisioner, error) {
return plugin.newProvisionerInternal(options, &DiskUtil{})
}
func (plugin *cinderPlugin) newProvisionerInternal(options volume.VolumeOptions, manager cdManager) (volume.Provisioner, error) {
return &cinderVolumeProvisioner{
cinderVolume: &cinderVolume{
manager: manager,
plugin: plugin,
},
options: options,
}, nil
}
func (plugin *cinderPlugin) getCloudProvider() (BlockStorageProvider, error) {
cloud := plugin.host.GetCloudProvider()
if cloud == nil {
if _, err := os.Stat(DefaultCloudConfigPath); err == nil {
var config *os.File
config, err = os.Open(DefaultCloudConfigPath)
if err != nil {
return nil, fmt.Errorf("unable to load OpenStack configuration from default path : %v", err)
}
defer config.Close()
cloud, err = cloudprovider.GetCloudProvider(openstack.ProviderName, config)
if err != nil {
return nil, fmt.Errorf("unable to create OpenStack cloud provider from default path : %v", err)
}
} else {
return nil, fmt.Errorf("OpenStack cloud provider was not initialized properly : %v", err)
}
}
switch cloud := cloud.(type) {
case *openstack.OpenStack:
return cloud, nil
default:
return nil, errors.New("invalid cloud provider: expected OpenStack")
}
}
func (plugin *cinderPlugin) ConstructVolumeSpec(volumeName, mountPath string) (*volume.Spec, error) {
mounter := plugin.host.GetMounter(plugin.GetPluginName())
kvh, ok := plugin.host.(volume.KubeletVolumeHost)
if !ok {
return nil, fmt.Errorf("plugin volume host does not implement KubeletVolumeHost interface")
}
hu := kvh.GetHostUtil()
pluginMntDir := util.GetPluginMountDir(plugin.host, plugin.GetPluginName())
sourceName, err := hu.GetDeviceNameFromMount(mounter, mountPath, pluginMntDir)
if err != nil {
return nil, err
}
klog.V(4).Infof("Found volume %s mounted to %s", sourceName, mountPath)
cinderVolume := &v1.Volume{
Name: volumeName,
VolumeSource: v1.VolumeSource{
Cinder: &v1.CinderVolumeSource{
VolumeID: sourceName,
},
},
}
return volume.NewSpecFromVolume(cinderVolume), nil
}
var _ volume.ExpandableVolumePlugin = &cinderPlugin{}
func (plugin *cinderPlugin) ExpandVolumeDevice(spec *volume.Spec, newSize resource.Quantity, oldSize resource.Quantity) (resource.Quantity, error) {
volumeID, _, _, err := getVolumeInfo(spec)
if err != nil {
return oldSize, err
}
cloud, err := plugin.getCloudProvider()
if err != nil {
return oldSize, err
}
expandedSize, err := cloud.ExpandVolume(volumeID, oldSize, newSize)
if err != nil {
return oldSize, err
}
klog.V(2).Infof("volume %s expanded to new size %d successfully", volumeID, int(newSize.Value()))
return expandedSize, nil
}
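// Illustrative flow (sizes hypothetical): resizing a PV from 1Gi to 2Gi makes
// ExpandVolumeDevice ask the cloud to grow the Cinder volume and return the
// size the backend actually reports; NodeExpand below then completes the
// matching filesystem resize on the node.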
func (plugin *cinderPlugin) NodeExpand(resizeOptions volume.NodeResizeOptions) (bool, error) {
fsVolume, err := util.CheckVolumeModeFilesystem(resizeOptions.VolumeSpec)
if err != nil {
return false, fmt.Errorf("error checking VolumeMode: %v", err)
}
// If the volume does not use a filesystem, there is nothing for us to do here.
if !fsVolume {
return true, nil
}
_, err = util.GenericResizeFS(plugin.host, plugin.GetPluginName(), resizeOptions.DevicePath, resizeOptions.DeviceMountPath)
if err != nil {
return false, err
}
return true, nil
}
var _ volume.NodeExpandableVolumePlugin = &cinderPlugin{}
func (plugin *cinderPlugin) RequiresFSResize() bool {
return true
}
// Abstract interface to PD operations.
type cdManager interface {
// Attaches the disk to the kubelet's host machine.
AttachDisk(mounter *cinderVolumeMounter, globalPDPath string) error
// Detaches the disk from the kubelet's host machine.
DetachDisk(unmounter *cinderVolumeUnmounter) error
// Creates a volume
CreateVolume(provisioner *cinderVolumeProvisioner, node *v1.Node, allowedTopologies []v1.TopologySelectorTerm) (volumeID string, volumeSizeGB int, labels map[string]string, fstype string, err error)
// Deletes a volume
DeleteVolume(deleter *cinderVolumeDeleter) error
}
var _ volume.Mounter = &cinderVolumeMounter{}
type cinderVolumeMounter struct {
*cinderVolume
fsType string
readOnly bool
blockDeviceMounter *mount.SafeFormatAndMount
mountOptions []string
}
// cinderVolume volumes are disk resources provided by Cinder
// that are attached to the kubelet's host machine and exposed to the pod.
type cinderVolume struct {
volName string
podUID types.UID
// Unique identifier of the volume, used to find the disk resource in the provider.
pdName string
// Filesystem type, optional.
fsType string
// Utility interface that provides API calls to the provider to attach/detach disks.
manager cdManager
// Mounter interface that provides system calls to mount the global path to the pod local path.
mounter mount.Interface
plugin *cinderPlugin
volume.MetricsProvider
}
func (b *cinderVolumeMounter) GetAttributes() volume.Attributes {
return volume.Attributes{
ReadOnly: b.readOnly,
Managed: !b.readOnly,
SELinuxRelabel: true,
}
}
func (b *cinderVolumeMounter) SetUp(mounterArgs volume.MounterArgs) error {
return b.SetUpAt(b.GetPath(), mounterArgs)
}
// SetUpAt bind-mounts the volume's global PD path to the pod-local volume path.
func (b *cinderVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterArgs) error {
klog.V(5).Infof("Cinder SetUp %s to %s", b.pdName, dir)
b.plugin.volumeLocks.LockKey(b.pdName)
defer b.plugin.volumeLocks.UnlockKey(b.pdName)
notmnt, err := b.mounter.IsLikelyNotMountPoint(dir)
if err != nil && !os.IsNotExist(err) {
klog.Errorf("Cannot validate mount point: %s %v", dir, err)
return err
}
if !notmnt {
klog.V(4).Infof("Something is already mounted to target %s", dir)
return nil
}
globalPDPath := makeGlobalPDName(b.plugin.host, b.pdName)
options := []string{"bind"}
if b.readOnly {
options = append(options, "ro")
}
if err := os.MkdirAll(dir, 0750); err != nil {
klog.V(4).Infof("Could not create directory %s: %v", dir, err)
return err
}
mountOptions := util.JoinMountOptions(options, b.mountOptions)
// Perform a bind mount to the full path to allow duplicate mounts of the same PD.
klog.V(4).Infof("Attempting to mount cinder volume %s to %s with options %v", b.pdName, dir, mountOptions)
err = b.mounter.MountSensitiveWithoutSystemd(globalPDPath, dir, "", mountOptions, nil)
if err != nil {
klog.V(4).Infof("Mount failed: %v", err)
notmnt, mntErr := b.mounter.IsLikelyNotMountPoint(dir)
if mntErr != nil {
klog.Errorf("IsLikelyNotMountPoint check failed: %v", mntErr)
return err
}
if !notmnt {
if mntErr = b.mounter.Unmount(dir); mntErr != nil {
klog.Errorf("Failed to unmount: %v", mntErr)
return err
}
notmnt, mntErr := b.mounter.IsLikelyNotMountPoint(dir)
if mntErr != nil {
klog.Errorf("IsLikelyNotMountPoint check failed: %v", mntErr)
return err
}
if !notmnt {
// This is very odd, we don't expect it. We'll try again next sync loop.
klog.Errorf("%s is still mounted, despite call to unmount(). Will try again next sync loop.", b.GetPath())
return err
}
}
os.Remove(dir)
klog.Errorf("Failed to mount %s: %v", dir, err)
return err
}
if !b.readOnly {
volume.SetVolumeOwnership(b, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil))
}
klog.V(3).Infof("Cinder volume %s mounted to %s", b.pdName, dir)
return nil
}
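// Worked example (values illustrative): a read-only volume whose spec carries
// mountOptions ["noatime"] produces options ["bind", "ro"] and, after
// util.JoinMountOptions, mountOptions ["bind", "ro", "noatime"] for the bind
// mount above.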
func makeGlobalPDName(host volume.VolumeHost, devName string) string {
return filepath.Join(host.GetPluginDir(cinderVolumePluginName), util.MountsInGlobalPDPath, devName)
}
func (cd *cinderVolume) GetPath() string {
return getPath(cd.podUID, cd.volName, cd.plugin.host)
}
type cinderVolumeUnmounter struct {
*cinderVolume
}
var _ volume.Unmounter = &cinderVolumeUnmounter{}
func (c *cinderVolumeUnmounter) TearDown() error {
return c.TearDownAt(c.GetPath())
}
// Unmounts the bind mount, and detaches the disk only if the PD
// resource was the last reference to that disk on the kubelet.
func (c *cinderVolumeUnmounter) TearDownAt(dir string) error {
if pathExists, pathErr := mount.PathExists(dir); pathErr != nil {
return fmt.Errorf("error checking if path exists: %v", pathErr)
} else if !pathExists {
klog.Warningf("Warning: Unmount skipped because path does not exist: %w", dir)
return nil
}
klog.V(5).Infof("Cinder TearDown of %s", dir)
notmnt, err := c.mounter.IsLikelyNotMountPoint(dir)
if err != nil {
klog.V(4).Infof("IsLikelyNotMountPoint check failed: %v", err)
return err
}
if notmnt {
klog.V(4).Infof("Nothing is mounted to %s, ignoring", dir)
return os.Remove(dir)
}
// Find Cinder volumeID to lock the right volume
// TODO: refactor VolumePlugin.NewUnmounter to get full volume.Spec just like
// NewMounter. We could then find volumeID there without probing MountRefs.
refs, err := c.mounter.GetMountRefs(dir)
if err != nil {
klog.V(4).Infof("GetMountRefs failed: %v", err)
return err
}
if len(refs) == 0 {
klog.V(4).Infof("Directory %s is not mounted", dir)
return fmt.Errorf("directory %s is not mounted", dir)
}
c.pdName = path.Base(refs[0])
klog.V(4).Infof("Found volume %s mounted to %s", c.pdName, dir)
// lock the volume (and thus wait for any concurrent SetUpAt to finish)
c.plugin.volumeLocks.LockKey(c.pdName)
defer c.plugin.volumeLocks.UnlockKey(c.pdName)
// Reload the list of references; a concurrent SetUpAt may have finished in the meantime
_, err = c.mounter.GetMountRefs(dir)
if err != nil {
klog.V(4).Infof("GetMountRefs failed: %v", err)
return err
}
if err := c.mounter.Unmount(dir); err != nil {
klog.V(4).Infof("Unmount failed: %v", err)
return err
}
klog.V(3).Infof("Successfully unmounted: %s\n", dir)
notmnt, mntErr := c.mounter.IsLikelyNotMountPoint(dir)
if mntErr != nil {
klog.Errorf("IsLikelyNotMountPoint check failed: %v", mntErr)
return mntErr
}
if notmnt {
if err := os.Remove(dir); err != nil {
klog.V(4).Infof("Failed to remove directory after unmount: %v", err)
return err
}
}
return nil
}
type cinderVolumeDeleter struct {
*cinderVolume
}
var _ volume.Deleter = &cinderVolumeDeleter{}
func (r *cinderVolumeDeleter) GetPath() string {
return getPath(r.podUID, r.volName, r.plugin.host)
}
func (r *cinderVolumeDeleter) Delete() error {
return r.manager.DeleteVolume(r)
}
type cinderVolumeProvisioner struct {
*cinderVolume
options volume.VolumeOptions
}
var _ volume.Provisioner = &cinderVolumeProvisioner{}
func (c *cinderVolumeProvisioner) Provision(selectedNode *v1.Node, allowedTopologies []v1.TopologySelectorTerm) (*v1.PersistentVolume, error) {
if !util.ContainsAllAccessModes(c.plugin.GetAccessModes(), c.options.PVC.Spec.AccessModes) {
return nil, fmt.Errorf("invalid AccessModes %v: only AccessModes %v are supported", c.options.PVC.Spec.AccessModes, c.plugin.GetAccessModes())
}
volumeID, sizeGB, labels, fstype, err := c.manager.CreateVolume(c, selectedNode, allowedTopologies)
if err != nil {
return nil, err
}
if fstype == "" {
fstype = "ext4"
}
volumeMode := c.options.PVC.Spec.VolumeMode
if volumeMode != nil && *volumeMode == v1.PersistentVolumeBlock {
// Block volumes should not have any FSType
fstype = ""
}
pv := &v1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: c.options.PVName,
Labels: labels,
Annotations: map[string]string{
util.VolumeDynamicallyCreatedByKey: "cinder-dynamic-provisioner",
},
},
Spec: v1.PersistentVolumeSpec{
PersistentVolumeReclaimPolicy: c.options.PersistentVolumeReclaimPolicy,
AccessModes: c.options.PVC.Spec.AccessModes,
Capacity: v1.ResourceList{
v1.ResourceName(v1.ResourceStorage): resource.MustParse(fmt.Sprintf("%dGi", sizeGB)),
},
VolumeMode: volumeMode,
PersistentVolumeSource: v1.PersistentVolumeSource{
Cinder: &v1.CinderPersistentVolumeSource{
VolumeID: volumeID,
FSType: fstype,
ReadOnly: false,
},
},
MountOptions: c.options.MountOptions,
},
}
if len(c.options.PVC.Spec.AccessModes) == 0 {
pv.Spec.AccessModes = c.plugin.GetAccessModes()
}
requirements := make([]v1.NodeSelectorRequirement, 0)
for k, v := range labels {
if v != "" {
requirements = append(requirements, v1.NodeSelectorRequirement{Key: k, Operator: v1.NodeSelectorOpIn, Values: []string{v}})
}
}
if len(requirements) > 0 {
pv.Spec.NodeAffinity = new(v1.VolumeNodeAffinity)
pv.Spec.NodeAffinity.Required = new(v1.NodeSelector)
pv.Spec.NodeAffinity.Required.NodeSelectorTerms = make([]v1.NodeSelectorTerm, 1)
pv.Spec.NodeAffinity.Required.NodeSelectorTerms[0].MatchExpressions = requirements
}
return pv, nil
}
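// Illustrative outcome: with labels {topology.kubernetes.io/zone: "nova"}, the
// provisioned PV carries a required NodeSelectorTerm on that zone, so pods
// consuming it are scheduled only onto nodes labeled with zone "nova".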
func getVolumeInfo(spec *volume.Spec) (string, string, bool, error) {
if spec.Volume != nil && spec.Volume.Cinder != nil {
return spec.Volume.Cinder.VolumeID, spec.Volume.Cinder.FSType, spec.Volume.Cinder.ReadOnly, nil
} else if spec.PersistentVolume != nil &&
spec.PersistentVolume.Spec.Cinder != nil {
return spec.PersistentVolume.Spec.Cinder.VolumeID, spec.PersistentVolume.Spec.Cinder.FSType, spec.ReadOnly, nil
}
return "", "", false, fmt.Errorf("Spec does not reference a Cinder volume type")
}

View File

@ -1,179 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cinder
import (
"fmt"
"path/filepath"
"k8s.io/klog/v2"
"k8s.io/mount-utils"
utilstrings "k8s.io/utils/strings"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/kubernetes/pkg/volume"
"k8s.io/kubernetes/pkg/volume/util/volumepathhandler"
)
var _ volume.VolumePlugin = &cinderPlugin{}
var _ volume.PersistentVolumePlugin = &cinderPlugin{}
var _ volume.BlockVolumePlugin = &cinderPlugin{}
var _ volume.DeletableVolumePlugin = &cinderPlugin{}
var _ volume.ProvisionableVolumePlugin = &cinderPlugin{}
var _ volume.ExpandableVolumePlugin = &cinderPlugin{}
func (plugin *cinderPlugin) ConstructBlockVolumeSpec(podUID types.UID, volumeName, mapPath string) (*volume.Spec, error) {
pluginDir := plugin.host.GetVolumeDevicePluginDir(cinderVolumePluginName)
blkutil := volumepathhandler.NewBlockVolumePathHandler()
globalMapPathUUID, err := blkutil.FindGlobalMapPathUUIDFromPod(pluginDir, mapPath, podUID)
if err != nil {
return nil, err
}
klog.V(5).Infof("globalMapPathUUID: %v, err: %v", globalMapPathUUID, err)
globalMapPath := filepath.Dir(globalMapPathUUID)
if len(globalMapPath) <= 1 {
return nil, fmt.Errorf("failed to get volume plugin information from globalMapPathUUID: %v", globalMapPathUUID)
}
return getVolumeSpecFromGlobalMapPath(volumeName, globalMapPath)
}
func getVolumeSpecFromGlobalMapPath(volumeName, globalMapPath string) (*volume.Spec, error) {
// Get volume spec information from globalMapPath
// globalMapPath example:
// plugins/kubernetes.io/{PluginName}/{DefaultKubeletVolumeDevicesDirName}/{volumeID}
// plugins/kubernetes.io/cinder/volumeDevices/vol-XXXXXX
vID := filepath.Base(globalMapPath)
if len(vID) <= 1 {
return nil, fmt.Errorf("failed to get volumeID from global path=%s", globalMapPath)
}
block := v1.PersistentVolumeBlock
cinderVolume := &v1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: volumeName,
},
Spec: v1.PersistentVolumeSpec{
PersistentVolumeSource: v1.PersistentVolumeSource{
Cinder: &v1.CinderPersistentVolumeSource{
VolumeID: vID,
},
},
VolumeMode: &block,
},
}
return volume.NewSpecFromPersistentVolume(cinderVolume, true), nil
}
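// Worked example (volume ID hypothetical): for globalMapPath
// "plugins/kubernetes.io/cinder/volumeDevices/vol-0a1b2c", filepath.Base
// yields "vol-0a1b2c", which becomes the VolumeID of the reconstructed
// block-mode PV above.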
// NewBlockVolumeMapper creates a new volume.BlockVolumeMapper from an API specification.
func (plugin *cinderPlugin) NewBlockVolumeMapper(spec *volume.Spec, pod *v1.Pod, _ volume.VolumeOptions) (volume.BlockVolumeMapper, error) {
// If this is called via GenerateUnmapDeviceFunc(), pod is nil.
// Pass an empty string as a dummy UID, since the UID is not used in that case.
var uid types.UID
if pod != nil {
uid = pod.UID
}
return plugin.newBlockVolumeMapperInternal(spec, uid, &DiskUtil{}, plugin.host.GetMounter(plugin.GetPluginName()))
}
func (plugin *cinderPlugin) newBlockVolumeMapperInternal(spec *volume.Spec, podUID types.UID, manager cdManager, mounter mount.Interface) (volume.BlockVolumeMapper, error) {
pdName, fsType, readOnly, err := getVolumeInfo(spec)
if err != nil {
return nil, err
}
mapper := &cinderVolumeMapper{
cinderVolume: &cinderVolume{
podUID: podUID,
volName: spec.Name(),
pdName: pdName,
fsType: fsType,
manager: manager,
mounter: mounter,
plugin: plugin,
},
readOnly: readOnly,
}
blockPath, err := mapper.GetGlobalMapPath(spec)
if err != nil {
return nil, fmt.Errorf("failed to get device path: %v", err)
}
mapper.MetricsProvider = volume.NewMetricsBlock(filepath.Join(blockPath, string(podUID)))
return mapper, nil
}
func (plugin *cinderPlugin) NewBlockVolumeUnmapper(volName string, podUID types.UID) (volume.BlockVolumeUnmapper, error) {
return plugin.newUnmapperInternal(volName, podUID, &DiskUtil{}, plugin.host.GetMounter(plugin.GetPluginName()))
}
func (plugin *cinderPlugin) newUnmapperInternal(volName string, podUID types.UID, manager cdManager, mounter mount.Interface) (volume.BlockVolumeUnmapper, error) {
return &cinderPluginUnmapper{
cinderVolume: &cinderVolume{
podUID: podUID,
volName: volName,
manager: manager,
mounter: mounter,
plugin: plugin,
}}, nil
}
type cinderPluginUnmapper struct {
*cinderVolume
volume.MetricsNil
}
var _ volume.BlockVolumeUnmapper = &cinderPluginUnmapper{}
type cinderVolumeMapper struct {
*cinderVolume
readOnly bool
}
var _ volume.BlockVolumeMapper = &cinderVolumeMapper{}
// GetGlobalMapPath returns the global map path and an error.
// The path has the form plugins/kubernetes.io/{PluginName}/volumeDevices/{volumeID},
// e.g. plugins/kubernetes.io/cinder/volumeDevices/vol-XXXXXX.
func (cd *cinderVolume) GetGlobalMapPath(spec *volume.Spec) (string, error) {
pdName, _, _, err := getVolumeInfo(spec)
if err != nil {
return "", err
}
return filepath.Join(cd.plugin.host.GetVolumeDevicePluginDir(cinderVolumePluginName), pdName), nil
}
// GetPodDeviceMapPath returns pod device map path and volume name
// path: pods/{podUid}/volumeDevices/kubernetes.io~cinder
func (cd *cinderVolume) GetPodDeviceMapPath() (string, string) {
name := cinderVolumePluginName
return cd.plugin.host.GetPodVolumeDeviceDir(cd.podUID, utilstrings.EscapeQualifiedName(name)), cd.volName
}
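// Example (UID illustrative): for podUID "poduid" the returned pair is
// ("<kubelet-root>/pods/poduid/volumeDevices/kubernetes.io~cinder", cd.volName),
// matching the testPodPath constant used in the block tests.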
// SupportsMetrics returns true for cinderVolumeMapper as it initializes the
// MetricsProvider.
func (cvm *cinderVolumeMapper) SupportsMetrics() bool {
return true
}

View File

@ -1,151 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cinder
import (
"os"
"path/filepath"
"testing"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
utiltesting "k8s.io/client-go/util/testing"
"k8s.io/kubernetes/pkg/volume"
volumetest "k8s.io/kubernetes/pkg/volume/testing"
)
const (
testVolName = "vol-1234"
testPVName = "pv1"
testGlobalPath = "plugins/kubernetes.io/cinder/volumeDevices/vol-1234"
testPodPath = "pods/poduid/volumeDevices/kubernetes.io~cinder"
)
func TestGetVolumeSpecFromGlobalMapPath(t *testing.T) {
// Build a test path for the fake GlobalMapPath.
// /tmp stands in for the plugin dir:
// /tmp/testGlobalPathXXXXX/plugins/kubernetes.io/cinder/volumeDevices/pdVol1
tmpVDir, err := utiltesting.MkTmpdir("cinderBlockTest")
if err != nil {
t.Fatalf("can't make a temp dir: %v", err)
}
// deferred clean up
defer os.RemoveAll(tmpVDir)
expectedGlobalPath := filepath.Join(tmpVDir, testGlobalPath)
// Bad Path
badspec, err := getVolumeSpecFromGlobalMapPath("", "")
if badspec != nil || err == nil {
t.Errorf("Expected not to get spec from GlobalMapPath but did")
}
// Good Path
spec, err := getVolumeSpecFromGlobalMapPath("myVolume", expectedGlobalPath)
if spec == nil || err != nil {
t.Fatalf("Failed to get spec from GlobalMapPath: %v", err)
}
if spec.PersistentVolume.Name != "myVolume" {
t.Errorf("Invalid PV name from GlobalMapPath spec: %s", spec.PersistentVolume.Name)
}
if spec.PersistentVolume.Spec.Cinder.VolumeID != testVolName {
t.Errorf("Invalid volumeID from GlobalMapPath spec: %s", spec.PersistentVolume.Spec.Cinder.VolumeID)
}
block := v1.PersistentVolumeBlock
specMode := spec.PersistentVolume.Spec.VolumeMode
if specMode == nil {
t.Fatalf("Failed to get volumeMode from PersistentVolumeBlock")
}
if *specMode != block {
t.Errorf("Invalid volumeMode from GlobalMapPath spec: %v expected: %v", *specMode, block)
}
}
func getTestVolume(readOnly bool, isBlock bool) *volume.Spec {
pv := &v1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: testPVName,
},
Spec: v1.PersistentVolumeSpec{
PersistentVolumeSource: v1.PersistentVolumeSource{
Cinder: &v1.CinderPersistentVolumeSource{
VolumeID: testVolName,
},
},
},
}
if isBlock {
blockMode := v1.PersistentVolumeBlock
pv.Spec.VolumeMode = &blockMode
}
return volume.NewSpecFromPersistentVolume(pv, readOnly)
}
func TestGetPodAndPluginMapPaths(t *testing.T) {
tmpVDir, err := utiltesting.MkTmpdir("cinderBlockTest")
if err != nil {
t.Fatalf("can't make a temp dir: %v", err)
}
// deferred clean up
defer os.RemoveAll(tmpVDir)
expectedGlobalPath := filepath.Join(tmpVDir, testGlobalPath)
expectedPodPath := filepath.Join(tmpVDir, testPodPath)
spec := getTestVolume(false, true /*isBlock*/)
plugMgr := volume.VolumePluginMgr{}
plugMgr.InitPlugins(ProbeVolumePlugins(), nil /* prober */, volumetest.NewFakeVolumeHost(t, tmpVDir, nil, nil))
plug, err := plugMgr.FindMapperPluginByName(cinderVolumePluginName)
if err != nil {
os.RemoveAll(tmpVDir)
t.Fatalf("Can't find the plugin by name: %q", cinderVolumePluginName)
}
if plug.GetPluginName() != cinderVolumePluginName {
t.Fatalf("Wrong name: %s", plug.GetPluginName())
}
pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{UID: types.UID("poduid")}}
mapper, err := plug.NewBlockVolumeMapper(spec, pod, volume.VolumeOptions{})
if err != nil {
t.Fatalf("Failed to make a new Mounter: %v", err)
}
if mapper == nil {
t.Fatalf("Got a nil Mounter")
}
// GetGlobalMapPath
gMapPath, err := mapper.GetGlobalMapPath(spec)
if err != nil || len(gMapPath) == 0 {
t.Fatalf("Invalid GlobalMapPath from spec: %s", spec.PersistentVolume.Spec.Cinder.VolumeID)
}
if gMapPath != expectedGlobalPath {
t.Errorf("Failed to get GlobalMapPath: %s %s", gMapPath, expectedGlobalPath)
}
// GetPodDeviceMapPath
gDevicePath, gVolName := mapper.GetPodDeviceMapPath()
if gDevicePath != expectedPodPath {
t.Errorf("Got unexpected pod path: %s, expected %s", gDevicePath, expectedPodPath)
}
if gVolName != testPVName {
t.Errorf("Got unexpected volNamne: %s, expected %s", gVolName, testPVName)
}
}

View File

@ -1,365 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cinder
import (
"fmt"
"os"
"path/filepath"
"testing"
"time"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
utiltesting "k8s.io/client-go/util/testing"
"k8s.io/mount-utils"
"k8s.io/kubernetes/pkg/volume"
volumetest "k8s.io/kubernetes/pkg/volume/testing"
"k8s.io/kubernetes/pkg/volume/util"
"k8s.io/legacy-cloud-providers/openstack"
)
func TestCanSupport(t *testing.T) {
tmpDir, err := utiltesting.MkTmpdir("cinderTest")
if err != nil {
t.Fatalf("can't make a temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
plugMgr := volume.VolumePluginMgr{}
plugMgr.InitPlugins(ProbeVolumePlugins(), nil /* prober */, volumetest.NewFakeKubeletVolumeHost(t, tmpDir, nil, nil))
plug, err := plugMgr.FindPluginByName("kubernetes.io/cinder")
if err != nil {
t.Fatal("Can't find the plugin by name")
}
if plug.GetPluginName() != "kubernetes.io/cinder" {
t.Errorf("Wrong name: %s", plug.GetPluginName())
}
if !plug.CanSupport(&volume.Spec{Volume: &v1.Volume{VolumeSource: v1.VolumeSource{Cinder: &v1.CinderVolumeSource{}}}}) {
t.Errorf("Expected true")
}
if !plug.CanSupport(&volume.Spec{PersistentVolume: &v1.PersistentVolume{Spec: v1.PersistentVolumeSpec{PersistentVolumeSource: v1.PersistentVolumeSource{Cinder: &v1.CinderPersistentVolumeSource{}}}}}) {
t.Errorf("Expected true")
}
}
type fakePDManager struct {
// How long should AttachDisk/DetachDisk take - we need slower AttachDisk in a test.
attachDetachDuration time.Duration
}
func getFakeDeviceName(host volume.VolumeHost, pdName string) string {
return filepath.Join(host.GetPluginDir(cinderVolumePluginName), "device", pdName)
}
// Real Cinder AttachDisk attaches a cinder volume. If it is not yet mounted,
// it mounts it to globalPDPath.
// We create a dummy directory (="device") and bind-mount it to globalPDPath
func (fake *fakePDManager) AttachDisk(b *cinderVolumeMounter, globalPDPath string) error {
globalPath := makeGlobalPDName(b.plugin.host, b.pdName)
fakeDeviceName := getFakeDeviceName(b.plugin.host, b.pdName)
err := os.MkdirAll(fakeDeviceName, 0750)
if err != nil {
return err
}
// Attaching a Cinder volume can be slow...
time.Sleep(fake.attachDetachDuration)
// The volume is "attached", bind-mount it if it's not mounted yet.
notmnt, err := b.mounter.IsLikelyNotMountPoint(globalPath)
if err != nil {
if os.IsNotExist(err) {
if err := os.MkdirAll(globalPath, 0750); err != nil {
return err
}
notmnt = true
} else {
return err
}
}
if notmnt {
err = b.mounter.MountSensitiveWithoutSystemd(fakeDeviceName, globalPath, "", []string{"bind"}, nil)
if err != nil {
return err
}
}
return nil
}
func (fake *fakePDManager) DetachDisk(c *cinderVolumeUnmounter) error {
globalPath := makeGlobalPDName(c.plugin.host, c.pdName)
fakeDeviceName := getFakeDeviceName(c.plugin.host, c.pdName)
// unmount the bind-mount - should be fast
err := c.mounter.Unmount(globalPath)
if err != nil {
return err
}
// "Detach" the fake "device"
err = os.RemoveAll(fakeDeviceName)
if err != nil {
return err
}
return nil
}
func (fake *fakePDManager) CreateVolume(c *cinderVolumeProvisioner, node *v1.Node, allowedTopologies []v1.TopologySelectorTerm) (volumeID string, volumeSizeGB int, labels map[string]string, fstype string, err error) {
labels = make(map[string]string)
labels[v1.LabelTopologyZone] = "nova"
return "test-volume-name", 1, labels, "", nil
}
func (fake *fakePDManager) DeleteVolume(cd *cinderVolumeDeleter) error {
if cd.pdName != "test-volume-name" {
return fmt.Errorf("Deleter got unexpected volume name: %s", cd.pdName)
}
return nil
}
func TestPlugin(t *testing.T) {
tmpDir, err := utiltesting.MkTmpdir("cinderTest")
if err != nil {
t.Fatalf("can't make a temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
plugMgr := volume.VolumePluginMgr{}
plugMgr.InitPlugins(ProbeVolumePlugins(), nil /* prober */, volumetest.NewFakeKubeletVolumeHost(t, tmpDir, nil, nil))
plug, err := plugMgr.FindPluginByName("kubernetes.io/cinder")
if err != nil {
t.Errorf("Can't find the plugin by name")
}
spec := &v1.Volume{
Name: "vol1",
VolumeSource: v1.VolumeSource{
Cinder: &v1.CinderVolumeSource{
VolumeID: "pd",
FSType: "ext4",
},
},
}
mounter, err := plug.(*cinderPlugin).newMounterInternal(volume.NewSpecFromVolume(spec), types.UID("poduid"), &fakePDManager{0}, mount.NewFakeMounter(nil))
if err != nil {
t.Errorf("Failed to make a new Mounter: %v", err)
}
if mounter == nil {
t.Errorf("Got a nil Mounter")
}
volPath := filepath.Join(tmpDir, "pods/poduid/volumes/kubernetes.io~cinder/vol1")
path := mounter.GetPath()
if path != volPath {
t.Errorf("Got unexpected path: %s", path)
}
if err := mounter.SetUp(volume.MounterArgs{}); err != nil {
t.Errorf("Expected success, got: %v", err)
}
if _, err := os.Stat(path); err != nil {
if os.IsNotExist(err) {
t.Errorf("SetUp() failed, volume path not created: %s", path)
} else {
t.Errorf("SetUp() failed: %v", err)
}
}
unmounter, err := plug.(*cinderPlugin).newUnmounterInternal("vol1", types.UID("poduid"), &fakePDManager{0}, mount.NewFakeMounter(nil))
if err != nil {
t.Errorf("Failed to make a new Unmounter: %v", err)
}
if unmounter == nil {
t.Errorf("Got a nil Unmounter")
}
if err := unmounter.TearDown(); err != nil {
t.Errorf("Expected success, got: %v", err)
}
if _, err := os.Stat(path); err == nil {
t.Errorf("TearDown() failed, volume path still exists: %s", path)
} else if !os.IsNotExist(err) {
t.Errorf("TearDown() failed: %v", err)
}
// Test Provisioner
options := volume.VolumeOptions{
PVC: volumetest.CreateTestPVC("100Mi", []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce}),
PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimDelete,
}
provisioner, err := plug.(*cinderPlugin).newProvisionerInternal(options, &fakePDManager{0})
if err != nil {
t.Errorf("ProvisionerInternal() failed: %v", err)
}
persistentSpec, err := provisioner.Provision(nil, nil)
if err != nil {
t.Errorf("Provision() failed: %v", err)
}
if persistentSpec.Spec.PersistentVolumeSource.Cinder.VolumeID != "test-volume-name" {
t.Errorf("Provision() returned unexpected volume ID: %s", persistentSpec.Spec.PersistentVolumeSource.Cinder.VolumeID)
}
capacity := persistentSpec.Spec.Capacity[v1.ResourceStorage]
size := capacity.Value()
if size != 1024*1024*1024 {
t.Errorf("Provision() returned unexpected volume size: %v", size)
}
// check nodeaffinity members
if persistentSpec.Spec.NodeAffinity == nil {
t.Errorf("Provision() returned unexpected nil NodeAffinity")
}
if persistentSpec.Spec.NodeAffinity.Required == nil {
t.Errorf("Provision() returned unexpected nil NodeAffinity.Required")
}
n := len(persistentSpec.Spec.NodeAffinity.Required.NodeSelectorTerms)
if n != 1 {
t.Errorf("Provision() returned unexpected number of NodeSelectorTerms %d. Expected %d", n, 1)
}
n = len(persistentSpec.Spec.NodeAffinity.Required.NodeSelectorTerms[0].MatchExpressions)
if n != 1 {
t.Errorf("Provision() returned unexpected number of MatchExpressions %d. Expected %d", n, 1)
}
req := persistentSpec.Spec.NodeAffinity.Required.NodeSelectorTerms[0].MatchExpressions[0]
if req.Key != v1.LabelTopologyZone {
t.Errorf("Provision() returned unexpected requirement key in NodeAffinity %v", req.Key)
}
if req.Operator != v1.NodeSelectorOpIn {
t.Errorf("Provision() returned unexpected requirement operator in NodeAffinity %v", req.Operator)
}
if len(req.Values) != 1 || req.Values[0] != "nova" {
t.Errorf("Provision() returned unexpected requirement value in NodeAffinity %v", req.Values)
}
// Test Deleter
volSpec := &volume.Spec{
PersistentVolume: persistentSpec,
}
deleter, err := plug.(*cinderPlugin).newDeleterInternal(volSpec, &fakePDManager{0})
if err != nil {
t.Errorf("DeleterInternal() failed: %v", err)
}
err = deleter.Delete()
if err != nil {
t.Errorf("Deleter() failed: %v", err)
}
}
func TestGetVolumeLimit(t *testing.T) {
tmpDir, err := utiltesting.MkTmpdir("cinderTest")
if err != nil {
t.Fatalf("can't make a temp dir: %v", err)
}
cloud, err := getOpenstackCloudProvider()
if err != nil {
t.Fatalf("can not instantiate openstack cloudprovider : %v", err)
}
defer os.RemoveAll(tmpDir)
plugMgr := volume.VolumePluginMgr{}
volumeHost := volumetest.NewFakeKubeletVolumeHostWithCloudProvider(t, tmpDir, nil, nil, cloud)
plugMgr.InitPlugins(ProbeVolumePlugins(), nil /* prober */, volumeHost)
plug, err := plugMgr.FindPluginByName("kubernetes.io/cinder")
if err != nil {
t.Fatalf("Can't find the plugin by name")
}
attachablePlugin, ok := plug.(volume.VolumePluginWithAttachLimits)
if !ok {
t.Fatalf("plugin %s is not of attachable type", plug.GetPluginName())
}
limits, err := attachablePlugin.GetVolumeLimits()
if err != nil {
t.Errorf("error fetching limits : %v", err)
}
if len(limits) == 0 {
t.Fatalf("expecting limit from openstack got none")
}
limit := limits[util.CinderVolumeLimitKey]
if limit != 10 {
t.Fatalf("expected volume limit to be 10 got %d", limit)
}
}
func getOpenstackCloudProvider() (*openstack.OpenStack, error) {
cfg := getOpenstackConfig()
return openstack.NewFakeOpenStackCloud(cfg)
}
func getOpenstackConfig() openstack.Config {
cfg := openstack.Config{
Global: struct {
AuthURL string `gcfg:"auth-url"`
Username string
UserID string `gcfg:"user-id"`
Password string `datapolicy:"password"`
TenantID string `gcfg:"tenant-id"`
TenantName string `gcfg:"tenant-name"`
TrustID string `gcfg:"trust-id"`
DomainID string `gcfg:"domain-id"`
DomainName string `gcfg:"domain-name"`
Region string
CAFile string `gcfg:"ca-file"`
SecretName string `gcfg:"secret-name"`
SecretNamespace string `gcfg:"secret-namespace"`
KubeconfigPath string `gcfg:"kubeconfig-path"`
}{
Username: "user",
Password: "pass",
TenantID: "foobar",
DomainID: "2a73b8f597c04551a0fdc8e95544be8a",
DomainName: "local",
AuthURL: "http://auth.url",
UserID: "user",
},
BlockStorage: openstack.BlockStorageOpts{
NodeVolumeAttachLimit: 10,
},
}
return cfg
}
func TestUnsupportedVolumeHost(t *testing.T) {
tmpDir, err := utiltesting.MkTmpdir("cinderTest")
if err != nil {
t.Fatalf("can't make a temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
plugMgr := volume.VolumePluginMgr{}
plugMgr.InitPlugins(ProbeVolumePlugins(), nil /* prober */, volumetest.NewFakeVolumeHost(t, tmpDir, nil, nil))
plug, err := plugMgr.FindPluginByName("kubernetes.io/cinder")
if err != nil {
t.Fatal("Can't find the plugin by name")
}
_, err = plug.ConstructVolumeSpec("", "")
if err == nil {
t.Errorf("Expected failure constructing volume spec with unsupported VolumeHost")
}
}

View File

@ -1,278 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cinder
import (
"context"
"errors"
"fmt"
"io/ioutil"
"os"
"strings"
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/klog/v2"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/sets"
clientset "k8s.io/client-go/kubernetes"
volumehelpers "k8s.io/cloud-provider/volume/helpers"
"k8s.io/kubernetes/pkg/volume"
volutil "k8s.io/kubernetes/pkg/volume/util"
"k8s.io/utils/exec"
)
// DiskUtil has utility/helper methods
type DiskUtil struct{}
// AttachDisk attaches a disk specified by a volume.CinderPersistentDisk to the current kubelet.
// Mounts the disk to its global path.
func (util *DiskUtil) AttachDisk(b *cinderVolumeMounter, globalPDPath string) error {
options := []string{}
if b.readOnly {
options = append(options, "ro")
}
cloud, err := b.plugin.getCloudProvider()
if err != nil {
return err
}
instanceid, err := cloud.InstanceID()
if err != nil {
return err
}
diskid, err := cloud.AttachDisk(instanceid, b.pdName)
if err != nil {
return err
}
var devicePath string
numTries := 0
for {
devicePath = cloud.GetDevicePath(diskid)
probeAttachedVolume()
_, err := os.Stat(devicePath)
if err == nil {
break
}
if err != nil && !os.IsNotExist(err) {
return err
}
numTries++
if numTries == 10 {
return errors.New("could not attach disk: Timeout after 60s")
}
time.Sleep(time.Second * 6)
}
notmnt, err := b.mounter.IsLikelyNotMountPoint(globalPDPath)
if err != nil {
if os.IsNotExist(err) {
if err := os.MkdirAll(globalPDPath, 0750); err != nil {
return err
}
notmnt = true
} else {
return err
}
}
if notmnt {
err = b.blockDeviceMounter.FormatAndMount(devicePath, globalPDPath, b.fsType, options)
if err != nil {
os.Remove(globalPDPath)
return err
}
klog.V(2).Infof("Safe mount successful: %q\n", devicePath)
}
return nil
}
// DetachDisk unmounts the device and detaches the disk from the kubelet's host machine.
func (util *DiskUtil) DetachDisk(cd *cinderVolumeUnmounter) error {
globalPDPath := makeGlobalPDName(cd.plugin.host, cd.pdName)
if err := cd.mounter.Unmount(globalPDPath); err != nil {
return err
}
if err := os.Remove(globalPDPath); err != nil {
return err
}
klog.V(2).Infof("Successfully unmounted main device: %s\n", globalPDPath)
cloud, err := cd.plugin.getCloudProvider()
if err != nil {
return err
}
instanceid, err := cloud.InstanceID()
if err != nil {
return err
}
if err = cloud.DetachDisk(instanceid, cd.pdName); err != nil {
return err
}
klog.V(2).Infof("Successfully detached cinder volume %s", cd.pdName)
return nil
}
// DeleteVolume uses the cloud entrypoint to delete the specified volume
func (util *DiskUtil) DeleteVolume(cd *cinderVolumeDeleter) error {
cloud, err := cd.plugin.getCloudProvider()
if err != nil {
return err
}
if err = cloud.DeleteVolume(cd.pdName); err != nil {
// The OpenStack cloud provider returns volume.tryAgainError when necessary;
// no extra handling is needed here.
klog.V(2).Infof("Error deleting cinder volume %s: %v", cd.pdName, err)
return err
}
klog.V(2).Infof("Successfully deleted cinder volume %s", cd.pdName)
return nil
}
func getZonesFromNodes(kubeClient clientset.Interface) (sets.String, error) {
// TODO: add caching; currently this is not critical because the function is
// called only when a dynamic PV is created
zones := make(sets.String)
nodes, err := kubeClient.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
klog.V(2).Infof("Error listing nodes")
return zones, err
}
for _, node := range nodes.Items {
if zone, ok := node.Labels[v1.LabelTopologyZone]; ok {
zones.Insert(zone)
}
}
klog.V(4).Infof("zones found: %v", zones)
return zones, nil
}
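// Sketch (hypothetical cluster): if nodes carry topology.kubernetes.io/zone
// labels "nova-a" and "nova-b", the returned set is {"nova-a", "nova-b"}; an
// empty set simply means no node advertised a zone label.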
// CreateVolume uses the cloud provider entrypoint for creating a volume
func (util *DiskUtil) CreateVolume(c *cinderVolumeProvisioner, node *v1.Node, allowedTopologies []v1.TopologySelectorTerm) (volumeID string, volumeSizeGB int, volumeLabels map[string]string, fstype string, err error) {
cloud, err := c.plugin.getCloudProvider()
if err != nil {
return "", 0, nil, "", err
}
capacity := c.options.PVC.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)]
// Cinder works with whole gigabytes; convert the request to GiB, rounding up
volSizeGiB, err := volumehelpers.RoundUpToGiBInt(capacity)
if err != nil {
return "", 0, nil, "", err
}
name := volutil.GenerateVolumeName(c.options.ClusterName, c.options.PVName, 255) // Cinder volume name can have up to 255 characters
vtype := ""
availability := ""
// Apply ProvisionerParameters (case-insensitive). We leave validation of
// the values to the cloud provider.
for k, v := range c.options.Parameters {
switch strings.ToLower(k) {
case "type":
vtype = v
case "availability":
availability = v
case volume.VolumeParameterFSType:
fstype = v
default:
return "", 0, nil, "", fmt.Errorf("invalid option %q for volume plugin %s", k, c.plugin.GetPluginName())
}
}
// TODO: implement PVC.Selector parsing
if c.options.PVC.Spec.Selector != nil {
return "", 0, nil, "", fmt.Errorf("claim.Spec.Selector is not supported for dynamic provisioning on Cinder")
}
if availability == "" {
// No zone specified, choose one randomly in the same region
zones, err := getZonesFromNodes(c.plugin.host.GetKubeClient())
if err != nil {
klog.V(2).Infof("error getting zone information: %v", err)
return "", 0, nil, "", err
}
// If we did not get any zones, leave availability blank; gophercloud will
// then default to zone "nova"
if len(zones) > 0 {
availability, err = volumehelpers.SelectZoneForVolume(false, false, "", nil, zones, node, allowedTopologies, c.options.PVC.Name)
if err != nil {
klog.V(2).Infof("error selecting zone for volume: %v", err)
return "", 0, nil, "", err
}
}
}
volumeID, volumeAZ, volumeRegion, IgnoreVolumeAZ, err := cloud.CreateVolume(name, volSizeGiB, vtype, availability, c.options.CloudTags)
if err != nil {
klog.V(2).Infof("Error creating cinder volume: %v", err)
return "", 0, nil, "", err
}
klog.V(2).Infof("Successfully created cinder volume %s", volumeID)
// These labels ensure the pod is scheduled into the same AZ as the volume.
volumeLabels = make(map[string]string)
if !IgnoreVolumeAZ {
if volumeAZ != "" {
volumeLabels[v1.LabelTopologyZone] = volumeAZ
}
if volumeRegion != "" {
volumeLabels[v1.LabelTopologyRegion] = volumeRegion
}
}
return volumeID, volSizeGiB, volumeLabels, fstype, nil
}
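// Hypothetical illustration of the rounding above (not in the original file);
// volumehelpers comes from k8s.io/cloud-provider/volume/helpers:
//
//	q := resource.MustParse("100Mi")
//	gib, _ := volumehelpers.RoundUpToGiBInt(q) // gib == 1
//	q = resource.MustParse("1536Mi")
//	gib, _ = volumehelpers.RoundUpToGiBInt(q)  // gib == 2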
func probeAttachedVolume() error {
// rescan scsi bus
scsiHostRescan()
executor := exec.New()
// udevadm settle waits for udevd to process the device creation
// events for all hardware devices, thus ensuring that any device
// nodes have been created successfully before proceeding.
argsSettle := []string{"settle"}
cmdSettle := executor.Command("udevadm", argsSettle...)
_, errSettle := cmdSettle.CombinedOutput()
if errSettle != nil {
klog.Errorf("error running udevadm settle %v\n", errSettle)
}
args := []string{"trigger"}
cmd := executor.Command("udevadm", args...)
_, err := cmd.CombinedOutput()
if err != nil {
klog.Errorf("error running udevadm trigger %v\n", err)
return err
}
klog.V(4).Infof("Successfully probed all attachments")
return nil
}
func scsiHostRescan() {
scsiPath := "/sys/class/scsi_host/"
if dirs, err := ioutil.ReadDir(scsiPath); err == nil {
for _, f := range dirs {
name := scsiPath + f.Name() + "/scan"
data := []byte("- - -")
ioutil.WriteFile(name, data, 0666)
}
}
}

View File

@ -1,18 +0,0 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package cinder contains the internal representation of cinder volumes.
package cinder // import "k8s.io/kubernetes/pkg/volume/cinder"

View File

@ -222,9 +222,6 @@ func (p *csiPlugin) Init(host volume.VolumeHost) error {
csitranslationplugins.AWSEBSInTreePluginName: func() bool {
return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationAWS)
},
csitranslationplugins.CinderInTreePluginName: func() bool {
return true
},
csitranslationplugins.AzureDiskInTreePluginName: func() bool {
return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationAzureDisk)
},

View File

@ -68,8 +68,6 @@ func (pm PluginManager) IsMigrationCompleteForPlugin(pluginName string) bool {
return pm.featureGate.Enabled(features.InTreePluginAzureFileUnregister)
case csilibplugins.AzureDiskInTreePluginName:
return pm.featureGate.Enabled(features.InTreePluginAzureDiskUnregister)
case csilibplugins.CinderInTreePluginName:
return pm.featureGate.Enabled(features.InTreePluginOpenStackUnregister)
case csilibplugins.VSphereInTreePluginName:
return pm.featureGate.Enabled(features.InTreePluginvSphereUnregister)
case csilibplugins.PortworxVolumePluginName:
@ -96,8 +94,6 @@ func (pm PluginManager) IsMigrationEnabledForPlugin(pluginName string) bool {
return pm.featureGate.Enabled(features.CSIMigrationAzureFile)
case csilibplugins.AzureDiskInTreePluginName:
return pm.featureGate.Enabled(features.CSIMigrationAzureDisk)
case csilibplugins.CinderInTreePluginName:
return true
case csilibplugins.VSphereInTreePluginName:
return pm.featureGate.Enabled(features.CSIMigrationvSphere)
case csilibplugins.PortworxVolumePluginName:

View File

@ -40,13 +40,6 @@ const (
// GCEVolumeLimitKey stores resource name that will store volume limits for GCE node
GCEVolumeLimitKey = "attachable-volumes-gce-pd"
// CinderVolumeLimitKey contains Volume limit key for Cinder
CinderVolumeLimitKey = "attachable-volumes-cinder"
// DefaultMaxCinderVolumes defines the maximum number of PD Volumes for Cinder
// For Openstack we are keeping this to a high enough value so as depending on backend
// cluster admins can configure it.
DefaultMaxCinderVolumes = 256
// CSIAttachLimitPrefix defines prefix used for CSI volumes
CSIAttachLimitPrefix = "attachable-volumes-csi-"

View File

@ -21,7 +21,6 @@ import (
"os"
"reflect"
"runtime"
"strings"
"testing"
v1 "k8s.io/api/core/v1"
@ -261,30 +260,6 @@ func TestFsUserFrom(t *testing.T) {
}
}
func TestGenerateVolumeName(t *testing.T) {
// Normal operation, no truncate
v1 := GenerateVolumeName("kubernetes", "pv-cinder-abcde", 255)
if v1 != "kubernetes-dynamic-pv-cinder-abcde" {
t.Errorf("Expected kubernetes-dynamic-pv-cinder-abcde, got %s", v1)
}
// Truncate trailing "6789-dynamic"
prefix := strings.Repeat("0123456789", 9) // 90 characters prefix + 8 chars. of "-dynamic"
v2 := GenerateVolumeName(prefix, "pv-cinder-abcde", 100)
expect := prefix[:84] + "-pv-cinder-abcde"
if v2 != expect {
t.Errorf("Expected %s, got %s", expect, v2)
}
// Truncate really long cluster name
prefix = strings.Repeat("0123456789", 1000) // 10000 characters prefix
v3 := GenerateVolumeName(prefix, "pv-cinder-abcde", 100)
if v3 != expect {
t.Errorf("Expected %s, got %s", expect, v3)
}
}
func TestMountOptionFromSpec(t *testing.T) {
scenarios := map[string]struct {
volume *volume.Spec

View File

@ -55,13 +55,12 @@ var _ = admission.Interface(&persistentVolumeLabel{})
type persistentVolumeLabel struct {
*admission.Handler
mutex sync.Mutex
cloudConfig []byte
awsPVLabeler cloudprovider.PVLabeler
gcePVLabeler cloudprovider.PVLabeler
azurePVLabeler cloudprovider.PVLabeler
openStackPVLabeler cloudprovider.PVLabeler
vspherePVLabeler cloudprovider.PVLabeler
mutex sync.Mutex
cloudConfig []byte
awsPVLabeler cloudprovider.PVLabeler
gcePVLabeler cloudprovider.PVLabeler
azurePVLabeler cloudprovider.PVLabeler
vspherePVLabeler cloudprovider.PVLabeler
}
var _ admission.MutationInterface = &persistentVolumeLabel{}
@ -73,7 +72,7 @@ var _ kubeapiserveradmission.WantsCloudConfig = &persistentVolumeLabel{}
// As a side effect, the cloud provider may block invalid or non-existent volumes.
func newPersistentVolumeLabel() *persistentVolumeLabel {
// DEPRECATED: in a future release, we will use mutating admission webhooks to apply PV labels.
// Once the mutating admission webhook is used for AWS, Azure, GCE, and OpenStack,
// Once the mutating admission webhook is used for AWS, Azure and GCE,
// this admission controller will be removed.
klog.Warning("PersistentVolumeLabel admission controller is deprecated. " +
"Please remove this controller from your configuration files and scripts.")
@ -219,12 +218,6 @@ func (l *persistentVolumeLabel) findVolumeLabels(volume *api.PersistentVolume) (
return nil, fmt.Errorf("error querying AzureDisk volume %s: %v", volume.Spec.AzureDisk.DiskName, err)
}
return labels, nil
case volume.Spec.Cinder != nil:
labels, err := l.findCinderDiskLabels(volume)
if err != nil {
return nil, fmt.Errorf("error querying Cinder volume %s: %v", volume.Spec.Cinder.VolumeID, err)
}
return labels, nil
case volume.Spec.VsphereVolume != nil:
labels, err := l.findVsphereVolumeLabels(volume)
if err != nil {
@ -381,56 +374,6 @@ func (l *persistentVolumeLabel) findAzureDiskLabels(volume *api.PersistentVolume
return pvlabler.GetLabelsForVolume(context.TODO(), pv)
}
func (l *persistentVolumeLabel) getOpenStackPVLabeler() (cloudprovider.PVLabeler, error) {
l.mutex.Lock()
defer l.mutex.Unlock()
if l.openStackPVLabeler == nil {
var cloudConfigReader io.Reader
if len(l.cloudConfig) > 0 {
cloudConfigReader = bytes.NewReader(l.cloudConfig)
}
cloudProvider, err := cloudprovider.GetCloudProvider("openstack", cloudConfigReader)
if err != nil || cloudProvider == nil {
return nil, err
}
openStackPVLabeler, ok := cloudProvider.(cloudprovider.PVLabeler)
if !ok {
return nil, errors.New("OpenStack cloud provider does not implement PV labeling")
}
l.openStackPVLabeler = openStackPVLabeler
}
return l.openStackPVLabeler, nil
}
func (l *persistentVolumeLabel) findCinderDiskLabels(volume *api.PersistentVolume) (map[string]string, error) {
// Ignore any volumes that are being provisioned
if volume.Spec.Cinder.VolumeID == cloudvolume.ProvisionedVolumeName {
return nil, nil
}
pvlabler, err := l.getOpenStackPVLabeler()
if err != nil {
return nil, err
}
if pvlabler == nil {
return nil, fmt.Errorf("unable to build OpenStack cloud provider for Cinder disk")
}
pv := &v1.PersistentVolume{}
err = k8s_api_v1.Convert_core_PersistentVolume_To_v1_PersistentVolume(volume, pv, nil)
if err != nil {
return nil, fmt.Errorf("failed to convert PersistentVolume to core/v1: %q", err)
}
return pvlabler.GetLabelsForVolume(context.TODO(), pv)
}
func (l *persistentVolumeLabel) findVsphereVolumeLabels(volume *api.PersistentVolume) (map[string]string, error) {
pvlabler, err := l.getVspherePVLabeler()
if err != nil {

View File

@ -560,72 +560,6 @@ func Test_PVLAdmission(t *testing.T) {
},
err: nil,
},
{
name: "Cinder Disk PV labeled correctly",
handler: newPersistentVolumeLabel(),
pvlabeler: mockVolumeLabels(map[string]string{
"a": "1",
"b": "2",
v1.LabelFailureDomainBetaZone: "1__2__3",
}),
preAdmissionPV: &api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "azurepd",
Namespace: "myns",
},
Spec: api.PersistentVolumeSpec{
PersistentVolumeSource: api.PersistentVolumeSource{
Cinder: &api.CinderPersistentVolumeSource{
VolumeID: "123",
},
},
},
},
postAdmissionPV: &api.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "azurepd",
Namespace: "myns",
Labels: map[string]string{
"a": "1",
"b": "2",
v1.LabelFailureDomainBetaZone: "1__2__3",
},
},
Spec: api.PersistentVolumeSpec{
PersistentVolumeSource: api.PersistentVolumeSource{
Cinder: &api.CinderPersistentVolumeSource{
VolumeID: "123",
},
},
NodeAffinity: &api.VolumeNodeAffinity{
Required: &api.NodeSelector{
NodeSelectorTerms: []api.NodeSelectorTerm{
{
MatchExpressions: []api.NodeSelectorRequirement{
{
Key: "a",
Operator: api.NodeSelectorOpIn,
Values: []string{"1"},
},
{
Key: "b",
Operator: api.NodeSelectorOpIn,
Values: []string{"2"},
},
{
Key: v1.LabelFailureDomainBetaZone,
Operator: api.NodeSelectorOpIn,
Values: []string{"1", "2", "3"},
},
},
},
},
},
},
},
},
err: nil,
},
{
name: "AWS EBS PV overrides user applied labels",
handler: newPersistentVolumeLabel(),
@ -983,7 +917,6 @@ func setPVLabeler(handler *persistentVolumeLabel, pvlabeler cloudprovider.PVLabe
handler.awsPVLabeler = pvlabeler
handler.gcePVLabeler = pvlabeler
handler.azurePVLabeler = pvlabeler
handler.openStackPVLabeler = pvlabeler
handler.vspherePVLabeler = pvlabeler
}

View File

@ -1634,8 +1634,6 @@ rules:
branch: master
- repository: controller-manager
branch: master
- repository: mount-utils
branch: master
- repository: component-helpers
branch: master
source:

View File

@ -19,7 +19,6 @@ Or you can load specific auth plugins:
import _ "k8s.io/client-go/plugin/pkg/client/auth/azure"
import _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
import _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
import _ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
```
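For context, a minimal sketch of why the blank imports matter (hypothetical kubeconfig path; the plugin registers its auth provider in `init`, so importing it is all that is needed):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"

	// Importing the plugin registers its auth provider as a side effect.
	_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
)

func main() {
	// Build a rest.Config from a kubeconfig whose user entry names the provider.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	_ = clientset // use the clientset as usual
}
```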
### Configuration

View File

@ -40,7 +40,6 @@ import (
// _ "k8s.io/client-go/plugin/pkg/client/auth/azure"
// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
// _ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
)
func main() {

View File

@ -41,7 +41,6 @@ import (
// _ "k8s.io/client-go/plugin/pkg/client/auth/azure"
// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
// _ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
)
func main() {

View File

@ -34,7 +34,6 @@ import (
// _ "k8s.io/client-go/plugin/pkg/client/auth/azure"
// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
// _ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
)
func main() {

View File

@ -37,7 +37,6 @@ import (
// _ "k8s.io/client-go/plugin/pkg/client/auth/azure"
// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
// _ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
)
func main() {

View File

@ -1,36 +0,0 @@
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"errors"
"k8s.io/client-go/rest"
"k8s.io/klog/v2"
)
func init() {
if err := rest.RegisterAuthProviderPlugin("openstack", newOpenstackAuthProvider); err != nil {
klog.Fatalf("Failed to register openstack auth plugin: %s", err)
}
}
func newOpenstackAuthProvider(_ string, _ map[string]string, _ rest.AuthProviderConfigPersister) (rest.AuthProvider, error) {
return nil, errors.New(`The openstack auth plugin has been removed.
Please use the "client-keystone-auth" kubectl/client-go credential plugin instead.
See https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-client-keystone-auth.md for further details`)
}
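// For reference, a minimal sketch of the replacement path (illustrative only:
// assumes the client-keystone-auth binary from cloud-provider-openstack is on
// PATH and uses client-go's exec credential plugin support):
//
//	import (
//		"k8s.io/client-go/rest"
//		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
//	)
//
//	cfg := &rest.Config{Host: "https://cluster.example:6443"}
//	cfg.ExecProvider = &clientcmdapi.ExecConfig{
//		APIVersion: "client.authentication.k8s.io/v1beta1",
//		Command:    "client-keystone-auth",
//	}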

View File

@ -23,5 +23,4 @@ import (
// Initialize client auth plugins for cloud providers.
_ "k8s.io/client-go/plugin/pkg/client/auth/azure"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
)

View File

@ -43,7 +43,6 @@ var (
{"aws", false, "The AWS provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes/cloud-provider-aws"},
{"azure", false, "The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure"},
{"gce", false, "The GCE provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes/cloud-provider-gcp"},
{"openstack", true, "https://github.com/kubernetes/cloud-provider-openstack"},
{"vsphere", false, "The vSphere provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes/cloud-provider-vsphere"},
}
)

View File

@ -421,48 +421,6 @@ func TestTranslateTopologyFromCSIToInTree(t *testing.T) {
v1.LabelTopologyRegion: "us-east1",
},
},
{
name: "cinder translation",
key: CinderTopologyKey,
expErr: false,
regionParser: nil,
pv: &v1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "cinder", Namespace: "myns",
},
Spec: v1.PersistentVolumeSpec{
NodeAffinity: &v1.VolumeNodeAffinity{
Required: &v1.NodeSelector{
NodeSelectorTerms: []v1.NodeSelectorTerm{
{
MatchExpressions: []v1.NodeSelectorRequirement{
{
Key: CinderTopologyKey,
Operator: v1.NodeSelectorOpIn,
Values: []string{"nova"},
},
},
},
},
},
},
},
},
expectedNodeSelectorTerms: []v1.NodeSelectorTerm{
{
MatchExpressions: []v1.NodeSelectorRequirement{
{
Key: v1.LabelTopologyZone,
Operator: v1.NodeSelectorOpIn,
Values: []string{"nova"},
},
},
},
},
expectedLabels: map[string]string{
v1.LabelTopologyZone: "nova",
},
},
}
for _, tc := range testCases {

View File

@ -1,184 +0,0 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package plugins
import (
"fmt"
"strings"
v1 "k8s.io/api/core/v1"
storage "k8s.io/api/storage/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
// CinderDriverName is the name of the CSI driver for Cinder
CinderDriverName = "cinder.csi.openstack.org"
// CinderTopologyKey is the zonal topology key for Cinder CSI Driver
CinderTopologyKey = "topology.cinder.csi.openstack.org/zone"
// CinderInTreePluginName is the name of the intree plugin for Cinder
CinderInTreePluginName = "kubernetes.io/cinder"
)
var _ InTreePlugin = (*osCinderCSITranslator)(nil)
// osCinderCSITranslator handles translation of PV spec from In-tree Cinder to CSI Cinder and vice versa
type osCinderCSITranslator struct{}
// NewOpenStackCinderCSITranslator returns a new instance of osCinderCSITranslator
func NewOpenStackCinderCSITranslator() InTreePlugin {
return &osCinderCSITranslator{}
}
// TranslateInTreeStorageClassToCSI translates InTree Cinder storage class parameters to CSI storage class
func (t *osCinderCSITranslator) TranslateInTreeStorageClassToCSI(sc *storage.StorageClass) (*storage.StorageClass, error) {
var (
params = map[string]string{}
)
for k, v := range sc.Parameters {
switch strings.ToLower(k) {
case fsTypeKey:
params[csiFsTypeKey] = v
default:
// All other parameters are supported by the CSI driver.
			// This also includes "availability"; therefore do not translate it to sc.AllowedTopologies
params[k] = v
}
}
if len(sc.AllowedTopologies) > 0 {
newTopologies, err := translateAllowedTopologies(sc.AllowedTopologies, CinderTopologyKey)
if err != nil {
return nil, fmt.Errorf("failed translating allowed topologies: %v", err)
}
sc.AllowedTopologies = newTopologies
}
sc.Parameters = params
return sc, nil
}
// TranslateInTreeInlineVolumeToCSI takes a Volume with Cinder set from in-tree
// and converts the Cinder source to a CSIPersistentVolumeSource
func (t *osCinderCSITranslator) TranslateInTreeInlineVolumeToCSI(volume *v1.Volume, podNamespace string) (*v1.PersistentVolume, error) {
if volume == nil || volume.Cinder == nil {
return nil, fmt.Errorf("volume is nil or Cinder not defined on volume")
}
cinderSource := volume.Cinder
pv := &v1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
// Must be unique per disk as it is used as the unique part of the
// staging path
Name: fmt.Sprintf("%s-%s", CinderDriverName, cinderSource.VolumeID),
},
Spec: v1.PersistentVolumeSpec{
PersistentVolumeSource: v1.PersistentVolumeSource{
CSI: &v1.CSIPersistentVolumeSource{
Driver: CinderDriverName,
VolumeHandle: cinderSource.VolumeID,
ReadOnly: cinderSource.ReadOnly,
FSType: cinderSource.FSType,
VolumeAttributes: map[string]string{},
},
},
AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
},
}
return pv, nil
}
// TranslateInTreePVToCSI takes a PV with Cinder set from in-tree
// and converts the Cinder source to a CSIPersistentVolumeSource
func (t *osCinderCSITranslator) TranslateInTreePVToCSI(pv *v1.PersistentVolume) (*v1.PersistentVolume, error) {
if pv == nil || pv.Spec.Cinder == nil {
return nil, fmt.Errorf("pv is nil or Cinder not defined on pv")
}
cinderSource := pv.Spec.Cinder
csiSource := &v1.CSIPersistentVolumeSource{
Driver: CinderDriverName,
VolumeHandle: cinderSource.VolumeID,
ReadOnly: cinderSource.ReadOnly,
FSType: cinderSource.FSType,
VolumeAttributes: map[string]string{},
}
if err := translateTopologyFromInTreeToCSI(pv, CinderTopologyKey); err != nil {
return nil, fmt.Errorf("failed to translate topology: %v", err)
}
pv.Spec.Cinder = nil
pv.Spec.CSI = csiSource
return pv, nil
}
// TranslateCSIPVToInTree takes a PV with CSIPersistentVolumeSource set and
// translates the Cinder CSI source to a Cinder In-tree source.
func (t *osCinderCSITranslator) TranslateCSIPVToInTree(pv *v1.PersistentVolume) (*v1.PersistentVolume, error) {
if pv == nil || pv.Spec.CSI == nil {
return nil, fmt.Errorf("pv is nil or CSI source not defined on pv")
}
csiSource := pv.Spec.CSI
cinderSource := &v1.CinderPersistentVolumeSource{
VolumeID: csiSource.VolumeHandle,
FSType: csiSource.FSType,
ReadOnly: csiSource.ReadOnly,
}
// translate CSI topology to In-tree topology for rollback compatibility.
// It is not possible to guess Cinder Region from the Zone, therefore leave it empty.
if err := translateTopologyFromCSIToInTree(pv, CinderTopologyKey, nil); err != nil {
return nil, fmt.Errorf("failed to translate topology. PV:%+v. Error:%v", *pv, err)
}
pv.Spec.CSI = nil
pv.Spec.Cinder = cinderSource
return pv, nil
}
// CanSupport tests whether the plugin supports a given persistent volume
// specification from the API. The spec pointer should be considered
// const.
func (t *osCinderCSITranslator) CanSupport(pv *v1.PersistentVolume) bool {
return pv != nil && pv.Spec.Cinder != nil
}
// CanSupportInline tests whether the plugin supports a given inline volume
// specification from the API. The spec pointer should be considered
// const.
func (t *osCinderCSITranslator) CanSupportInline(volume *v1.Volume) bool {
return volume != nil && volume.Cinder != nil
}
// GetInTreePluginName returns the name of the intree plugin driver
func (t *osCinderCSITranslator) GetInTreePluginName() string {
return CinderInTreePluginName
}
// GetCSIPluginName returns the name of the CSI plugin
func (t *osCinderCSITranslator) GetCSIPluginName() string {
return CinderDriverName
}
func (t *osCinderCSITranslator) RepairVolumeHandle(volumeHandle, nodeID string) (string, error) {
return volumeHandle, nil
}
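// Usage sketch (illustrative; uses only the identifiers defined above): the
// translation is symmetric, so an in-tree Cinder PV can round-trip through
// the CSI form, losing only the Region (which cannot be inferred from the Zone):
//
//	t := NewOpenStackCinderCSITranslator()
//	csiPV, err := t.TranslateInTreePVToCSI(pv)       // sets pv.Spec.CSI, clears pv.Spec.Cinder
//	...
//	inTreePV, err := t.TranslateCSIPVToInTree(csiPV) // restores pv.Spec.Cinder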

View File

@ -1,80 +0,0 @@
/*
Copyright 2021 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package plugins
import (
"reflect"
"testing"
v1 "k8s.io/api/core/v1"
storage "k8s.io/api/storage/v1"
)
func TestTranslateCinderInTreeStorageClassToCSI(t *testing.T) {
translator := NewOpenStackCinderCSITranslator()
cases := []struct {
name string
sc *storage.StorageClass
expSc *storage.StorageClass
expErr bool
}{
{
name: "translate normal",
sc: NewStorageClass(map[string]string{"foo": "bar"}, nil),
expSc: NewStorageClass(map[string]string{"foo": "bar"}, nil),
},
{
name: "translate empty map",
sc: NewStorageClass(map[string]string{}, nil),
expSc: NewStorageClass(map[string]string{}, nil),
},
{
name: "translate with fstype",
sc: NewStorageClass(map[string]string{"fstype": "ext3"}, nil),
expSc: NewStorageClass(map[string]string{"csi.storage.k8s.io/fstype": "ext3"}, nil),
},
{
name: "translate with topology in parameters (no translation expected)",
sc: NewStorageClass(map[string]string{"availability": "nova"}, nil),
expSc: NewStorageClass(map[string]string{"availability": "nova"}, nil),
},
{
name: "translate with topology",
sc: NewStorageClass(map[string]string{}, generateToplogySelectors(v1.LabelFailureDomainBetaZone, []string{"nova"})),
expSc: NewStorageClass(map[string]string{}, generateToplogySelectors(CinderTopologyKey, []string{"nova"})),
},
}
for _, tc := range cases {
t.Logf("Testing %v", tc.name)
got, err := translator.TranslateInTreeStorageClassToCSI(tc.sc)
if err != nil && !tc.expErr {
t.Errorf("Did not expect error but got: %v", err)
}
if err == nil && tc.expErr {
t.Errorf("Expected error, but did not get one.")
}
if !reflect.DeepEqual(got, tc.expSc) {
t.Errorf("Got parameters: %v, expected: %v", got, tc.expSc)
}
}
}

View File

@ -29,7 +29,6 @@ var (
inTreePlugins = map[string]plugins.InTreePlugin{
plugins.GCEPDDriverName: plugins.NewGCEPersistentDiskCSITranslator(),
plugins.AWSEBSDriverName: plugins.NewAWSElasticBlockStoreCSITranslator(),
plugins.CinderDriverName: plugins.NewOpenStackCinderCSITranslator(),
plugins.AzureDiskDriverName: plugins.NewAzureDiskCSITranslator(),
plugins.AzureFileDriverName: plugins.NewAzureFileCSITranslator(),
plugins.VSphereDriverName: plugins.NewvSphereCSITranslator(),

View File

@ -189,17 +189,6 @@ func TestTopologyTranslation(t *testing.T) {
pv: makeAWSEBSPV(kubernetesGATopologyLabels, makeTopology(v1.LabelTopologyZone, "us-east-2a")),
expectedNodeAffinity: makeNodeAffinity(false /*multiTerms*/, plugins.AWSEBSTopologyKey, "us-east-2a"),
},
// Cinder test cases: test mostly the topology key, i.e., don't repeat testing done with GCE
{
name: "OpenStack Cinder with zone labels",
pv: makeCinderPV(kubernetesBetaTopologyLabels, nil /*topology*/),
expectedNodeAffinity: makeNodeAffinity(false /*multiTerms*/, plugins.CinderTopologyKey, "us-east-1a"),
},
{
name: "OpenStack Cinder with zone labels and topology",
pv: makeCinderPV(kubernetesBetaTopologyLabels, makeTopology(v1.LabelFailureDomainBetaZone, "us-east-2a")),
expectedNodeAffinity: makeNodeAffinity(false /*multiTerms*/, plugins.CinderTopologyKey, "us-east-2a"),
},
}
for _, test := range testCases {
@ -302,18 +291,6 @@ func makeAWSEBSPV(labels map[string]string, topology *v1.NodeSelectorRequirement
return pv
}
func makeCinderPV(labels map[string]string, topology *v1.NodeSelectorRequirement) *v1.PersistentVolume {
pv := makePV(labels, topology)
pv.Spec.PersistentVolumeSource = v1.PersistentVolumeSource{
Cinder: &v1.CinderPersistentVolumeSource{
VolumeID: "vol1",
FSType: "ext4",
ReadOnly: false,
},
}
return pv
}
func makeNodeAffinity(multiTerms bool, key string, values ...string) *v1.VolumeNodeAffinity {
nodeAffinity := &v1.VolumeNodeAffinity{
Required: &v1.NodeSelector{
@ -412,12 +389,6 @@ func generateUniqueVolumeSource(driverName string) (v1.VolumeSource, error) {
},
}, nil
case plugins.CinderDriverName:
return v1.VolumeSource{
Cinder: &v1.CinderVolumeSource{
VolumeID: string(uuid.NewUUID()),
},
}, nil
case plugins.AzureDiskDriverName:
return v1.VolumeSource{
AzureDisk: &v1.AzureDiskVolumeSource{

View File

@ -947,8 +947,6 @@ func describeVolumes(volumes []corev1.Volume, w PrefixWriter, space string) {
printAzureDiskVolumeSource(volume.VolumeSource.AzureDisk, w)
case volume.VolumeSource.VsphereVolume != nil:
printVsphereVolumeSource(volume.VolumeSource.VsphereVolume, w)
case volume.VolumeSource.Cinder != nil:
printCinderVolumeSource(volume.VolumeSource.Cinder, w)
case volume.VolumeSource.PhotonPersistentDisk != nil:
printPhotonPersistentDiskVolumeSource(volume.VolumeSource.PhotonPersistentDisk, w)
case volume.VolumeSource.PortworxVolume != nil:
@ -1227,24 +1225,6 @@ func printPhotonPersistentDiskVolumeSource(photon *corev1.PhotonPersistentDiskVo
photon.PdID, photon.FSType)
}
func printCinderVolumeSource(cinder *corev1.CinderVolumeSource, w PrefixWriter) {
w.Write(LEVEL_2, "Type:\tCinder (a Persistent Disk resource in OpenStack)\n"+
" VolumeID:\t%v\n"+
" FSType:\t%v\n"+
" ReadOnly:\t%v\n"+
" SecretRef:\t%v\n",
cinder.VolumeID, cinder.FSType, cinder.ReadOnly, cinder.SecretRef)
}
func printCinderPersistentVolumeSource(cinder *corev1.CinderPersistentVolumeSource, w PrefixWriter) {
w.Write(LEVEL_2, "Type:\tCinder (a Persistent Disk resource in OpenStack)\n"+
" VolumeID:\t%v\n"+
" FSType:\t%v\n"+
" ReadOnly:\t%v\n"+
" SecretRef:\t%v\n",
cinder.VolumeID, cinder.FSType, cinder.ReadOnly, cinder.SecretRef)
}
func printScaleIOVolumeSource(sio *corev1.ScaleIOVolumeSource, w PrefixWriter) {
w.Write(LEVEL_2, "Type:\tScaleIO (a persistent volume backed by a block device in ScaleIO)\n"+
" Gateway:\t%v\n"+
@ -1562,8 +1542,6 @@ func describePersistentVolume(pv *corev1.PersistentVolume, events *corev1.EventL
printQuobyteVolumeSource(pv.Spec.Quobyte, w)
case pv.Spec.VsphereVolume != nil:
printVsphereVolumeSource(pv.Spec.VsphereVolume, w)
case pv.Spec.Cinder != nil:
printCinderPersistentVolumeSource(pv.Spec.Cinder, w)
case pv.Spec.AzureDisk != nil:
printAzureDiskVolumeSource(pv.Spec.AzureDisk, w)
case pv.Spec.PhotonPersistentDisk != nil:

View File

@ -1483,19 +1483,6 @@ func TestPersistentVolumeDescriber(t *testing.T) {
},
unexpectedElements: []string{"VolumeMode", "Filesystem"},
},
{
name: "test8",
plugin: "cinder",
pv: &corev1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{Name: "bar"},
Spec: corev1.PersistentVolumeSpec{
PersistentVolumeSource: corev1.PersistentVolumeSource{
Cinder: &corev1.CinderPersistentVolumeSource{},
},
},
},
unexpectedElements: []string{"VolumeMode", "Filesystem"},
},
{
name: "test9",
plugin: "fc",

View File

@ -15,8 +15,6 @@ require (
github.com/aws/aws-sdk-go v1.38.49
github.com/golang/mock v1.6.0
github.com/google/go-cmp v0.5.6
github.com/gophercloud/gophercloud v0.1.0
github.com/mitchellh/mapstructure v1.4.1
github.com/rubiojr/go-vhd v0.0.0-20200706105327-02e210299021
github.com/stretchr/testify v1.7.0
github.com/vmware/govmomi v0.20.3
@ -31,7 +29,6 @@ require (
k8s.io/component-base v0.0.0
k8s.io/csi-translation-lib v0.0.0
k8s.io/klog/v2 v2.70.1
k8s.io/mount-utils v0.0.0
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed
sigs.k8s.io/yaml v1.2.0
)
@ -64,13 +61,11 @@ require (
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/googleapis/gax-go/v2 v2.1.1 // indirect
github.com/imdario/mergo v0.3.6 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.6.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
@ -112,5 +107,4 @@ replace (
k8s.io/controller-manager => ../controller-manager
k8s.io/csi-translation-lib => ../csi-translation-lib
k8s.io/legacy-cloud-providers => ../legacy-cloud-providers
k8s.io/mount-utils => ../mount-utils
)

View File

@ -238,15 +238,11 @@ github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5m
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
github.com/googleapis/gax-go/v2 v2.1.1 h1:dp3bWCh+PPO1zjRRiCSczJav13sBvG4UhNyVTa1KqdU=
github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
github.com/gophercloud/gophercloud v0.1.0 h1:P/nh25+rzXouhytV2pUHBb65fnds26Ghl8/391+sT5o=
github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28=
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
@ -280,10 +276,6 @@ github.com/mailru/easyjson v0.7.6 h1:8yTIVnZgCoiM1TgqoeTl+LfU5Jg6/xL3QhGQnimLYnA
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/mapstructure v1.4.1 h1:CpVNEelQCZBooIPDn+AR3NpivK/TIKU8bDxdASFVQag=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/sys/mountinfo v0.6.0 h1:gUDhXQx58YNrpHlK4nSL+7y2pxFZkUcXqzFDKWdC0Oo=
github.com/moby/sys/mountinfo v0.6.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@ -367,7 +359,6 @@ go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
@ -486,7 +477,6 @@ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190209173611-3b5209105503/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=

View File

@ -1,4 +0,0 @@
# Maintainers
* [Angus Lees](https://github.com/anguslees)

View File

@ -1,13 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
# We are no longer accepting features into k8s.io/legacy-cloud-providers.
# Any kind/feature PRs must be approved by SIG Cloud Provider going forward.
emeritus_approvers:
- anguslees
- NickrenREN
- dims
- FengyunPan2
reviewers:
- anguslees
- NickrenREN
- dims

View File

@ -1,201 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"strings"
"k8s.io/klog/v2"
"k8s.io/mount-utils"
"k8s.io/utils/exec"
)
const (
// metadataURLTemplate allows building an OpenStack Metadata service URL.
// It's a hardcoded IPv4 link-local address as documented in "OpenStack Cloud
// Administrator Guide", chapter Compute - Networking with nova-network.
//https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service
defaultMetadataVersion = "2012-08-10"
metadataURLTemplate = "http://169.254.169.254/openstack/%s/meta_data.json"
// metadataID is used as an identifier on the metadata search order configuration.
metadataID = "metadataService"
// Config drive is defined as an iso9660 or vfat (deprecated) drive
// with the "config-2" label.
//https://docs.openstack.org/nova/latest/user/config-drive.html
configDriveLabel = "config-2"
configDrivePathTemplate = "openstack/%s/meta_data.json"
// configDriveID is used as an identifier on the metadata search order configuration.
configDriveID = "configDrive"
)
// ErrBadMetadata is used to indicate a problem parsing data from metadata server
var ErrBadMetadata = errors.New("invalid OpenStack metadata, got empty uuid")
// DeviceMetadata is a single/simplified data structure for all kinds of device metadata types.
type DeviceMetadata struct {
Type string `json:"type"`
Bus string `json:"bus,omitempty"`
Serial string `json:"serial,omitempty"`
Address string `json:"address,omitempty"`
// .. and other fields.
}
// Metadata has the information fetched from OpenStack metadata service or
// config drives. Assumes the "2012-08-10" meta_data.json format.
// See http://docs.openstack.org/user-guide/cli_config_drive.html
type Metadata struct {
UUID string `json:"uuid"`
Name string `json:"name"`
AvailabilityZone string `json:"availability_zone"`
Devices []DeviceMetadata `json:"devices,omitempty"`
// .. and other fields we don't care about. Expand as necessary.
}
// parseMetadata decodes an OpenStack meta_data.json document from r and
// validates that it carries an instance UUID.
func parseMetadata(r io.Reader) (*Metadata, error) {
var metadata Metadata
json := json.NewDecoder(r)
if err := json.Decode(&metadata); err != nil {
return nil, err
}
if metadata.UUID == "" {
return nil, ErrBadMetadata
}
return &metadata, nil
}
func getMetadataURL(metadataVersion string) string {
return fmt.Sprintf(metadataURLTemplate, metadataVersion)
}
func getConfigDrivePath(metadataVersion string) string {
return fmt.Sprintf(configDrivePathTemplate, metadataVersion)
}
func getMetadataFromConfigDrive(metadataVersion string) (*Metadata, error) {
// Try to read instance UUID from config drive.
dev := "/dev/disk/by-label/" + configDriveLabel
if _, err := os.Stat(dev); os.IsNotExist(err) {
out, err := exec.New().Command(
"blkid", "-l",
"-t", "LABEL="+configDriveLabel,
"-o", "device",
).CombinedOutput()
if err != nil {
return nil, fmt.Errorf("unable to run blkid: %v", err)
}
dev = strings.TrimSpace(string(out))
}
mntdir, err := ioutil.TempDir("", "configdrive")
if err != nil {
return nil, err
}
defer os.Remove(mntdir)
klog.V(4).Infof("Attempting to mount configdrive %s on %s", dev, mntdir)
mounter := mount.New("" /* default mount path */)
err = mounter.Mount(dev, mntdir, "iso9660", []string{"ro"})
if err != nil {
err = mounter.Mount(dev, mntdir, "vfat", []string{"ro"})
}
if err != nil {
return nil, fmt.Errorf("error mounting configdrive %s: %v", dev, err)
}
defer mounter.Unmount(mntdir)
klog.V(4).Infof("Configdrive mounted on %s", mntdir)
configDrivePath := getConfigDrivePath(metadataVersion)
f, err := os.Open(
filepath.Join(mntdir, configDrivePath))
if err != nil {
return nil, fmt.Errorf("error reading %s on config drive: %v", configDrivePath, err)
}
defer f.Close()
return parseMetadata(f)
}
func getMetadataFromMetadataService(metadataVersion string) (*Metadata, error) {
// Try to get JSON from metadata server.
metadataURL := getMetadataURL(metadataVersion)
klog.V(4).Infof("Attempting to fetch metadata from %s", metadataURL)
resp, err := http.Get(metadataURL)
if err != nil {
return nil, fmt.Errorf("error fetching %s: %v", metadataURL, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
err = fmt.Errorf("unexpected status code when reading metadata from %s: %s", metadataURL, resp.Status)
return nil, err
}
return parseMetadata(resp.Body)
}
// Metadata is fixed for the current host, so cache the value process-wide
var metadataCache *Metadata
func getMetadata(order string) (*Metadata, error) {
if metadataCache == nil {
var md *Metadata
var err error
elements := strings.Split(order, ",")
for _, id := range elements {
id = strings.TrimSpace(id)
switch id {
case configDriveID:
md, err = getMetadataFromConfigDrive(defaultMetadataVersion)
case metadataID:
md, err = getMetadataFromMetadataService(defaultMetadataVersion)
default:
err = fmt.Errorf("%s is not a valid metadata search order option. Supported options are %s and %s", id, configDriveID, metadataID)
}
if err == nil {
break
}
}
if err != nil {
return nil, err
}
metadataCache = md
}
return metadataCache, nil
}
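// Usage sketch (illustrative; identifiers as defined above): callers pass the
// configured search order and get the process-wide cached result:
//
//	md, err := getMetadata(configDriveID + "," + metadataID) // "configDrive,metadataService"
//	if err == nil {
//		klog.V(4).Infof("instance %s is in zone %s", md.UUID, md.AvailabilityZone)
//	}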

View File

@ -1,118 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"strings"
"testing"
)
var FakeMetadata = Metadata{
UUID: "83679162-1378-4288-a2d4-70e13ec132aa",
Name: "test",
AvailabilityZone: "nova",
}
func SetMetadataFixture(value *Metadata) {
metadataCache = value
}
func ClearMetadata() {
metadataCache = nil
}
func TestParseMetadata(t *testing.T) {
_, err := parseMetadata(strings.NewReader("bogus"))
if err == nil {
t.Errorf("Should fail when bad data is provided: %s", err)
}
data := strings.NewReader(`
{
"availability_zone": "nova",
"files": [
{
"content_path": "/content/0000",
"path": "/etc/network/interfaces"
},
{
"content_path": "/content/0001",
"path": "known_hosts"
}
],
"hostname": "test.novalocal",
"launch_index": 0,
"name": "test",
"meta": {
"role": "webservers",
"essential": "false"
},
"public_keys": {
"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
},
"uuid": "83679162-1378-4288-a2d4-70e13ec132aa",
"devices": [
{
"bus": "scsi",
"serial": "6df1888b-f373-41cf-b960-3786e60a28ef",
"tags": ["fake_tag"],
"type": "disk",
"address": "0:0:0:0"
}
]
}
`)
md, err := parseMetadata(data)
if err != nil {
t.Fatalf("Should succeed when provided with valid data: %s", err)
}
if md.Name != "test" {
t.Errorf("incorrect name: %s", md.Name)
}
if md.UUID != "83679162-1378-4288-a2d4-70e13ec132aa" {
t.Errorf("incorrect uuid: %s", md.UUID)
}
if md.AvailabilityZone != "nova" {
t.Errorf("incorrect az: %s", md.AvailabilityZone)
}
if len(md.Devices) != 1 {
t.Errorf("expecting to find 1 device, found %d", len(md.Devices))
}
if md.Devices[0].Bus != "scsi" {
t.Errorf("incorrect disk bus: %s", md.Devices[0].Bus)
}
if md.Devices[0].Address != "0:0:0:0" {
t.Errorf("incorrect disk address: %s", md.Devices[0].Address)
}
if md.Devices[0].Type != "disk" {
t.Errorf("incorrect device type: %s", md.Devices[0].Type)
}
if md.Devices[0].Serial != "6df1888b-f373-41cf-b960-3786e60a28ef" {
t.Errorf("incorrect device serial: %s", md.Devices[0].Serial)
}
}

View File

@ -1,949 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"context"
"crypto/tls"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"reflect"
"regexp"
"strings"
"time"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/attachinterfaces"
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
"github.com/gophercloud/gophercloud/openstack/identity/v3/extensions/trusts"
tokens3 "github.com/gophercloud/gophercloud/openstack/identity/v3/tokens"
"github.com/gophercloud/gophercloud/pagination"
"github.com/mitchellh/mapstructure"
"gopkg.in/gcfg.v1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
netutil "k8s.io/apimachinery/pkg/util/net"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
certutil "k8s.io/client-go/util/cert"
cloudprovider "k8s.io/cloud-provider"
nodehelpers "k8s.io/cloud-provider/node/helpers"
"k8s.io/klog/v2"
netutils "k8s.io/utils/net"
)
const (
// ProviderName is the name of the openstack provider
ProviderName = "openstack"
// TypeHostName is the metadata key for an OpenStack instance's hostname
TypeHostName = "hostname"
availabilityZone = "availability_zone"
defaultTimeOut = 60 * time.Second
)
// ErrNotFound is used to inform that the object is missing
var ErrNotFound = errors.New("failed to find object")
// ErrMultipleResults is used when we unexpectedly get back multiple results
var ErrMultipleResults = errors.New("multiple results where only one expected")
// ErrNoAddressFound is used when we cannot find an ip address for the host
var ErrNoAddressFound = errors.New("no address found for host")
// MyDuration wraps time.Duration so it implements encoding.TextUnmarshaler
type MyDuration struct {
time.Duration
}
// UnmarshalText is used to convert from text to Duration
func (d *MyDuration) UnmarshalText(text []byte) error {
res, err := time.ParseDuration(string(text))
if err != nil {
return err
}
d.Duration = res
return nil
}
// LoadBalancer is used for creating and maintaining load balancers
type LoadBalancer struct {
network *gophercloud.ServiceClient
compute *gophercloud.ServiceClient
lb *gophercloud.ServiceClient
opts LoadBalancerOpts
}
// LoadBalancerOpts have the options to talk to Neutron LBaaSV2 or Octavia
type LoadBalancerOpts struct {
LBVersion string `gcfg:"lb-version"` // overrides autodetection. Only v2 is supported.
UseOctavia bool `gcfg:"use-octavia"` // uses Octavia V2 service catalog endpoint
SubnetID string `gcfg:"subnet-id"` // overrides autodetection.
FloatingNetworkID string `gcfg:"floating-network-id"` // If specified, a floating IP is created for the load balancer; otherwise none is created.
LBMethod string `gcfg:"lb-method"` // default to ROUND_ROBIN.
LBProvider string `gcfg:"lb-provider"`
CreateMonitor bool `gcfg:"create-monitor"`
MonitorDelay MyDuration `gcfg:"monitor-delay"`
MonitorTimeout MyDuration `gcfg:"monitor-timeout"`
MonitorMaxRetries uint `gcfg:"monitor-max-retries"`
ManageSecurityGroups bool `gcfg:"manage-security-groups"`
NodeSecurityGroupIDs []string // Do not specify; discovered automatically when manage-security-groups is enabled. TODO(FengyunPan): move it into cache
}
// BlockStorageOpts is used to talk to Cinder service
type BlockStorageOpts struct {
BSVersion string `gcfg:"bs-version"` // overrides autodetection. v1 or v2. Defaults to auto
TrustDevicePath bool `gcfg:"trust-device-path"` // See Issue #33128
IgnoreVolumeAZ bool `gcfg:"ignore-volume-az"`
NodeVolumeAttachLimit int `gcfg:"node-volume-attach-limit"` // overrides the volume attach limit for Cinder. Default is 256.
}
// RouterOpts is used for Neutron routes
type RouterOpts struct {
RouterID string `gcfg:"router-id"` // required
}
// MetadataOpts is used for configuring how to talk to metadata service or config drive
type MetadataOpts struct {
SearchOrder string `gcfg:"search-order"`
RequestTimeout MyDuration `gcfg:"request-timeout"`
}
var _ cloudprovider.Interface = (*OpenStack)(nil)
var _ cloudprovider.Zones = (*OpenStack)(nil)
// OpenStack is an implementation of cloud provider Interface for OpenStack.
type OpenStack struct {
provider *gophercloud.ProviderClient
region string
lbOpts LoadBalancerOpts
bsOpts BlockStorageOpts
routeOpts RouterOpts
metadataOpts MetadataOpts
// InstanceID of the server where this OpenStack object is instantiated.
localInstanceID string
}
// Config is used to read and store information from the cloud configuration file
// NOTE: Cloud config files should follow the same Kubernetes deprecation policy as
// flags or CLIs. Config fields should not change behavior in incompatible ways and
// should be deprecated for at least 2 releases prior to removal.
// See https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli
// for more details.
type Config struct {
Global struct {
AuthURL string `gcfg:"auth-url"`
Username string
UserID string `gcfg:"user-id"`
Password string `datapolicy:"password"`
TenantID string `gcfg:"tenant-id"`
TenantName string `gcfg:"tenant-name"`
TrustID string `gcfg:"trust-id"`
DomainID string `gcfg:"domain-id"`
DomainName string `gcfg:"domain-name"`
Region string
CAFile string `gcfg:"ca-file"`
SecretName string `gcfg:"secret-name"`
SecretNamespace string `gcfg:"secret-namespace"`
KubeconfigPath string `gcfg:"kubeconfig-path"`
}
LoadBalancer LoadBalancerOpts
BlockStorage BlockStorageOpts
Route RouterOpts
Metadata MetadataOpts
}
func init() {
registerMetrics()
cloudprovider.RegisterCloudProvider(ProviderName, func(config io.Reader) (cloudprovider.Interface, error) {
cfg, err := readConfig(config)
if err != nil {
return nil, err
}
return newOpenStack(cfg)
})
}
func (cfg Config) toAuthOptions() gophercloud.AuthOptions {
return gophercloud.AuthOptions{
IdentityEndpoint: cfg.Global.AuthURL,
Username: cfg.Global.Username,
UserID: cfg.Global.UserID,
Password: cfg.Global.Password,
TenantID: cfg.Global.TenantID,
TenantName: cfg.Global.TenantName,
DomainID: cfg.Global.DomainID,
DomainName: cfg.Global.DomainName,
// Persistent service, so we need to be able to renew tokens.
AllowReauth: true,
}
}
func (cfg Config) toAuth3Options() tokens3.AuthOptions {
return tokens3.AuthOptions{
IdentityEndpoint: cfg.Global.AuthURL,
Username: cfg.Global.Username,
UserID: cfg.Global.UserID,
Password: cfg.Global.Password,
DomainID: cfg.Global.DomainID,
DomainName: cfg.Global.DomainName,
AllowReauth: true,
}
}
// configFromEnv allows setting up credentials etc using the
// standard OS_* OpenStack client environment variables.
func configFromEnv() (cfg Config, ok bool) {
cfg.Global.AuthURL = os.Getenv("OS_AUTH_URL")
cfg.Global.Username = os.Getenv("OS_USERNAME")
cfg.Global.Region = os.Getenv("OS_REGION_NAME")
cfg.Global.UserID = os.Getenv("OS_USER_ID")
cfg.Global.TrustID = os.Getenv("OS_TRUST_ID")
cfg.Global.TenantID = os.Getenv("OS_TENANT_ID")
if cfg.Global.TenantID == "" {
cfg.Global.TenantID = os.Getenv("OS_PROJECT_ID")
}
cfg.Global.TenantName = os.Getenv("OS_TENANT_NAME")
if cfg.Global.TenantName == "" {
cfg.Global.TenantName = os.Getenv("OS_PROJECT_NAME")
}
cfg.Global.DomainID = os.Getenv("OS_DOMAIN_ID")
if cfg.Global.DomainID == "" {
cfg.Global.DomainID = os.Getenv("OS_USER_DOMAIN_ID")
}
cfg.Global.DomainName = os.Getenv("OS_DOMAIN_NAME")
if cfg.Global.DomainName == "" {
cfg.Global.DomainName = os.Getenv("OS_USER_DOMAIN_NAME")
}
cfg.Global.SecretName = os.Getenv("SECRET_NAME")
cfg.Global.SecretNamespace = os.Getenv("SECRET_NAMESPACE")
cfg.Global.KubeconfigPath = os.Getenv("KUBECONFIG_PATH")
ok = cfg.Global.AuthURL != "" &&
cfg.Global.Username != "" &&
cfg.Global.Password != "" &&
(cfg.Global.TenantID != "" || cfg.Global.TenantName != "" ||
cfg.Global.DomainID != "" || cfg.Global.DomainName != "" ||
cfg.Global.Region != "" || cfg.Global.UserID != "" ||
cfg.Global.TrustID != "")
cfg.Metadata.SearchOrder = fmt.Sprintf("%s,%s", configDriveID, metadataID)
cfg.BlockStorage.BSVersion = "auto"
return
}
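// Usage sketch (illustrative values): with the standard client environment set,
// e.g. OS_AUTH_URL=https://keystone.example:5000/v3, OS_USERNAME=demo,
// OS_PASSWORD=... and OS_TENANT_NAME=demo, configFromEnv returns ok == true
// and the resulting Config can be passed to newOpenStack.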
func createKubernetesClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
klog.Info("Creating kubernetes API client.")
cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
return nil, err
}
cfg.DisableCompression = true
client, err := kubernetes.NewForConfig(cfg)
if err != nil {
return nil, err
}
v, err := client.Discovery().ServerVersion()
if err != nil {
return nil, err
}
klog.Infof("Kubernetes API client created, server version %s", fmt.Sprintf("v%v.%v", v.Major, v.Minor))
return client, nil
}
// setConfigFromSecret allows setting up the config from k8s secret
func setConfigFromSecret(cfg *Config) error {
secretName := cfg.Global.SecretName
secretNamespace := cfg.Global.SecretNamespace
kubeconfigPath := cfg.Global.KubeconfigPath
k8sClient, err := createKubernetesClient(kubeconfigPath)
if err != nil {
return fmt.Errorf("failed to get kubernetes client: %v", err)
}
secret, err := k8sClient.CoreV1().Secrets(secretNamespace).Get(context.TODO(), secretName, metav1.GetOptions{})
if err != nil {
klog.Warningf("Cannot get secret %s in namespace %s. error: %q", secretName, secretNamespace, err)
return err
}
if content, ok := secret.Data["clouds.conf"]; ok {
err = gcfg.ReadStringInto(cfg, string(content))
if err != nil {
klog.Error("Cannot parse data from the secret.")
return fmt.Errorf("cannot parse data from the secret")
}
return nil
}
klog.Error("Cannot find \"clouds.conf\" key in the secret.")
return fmt.Errorf("cannot find \"clouds.conf\" key in the secret")
}
func readConfig(config io.Reader) (Config, error) {
if config == nil {
return Config{}, fmt.Errorf("no OpenStack cloud provider config file given")
}
cfg, _ := configFromEnv()
// Set default values for config params
cfg.BlockStorage.BSVersion = "auto"
cfg.BlockStorage.TrustDevicePath = false
cfg.BlockStorage.IgnoreVolumeAZ = false
cfg.Metadata.SearchOrder = fmt.Sprintf("%s,%s", configDriveID, metadataID)
err := gcfg.ReadInto(&cfg, config)
if err != nil {
// Warn instead of failing on non-fatal config parsing errors.
		// This is important during the transition to the external CCM:
		// user-managed configuration may be shared between KCM (using the
		// legacy cloud provider) and CCM (using the external cloud provider).
// We do not want to prevent KCM from starting if the user adds
// new configuration which is only present in OpenStack CCM.
if gcfg.FatalOnly(err) == nil {
klog.Warningf("Non-fatal error parsing OpenStack cloud config. "+
"This may happen when passing config directives exclusive to OpenStack CCM to the legacy cloud provider. "+
"Legacy cloud provider has correctly parsed all directives it knows about: %s", err)
} else {
return cfg, err
}
}
if cfg.Global.SecretName != "" && cfg.Global.SecretNamespace != "" {
klog.Infof("Set credentials from secret %s in namespace %s", cfg.Global.SecretName, cfg.Global.SecretNamespace)
err = setConfigFromSecret(&cfg)
if err != nil {
return cfg, err
}
}
return cfg, nil
}
// caller is a tiny helper for conditional unwind logic
type caller bool
func newCaller() caller { return caller(true) }
func (c *caller) disarm() { *c = false }
func (c *caller) call(f func()) {
if *c {
f()
}
}
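// Usage sketch (illustrative): arm a cleanup that runs on early return and
// disarm it once the function has succeeded:
//
//	c := newCaller()
//	defer c.call(func() { cleanup() })
//	// ... steps that may return early ...
//	c.disarm() // success: the deferred cleanup becomes a no-op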
func readInstanceID(searchOrder string) (string, error) {
// Try to find instance ID on the local filesystem (created by cloud-init)
const instanceIDFile = "/var/lib/cloud/data/instance-id"
idBytes, err := ioutil.ReadFile(instanceIDFile)
if err == nil {
instanceID := string(idBytes)
instanceID = strings.TrimSpace(instanceID)
klog.V(3).Infof("Got instance id from %s: %s", instanceIDFile, instanceID)
if instanceID != "" {
return instanceID, nil
}
// Fall through to metadata server lookup
}
md, err := getMetadata(searchOrder)
if err != nil {
return "", err
}
return md.UUID, nil
}
// check opts for OpenStack
func checkOpenStackOpts(openstackOpts *OpenStack) error {
lbOpts := openstackOpts.lbOpts
	// if a health monitor is to be created for the Neutron LB,
	// monitor-delay, monitor-timeout and monitor-max-retries must be set.
emptyDuration := MyDuration{}
if lbOpts.CreateMonitor {
if lbOpts.MonitorDelay == emptyDuration {
return fmt.Errorf("monitor-delay not set in cloud provider config")
}
if lbOpts.MonitorTimeout == emptyDuration {
return fmt.Errorf("monitor-timeout not set in cloud provider config")
}
if lbOpts.MonitorMaxRetries == uint(0) {
return fmt.Errorf("monitor-max-retries not set in cloud provider config")
}
}
return checkMetadataSearchOrder(openstackOpts.metadataOpts.SearchOrder)
}
func newOpenStack(cfg Config) (*OpenStack, error) {
provider, err := openstack.NewClient(cfg.Global.AuthURL)
if err != nil {
return nil, err
}
if cfg.Global.CAFile != "" {
roots, err := certutil.NewPool(cfg.Global.CAFile)
if err != nil {
return nil, err
}
config := &tls.Config{}
config.RootCAs = roots
provider.HTTPClient.Transport = netutil.SetOldTransportDefaults(&http.Transport{TLSClientConfig: config})
}
if cfg.Global.TrustID != "" {
opts := cfg.toAuth3Options()
authOptsExt := trusts.AuthOptsExt{
TrustID: cfg.Global.TrustID,
AuthOptionsBuilder: &opts,
}
err = openstack.AuthenticateV3(provider, authOptsExt, gophercloud.EndpointOpts{})
} else {
err = openstack.Authenticate(provider, cfg.toAuthOptions())
}
if err != nil {
return nil, err
}
emptyDuration := MyDuration{}
if cfg.Metadata.RequestTimeout == emptyDuration {
cfg.Metadata.RequestTimeout.Duration = time.Duration(defaultTimeOut)
}
provider.HTTPClient.Timeout = cfg.Metadata.RequestTimeout.Duration
os := OpenStack{
provider: provider,
region: cfg.Global.Region,
lbOpts: cfg.LoadBalancer,
bsOpts: cfg.BlockStorage,
routeOpts: cfg.Route,
metadataOpts: cfg.Metadata,
}
err = checkOpenStackOpts(&os)
if err != nil {
return nil, err
}
return &os, nil
}
// NewFakeOpenStackCloud creates and returns an instance of the OpenStack cloud provider.
// Mainly for use in tests that require instantiating OpenStack without having
// to go through the cloud provider interface.
func NewFakeOpenStackCloud(cfg Config) (*OpenStack, error) {
provider, err := openstack.NewClient(cfg.Global.AuthURL)
if err != nil {
return nil, err
}
emptyDuration := MyDuration{}
if cfg.Metadata.RequestTimeout == emptyDuration {
cfg.Metadata.RequestTimeout.Duration = time.Duration(defaultTimeOut)
}
provider.HTTPClient.Timeout = cfg.Metadata.RequestTimeout.Duration
os := OpenStack{
provider: provider,
region: cfg.Global.Region,
lbOpts: cfg.LoadBalancer,
bsOpts: cfg.BlockStorage,
routeOpts: cfg.Route,
metadataOpts: cfg.Metadata,
}
return &os, nil
}
// Initialize passes a Kubernetes clientBuilder interface to the cloud provider
func (os *OpenStack) Initialize(clientBuilder cloudprovider.ControllerClientBuilder, stop <-chan struct{}) {
}
// mapNodeNameToServerName maps a k8s NodeName to an OpenStack Server Name
// This is a simple string cast.
func mapNodeNameToServerName(nodeName types.NodeName) string {
return string(nodeName)
}
// GetNodeNameByID maps instanceid to types.NodeName
func (os *OpenStack) GetNodeNameByID(instanceID string) (types.NodeName, error) {
client, err := os.NewComputeV2()
var nodeName types.NodeName
if err != nil {
return nodeName, err
}
server, err := servers.Get(client, instanceID).Extract()
if err != nil {
return nodeName, err
}
nodeName = mapServerToNodeName(server)
return nodeName, nil
}
// mapServerToNodeName maps an OpenStack Server to a k8s NodeName
func mapServerToNodeName(server *servers.Server) types.NodeName {
// Node names are always lowercase, and (at least)
// routecontroller does case-sensitive string comparisons
// assuming this
return types.NodeName(strings.ToLower(server.Name))
}
func foreachServer(client *gophercloud.ServiceClient, opts servers.ListOptsBuilder, handler func(*servers.Server) (bool, error)) error {
pager := servers.List(client, opts)
err := pager.EachPage(func(page pagination.Page) (bool, error) {
s, err := servers.ExtractServers(page)
if err != nil {
return false, err
}
for _, server := range s {
ok, err := handler(&server)
if !ok || err != nil {
return false, err
}
}
return true, nil
})
return err
}
func getServerByName(client *gophercloud.ServiceClient, name types.NodeName) (*servers.Server, error) {
opts := servers.ListOpts{
Name: fmt.Sprintf("^%s$", regexp.QuoteMeta(mapNodeNameToServerName(name))),
}
pager := servers.List(client, opts)
serverList := make([]servers.Server, 0, 1)
err := pager.EachPage(func(page pagination.Page) (bool, error) {
s, err := servers.ExtractServers(page)
if err != nil {
return false, err
}
serverList = append(serverList, s...)
if len(serverList) > 1 {
return false, ErrMultipleResults
}
return true, nil
})
if err != nil {
return nil, err
}
if len(serverList) == 0 {
return nil, ErrNotFound
}
return &serverList[0], nil
}
func nodeAddresses(srv *servers.Server) ([]v1.NodeAddress, error) {
addrs := []v1.NodeAddress{}
type Address struct {
IPType string `mapstructure:"OS-EXT-IPS:type"`
Addr string
}
var addresses map[string][]Address
err := mapstructure.Decode(srv.Addresses, &addresses)
if err != nil {
return nil, err
}
for network, addrList := range addresses {
for _, props := range addrList {
var addressType v1.NodeAddressType
if props.IPType == "floating" || network == "public" {
addressType = v1.NodeExternalIP
} else {
addressType = v1.NodeInternalIP
}
nodehelpers.AddToNodeAddresses(&addrs,
v1.NodeAddress{
Type: addressType,
Address: props.Addr,
},
)
}
}
// AccessIPs are usually duplicates of "public" addresses.
if srv.AccessIPv4 != "" {
nodehelpers.AddToNodeAddresses(&addrs,
v1.NodeAddress{
Type: v1.NodeExternalIP,
Address: srv.AccessIPv4,
},
)
}
if srv.AccessIPv6 != "" {
nodehelpers.AddToNodeAddresses(&addrs,
v1.NodeAddress{
Type: v1.NodeExternalIP,
Address: srv.AccessIPv6,
},
)
}
if srv.Metadata[TypeHostName] != "" {
nodehelpers.AddToNodeAddresses(&addrs,
v1.NodeAddress{
Type: v1.NodeHostName,
Address: srv.Metadata[TypeHostName],
},
)
}
return addrs, nil
}
func getAddressesByName(client *gophercloud.ServiceClient, name types.NodeName) ([]v1.NodeAddress, error) {
srv, err := getServerByName(client, name)
if err != nil {
return nil, err
}
return nodeAddresses(srv)
}
func getAddressByName(client *gophercloud.ServiceClient, name types.NodeName, needIPv6 bool) (string, error) {
addrs, err := getAddressesByName(client, name)
if err != nil {
return "", err
} else if len(addrs) == 0 {
return "", ErrNoAddressFound
}
for _, addr := range addrs {
isIPv6 := netutils.ParseIPSloppy(addr.Address).To4() == nil
if (addr.Type == v1.NodeInternalIP) && (isIPv6 == needIPv6) {
return addr.Address, nil
}
}
for _, addr := range addrs {
isIPv6 := netutils.ParseIPSloppy(addr.Address).To4() == nil
if (addr.Type == v1.NodeExternalIP) && (isIPv6 == needIPv6) {
return addr.Address, nil
}
}
// It should never return an address from a different IP Address family than the one needed
return "", ErrNoAddressFound
}
// getAttachedInterfacesByID returns the node interfaces of the specified instance.
func getAttachedInterfacesByID(client *gophercloud.ServiceClient, serverID string) ([]attachinterfaces.Interface, error) {
var interfaces []attachinterfaces.Interface
pager := attachinterfaces.List(client, serverID)
err := pager.EachPage(func(page pagination.Page) (bool, error) {
s, err := attachinterfaces.ExtractInterfaces(page)
if err != nil {
return false, err
}
interfaces = append(interfaces, s...)
return true, nil
})
if err != nil {
return interfaces, err
}
return interfaces, nil
}
// Clusters is a no-op
func (os *OpenStack) Clusters() (cloudprovider.Clusters, bool) {
return nil, false
}
// ProviderName returns the cloud provider ID.
func (os *OpenStack) ProviderName() string {
return ProviderName
}
// HasClusterID returns true if the cluster has a clusterID
func (os *OpenStack) HasClusterID() bool {
return true
}
// LoadBalancer initializes a LbaasV2 object
func (os *OpenStack) LoadBalancer() (cloudprovider.LoadBalancer, bool) {
klog.V(4).Info("openstack.LoadBalancer() called")
if reflect.DeepEqual(os.lbOpts, LoadBalancerOpts{}) {
klog.V(4).Info("LoadBalancer section is empty/not defined in cloud-config")
return nil, false
}
network, err := os.NewNetworkV2()
if err != nil {
return nil, false
}
compute, err := os.NewComputeV2()
if err != nil {
return nil, false
}
lb, err := os.NewLoadBalancerV2()
if err != nil {
return nil, false
}
// LBaaS v1 is deprecated in the OpenStack Liberty release.
// Currently the Kubernetes OpenStack cloud provider supports only LBaaS v2.
lbVersion := os.lbOpts.LBVersion
if lbVersion != "" && lbVersion != "v2" {
klog.Warningf("Config error: currently only support LBaaS v2, unrecognised lb-version \"%v\"", lbVersion)
return nil, false
}
klog.V(1).Info("Claiming to support LoadBalancer")
return &LbaasV2{LoadBalancer{network, compute, lb, os.lbOpts}}, true
}
func isNotFound(err error) bool {
if _, ok := err.(gophercloud.ErrDefault404); ok {
return true
}
if errCode, ok := err.(gophercloud.ErrUnexpectedResponseCode); ok {
if errCode.Actual == http.StatusNotFound {
return true
}
}
return false
}
// Zones indicates that we support zones
func (os *OpenStack) Zones() (cloudprovider.Zones, bool) {
klog.V(1).Info("Claiming to support Zones")
return os, true
}
// GetZone returns the current zone
func (os *OpenStack) GetZone(ctx context.Context) (cloudprovider.Zone, error) {
md, err := getMetadata(os.metadataOpts.SearchOrder)
if err != nil {
return cloudprovider.Zone{}, err
}
zone := cloudprovider.Zone{
FailureDomain: md.AvailabilityZone,
Region: os.region,
}
klog.V(4).Infof("Current zone is %v", zone)
return zone, nil
}
// GetZoneByProviderID implements Zones.GetZoneByProviderID
// This is particularly useful in external cloud providers where the kubelet
// does not initialize node data.
func (os *OpenStack) GetZoneByProviderID(ctx context.Context, providerID string) (cloudprovider.Zone, error) {
instanceID, err := instanceIDFromProviderID(providerID)
if err != nil {
return cloudprovider.Zone{}, err
}
compute, err := os.NewComputeV2()
if err != nil {
return cloudprovider.Zone{}, err
}
srv, err := servers.Get(compute, instanceID).Extract()
if err != nil {
return cloudprovider.Zone{}, err
}
zone := cloudprovider.Zone{
FailureDomain: srv.Metadata[availabilityZone],
Region: os.region,
}
klog.V(4).Infof("The instance %s in zone %v", srv.Name, zone)
return zone, nil
}
// GetZoneByNodeName implements Zones.GetZoneByNodeName
// This is particularly useful in external cloud providers where the kubelet
// does not initialize node data.
func (os *OpenStack) GetZoneByNodeName(ctx context.Context, nodeName types.NodeName) (cloudprovider.Zone, error) {
compute, err := os.NewComputeV2()
if err != nil {
return cloudprovider.Zone{}, err
}
srv, err := getServerByName(compute, nodeName)
if err != nil {
if err == ErrNotFound {
return cloudprovider.Zone{}, cloudprovider.InstanceNotFound
}
return cloudprovider.Zone{}, err
}
zone := cloudprovider.Zone{
FailureDomain: srv.Metadata[availabilityZone],
Region: os.region,
}
klog.V(4).Infof("The instance %s in zone %v", srv.Name, zone)
return zone, nil
}
// Routes initializes routes support
func (os *OpenStack) Routes() (cloudprovider.Routes, bool) {
klog.V(4).Info("openstack.Routes() called")
network, err := os.NewNetworkV2()
if err != nil {
return nil, false
}
netExts, err := networkExtensions(network)
if err != nil {
klog.Warningf("Failed to list neutron extensions: %v", err)
return nil, false
}
if !netExts["extraroute"] {
klog.V(3).Info("Neutron extraroute extension not found, required for Routes support")
return nil, false
}
compute, err := os.NewComputeV2()
if err != nil {
return nil, false
}
r, err := NewRoutes(compute, network, os.routeOpts)
if err != nil {
klog.Warningf("Error initialising Routes support: %v", err)
return nil, false
}
klog.V(1).Info("Claiming to support Routes")
return r, true
}
func (os *OpenStack) volumeService(forceVersion string) (volumeService, error) {
bsVersion := ""
if forceVersion == "" {
bsVersion = os.bsOpts.BSVersion
} else {
bsVersion = forceVersion
}
switch bsVersion {
case "v1":
sClient, err := os.NewBlockStorageV1()
if err != nil {
return nil, err
}
klog.V(3).Info("Using Blockstorage API V1")
return &VolumesV1{sClient, os.bsOpts}, nil
case "v2":
sClient, err := os.NewBlockStorageV2()
if err != nil {
return nil, err
}
klog.V(3).Info("Using Blockstorage API V2")
return &VolumesV2{sClient, os.bsOpts}, nil
case "v3":
sClient, err := os.NewBlockStorageV3()
if err != nil {
return nil, err
}
klog.V(3).Info("Using Blockstorage API V3")
return &VolumesV3{sClient, os.bsOpts}, nil
case "auto":
// Currently Kubernetes supports Cinder v1 / v2 / v3.
// Try Cinder v3 first; if a v3 client cannot be initialized, fall back to v2,
// and if a v2 client cannot be initialized either, fall back to v1.
// Return an appropriate error if none of the clients can be initialized.
if sClient, err := os.NewBlockStorageV3(); err == nil {
klog.V(3).Info("Using Blockstorage API V3")
return &VolumesV3{sClient, os.bsOpts}, nil
}
if sClient, err := os.NewBlockStorageV2(); err == nil {
klog.V(3).Info("Using Blockstorage API V2")
return &VolumesV2{sClient, os.bsOpts}, nil
}
if sClient, err := os.NewBlockStorageV1(); err == nil {
klog.V(3).Info("Using Blockstorage API V1")
return &VolumesV1{sClient, os.bsOpts}, nil
}
errTxt := "BlockStorage API version autodetection failed. " +
"Please set it explicitly in cloud.conf in section [BlockStorage] with key `bs-version`"
return nil, errors.New(errTxt)
default:
errTxt := fmt.Sprintf("Config error: unrecognised bs-version \"%v\"", os.bsOpts.BSVersion)
return nil, errors.New(errTxt)
}
}
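// exampleGetVolume is an editorial sketch (hypothetical; the real getVolume
// helper used elsewhere in this package is not shown here) of how callers
// dispatch through volumeService: passing an empty forceVersion honours the
// bs-version key from cloud.conf, including "auto" detection.
func exampleGetVolume(os *OpenStack, volumeID string) (Volume, error) {
    vs, err := os.volumeService("") // "" = use [BlockStorage] bs-version
    if err != nil {
        return Volume{}, fmt.Errorf("unable to initialize cinder client for region %s: %v", os.region, err)
    }
    return vs.getVolume(volumeID)
}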
func checkMetadataSearchOrder(order string) error {
if order == "" {
return errors.New("invalid value in section [Metadata] with key `search-order`. Value cannot be empty")
}
elements := strings.Split(order, ",")
if len(elements) > 2 {
return errors.New("invalid value in section [Metadata] with key `search-order`. Value cannot contain more than 2 elements")
}
for _, id := range elements {
id = strings.TrimSpace(id)
switch id {
case configDriveID:
case metadataID:
default:
return fmt.Errorf("invalid element %q found in section [Metadata] with key `search-order`."+
"Supported elements include %q and %q", id, configDriveID, metadataID)
}
}
return nil
}
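// Editorial sketch of the values the check above accepts and rejects,
// assuming configDriveID = "configDrive" and metadataID = "metadataService"
// (the values exercised by the tests in this package):
//
//    checkMetadataSearchOrder("configDrive")                  // nil
//    checkMetadataSearchOrder("configDrive, metadataService") // nil (whitespace is trimmed)
//    checkMetadataSearchOrder("")                             // error: value cannot be empty
//    checkMetadataSearchOrder("a,b,c")                        // error: more than 2 elements
//    checkMetadataSearchOrder("bogus")                        // error: unsupported element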

@@ -1,101 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"fmt"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack"
)
// NewNetworkV2 creates a ServiceClient that may be used with the neutron v2 API
func (os *OpenStack) NewNetworkV2() (*gophercloud.ServiceClient, error) {
network, err := openstack.NewNetworkV2(os.provider, gophercloud.EndpointOpts{
Region: os.region,
})
if err != nil {
return nil, fmt.Errorf("failed to find network v2 endpoint for region %s: %v", os.region, err)
}
return network, nil
}
// NewComputeV2 creates a ServiceClient that may be used with the nova v2 API
func (os *OpenStack) NewComputeV2() (*gophercloud.ServiceClient, error) {
compute, err := openstack.NewComputeV2(os.provider, gophercloud.EndpointOpts{
Region: os.region,
})
if err != nil {
return nil, fmt.Errorf("failed to find compute v2 endpoint for region %s: %v", os.region, err)
}
return compute, nil
}
// NewBlockStorageV1 creates a ServiceClient that may be used with the Cinder v1 API
func (os *OpenStack) NewBlockStorageV1() (*gophercloud.ServiceClient, error) {
storage, err := openstack.NewBlockStorageV1(os.provider, gophercloud.EndpointOpts{
Region: os.region,
})
if err != nil {
return nil, fmt.Errorf("unable to initialize cinder v1 client for region %s: %v", os.region, err)
}
return storage, nil
}
// NewBlockStorageV2 creates a ServiceClient that may be used with the Cinder v2 API
func (os *OpenStack) NewBlockStorageV2() (*gophercloud.ServiceClient, error) {
storage, err := openstack.NewBlockStorageV2(os.provider, gophercloud.EndpointOpts{
Region: os.region,
})
if err != nil {
return nil, fmt.Errorf("unable to initialize cinder v2 client for region %s: %v", os.region, err)
}
return storage, nil
}
// NewBlockStorageV3 creates a ServiceClient that may be used with the Cinder v3 API
func (os *OpenStack) NewBlockStorageV3() (*gophercloud.ServiceClient, error) {
storage, err := openstack.NewBlockStorageV3(os.provider, gophercloud.EndpointOpts{
Region: os.region,
})
if err != nil {
return nil, fmt.Errorf("unable to initialize cinder v3 client for region %s: %v", os.region, err)
}
return storage, nil
}
// NewLoadBalancerV2 creates a ServiceClient that may be used with the Neutron LBaaS v2 API
func (os *OpenStack) NewLoadBalancerV2() (*gophercloud.ServiceClient, error) {
var lb *gophercloud.ServiceClient
var err error
if os.lbOpts.UseOctavia {
lb, err = openstack.NewLoadBalancerV2(os.provider, gophercloud.EndpointOpts{
Region: os.region,
})
} else {
lb, err = openstack.NewNetworkV2(os.provider, gophercloud.EndpointOpts{
Region: os.region,
})
}
if err != nil {
return nil, fmt.Errorf("failed to find load-balancer v2 endpoint for region %s: %v", os.region, err)
}
return lb, nil
}

@@ -1,244 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"context"
"fmt"
"regexp"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
v1 "k8s.io/api/core/v1"
"k8s.io/klog/v2"
"k8s.io/apimachinery/pkg/types"
cloudprovider "k8s.io/cloud-provider"
)
var _ cloudprovider.Instances = (*Instances)(nil)
// Instances encapsulates an implementation of Instances for OpenStack.
type Instances struct {
compute *gophercloud.ServiceClient
opts MetadataOpts
}
const (
instanceShutoff = "SHUTOFF"
)
// Instances returns an implementation of Instances for OpenStack.
func (os *OpenStack) Instances() (cloudprovider.Instances, bool) {
klog.V(4).Info("openstack.Instances() called")
compute, err := os.NewComputeV2()
if err != nil {
klog.Errorf("unable to access compute v2 API : %v", err)
return nil, false
}
klog.V(4).Info("Claiming to support Instances")
return &Instances{
compute: compute,
opts: os.metadataOpts,
}, true
}
// InstancesV2 returns an implementation of InstancesV2 for OpenStack.
// TODO: implement ONLY for external cloud provider
func (os *OpenStack) InstancesV2() (cloudprovider.InstancesV2, bool) {
return nil, false
}
// CurrentNodeName implements Instances.CurrentNodeName
// Note this is *not* necessarily the same as hostname.
func (i *Instances) CurrentNodeName(ctx context.Context, hostname string) (types.NodeName, error) {
md, err := getMetadata(i.opts.SearchOrder)
if err != nil {
return "", err
}
return types.NodeName(md.Name), nil
}
// AddSSHKeyToAllInstances is not implemented for OpenStack
func (i *Instances) AddSSHKeyToAllInstances(ctx context.Context, user string, keyData []byte) error {
return cloudprovider.NotImplemented
}
// NodeAddresses implements Instances.NodeAddresses
func (i *Instances) NodeAddresses(ctx context.Context, name types.NodeName) ([]v1.NodeAddress, error) {
klog.V(4).Infof("NodeAddresses(%v) called", name)
addrs, err := getAddressesByName(i.compute, name)
if err != nil {
return nil, err
}
klog.V(4).Infof("NodeAddresses(%v) => %v", name, addrs)
return addrs, nil
}
// NodeAddressesByProviderID returns the node addresses of the instance with the specified unique providerID.
// This method will not be called from the node that is requesting this ID,
// i.e. the metadata service and other local methods cannot be used here.
func (i *Instances) NodeAddressesByProviderID(ctx context.Context, providerID string) ([]v1.NodeAddress, error) {
instanceID, err := instanceIDFromProviderID(providerID)
if err != nil {
return []v1.NodeAddress{}, err
}
server, err := servers.Get(i.compute, instanceID).Extract()
if err != nil {
return []v1.NodeAddress{}, err
}
addresses, err := nodeAddresses(server)
if err != nil {
return []v1.NodeAddress{}, err
}
return addresses, nil
}
// InstanceExistsByProviderID returns true if the instance with the given provider ID still exists.
// If false is returned with no error, the instance will be immediately deleted by the cloud controller manager.
func (i *Instances) InstanceExistsByProviderID(ctx context.Context, providerID string) (bool, error) {
instanceID, err := instanceIDFromProviderID(providerID)
if err != nil {
return false, err
}
_, err = servers.Get(i.compute, instanceID).Extract()
if err != nil {
if isNotFound(err) {
return false, nil
}
return false, err
}
return true, nil
}
// InstanceShutdownByProviderID returns true if the instance is in a safe state to detach volumes
func (i *Instances) InstanceShutdownByProviderID(ctx context.Context, providerID string) (bool, error) {
instanceID, err := instanceIDFromProviderID(providerID)
if err != nil {
return false, err
}
server, err := servers.Get(i.compute, instanceID).Extract()
if err != nil {
return false, err
}
// SHUTOFF is the only state where we can detach volumes immediately
if server.Status == instanceShutoff {
return true, nil
}
return false, nil
}
// InstanceID returns the kubelet's cloud provider ID.
func (os *OpenStack) InstanceID() (string, error) {
if len(os.localInstanceID) == 0 {
id, err := readInstanceID(os.metadataOpts.SearchOrder)
if err != nil {
return "", err
}
os.localInstanceID = id
}
return os.localInstanceID, nil
}
// InstanceID returns the cloud provider ID of the specified instance.
func (i *Instances) InstanceID(ctx context.Context, name types.NodeName) (string, error) {
srv, err := getServerByName(i.compute, name)
if err != nil {
if err == ErrNotFound {
return "", cloudprovider.InstanceNotFound
}
return "", err
}
// In the future it is possible to also return an endpoint as:
// <endpoint>/<instanceid>
return "/" + srv.ID, nil
}
// InstanceTypeByProviderID returns the cloudprovider instance type of the node with the specified unique providerID
// This method will not be called from the node that is requesting this ID,
// i.e. the metadata service and other local methods cannot be used here.
func (i *Instances) InstanceTypeByProviderID(ctx context.Context, providerID string) (string, error) {
instanceID, err := instanceIDFromProviderID(providerID)
if err != nil {
return "", err
}
server, err := servers.Get(i.compute, instanceID).Extract()
if err != nil {
return "", err
}
return srvInstanceType(server)
}
// InstanceType returns the type of the specified instance.
func (i *Instances) InstanceType(ctx context.Context, name types.NodeName) (string, error) {
srv, err := getServerByName(i.compute, name)
if err != nil {
return "", err
}
return srvInstanceType(srv)
}
func srvInstanceType(srv *servers.Server) (string, error) {
keys := []string{"name", "id", "original_name"}
for _, key := range keys {
val, found := srv.Flavor[key]
if found {
flavor, ok := val.(string)
if ok {
return flavor, nil
}
}
}
return "", fmt.Errorf("flavor name/id not found")
}
// instanceIDFromProviderID splits a provider ID and returns the instanceID.
// A providerID is built out of '${ProviderName}:///${instance-id}', which contains ':///'.
// See cloudprovider.GetInstanceProviderID and Instances.InstanceID.
func instanceIDFromProviderID(providerID string) (instanceID string, err error) {
// If Instances.InstanceID or cloudprovider.GetInstanceProviderID is changed, the regexp should be changed too.
var providerIDRegexp = regexp.MustCompile(`^` + ProviderName + `:///([^/]+)$`)
matches := providerIDRegexp.FindStringSubmatch(providerID)
if len(matches) != 2 {
return "", fmt.Errorf("ProviderID \"%s\" didn't match expected format \"openstack:///InstanceID\"", providerID)
}
return matches[1], nil
}
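// exampleProviderIDRoundTrip is an editorial sketch (hypothetical) of the
// format contract documented above: Instances.InstanceID emits "/<id>",
// cloud-provider prefixes "${ProviderName}://", and
// instanceIDFromProviderID recovers the bare instance ID from the result.
func exampleProviderIDRoundTrip() (string, error) {
    providerID := ProviderName + ":///" + "7b9cf879-7146-417c-abfd-cb4272f0c935"
    // Returns "7b9cf879-7146-417c-abfd-cb4272f0c935", nil.
    return instanceIDFromProviderID(providerID)
}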

@@ -1,64 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"sync"
"k8s.io/component-base/metrics"
"k8s.io/component-base/metrics/legacyregistry"
)
const (
openstackSubsystem = "openstack"
openstackOperationKey = "cloudprovider_openstack_api_request_duration_seconds"
openstackOperationErrorKey = "cloudprovider_openstack_api_request_errors"
)
var (
openstackOperationsLatency = metrics.NewHistogramVec(
&metrics.HistogramOpts{
Subsystem: openstackSubsystem,
Name: openstackOperationKey,
Help: "Latency of openstack api call",
StabilityLevel: metrics.ALPHA,
},
[]string{"request"},
)
openstackAPIRequestErrors = metrics.NewCounterVec(
&metrics.CounterOpts{
Subsystem: openstackSubsystem,
Name: openstackOperationErrorKey,
Help: "Cumulative number of openstack Api call errors",
StabilityLevel: metrics.ALPHA,
},
[]string{"request"},
)
)
var registerOnce sync.Once
func registerMetrics() {
registerOnce.Do(func() {
legacyregistry.MustRegister(openstackOperationsLatency)
legacyregistry.MustRegister(openstackAPIRequestErrors)
})
}
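// The API paths in this package feed these collectors through a helper
// named recordOpenstackOperationMetric(operation, timeTaken, err), defined
// elsewhere in the package and not shown here. The following editorial
// sketch (an assumption, not the verbatim original) illustrates the
// intended wiring: failures increment the error counter, successes record
// the observed latency.
func exampleRecordOperation(operation string, timeTaken float64, err error) {
    if err != nil {
        openstackAPIRequestErrors.WithLabelValues(operation).Inc()
        return
    }
    openstackOperationsLatency.WithLabelValues(operation).Observe(timeTaken)
}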

@@ -1,347 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"context"
"errors"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
neutronports "github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
"k8s.io/apimachinery/pkg/types"
cloudprovider "k8s.io/cloud-provider"
"k8s.io/klog/v2"
netutils "k8s.io/utils/net"
)
var errNoRouterID = errors.New("router-id not set in cloud provider config")
var _ cloudprovider.Routes = (*Routes)(nil)
// Routes implements the cloudprovider.Routes for OpenStack clouds
type Routes struct {
compute *gophercloud.ServiceClient
network *gophercloud.ServiceClient
opts RouterOpts
}
// NewRoutes creates a new instance of Routes
func NewRoutes(compute *gophercloud.ServiceClient, network *gophercloud.ServiceClient, opts RouterOpts) (cloudprovider.Routes, error) {
if opts.RouterID == "" {
return nil, errNoRouterID
}
return &Routes{
compute: compute,
network: network,
opts: opts,
}, nil
}
// ListRoutes lists all managed routes that belong to the specified clusterName
func (r *Routes) ListRoutes(ctx context.Context, clusterName string) ([]*cloudprovider.Route, error) {
klog.V(4).Infof("ListRoutes(%v)", clusterName)
nodeNamesByAddr := make(map[string]types.NodeName)
err := foreachServer(r.compute, servers.ListOpts{}, func(srv *servers.Server) (bool, error) {
addrs, err := nodeAddresses(srv)
if err != nil {
return false, err
}
name := mapServerToNodeName(srv)
for _, addr := range addrs {
nodeNamesByAddr[addr.Address] = name
}
return true, nil
})
if err != nil {
return nil, err
}
router, err := routers.Get(r.network, r.opts.RouterID).Extract()
if err != nil {
return nil, err
}
var routes []*cloudprovider.Route
for _, item := range router.Routes {
nodeName, foundNode := nodeNamesByAddr[item.NextHop]
if !foundNode {
nodeName = types.NodeName(item.NextHop)
}
route := cloudprovider.Route{
Name: item.DestinationCIDR,
TargetNode: nodeName, // contains the nexthop address if the node was not found
Blackhole: !foundNode,
DestinationCIDR: item.DestinationCIDR,
}
routes = append(routes, &route)
}
return routes, nil
}
func updateRoutes(network *gophercloud.ServiceClient, router *routers.Router, newRoutes []routers.Route) (func(), error) {
origRoutes := router.Routes // shallow copy
_, err := routers.Update(network, router.ID, routers.UpdateOpts{
Routes: newRoutes,
}).Extract()
if err != nil {
return nil, err
}
unwinder := func() {
klog.V(4).Infof("Reverting routes change to router %v", router.ID)
_, err := routers.Update(network, router.ID, routers.UpdateOpts{
Routes: origRoutes,
}).Extract()
if err != nil {
klog.Warningf("Unable to reset routes during error unwind: %v", err)
}
}
return unwinder, nil
}
func updateAllowedAddressPairs(network *gophercloud.ServiceClient, port *neutronports.Port, newPairs []neutronports.AddressPair) (func(), error) {
origPairs := port.AllowedAddressPairs // shallow copy
_, err := neutronports.Update(network, port.ID, neutronports.UpdateOpts{
AllowedAddressPairs: &newPairs,
}).Extract()
if err != nil {
return nil, err
}
unwinder := func() {
klog.V(4).Infof("Reverting allowed-address-pairs change to port %v", port.ID)
_, err := neutronports.Update(network, port.ID, neutronports.UpdateOpts{
AllowedAddressPairs: &origPairs,
}).Extract()
if err != nil {
klog.Warningf("Unable to reset allowed-address-pairs during error unwind: %v", err)
}
}
return unwinder, nil
}
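// The unwind functions returned above are driven by a small one-shot
// "caller" helper defined elsewhere in this package. The following is an
// editorial sketch of that pattern (hypothetical names; an assumption, not
// the verbatim original): a deferred cleanup runs only if the happy path
// never disarms it.
type exampleCaller bool

func newExampleCaller() *exampleCaller {
    c := exampleCaller(true)
    return &c
}

// disarm marks success; subsequent (deferred) call()s become no-ops.
func (c *exampleCaller) disarm() { *c = false }

// call invokes f only while still armed, i.e. only on the failure path.
func (c *exampleCaller) call(f func()) {
    if *c {
        f()
    }
}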
// CreateRoute creates the described managed route
func (r *Routes) CreateRoute(ctx context.Context, clusterName string, nameHint string, route *cloudprovider.Route) error {
klog.V(4).Infof("CreateRoute(%v, %v, %v)", clusterName, nameHint, route)
onFailure := newCaller()
ip, _, _ := netutils.ParseCIDRSloppy(route.DestinationCIDR)
isCIDRv6 := ip.To4() == nil
addr, err := getAddressByName(r.compute, route.TargetNode, isCIDRv6)
if err != nil {
return err
}
klog.V(4).Infof("Using nexthop %v for node %v", addr, route.TargetNode)
router, err := routers.Get(r.network, r.opts.RouterID).Extract()
if err != nil {
return err
}
routes := router.Routes
for _, item := range routes {
if item.DestinationCIDR == route.DestinationCIDR && item.NextHop == addr {
klog.V(4).Infof("Skipping existing route: %v", route)
return nil
}
}
routes = append(routes, routers.Route{
DestinationCIDR: route.DestinationCIDR,
NextHop: addr,
})
unwind, err := updateRoutes(r.network, router, routes)
if err != nil {
return err
}
defer onFailure.call(unwind)
// get the port of addr on target node.
portID, err := getPortIDByIP(r.compute, route.TargetNode, addr)
if err != nil {
return err
}
port, err := getPortByID(r.network, portID)
if err != nil {
return err
}
found := false
for _, item := range port.AllowedAddressPairs {
if item.IPAddress == route.DestinationCIDR {
klog.V(4).Infof("Found existing allowed-address-pair: %v", item)
found = true
break
}
}
if !found {
newPairs := append(port.AllowedAddressPairs, neutronports.AddressPair{
IPAddress: route.DestinationCIDR,
})
unwind, err := updateAllowedAddressPairs(r.network, port, newPairs)
if err != nil {
return err
}
defer onFailure.call(unwind)
}
klog.V(4).Infof("Route created: %v", route)
onFailure.disarm()
return nil
}
// DeleteRoute deletes the specified managed route
func (r *Routes) DeleteRoute(ctx context.Context, clusterName string, route *cloudprovider.Route) error {
klog.V(4).Infof("DeleteRoute(%v, %v)", clusterName, route)
onFailure := newCaller()
ip, _, _ := netutils.ParseCIDRSloppy(route.DestinationCIDR)
isCIDRv6 := ip.To4() == nil
var addr string
// Blackhole routes are orphaned and have no counterpart in OpenStack
if !route.Blackhole {
var err error
addr, err = getAddressByName(r.compute, route.TargetNode, isCIDRv6)
if err != nil {
return err
}
}
router, err := routers.Get(r.network, r.opts.RouterID).Extract()
if err != nil {
return err
}
routes := router.Routes
index := -1
for i, item := range routes {
if item.DestinationCIDR == route.DestinationCIDR && (item.NextHop == addr || route.Blackhole && item.NextHop == string(route.TargetNode)) {
index = i
break
}
}
if index == -1 {
klog.V(4).Infof("Skipping non-existent route: %v", route)
return nil
}
// Delete element `index`
routes[index] = routes[len(routes)-1]
routes = routes[:len(routes)-1]
unwind, err := updateRoutes(r.network, router, routes)
// If this was a blackhole route we are done, there are no ports to update
if err != nil || route.Blackhole {
return err
}
defer onFailure.call(unwind)
// get the port of addr on target node.
portID, err := getPortIDByIP(r.compute, route.TargetNode, addr)
if err != nil {
return err
}
port, err := getPortByID(r.network, portID)
if err != nil {
return err
}
addrPairs := port.AllowedAddressPairs
index = -1
for i, item := range addrPairs {
if item.IPAddress == route.DestinationCIDR {
index = i
break
}
}
if index != -1 {
// Delete element `index`
addrPairs[index] = addrPairs[len(addrPairs)-1]
addrPairs = addrPairs[:len(addrPairs)-1]
unwind, err := updateAllowedAddressPairs(r.network, port, addrPairs)
if err != nil {
return err
}
defer onFailure.call(unwind)
}
klog.V(4).Infof("Route deleted: %v", route)
onFailure.disarm()
return nil
}
func getPortIDByIP(compute *gophercloud.ServiceClient, targetNode types.NodeName, ipAddress string) (string, error) {
srv, err := getServerByName(compute, targetNode)
if err != nil {
return "", err
}
interfaces, err := getAttachedInterfacesByID(compute, srv.ID)
if err != nil {
return "", err
}
for _, intf := range interfaces {
for _, fixedIP := range intf.FixedIPs {
if fixedIP.IPAddress == ipAddress {
return intf.PortID, nil
}
}
}
return "", ErrNotFound
}
func getPortByID(client *gophercloud.ServiceClient, portID string) (*neutronports.Port, error) {
targetPort, err := neutronports.Get(client, portID).Extract()
if err != nil {
return nil, err
}
if targetPort == nil {
return nil, ErrNotFound
}
return targetPort, nil
}

@@ -1,128 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"context"
"testing"
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
"k8s.io/apimachinery/pkg/types"
cloudprovider "k8s.io/cloud-provider"
netutils "k8s.io/utils/net"
)
func TestRoutes(t *testing.T) {
const clusterName = "ignored"
cfg, ok := configFromEnv()
if !ok {
t.Skipf("No config found in environment")
}
os, err := newOpenStack(cfg)
if err != nil {
t.Fatalf("Failed to construct/authenticate OpenStack: %s", err)
}
vms := getServers(os)
_, err = os.InstanceID()
if err != nil || len(vms) == 0 {
t.Skipf("Please run this test in an OpenStack vm or create at least one VM in OpenStack before you run this test.")
}
// We know we have at least one vm.
servername := vms[0].Name
// Pick the first router and server to try a test with
os.routeOpts.RouterID = getRouters(os)[0].ID
r, ok := os.Routes()
if !ok {
t.Skip("Routes() returned false - perhaps your stack does not support Neutron extraroute extension?")
}
newroute := cloudprovider.Route{
DestinationCIDR: "10.164.2.0/24",
TargetNode: types.NodeName(servername),
}
err = r.CreateRoute(context.TODO(), clusterName, "myhint", &newroute)
if err != nil {
t.Fatalf("CreateRoute error: %v", err)
}
routelist, err := r.ListRoutes(context.TODO(), clusterName)
if err != nil {
t.Fatalf("ListRoutes() error: %v", err)
}
for _, route := range routelist {
_, cidr, err := netutils.ParseCIDRSloppy(route.DestinationCIDR)
if err != nil {
t.Logf("Ignoring route %s, unparsable CIDR: %v", route.Name, err)
continue
}
t.Logf("%s via %s", cidr, route.TargetNode)
}
err = r.DeleteRoute(context.TODO(), clusterName, &newroute)
if err != nil {
t.Fatalf("DeleteRoute error: %v", err)
}
}
func getServers(os *OpenStack) []servers.Server {
c, err := os.NewComputeV2()
if err != nil {
panic(err)
}
allPages, err := servers.List(c, servers.ListOpts{}).AllPages()
if err != nil {
panic(err)
}
allServers, err := servers.ExtractServers(allPages)
if err != nil {
panic(err)
}
if len(allServers) == 0 {
panic("No servers to test with")
}
return allServers
}
func getRouters(os *OpenStack) []routers.Router {
listOpts := routers.ListOpts{}
n, err := os.NewNetworkV2()
if err != nil {
panic(err)
}
allPages, err := routers.List(n, listOpts).AllPages()
if err != nil {
panic(err)
}
allRouters, err := routers.ExtractRouters(allPages)
if err != nil {
panic(err)
}
if len(allRouters) == 0 {
panic("No routers to test with")
}
return allRouters
}

@@ -1,733 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"context"
"fmt"
"os"
"reflect"
"regexp"
"sort"
"strings"
"testing"
"time"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/rand"
"k8s.io/apimachinery/pkg/util/wait"
)
const (
testClusterName = "testCluster"
volumeStatusTimeoutSeconds = 30
// volumeStatus* configure the exponential backoff used when waiting
// for the specified volume status. Starting at 1 second, multiplying
// by 1.2 at each step, and taking at most 13 steps, it will time out
// after 32s, which roughly corresponds to 30s.
volumeStatusInitDelay = 1 * time.Second
volumeStatusFactor = 1.2
volumeStatusSteps = 13
)
func WaitForVolumeStatus(t *testing.T, os *OpenStack, volumeName string, status string) {
backoff := wait.Backoff{
Duration: volumeStatusInitDelay,
Factor: volumeStatusFactor,
Steps: volumeStatusSteps,
}
err := wait.ExponentialBackoff(backoff, func() (bool, error) {
getVol, err := os.getVolume(volumeName)
if err != nil {
return false, err
}
if getVol.Status == status {
t.Logf("Volume (%s) status changed to %s after %v seconds\n",
volumeName,
status,
volumeStatusTimeoutSeconds)
return true, nil
}
return false, nil
})
if err == wait.ErrWaitTimeout {
t.Logf("Volume (%s) status did not change to %s after %v seconds\n",
volumeName,
status,
volumeStatusTimeoutSeconds)
return
}
if err != nil {
t.Fatalf("Cannot get existing Cinder volume (%s): %v", volumeName, err)
}
}
func TestReadConfig(t *testing.T) {
_, err := readConfig(nil)
if err == nil {
t.Errorf("Should fail when no config is provided: %s", err)
}
// Since we are setting env vars, we need to reset old
// values for other tests to succeed.
env := clearEnviron(t)
defer resetEnviron(t, env)
os.Setenv("OS_PASSWORD", "mypass") // Fake value for testing.
defer os.Unsetenv("OS_PASSWORD")
os.Setenv("OS_TENANT_NAME", "admin")
defer os.Unsetenv("OS_TENANT_NAME")
cfg, err := readConfig(strings.NewReader(`
[Global]
auth-url = http://auth.url
user-id = user
tenant-name = demo
region = RegionOne
[LoadBalancer]
create-monitor = yes
monitor-delay = 1m
monitor-timeout = 30s
monitor-max-retries = 3
[BlockStorage]
bs-version = auto
trust-device-path = yes
ignore-volume-az = yes
[Metadata]
search-order = configDrive, metadataService
`))
cfg.Global.Password = os.Getenv("OS_PASSWORD")
if err != nil {
t.Fatalf("Should succeed when a valid config is provided: %s", err)
}
if cfg.Global.AuthURL != "http://auth.url" {
t.Errorf("incorrect authurl: %s", cfg.Global.AuthURL)
}
if cfg.Global.UserID != "user" {
t.Errorf("incorrect userid: %s", cfg.Global.UserID)
}
if cfg.Global.Password != "mypass" {
t.Errorf("incorrect password: %s", cfg.Global.Password)
}
// config file wins over environment variable
if cfg.Global.TenantName != "demo" {
t.Errorf("incorrect tenant name: %s", cfg.Global.TenantName)
}
if cfg.Global.Region != "RegionOne" {
t.Errorf("incorrect region: %s", cfg.Global.Region)
}
if !cfg.LoadBalancer.CreateMonitor {
t.Errorf("incorrect lb.createmonitor: %t", cfg.LoadBalancer.CreateMonitor)
}
if cfg.LoadBalancer.MonitorDelay.Duration != 1*time.Minute {
t.Errorf("incorrect lb.monitordelay: %s", cfg.LoadBalancer.MonitorDelay)
}
if cfg.LoadBalancer.MonitorTimeout.Duration != 30*time.Second {
t.Errorf("incorrect lb.monitortimeout: %s", cfg.LoadBalancer.MonitorTimeout)
}
if cfg.LoadBalancer.MonitorMaxRetries != 3 {
t.Errorf("incorrect lb.monitormaxretries: %d", cfg.LoadBalancer.MonitorMaxRetries)
}
if cfg.BlockStorage.TrustDevicePath != true {
t.Errorf("incorrect bs.trustdevicepath: %v", cfg.BlockStorage.TrustDevicePath)
}
if cfg.BlockStorage.BSVersion != "auto" {
t.Errorf("incorrect bs.bs-version: %v", cfg.BlockStorage.BSVersion)
}
if cfg.BlockStorage.IgnoreVolumeAZ != true {
t.Errorf("incorrect bs.IgnoreVolumeAZ: %v", cfg.BlockStorage.IgnoreVolumeAZ)
}
if cfg.Metadata.SearchOrder != "configDrive, metadataService" {
t.Errorf("incorrect md.search-order: %v", cfg.Metadata.SearchOrder)
}
}
func TestToAuthOptions(t *testing.T) {
cfg := Config{}
cfg.Global.Username = "user"
cfg.Global.Password = "pass" // Fake value for testing.
cfg.Global.DomainID = "2a73b8f597c04551a0fdc8e95544be8a"
cfg.Global.DomainName = "local"
cfg.Global.AuthURL = "http://auth.url"
cfg.Global.UserID = "user"
ao := cfg.toAuthOptions()
if !ao.AllowReauth {
t.Errorf("Will need to be able to reauthenticate")
}
if ao.Username != cfg.Global.Username {
t.Errorf("Username %s != %s", ao.Username, cfg.Global.Username)
}
if ao.Password != cfg.Global.Password {
t.Errorf("Password %s != %s", ao.Password, cfg.Global.Password)
}
if ao.DomainID != cfg.Global.DomainID {
t.Errorf("DomainID %s != %s", ao.DomainID, cfg.Global.DomainID)
}
if ao.IdentityEndpoint != cfg.Global.AuthURL {
t.Errorf("IdentityEndpoint %s != %s", ao.IdentityEndpoint, cfg.Global.AuthURL)
}
if ao.UserID != cfg.Global.UserID {
t.Errorf("UserID %s != %s", ao.UserID, cfg.Global.UserID)
}
if ao.DomainName != cfg.Global.DomainName {
t.Errorf("DomainName %s != %s", ao.DomainName, cfg.Global.DomainName)
}
if ao.TenantID != cfg.Global.TenantID {
t.Errorf("TenantID %s != %s", ao.TenantID, cfg.Global.TenantID)
}
}
func TestCheckOpenStackOpts(t *testing.T) {
delay := MyDuration{60 * time.Second}
timeout := MyDuration{30 * time.Second}
tests := []struct {
name string
openstackOpts *OpenStack
expectedError error
}{
{
name: "test1",
openstackOpts: &OpenStack{
provider: nil,
lbOpts: LoadBalancerOpts{
LBVersion: "v2",
SubnetID: "6261548e-ffde-4bc7-bd22-59c83578c5ef",
FloatingNetworkID: "38b8b5f9-64dc-4424-bf86-679595714786",
LBMethod: "ROUND_ROBIN",
LBProvider: "haproxy",
CreateMonitor: true,
MonitorDelay: delay,
MonitorTimeout: timeout,
MonitorMaxRetries: uint(3),
ManageSecurityGroups: true,
},
metadataOpts: MetadataOpts{
SearchOrder: configDriveID,
},
},
expectedError: nil,
},
{
name: "test2",
openstackOpts: &OpenStack{
provider: nil,
lbOpts: LoadBalancerOpts{
LBVersion: "v2",
FloatingNetworkID: "38b8b5f9-64dc-4424-bf86-679595714786",
LBMethod: "ROUND_ROBIN",
CreateMonitor: true,
MonitorDelay: delay,
MonitorTimeout: timeout,
MonitorMaxRetries: uint(3),
ManageSecurityGroups: true,
},
metadataOpts: MetadataOpts{
SearchOrder: configDriveID,
},
},
expectedError: nil,
},
{
name: "test3",
openstackOpts: &OpenStack{
provider: nil,
lbOpts: LoadBalancerOpts{
LBVersion: "v2",
SubnetID: "6261548e-ffde-4bc7-bd22-59c83578c5ef",
FloatingNetworkID: "38b8b5f9-64dc-4424-bf86-679595714786",
LBMethod: "ROUND_ROBIN",
CreateMonitor: true,
MonitorTimeout: timeout,
MonitorMaxRetries: uint(3),
ManageSecurityGroups: true,
},
metadataOpts: MetadataOpts{
SearchOrder: configDriveID,
},
},
expectedError: fmt.Errorf("monitor-delay not set in cloud provider config"),
},
{
name: "test4",
openstackOpts: &OpenStack{
provider: nil,
metadataOpts: MetadataOpts{
SearchOrder: "",
},
},
expectedError: fmt.Errorf("invalid value in section [Metadata] with key `search-order`. Value cannot be empty"),
},
{
name: "test5",
openstackOpts: &OpenStack{
provider: nil,
metadataOpts: MetadataOpts{
SearchOrder: "value1,value2,value3",
},
},
expectedError: fmt.Errorf("invalid value in section [Metadata] with key `search-order`. Value cannot contain more than 2 elements"),
},
{
name: "test6",
openstackOpts: &OpenStack{
provider: nil,
metadataOpts: MetadataOpts{
SearchOrder: "value1",
},
},
expectedError: fmt.Errorf("invalid element %q found in section [Metadata] with key `search-order`."+
"Supported elements include %q and %q", "value1", configDriveID, metadataID),
},
{
name: "test7",
openstackOpts: &OpenStack{
provider: nil,
lbOpts: LoadBalancerOpts{
LBVersion: "v2",
SubnetID: "6261548e-ffde-4bc7-bd22-59c83578c5ef",
FloatingNetworkID: "38b8b5f9-64dc-4424-bf86-679595714786",
LBMethod: "ROUND_ROBIN",
CreateMonitor: true,
MonitorDelay: delay,
MonitorTimeout: timeout,
ManageSecurityGroups: true,
},
metadataOpts: MetadataOpts{
SearchOrder: configDriveID,
},
},
expectedError: fmt.Errorf("monitor-max-retries not set in cloud provider config"),
},
{
name: "test8",
openstackOpts: &OpenStack{
provider: nil,
lbOpts: LoadBalancerOpts{
LBVersion: "v2",
SubnetID: "6261548e-ffde-4bc7-bd22-59c83578c5ef",
FloatingNetworkID: "38b8b5f9-64dc-4424-bf86-679595714786",
LBMethod: "ROUND_ROBIN",
CreateMonitor: true,
MonitorDelay: delay,
MonitorMaxRetries: uint(3),
ManageSecurityGroups: true,
},
metadataOpts: MetadataOpts{
SearchOrder: configDriveID,
},
},
expectedError: fmt.Errorf("monitor-timeout not set in cloud provider config"),
},
}
for _, testcase := range tests {
err := checkOpenStackOpts(testcase.openstackOpts)
if err == nil && testcase.expectedError == nil {
continue
}
if (err != nil && testcase.expectedError == nil) || (err == nil && testcase.expectedError != nil) || err.Error() != testcase.expectedError.Error() {
t.Errorf("%s failed: expected err=%q, got %q",
testcase.name, testcase.expectedError, err)
}
}
}
func TestCaller(t *testing.T) {
called := false
myFunc := func() { called = true }
c := newCaller()
c.call(myFunc)
if !called {
t.Errorf("caller failed to call function in default case")
}
c.disarm()
called = false
c.call(myFunc)
if called {
t.Error("caller still called function when disarmed")
}
// Confirm the "usual" deferred caller pattern works as expected
called = false
successCase := func() {
c := newCaller()
defer c.call(func() { called = true })
c.disarm()
}
if successCase(); called {
t.Error("Deferred success case still invoked unwind")
}
called = false
failureCase := func() {
c := newCaller()
defer c.call(func() { called = true })
}
if failureCase(); !called {
t.Error("Deferred failure case failed to invoke unwind")
}
}
// An arbitrary sort.Interface, just for easier comparison
type AddressSlice []v1.NodeAddress
func (a AddressSlice) Len() int { return len(a) }
func (a AddressSlice) Less(i, j int) bool { return a[i].Address < a[j].Address }
func (a AddressSlice) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func TestNodeAddresses(t *testing.T) {
srv := servers.Server{
Status: "ACTIVE",
HostID: "29d3c8c896a45aa4c34e52247875d7fefc3d94bbcc9f622b5d204362",
AccessIPv4: "50.56.176.99",
AccessIPv6: "2001:4800:790e:510:be76:4eff:fe04:82a8",
Addresses: map[string]interface{}{
"private": []interface{}{
map[string]interface{}{
"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7c:1b:2b",
"version": float64(4),
"addr": "10.0.0.32",
"OS-EXT-IPS:type": "fixed",
},
map[string]interface{}{
"version": float64(4),
"addr": "50.56.176.36",
"OS-EXT-IPS:type": "floating",
},
map[string]interface{}{
"version": float64(4),
"addr": "10.0.0.31",
// No OS-EXT-IPS:type
},
},
"public": []interface{}{
map[string]interface{}{
"version": float64(4),
"addr": "50.56.176.35",
},
map[string]interface{}{
"version": float64(6),
"addr": "2001:4800:780e:510:be76:4eff:fe04:84a8",
},
},
},
Metadata: map[string]string{
"name": "a1-yinvcez57-0-bvynoyawrhcg-kube-minion-fg5i4jwcc2yy",
TypeHostName: "a1-yinvcez57-0-bvynoyawrhcg-kube-minion-fg5i4jwcc2yy.novalocal",
},
}
addrs, err := nodeAddresses(&srv)
if err != nil {
t.Fatalf("nodeAddresses returned error: %v", err)
}
sort.Sort(AddressSlice(addrs))
t.Logf("addresses is %v", addrs)
want := []v1.NodeAddress{
{Type: v1.NodeInternalIP, Address: "10.0.0.31"},
{Type: v1.NodeInternalIP, Address: "10.0.0.32"},
{Type: v1.NodeExternalIP, Address: "2001:4800:780e:510:be76:4eff:fe04:84a8"},
{Type: v1.NodeExternalIP, Address: "2001:4800:790e:510:be76:4eff:fe04:82a8"},
{Type: v1.NodeExternalIP, Address: "50.56.176.35"},
{Type: v1.NodeExternalIP, Address: "50.56.176.36"},
{Type: v1.NodeExternalIP, Address: "50.56.176.99"},
{Type: v1.NodeHostName, Address: "a1-yinvcez57-0-bvynoyawrhcg-kube-minion-fg5i4jwcc2yy.novalocal"},
}
if !reflect.DeepEqual(want, addrs) {
t.Errorf("nodeAddresses returned incorrect value %v", addrs)
}
}
func configFromEnvWithPasswd() (cfg Config, ok bool) {
cfg, ok = configFromEnv()
if !ok {
return cfg, ok
}
cfg.Global.Password = os.Getenv("OS_PASSWORD")
return cfg, ok
}
func TestNewOpenStack(t *testing.T) {
cfg, ok := configFromEnvWithPasswd()
if !ok {
t.Skip("No config found in environment")
}
_, err := newOpenStack(cfg)
if err != nil {
t.Fatalf("Failed to construct/authenticate OpenStack: %s", err)
}
}
func TestLoadBalancer(t *testing.T) {
cfg, ok := configFromEnvWithPasswd()
if !ok {
t.Skip("No config found in environment")
}
versions := []string{"v2", ""}
for _, v := range versions {
t.Logf("Trying LBVersion = '%s'\n", v)
cfg.LoadBalancer.LBVersion = v
os, err := newOpenStack(cfg)
if err != nil {
t.Fatalf("Failed to construct/authenticate OpenStack: %s", err)
}
lb, ok := os.LoadBalancer()
if !ok {
t.Fatalf("LoadBalancer() returned false - perhaps your stack doesn't support Neutron?")
}
_, exists, err := lb.GetLoadBalancer(context.TODO(), testClusterName, &v1.Service{ObjectMeta: metav1.ObjectMeta{Name: "noexist"}})
if err != nil {
t.Fatalf("GetLoadBalancer(\"noexist\") returned error: %s", err)
}
if exists {
t.Fatalf("GetLoadBalancer(\"noexist\") returned exists")
}
}
}
func TestZones(t *testing.T) {
SetMetadataFixture(&FakeMetadata)
defer ClearMetadata()
os := OpenStack{
provider: &gophercloud.ProviderClient{
IdentityBase: "http://auth.url/",
},
region: "myRegion",
}
z, ok := os.Zones()
if !ok {
t.Fatalf("Zones() returned false")
}
zone, err := z.GetZone(context.TODO())
if err != nil {
t.Fatalf("GetZone() returned error: %s", err)
}
if zone.Region != "myRegion" {
t.Fatalf("GetZone() returned wrong region (%s)", zone.Region)
}
if zone.FailureDomain != "nova" {
t.Fatalf("GetZone() returned wrong failure domain (%s)", zone.FailureDomain)
}
}
var diskPathRegexp = regexp.MustCompile("/dev/disk/(?:by-id|by-path)/")
func TestVolumes(t *testing.T) {
cfg, ok := configFromEnvWithPasswd()
if !ok {
t.Skip("No config found in environment")
}
os, err := newOpenStack(cfg)
if err != nil {
t.Fatalf("Failed to construct/authenticate OpenStack: %s", err)
}
tags := map[string]string{
"test": "value",
}
vol, _, _, _, err := os.CreateVolume("kubernetes-test-volume-"+rand.String(10), 1, "", "", &tags)
if err != nil {
t.Fatalf("Cannot create a new Cinder volume: %v", err)
}
t.Logf("Volume (%s) created\n", vol)
WaitForVolumeStatus(t, os, vol, volumeAvailableStatus)
id, err := os.InstanceID()
if err != nil {
t.Logf("Cannot find instance id: %v - perhaps you are running this test outside a VM launched by OpenStack", err)
} else {
diskID, err := os.AttachDisk(id, vol)
if err != nil {
t.Fatalf("Cannot AttachDisk Cinder volume %s: %v", vol, err)
}
t.Logf("Volume (%s) attached, disk ID: %s\n", vol, diskID)
WaitForVolumeStatus(t, os, vol, volumeInUseStatus)
devicePath := os.GetDevicePath(diskID)
if diskPathRegexp.FindString(devicePath) == "" {
t.Fatalf("GetDevicePath returned and unexpected path for Cinder volume %s, returned %s", vol, devicePath)
}
t.Logf("Volume (%s) found at path: %s\n", vol, devicePath)
err = os.DetachDisk(id, vol)
if err != nil {
t.Fatalf("Cannot DetachDisk Cinder volume %s: %v", vol, err)
}
t.Logf("Volume (%s) detached\n", vol)
WaitForVolumeStatus(t, os, vol, volumeAvailableStatus)
}
expectedVolSize := resource.MustParse("2Gi")
newVolSize, err := os.ExpandVolume(vol, resource.MustParse("1Gi"), expectedVolSize)
if err != nil {
t.Fatalf("Cannot expand a Cinder volume: %v", err)
}
if newVolSize != expectedVolSize {
t.Logf("Expected: %v but got: %v ", expectedVolSize, newVolSize)
}
t.Logf("Volume expanded to (%v) \n", newVolSize)
WaitForVolumeStatus(t, os, vol, volumeAvailableStatus)
err = os.DeleteVolume(vol)
if err != nil {
t.Fatalf("Cannot delete Cinder volume %s: %v", vol, err)
}
t.Logf("Volume (%s) deleted\n", vol)
}
func TestInstanceIDFromProviderID(t *testing.T) {
testCases := []struct {
providerID string
instanceID string
fail bool
}{
{
providerID: ProviderName + "://" + "/" + "7b9cf879-7146-417c-abfd-cb4272f0c935",
instanceID: "7b9cf879-7146-417c-abfd-cb4272f0c935",
fail: false,
},
{
providerID: "openstack://7b9cf879-7146-417c-abfd-cb4272f0c935",
instanceID: "",
fail: true,
},
{
providerID: "7b9cf879-7146-417c-abfd-cb4272f0c935",
instanceID: "",
fail: true,
},
{
providerID: "other-provider:///7b9cf879-7146-417c-abfd-cb4272f0c935",
instanceID: "",
fail: true,
},
}
for _, test := range testCases {
instanceID, err := instanceIDFromProviderID(test.providerID)
if (err != nil) != test.fail {
t.Errorf("%s yielded `err != nil` as %t. expected %t", test.providerID, (err != nil), test.fail)
}
if test.fail {
continue
}
if instanceID != test.instanceID {
t.Errorf("%s yielded %s. expected %s", test.providerID, instanceID, test.instanceID)
}
}
}
func TestToAuth3Options(t *testing.T) {
cfg := Config{}
cfg.Global.Username = "user"
cfg.Global.Password = "pass" // Fake value for testing.
cfg.Global.DomainID = "2a73b8f597c04551a0fdc8e95544be8a"
cfg.Global.DomainName = "local"
cfg.Global.AuthURL = "http://auth.url"
cfg.Global.UserID = "user"
ao := cfg.toAuth3Options()
if !ao.AllowReauth {
t.Errorf("Will need to be able to reauthenticate")
}
if ao.Username != cfg.Global.Username {
t.Errorf("Username %s != %s", ao.Username, cfg.Global.Username)
}
if ao.Password != cfg.Global.Password {
t.Errorf("Password %s != %s", ao.Password, cfg.Global.Password)
}
if ao.DomainID != cfg.Global.DomainID {
t.Errorf("DomainID %s != %s", ao.DomainID, cfg.Global.DomainID)
}
if ao.IdentityEndpoint != cfg.Global.AuthURL {
t.Errorf("IdentityEndpoint %s != %s", ao.IdentityEndpoint, cfg.Global.AuthURL)
}
if ao.UserID != cfg.Global.UserID {
t.Errorf("UserID %s != %s", ao.UserID, cfg.Global.UserID)
}
if ao.DomainName != cfg.Global.DomainName {
t.Errorf("DomainName %s != %s", ao.DomainName, cfg.Global.DomainName)
}
}
func clearEnviron(t *testing.T) []string {
env := os.Environ()
for _, pair := range env {
if strings.HasPrefix(pair, "OS_") {
i := strings.Index(pair, "=") + 1
os.Unsetenv(pair[:i-1])
}
}
return env
}
func resetEnviron(t *testing.T, items []string) {
for _, pair := range items {
if strings.HasPrefix(pair, "OS_") {
i := strings.Index(pair, "=") + 1
if err := os.Setenv(pair[:i-1], pair[i:]); err != nil {
t.Errorf("Setenv(%q, %q) failed during reset: %v", pair[:i-1], pair[i:], err)
}
}
}
}

@@ -1,769 +0,0 @@
//go:build !providerless
// +build !providerless
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"context"
"errors"
"fmt"
"io/ioutil"
"path"
"path/filepath"
"strings"
"time"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/types"
cloudprovider "k8s.io/cloud-provider"
cloudvolume "k8s.io/cloud-provider/volume"
volerr "k8s.io/cloud-provider/volume/errors"
volumehelpers "k8s.io/cloud-provider/volume/helpers"
"k8s.io/component-base/metrics"
"github.com/gophercloud/gophercloud"
volumeexpand "github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions"
volumes_v1 "github.com/gophercloud/gophercloud/openstack/blockstorage/v1/volumes"
volumes_v2 "github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes"
volumes_v3 "github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/volumeattach"
"k8s.io/klog/v2"
)
type volumeService interface {
createVolume(opts volumeCreateOpts) (string, string, error)
getVolume(volumeID string) (Volume, error)
deleteVolume(volumeName string) error
expandVolume(volumeID string, newSize int) error
}
// VolumesV1 is a Volumes implementation for cinder v1
type VolumesV1 struct {
blockstorage *gophercloud.ServiceClient
opts BlockStorageOpts
}
// VolumesV2 is a Volumes implementation for cinder v2
type VolumesV2 struct {
blockstorage *gophercloud.ServiceClient
opts BlockStorageOpts
}
// VolumesV3 is a Volumes implementation for cinder v3
type VolumesV3 struct {
blockstorage *gophercloud.ServiceClient
opts BlockStorageOpts
}
// Volume stores information about a single volume
type Volume struct {
// ID of the instance, to which this volume is attached. "" if not attached
AttachedServerID string
// Device file path
AttachedDevice string
// availabilityZone is which availability zone the volume is in
AvailabilityZone string
// Unique identifier for the volume.
ID string
// Human-readable display name for the volume.
Name string
// Current status of the volume.
Status string
// Volume size in GB
Size int
}
type volumeCreateOpts struct {
Size int
Availability string
Name string
VolumeType string
Metadata map[string]string
}
// implements PVLabeler.
var _ cloudprovider.PVLabeler = (*OpenStack)(nil)
const (
volumeAvailableStatus = "available"
volumeInUseStatus = "in-use"
volumeDeletedStatus = "deleted"
volumeErrorStatus = "error"
// On some environments, we need to query the metadata service in order
// to locate disks. We'll use the Newton version, which includes device
// metadata.
newtonMetadataVersion = "2016-06-30"
)
func (volumes *VolumesV1) createVolume(opts volumeCreateOpts) (string, string, error) {
startTime := time.Now()
createOpts := volumes_v1.CreateOpts{
Name: opts.Name,
Size: opts.Size,
VolumeType: opts.VolumeType,
AvailabilityZone: opts.Availability,
Metadata: opts.Metadata,
}
vol, err := volumes_v1.Create(volumes.blockstorage, createOpts).Extract()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("create_v1_volume", timeTaken, err)
if err != nil {
return "", "", err
}
return vol.ID, vol.AvailabilityZone, nil
}
func (volumes *VolumesV2) createVolume(opts volumeCreateOpts) (string, string, error) {
startTime := time.Now()
createOpts := volumes_v2.CreateOpts{
Name: opts.Name,
Size: opts.Size,
VolumeType: opts.VolumeType,
AvailabilityZone: opts.Availability,
Metadata: opts.Metadata,
}
vol, err := volumes_v2.Create(volumes.blockstorage, createOpts).Extract()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("create_v2_volume", timeTaken, err)
if err != nil {
return "", "", err
}
return vol.ID, vol.AvailabilityZone, nil
}
func (volumes *VolumesV3) createVolume(opts volumeCreateOpts) (string, string, error) {
startTime := time.Now()
createOpts := volumes_v3.CreateOpts{
Name: opts.Name,
Size: opts.Size,
VolumeType: opts.VolumeType,
AvailabilityZone: opts.Availability,
Metadata: opts.Metadata,
}
vol, err := volumes_v3.Create(volumes.blockstorage, createOpts).Extract()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("create_v3_volume", timeTaken, err)
if err != nil {
return "", "", err
}
return vol.ID, vol.AvailabilityZone, nil
}
func (volumes *VolumesV1) getVolume(volumeID string) (Volume, error) {
startTime := time.Now()
volumeV1, err := volumes_v1.Get(volumes.blockstorage, volumeID).Extract()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("get_v1_volume", timeTaken, err)
if err != nil {
if isNotFound(err) {
return Volume{}, ErrNotFound
}
return Volume{}, fmt.Errorf("error occurred getting volume by ID: %s, err: %v", volumeID, err)
}
volume := Volume{
AvailabilityZone: volumeV1.AvailabilityZone,
ID: volumeV1.ID,
Name: volumeV1.Name,
Status: volumeV1.Status,
Size: volumeV1.Size,
}
if len(volumeV1.Attachments) > 0 && volumeV1.Attachments[0]["server_id"] != nil {
volume.AttachedServerID = volumeV1.Attachments[0]["server_id"].(string)
volume.AttachedDevice = volumeV1.Attachments[0]["device"].(string)
}
return volume, nil
}
func (volumes *VolumesV2) getVolume(volumeID string) (Volume, error) {
startTime := time.Now()
volumeV2, err := volumes_v2.Get(volumes.blockstorage, volumeID).Extract()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("get_v2_volume", timeTaken, err)
if err != nil {
if isNotFound(err) {
return Volume{}, ErrNotFound
}
return Volume{}, fmt.Errorf("error occurred getting volume by ID: %s, err: %v", volumeID, err)
}
volume := Volume{
AvailabilityZone: volumeV2.AvailabilityZone,
ID: volumeV2.ID,
Name: volumeV2.Name,
Status: volumeV2.Status,
Size: volumeV2.Size,
}
if len(volumeV2.Attachments) > 0 {
volume.AttachedServerID = volumeV2.Attachments[0].ServerID
volume.AttachedDevice = volumeV2.Attachments[0].Device
}
return volume, nil
}
func (volumes *VolumesV3) getVolume(volumeID string) (Volume, error) {
startTime := time.Now()
volumeV3, err := volumes_v3.Get(volumes.blockstorage, volumeID).Extract()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("get_v3_volume", timeTaken, err)
if err != nil {
if isNotFound(err) {
return Volume{}, ErrNotFound
}
return Volume{}, fmt.Errorf("error occurred getting volume by ID: %s, err: %v", volumeID, err)
}
volume := Volume{
AvailabilityZone: volumeV3.AvailabilityZone,
ID: volumeV3.ID,
Name: volumeV3.Name,
Status: volumeV3.Status,
Size: volumeV3.Size,
}
if len(volumeV3.Attachments) > 0 {
volume.AttachedServerID = volumeV3.Attachments[0].ServerID
volume.AttachedDevice = volumeV3.Attachments[0].Device
}
return volume, nil
}
func (volumes *VolumesV1) deleteVolume(volumeID string) error {
startTime := time.Now()
err := volumes_v1.Delete(volumes.blockstorage, volumeID).ExtractErr()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("delete_v1_volume", timeTaken, err)
return err
}
func (volumes *VolumesV2) deleteVolume(volumeID string) error {
startTime := time.Now()
err := volumes_v2.Delete(volumes.blockstorage, volumeID, nil).ExtractErr()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("delete_v2_volume", timeTaken, err)
return err
}
func (volumes *VolumesV3) deleteVolume(volumeID string) error {
startTime := time.Now()
err := volumes_v3.Delete(volumes.blockstorage, volumeID, nil).ExtractErr()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("delete_v3_volume", timeTaken, err)
return err
}
func (volumes *VolumesV1) expandVolume(volumeID string, newSize int) error {
startTime := time.Now()
createOpts := volumeexpand.ExtendSizeOpts{
NewSize: newSize,
}
err := volumeexpand.ExtendSize(volumes.blockstorage, volumeID, createOpts).ExtractErr()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("expand_volume", timeTaken, err)
return err
}
func (volumes *VolumesV2) expandVolume(volumeID string, newSize int) error {
startTime := time.Now()
createOpts := volumeexpand.ExtendSizeOpts{
NewSize: newSize,
}
err := volumeexpand.ExtendSize(volumes.blockstorage, volumeID, createOpts).ExtractErr()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("expand_volume", timeTaken, err)
return err
}
func (volumes *VolumesV3) expandVolume(volumeID string, newSize int) error {
startTime := time.Now()
createOpts := volumeexpand.ExtendSizeOpts{
NewSize: newSize,
}
err := volumeexpand.ExtendSize(volumes.blockstorage, volumeID, createOpts).ExtractErr()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("expand_volume", timeTaken, err)
return err
}
// OperationPending checks if there is an operation pending on a volume
func (os *OpenStack) OperationPending(diskName string) (bool, string, error) {
volume, err := os.getVolume(diskName)
if err != nil {
return false, "", err
}
volumeStatus := volume.Status
if volumeStatus == volumeErrorStatus {
err = fmt.Errorf("status of volume %s is %s", diskName, volumeStatus)
return false, volumeStatus, err
}
if volumeStatus == volumeAvailableStatus || volumeStatus == volumeInUseStatus || volumeStatus == volumeDeletedStatus {
return false, volume.Status, nil
}
return true, volumeStatus, nil
}
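// A hypothetical illustration (not part of the original file) of how a caller
// might poll OperationPending until the volume reaches a stable state,
// assuming k8s.io/apimachinery/pkg/util/wait is imported as wait:
//
//	func waitForVolumeSettled(os *OpenStack, volumeID string) (string, error) {
//		var lastStatus string
//		err := wait.Poll(2*time.Second, 2*time.Minute, func() (bool, error) {
//			pending, status, err := os.OperationPending(volumeID)
//			if err != nil {
//				return false, err
//			}
//			lastStatus = status
//			return !pending, nil
//		})
//		return lastStatus, err
//	}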
// AttachDisk attaches the given Cinder volume to the compute instance running the kubelet
func (os *OpenStack) AttachDisk(instanceID, volumeID string) (string, error) {
volume, err := os.getVolume(volumeID)
if err != nil {
return "", err
}
cClient, err := os.NewComputeV2()
if err != nil {
return "", err
}
if volume.AttachedServerID != "" {
if instanceID == volume.AttachedServerID {
klog.V(4).Infof("Disk %s is already attached to instance %s", volumeID, instanceID)
return volume.ID, nil
}
nodeName, err := os.GetNodeNameByID(volume.AttachedServerID)
attachErr := fmt.Sprintf("disk %s path %s is attached to a different instance (%s)", volumeID, volume.AttachedDevice, volume.AttachedServerID)
if err != nil {
klog.Error(attachErr)
return "", errors.New(attachErr)
}
// Using volume.AttachedDevice may cause problems because Cinder does not always report the device path correctly; see issue #33128
devicePath := volume.AttachedDevice
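// A dangling error tells the attach/detach controller that the volume is
// currently attached to another node, so it can be detached there before
// the attach is retried.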
danglingErr := volerr.NewDanglingError(attachErr, nodeName, devicePath)
klog.V(2).Infof("Found dangling volume %s attached to node %s", volumeID, nodeName)
return "", danglingErr
}
startTime := time.Now()
// TODO(spothanis): add a read-only flag here if possible
_, err = volumeattach.Create(cClient, instanceID, &volumeattach.CreateOpts{
VolumeID: volume.ID,
}).Extract()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("attach_disk", timeTaken, err)
if err != nil {
return "", fmt.Errorf("failed to attach %s volume to %s compute: %v", volumeID, instanceID, err)
}
klog.V(2).Infof("Successfully attached %s volume to %s compute", volumeID, instanceID)
return volume.ID, nil
}
// DetachDisk detaches the given Cinder volume from the compute instance running the kubelet
func (os *OpenStack) DetachDisk(instanceID, volumeID string) error {
volume, err := os.getVolume(volumeID)
if err != nil {
return err
}
if volume.Status == volumeAvailableStatus {
// "available" is fine since that means the volume is detached from instance already.
klog.V(2).Infof("volume: %s has been detached from compute: %s ", volume.ID, instanceID)
return nil
}
if volume.Status != volumeInUseStatus {
return fmt.Errorf("can not detach volume %s, its status is %s", volume.Name, volume.Status)
}
cClient, err := os.NewComputeV2()
if err != nil {
return err
}
if volume.AttachedServerID != instanceID {
return fmt.Errorf("disk: %s has no attachments or is not attached to compute: %s", volume.Name, instanceID)
}
startTime := time.Now()
// This is a blocking call and affects the kubelet's performance directly.
// Consider moving it into a separate goroutine if that becomes a problem.
err = volumeattach.Delete(cClient, instanceID, volume.ID).ExtractErr()
timeTaken := time.Since(startTime).Seconds()
recordOpenstackOperationMetric("detach_disk", timeTaken, err)
if err != nil {
return fmt.Errorf("failed to delete volume %s from compute %s attached %v", volume.ID, instanceID, err)
}
klog.V(2).Infof("Successfully detached volume: %s from compute: %s", volume.ID, instanceID)
return nil
}
// ExpandVolume expands the specified Cinder volume to the given size (in GiB)
func (os *OpenStack) ExpandVolume(volumeID string, oldSize resource.Quantity, newSize resource.Quantity) (resource.Quantity, error) {
volume, err := os.getVolume(volumeID)
if err != nil {
return oldSize, err
}
if volume.Status != volumeAvailableStatus {
// A Cinder volume cannot be expanded unless its status is "available".
if volume.Status == volumeInUseStatus {
// Send a friendly event when the volume is still in use.
return oldSize, fmt.Errorf("PVC used by a Pod cannot be expanded; ensure the PVC is not used by any Pod and is fully detached from a node")
}
// Send a less friendly event when the volume is in any other state (deleted, error).
return oldSize, fmt.Errorf("volume in state %q cannot be expanded, it must be \"available\"", volume.Status)
}
}
// Cinder works with whole gibibytes; round the requested size up to GiB
volSizeGiB, err := volumehelpers.RoundUpToGiBInt(newSize)
if err != nil {
return oldSize, err
}
newSizeQuant := resource.MustParse(fmt.Sprintf("%dGi", volSizeGiB))
// If the current volume size already satisfies the request, return the rounded size without resizing
if volume.Size >= volSizeGiB {
return newSizeQuant, nil
}
volumes, err := os.volumeService("")
if err != nil {
return oldSize, err
}
err = volumes.expandVolume(volumeID, volSizeGiB)
if err != nil {
return oldSize, err
}
return newSizeQuant, nil
}
// getVolume retrieves Volume by its ID.
func (os *OpenStack) getVolume(volumeID string) (Volume, error) {
volumes, err := os.volumeService("")
if err != nil {
return Volume{}, fmt.Errorf("unable to initialize cinder client for region: %s, err: %v", os.region, err)
}
return volumes.getVolume(volumeID)
}
// CreateVolume creates a volume of the given size (in GiB)
func (os *OpenStack) CreateVolume(name string, size int, vtype, availability string, tags *map[string]string) (string, string, string, bool, error) {
volumes, err := os.volumeService("")
if err != nil {
return "", "", "", os.bsOpts.IgnoreVolumeAZ, fmt.Errorf("unable to initialize cinder client for region: %s, err: %v", os.region, err)
}
opts := volumeCreateOpts{
Name: name,
Size: size,
VolumeType: vtype,
Availability: availability,
}
if tags != nil {
opts.Metadata = *tags
}
volumeID, volumeAZ, err := volumes.createVolume(opts)
if err != nil {
return "", "", "", os.bsOpts.IgnoreVolumeAZ, fmt.Errorf("failed to create a %d GB volume: %v", size, err)
}
klog.Infof("Created volume %v in Availability Zone: %v Region: %v Ignore volume AZ: %v", volumeID, volumeAZ, os.region, os.bsOpts.IgnoreVolumeAZ)
return volumeID, volumeAZ, os.region, os.bsOpts.IgnoreVolumeAZ, nil
}
// GetDevicePathBySerialID returns the path of an attached block storage volume, specified by its id.
func (os *OpenStack) GetDevicePathBySerialID(volumeID string) string {
// Build a list of candidate device paths.
// Certain Nova drivers set the disk serial ID to include the Cinder volume ID.
// Newer OpenStack releases may not truncate the volume ID to 20 characters.
candidateDeviceNodes := []string{
// KVM
fmt.Sprintf("virtio-%s", volumeID[:20]),
fmt.Sprintf("virtio-%s", volumeID),
// KVM virtio-scsi
fmt.Sprintf("scsi-0QEMU_QEMU_HARDDISK_%s", volumeID[:20]),
fmt.Sprintf("scsi-0QEMU_QEMU_HARDDISK_%s", volumeID),
// ESXi
fmt.Sprintf("wwn-0x%s", strings.Replace(volumeID, "-", "", -1)),
}
files, _ := ioutil.ReadDir("/dev/disk/by-id/")
for _, f := range files {
for _, c := range candidateDeviceNodes {
if c == f.Name() {
klog.V(4).Infof("Found disk attached as %q; full devicepath: %s\n", f.Name(), path.Join("/dev/disk/by-id/", f.Name()))
return path.Join("/dev/disk/by-id/", f.Name())
}
}
}
klog.V(4).Infof("Failed to find device for the volumeID: %q by serial ID", volumeID)
return ""
}
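// Example, derived from the format strings above: for
// volumeID "1d6ff08f-5d1c-41a4-ad72-4ef872cae685" the candidate names probed
// under /dev/disk/by-id/ are:
//
//	virtio-1d6ff08f-5d1c-41a4-a                                    (ID truncated to 20 chars)
//	virtio-1d6ff08f-5d1c-41a4-ad72-4ef872cae685
//	scsi-0QEMU_QEMU_HARDDISK_1d6ff08f-5d1c-41a4-a
//	scsi-0QEMU_QEMU_HARDDISK_1d6ff08f-5d1c-41a4-ad72-4ef872cae685
//	wwn-0x1d6ff08f5d1c41a4ad724ef872cae685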
func (os *OpenStack) getDevicePathFromInstanceMetadata(volumeID string) string {
// Nova Hyper-V hosts cannot override disk SCSI IDs. In order to locate
// volumes, we're querying the metadata service. Note that the Hyper-V
// driver will include device metadata for untagged volumes as well.
//
// We avoid cached metadata (and the config drive) and rely on the
// metadata service instead.
instanceMetadata, err := getMetadataFromMetadataService(
newtonMetadataVersion)
if err != nil {
klog.V(4).Infof(
"Could not retrieve instance metadata. Error: %v", err)
return ""
}
for _, device := range instanceMetadata.Devices {
if device.Type == "disk" && device.Serial == volumeID {
klog.V(4).Infof(
"Found disk metadata for volumeID %q. Bus: %q, Address: %q",
volumeID, device.Bus, device.Address)
diskPattern := fmt.Sprintf(
"/dev/disk/by-path/*-%s-%s",
device.Bus, device.Address)
diskPaths, err := filepath.Glob(diskPattern)
if err != nil {
klog.Errorf(
"could not retrieve disk path for volumeID: %q. Error filepath.Glob(%q): %v",
volumeID, diskPattern, err)
return ""
}
if len(diskPaths) == 1 {
return diskPaths[0]
}
klog.Errorf(
"expecting to find one disk path for volumeID %q, found %d: %v",
volumeID, len(diskPaths), diskPaths)
return ""
}
}
klog.V(4).Infof(
"Could not retrieve device metadata for volumeID: %q", volumeID)
return ""
}
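// Example (hypothetical values): a metadata device entry with Bus "scsi" and
// Address "0:0:0:1" yields the glob pattern "/dev/disk/by-path/*-scsi-0:0:0:1",
// which is expected to match exactly one symlink, e.g.
// /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1.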
// GetDevicePath returns the path of an attached block storage volume, specified by its id.
func (os *OpenStack) GetDevicePath(volumeID string) string {
devicePath := os.GetDevicePathBySerialID(volumeID)
if devicePath == "" {
devicePath = os.getDevicePathFromInstanceMetadata(volumeID)
}
if devicePath == "" {
klog.Warningf("Failed to find device for the volumeID: %q", volumeID)
}
return devicePath
}
// DeleteVolume deletes the volume with the given ID.
func (os *OpenStack) DeleteVolume(volumeID string) error {
used, err := os.diskIsUsed(volumeID)
if err != nil {
return err
}
if used {
msg := fmt.Sprintf("Cannot delete the volume %q, it's still attached to a node", volumeID)
return volerr.NewDeletedVolumeInUseError(msg)
}
volumes, err := os.volumeService("")
if err != nil {
return fmt.Errorf("unable to initialize cinder client for region: %s, err: %v", os.region, err)
}
err = volumes.deleteVolume(volumeID)
return err
}
// GetAttachmentDiskPath gets the device path of a volume attached to the compute instance running the kubelet, as reported by Cinder
func (os *OpenStack) GetAttachmentDiskPath(instanceID, volumeID string) (string, error) {
// See issue #33128 - Cinder does not always report the correct device path,
// so this value must only be used as a last resort.
volume, err := os.getVolume(volumeID)
if err != nil {
return "", err
}
if volume.Status != volumeInUseStatus {
return "", fmt.Errorf("can not get device path of volume %s, its status is %s ", volume.Name, volume.Status)
}
if volume.AttachedServerID != "" {
if instanceID == volume.AttachedServerID {
// Attachment[0]["device"] points to the device path
// see http://developer.openstack.org/api-ref-blockstorage-v1.html
return volume.AttachedDevice, nil
}
return "", fmt.Errorf("disk %q is attached to a different compute: %q, should be detached before proceeding", volumeID, volume.AttachedServerID)
}
return "", fmt.Errorf("volume %s has no ServerId", volumeID)
}
// DiskIsAttached queries if a volume is attached to a compute instance
func (os *OpenStack) DiskIsAttached(instanceID, volumeID string) (bool, error) {
if instanceID == "" {
klog.Warningf("calling DiskIsAttached with empty instanceid: %s %s", instanceID, volumeID)
}
volume, err := os.getVolume(volumeID)
if err != nil {
if err == ErrNotFound {
// Volume does not exist, so it cannot be attached.
return false, nil
}
return false, err
}
return instanceID == volume.AttachedServerID, nil
}
// DiskIsAttachedByName queries if a volume is attached to a compute instance by name
func (os *OpenStack) DiskIsAttachedByName(nodeName types.NodeName, volumeID string) (bool, string, error) {
cClient, err := os.NewComputeV2()
if err != nil {
return false, "", err
}
srv, err := getServerByName(cClient, nodeName)
if err != nil {
if err == ErrNotFound {
// instance no longer found in the cloud provider; assume the Cinder volume is detached
return false, "", nil
}
return false, "", err
}
instanceID := "/" + srv.ID
if ind := strings.LastIndex(instanceID, "/"); ind >= 0 {
instanceID = instanceID[(ind + 1):]
}
attached, err := os.DiskIsAttached(instanceID, volumeID)
return attached, instanceID, err
}
// DisksAreAttached queries if a list of volumes are attached to a compute instance
func (os *OpenStack) DisksAreAttached(instanceID string, volumeIDs []string) (map[string]bool, error) {
attached := make(map[string]bool)
for _, volumeID := range volumeIDs {
isAttached, err := os.DiskIsAttached(instanceID, volumeID)
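// If the check fails for any reason other than the volume not existing,
// conservatively report the volume as attached rather than treat an
// unknown state as detached.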
if err != nil && err != ErrNotFound {
attached[volumeID] = true
continue
}
attached[volumeID] = isAttached
}
return attached, nil
}
// DisksAreAttachedByName queries if a list of volumes are attached to a compute instance by name
func (os *OpenStack) DisksAreAttachedByName(nodeName types.NodeName, volumeIDs []string) (map[string]bool, error) {
attached := make(map[string]bool)
cClient, err := os.NewComputeV2()
if err != nil {
return attached, err
}
srv, err := getServerByName(cClient, nodeName)
if err != nil {
if err == ErrNotFound {
// instance no longer found; mark all volumes as detached
for _, volumeID := range volumeIDs {
attached[volumeID] = false
}
return attached, nil
}
return attached, err
}
instanceID := "/" + srv.ID
if ind := strings.LastIndex(instanceID, "/"); ind >= 0 {
instanceID = instanceID[(ind + 1):]
}
return os.DisksAreAttached(instanceID, volumeIDs)
}
// diskIsUsed returns true if the disk is attached to any node.
func (os *OpenStack) diskIsUsed(volumeID string) (bool, error) {
volume, err := os.getVolume(volumeID)
if err != nil {
return false, err
}
return volume.AttachedServerID != "", nil
}
// ShouldTrustDevicePath reports whether the Cinder-provided device name should be trusted; see issue #33128
func (os *OpenStack) ShouldTrustDevicePath() bool {
return os.bsOpts.TrustDevicePath
}
// NodeVolumeAttachLimit specifies the number of Cinder volumes that can be attached to this node.
func (os *OpenStack) NodeVolumeAttachLimit() int {
return os.bsOpts.NodeVolumeAttachLimit
}
// GetLabelsForVolume implements PVLabeler.GetLabelsForVolume
func (os *OpenStack) GetLabelsForVolume(ctx context.Context, pv *v1.PersistentVolume) (map[string]string, error) {
// Ignore if not Cinder.
if pv.Spec.Cinder == nil {
return nil, nil
}
// Ignore any volumes that are being provisioned
if pv.Spec.Cinder.VolumeID == cloudvolume.ProvisionedVolumeName {
return nil, nil
}
// if volume az is to be ignored we should return nil from here
if os.bsOpts.IgnoreVolumeAZ {
return nil, nil
}
// Get Volume
volume, err := os.getVolume(pv.Spec.Cinder.VolumeID)
if err != nil {
return nil, err
}
// Construct Volume Labels
labels := make(map[string]string)
if volume.AvailabilityZone != "" {
labels[v1.LabelTopologyZone] = volume.AvailabilityZone
}
if os.region != "" {
labels[v1.LabelTopologyRegion] = os.region
}
klog.V(4).Infof("The Volume %s has labels %v", pv.Spec.Cinder.VolumeID, labels)
return labels, nil
}
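// Example (hypothetical values): for a volume in availability zone "nova" in
// region "RegionOne", the returned labels would be:
//
//	topology.kubernetes.io/zone:   nova
//	topology.kubernetes.io/region: RegionOne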
// recordOpenstackOperationMetric records openstack operation metrics
func recordOpenstackOperationMetric(operation string, timeTaken float64, err error) {
if err != nil {
openstackAPIRequestErrors.With(metrics.Labels{"request": operation}).Inc()
} else {
openstackOperationsLatency.With(metrics.Labels{"request": operation}).Observe(timeTaken)
}
}
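// The two metric vectors used above are defined elsewhere in this package. A
// rough sketch of their shape, assuming k8s.io/component-base/metrics (names
// and help strings are illustrative, not verbatim):
//
//	var (
//		openstackOperationsLatency = metrics.NewHistogramVec(
//			&metrics.HistogramOpts{
//				Subsystem: "openstack",
//				Name:      "api_request_duration_seconds",
//				Help:      "Latency of an OpenStack API call, labeled by request type.",
//			},
//			[]string{"request"},
//		)
//		openstackAPIRequestErrors = metrics.NewCounterVec(
//			&metrics.CounterOpts{
//				Subsystem: "openstack",
//				Name:      "api_request_errors",
//				Help:      "Count of failed OpenStack API calls, labeled by request type.",
//			},
//			[]string{"request"},
//		)
//	)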


@ -122,8 +122,6 @@ func restrictedVolumes_1_0(podMetadata *metav1.ObjectMeta, podSpec *corev1.PodSp
badVolumeTypes.Insert("rbd")
case volume.FlexVolume != nil:
badVolumeTypes.Insert("flexVolume")
case volume.Cinder != nil:
badVolumeTypes.Insert("cinder")
case volume.CephFS != nil:
badVolumeTypes.Insert("cephfs")
case volume.Flocker != nil:


@ -53,7 +53,6 @@ func TestRestrictedVolumes(t *testing.T) {
{Name: "b7", VolumeSource: corev1.VolumeSource{Glusterfs: &corev1.GlusterfsVolumeSource{}}},
{Name: "b8", VolumeSource: corev1.VolumeSource{RBD: &corev1.RBDVolumeSource{}}},
{Name: "b9", VolumeSource: corev1.VolumeSource{FlexVolume: &corev1.FlexVolumeSource{}}},
{Name: "b10", VolumeSource: corev1.VolumeSource{Cinder: &corev1.CinderVolumeSource{}}},
{Name: "b11", VolumeSource: corev1.VolumeSource{CephFS: &corev1.CephFSVolumeSource{}}},
{Name: "b12", VolumeSource: corev1.VolumeSource{Flocker: &corev1.FlockerVolumeSource{}}},
{Name: "b13", VolumeSource: corev1.VolumeSource{FC: &corev1.FCVolumeSource{}}},
@ -72,9 +71,9 @@ func TestRestrictedVolumes(t *testing.T) {
}},
expectReason: `restricted volume types`,
expectDetail: `volumes ` +
`"b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "c1"` +
`"b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "c1"` +
` use restricted volume types ` +
`"awsElasticBlockStore", "azureDisk", "azureFile", "cephfs", "cinder", "fc", "flexVolume", "flocker", "gcePersistentDisk", "gitRepo", "glusterfs", ` +
`"awsElasticBlockStore", "azureDisk", "azureFile", "cephfs", "fc", "flexVolume", "flocker", "gcePersistentDisk", "gitRepo", "glusterfs", ` +
`"hostPath", "iscsi", "nfs", "photonPersistentDisk", "portworxVolume", "quobyte", "rbd", "scaleIO", "storageos", "unknown", "vsphereVolume"`,
},
}


@ -31,8 +31,8 @@ limitations under the License.
* Note that the server containers are for testing purposes only and should not
* be used in production.
*
* 2) With server outside of Kubernetes (Cinder, ...)
* Appropriate server (e.g. OpenStack Cinder) must exist somewhere outside
* 2) With server outside of Kubernetes
* An appropriate server must exist somewhere outside
* the tested Kubernetes cluster. The test itself creates a new volume,
* and checks, that Kubernetes can use it as a volume.
*/


@ -56,7 +56,6 @@ import (
_ "k8s.io/kubernetes/test/e2e/framework/providers/azure"
_ "k8s.io/kubernetes/test/e2e/framework/providers/gce"
_ "k8s.io/kubernetes/test/e2e/framework/providers/kubemark"
_ "k8s.io/kubernetes/test/e2e/framework/providers/openstack"
_ "k8s.io/kubernetes/test/e2e/framework/providers/vsphere"
// Ensure that logging flags are part of the command line.


@ -1,34 +0,0 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package openstack
import (
"k8s.io/kubernetes/test/e2e/framework"
)
func init() {
framework.RegisterProvider("openstack", newProvider)
}
func newProvider() (framework.ProviderInterface, error) {
return &Provider{}, nil
}
// Provider is a structure to handle OpenStack clouds for e2e testing
type Provider struct {
framework.NullProvider
}


@ -256,7 +256,7 @@ type CloudConfig struct {
ClusterIPRange string
ClusterTag string
Network string
ConfigFile string // for azure and openstack
ConfigFile string // for azure
NodeTag string
MasterTag string


@ -31,8 +31,8 @@ limitations under the License.
* Note that the server containers are for testing purposes only and should not
* be used in production.
*
* 2) With server outside of Kubernetes (Cinder, ...)
* Appropriate server (e.g. OpenStack Cinder) must exist somewhere outside
* 2) With server outside of Kubernetes
* Appropriate server must exist somewhere outside
* the tested Kubernetes cluster. The test itself creates a new volume,
* and checks, that Kubernetes can use it as a volume.
*/


@ -27,7 +27,7 @@ limitations under the License.
* Note that the server containers are for testing purposes only and should not
* be used in production.
*
* 2) With server or cloud provider outside of Kubernetes (Cinder, GCE, AWS, Azure, ...)
* 2) With server or cloud provider outside of Kubernetes (GCE, AWS, Azure, ...)
* Appropriate server or cloud provider must exist somewhere outside
* the tested Kubernetes cluster. CreateVolume will create a new volume to be
* used in the TestSuites for inlineVolume or DynamicPV tests.


@ -27,7 +27,7 @@ limitations under the License.
* Note that the server containers are for testing purposes only and should not
* be used in production.
*
* 2) With server or cloud provider outside of Kubernetes (Cinder, GCE, AWS, Azure, ...)
* 2) With server or cloud provider outside of Kubernetes (GCE, AWS, Azure, ...)
* Appropriate server or cloud provider must exist somewhere outside
* the tested Kubernetes cluster. CreateVolume will create a new volume to be
* used in the TestSuites for inlineVolume or DynamicPV tests.
@ -38,7 +38,6 @@ package drivers
import (
"context"
"fmt"
"os/exec"
"strconv"
"strings"
"time"
@ -1036,179 +1035,6 @@ func (e *emptydirDriver) PrepareTest(f *framework.Framework) (*storageframework.
}, func() {}
}
// Cinder
// This driver assumes that the OpenStack client tools are installed
// (/usr/bin/nova, /usr/bin/cinder and /usr/bin/keystone)
// and that the usual OpenStack authentication environment variables are set
// (OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME at least).
type cinderDriver struct {
driverInfo storageframework.DriverInfo
}
type cinderVolume struct {
volumeName string
volumeID string
}
var _ storageframework.TestDriver = &cinderDriver{}
var _ storageframework.PreprovisionedVolumeTestDriver = &cinderDriver{}
var _ storageframework.InlineVolumeTestDriver = &cinderDriver{}
var _ storageframework.PreprovisionedPVTestDriver = &cinderDriver{}
var _ storageframework.DynamicPVTestDriver = &cinderDriver{}
// InitCinderDriver returns cinderDriver that implements TestDriver interface
func InitCinderDriver() storageframework.TestDriver {
return &cinderDriver{
driverInfo: storageframework.DriverInfo{
Name: "cinder",
InTreePluginName: "kubernetes.io/cinder",
MaxFileSize: storageframework.FileSizeMedium,
SupportedSizeRange: e2evolume.SizeRange{
Min: "1Gi",
},
SupportedFsType: sets.NewString(
"", // Default fsType
),
TopologyKeys: []string{v1.LabelFailureDomainBetaZone},
Capabilities: map[storageframework.Capability]bool{
storageframework.CapPersistence: true,
storageframework.CapFsGroup: true,
storageframework.CapExec: true,
storageframework.CapBlock: true,
// Cinder supports volume limits, but the test creates large
// number of volumes and times out test suites.
storageframework.CapVolumeLimits: false,
storageframework.CapTopology: true,
},
},
}
}
func (c *cinderDriver) GetDriverInfo() *storageframework.DriverInfo {
return &c.driverInfo
}
func (c *cinderDriver) SkipUnsupportedTest(pattern storageframework.TestPattern) {
e2eskipper.SkipUnlessProviderIs("openstack")
}
func (c *cinderDriver) GetVolumeSource(readOnly bool, fsType string, e2evolume storageframework.TestVolume) *v1.VolumeSource {
cv, ok := e2evolume.(*cinderVolume)
framework.ExpectEqual(ok, true, "Failed to cast test volume to Cinder test volume")
volSource := v1.VolumeSource{
Cinder: &v1.CinderVolumeSource{
VolumeID: cv.volumeID,
ReadOnly: readOnly,
},
}
if fsType != "" {
volSource.Cinder.FSType = fsType
}
return &volSource
}
func (c *cinderDriver) GetPersistentVolumeSource(readOnly bool, fsType string, e2evolume storageframework.TestVolume) (*v1.PersistentVolumeSource, *v1.VolumeNodeAffinity) {
cv, ok := e2evolume.(*cinderVolume)
framework.ExpectEqual(ok, true, "Failed to cast test volume to Cinder test volume")
pvSource := v1.PersistentVolumeSource{
Cinder: &v1.CinderPersistentVolumeSource{
VolumeID: cv.volumeID,
ReadOnly: readOnly,
},
}
if fsType != "" {
pvSource.Cinder.FSType = fsType
}
return &pvSource, nil
}
func (c *cinderDriver) GetDynamicProvisionStorageClass(config *storageframework.PerTestConfig, fsType string) *storagev1.StorageClass {
provisioner := "kubernetes.io/cinder"
parameters := map[string]string{}
if fsType != "" {
parameters["fsType"] = fsType
}
ns := config.Framework.Namespace.Name
return storageframework.GetStorageClass(provisioner, parameters, nil, ns)
}
func (c *cinderDriver) PrepareTest(f *framework.Framework) (*storageframework.PerTestConfig, func()) {
return &storageframework.PerTestConfig{
Driver: c,
Prefix: "cinder",
Framework: f,
}, func() {}
}
func (c *cinderDriver) CreateVolume(config *storageframework.PerTestConfig, volType storageframework.TestVolType) storageframework.TestVolume {
f := config.Framework
ns := f.Namespace
// We assume that namespace.Name is a random string
volumeName := ns.Name
ginkgo.By("creating a test Cinder volume")
output, err := exec.Command("cinder", "create", "--display-name="+volumeName, "1").CombinedOutput()
outputString := string(output[:])
framework.Logf("cinder output:\n%s", outputString)
framework.ExpectNoError(err)
// Parse 'id' from stdout. Expected format:
// | attachments | [] |
// | availability_zone | nova |
// ...
// | id | 1d6ff08f-5d1c-41a4-ad72-4ef872cae685 |
volumeID := ""
for _, line := range strings.Split(outputString, "\n") {
fields := strings.Fields(line)
if len(fields) != 5 {
continue
}
if fields[1] != "id" {
continue
}
volumeID = fields[3]
break
}
framework.Logf("Volume ID: %s", volumeID)
framework.ExpectNotEqual(volumeID, "")
return &cinderVolume{
volumeName: volumeName,
volumeID: volumeID,
}
}
func (v *cinderVolume) DeleteVolume() {
id := v.volumeID
name := v.volumeName
// Try to delete the volume for several seconds - it takes
// a while for the plugin to detach it.
var output []byte
var err error
timeout := time.Second * 120
framework.Logf("Waiting up to %v for removal of cinder volume %s / %s", timeout, id, name)
for start := time.Now(); time.Since(start) < timeout; time.Sleep(5 * time.Second) {
output, err = exec.Command("cinder", "delete", id).CombinedOutput()
if err == nil {
framework.Logf("Cinder volume %s deleted", id)
return
}
framework.Logf("Failed to delete volume %s / %s: %v\n%s", id, name, err, string(output))
}
// Timed out, try to get "cinder show <volume>" output for easier debugging
showOutput, showErr := exec.Command("cinder", "show", id).CombinedOutput()
if showErr != nil {
framework.Logf("Failed to show volume %s / %s: %v\n%s", id, name, showErr, string(showOutput))
} else {
framework.Logf("Volume %s / %s:\n%s", id, name, string(showOutput))
}
framework.Failf("Failed to delete pre-provisioned volume %s / %s: %v\n%s", id, name, err, string(output[:]))
}
// GCE
type gcePdDriver struct {
driverInfo storageframework.DriverInfo


@ -37,7 +37,6 @@ var testDrivers = []func() storageframework.TestDriver{
drivers.InitHostPathDriver,
drivers.InitHostPathSymlinkDriver,
drivers.InitEmptydirDriver,
drivers.InitCinderDriver,
drivers.InitVSphereDriver,
drivers.InitAzureDiskDriver,
drivers.InitAzureFileDriver,


@ -374,8 +374,6 @@ func getInTreeNodeLimits(cs clientset.Interface, nodeName string, driverInfo *st
allocatableKey = volumeutil.EBSVolumeLimitKey
case migrationplugins.GCEPDInTreePluginName:
allocatableKey = volumeutil.GCEVolumeLimitKey
case migrationplugins.CinderInTreePluginName:
allocatableKey = volumeutil.CinderVolumeLimitKey
case migrationplugins.AzureDiskInTreePluginName:
allocatableKey = volumeutil.AzureVolumeLimitKey
default:


@ -286,34 +286,6 @@ var _ = utils.SIGDescribe("Dynamic Provisioning", func() {
framework.ExpectNoError(err, "checkAWSEBS gp2 encrypted")
},
},
// OpenStack generic tests (works on all OpenStack deployments)
{
Name: "generic Cinder volume on OpenStack",
CloudProviders: []string{"openstack"},
Timeouts: f.Timeouts,
Provisioner: "kubernetes.io/cinder",
Parameters: map[string]string{},
ClaimSize: "1.5Gi",
ExpectedSize: "2Gi",
PvCheck: func(claim *v1.PersistentVolumeClaim) {
testsuites.PVWriteReadSingleNodeCheck(c, f.Timeouts, claim, e2epod.NodeSelection{})
},
},
{
Name: "Cinder volume with empty volume type and zone on OpenStack",
CloudProviders: []string{"openstack"},
Timeouts: f.Timeouts,
Provisioner: "kubernetes.io/cinder",
Parameters: map[string]string{
"type": "",
"availability": "",
},
ClaimSize: "1.5Gi",
ExpectedSize: "2Gi",
PvCheck: func(claim *v1.PersistentVolumeClaim) {
testsuites.PVWriteReadSingleNodeCheck(c, f.Timeouts, claim, e2epod.NodeSelection{})
},
},
// vSphere generic test
{
Name: "generic vSphere volume",
@ -429,7 +401,7 @@ var _ = utils.SIGDescribe("Dynamic Provisioning", func() {
// not being deleted.
// NOTE: Polls until no PVs are detected, times out at 5 minutes.
e2eskipper.SkipUnlessProviderIs("openstack", "gce", "aws", "gke", "vsphere", "azure")
e2eskipper.SkipUnlessProviderIs("gce", "aws", "gke", "vsphere", "azure")
const raceAttempts int = 100
var residualPVs []*v1.PersistentVolume
@ -605,7 +577,7 @@ var _ = utils.SIGDescribe("Dynamic Provisioning", func() {
ginkgo.Describe("DynamicProvisioner Default", func() {
ginkgo.It("should create and delete default persistent volumes [Slow]", func() {
e2eskipper.SkipUnlessProviderIs("openstack", "gce", "aws", "gke", "vsphere", "azure")
e2eskipper.SkipUnlessProviderIs("gce", "aws", "gke", "vsphere", "azure")
e2epv.SkipIfNoDefaultStorageClass(c)
ginkgo.By("creating a claim with no annotation")
@ -631,7 +603,7 @@ var _ = utils.SIGDescribe("Dynamic Provisioning", func() {
// Modifying the default storage class can be disruptive to other tests that depend on it
ginkgo.It("should be disabled by changing the default annotation [Serial] [Disruptive]", func() {
e2eskipper.SkipUnlessProviderIs("openstack", "gce", "aws", "gke", "vsphere", "azure")
e2eskipper.SkipUnlessProviderIs("gce", "aws", "gke", "vsphere", "azure")
e2epv.SkipIfNoDefaultStorageClass(c)
scName, scErr := e2epv.GetDefaultStorageClassName(c)
@ -670,7 +642,7 @@ var _ = utils.SIGDescribe("Dynamic Provisioning", func() {
// Modifying the default storage class can be disruptive to other tests that depend on it
ginkgo.It("should be disabled by removing the default annotation [Serial] [Disruptive]", func() {
e2eskipper.SkipUnlessProviderIs("openstack", "gce", "aws", "gke", "vsphere", "azure")
e2eskipper.SkipUnlessProviderIs("gce", "aws", "gke", "vsphere", "azure")
e2epv.SkipIfNoDefaultStorageClass(c)
scName, scErr := e2epv.GetDefaultStorageClassName(c)
@ -844,8 +816,6 @@ func getDefaultPluginName() string {
return "kubernetes.io/gce-pd"
case framework.ProviderIs("aws"):
return "kubernetes.io/aws-ebs"
case framework.ProviderIs("openstack"):
return "kubernetes.io/cinder"
case framework.ProviderIs("vsphere"):
return "kubernetes.io/vsphere-volume"
case framework.ProviderIs("azure"):


@ -47,7 +47,7 @@ const (
func (t *PersistentVolumeUpgradeTest) Setup(f *framework.Framework) {
var err error
e2eskipper.SkipUnlessProviderIs("gce", "gke", "openstack", "aws", "vsphere", "azure")
e2eskipper.SkipUnlessProviderIs("gce", "gke", "aws", "vsphere", "azure")
ns := f.Namespace.Name


@ -52,7 +52,7 @@ func (VolumeModeDowngradeTest) Name() string {
// Skip returns true when this test can be skipped.
func (t *VolumeModeDowngradeTest) Skip(upgCtx upgrades.UpgradeContext) bool {
if !framework.ProviderIs("openstack", "gce", "aws", "gke", "vsphere", "azure") {
if !framework.ProviderIs("gce", "aws", "gke", "vsphere", "azure") {
return true
}

vendor/modules.txt

@ -463,39 +463,6 @@ github.com/google/uuid
github.com/googleapis/gax-go/v2
github.com/googleapis/gax-go/v2/apierror
github.com/googleapis/gax-go/v2/apierror/internal/proto
# github.com/gophercloud/gophercloud v0.1.0 => github.com/gophercloud/gophercloud v0.1.0
## explicit
github.com/gophercloud/gophercloud
github.com/gophercloud/gophercloud/openstack
github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions
github.com/gophercloud/gophercloud/openstack/blockstorage/v1/volumes
github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes
github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes
github.com/gophercloud/gophercloud/openstack/common/extensions
github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/attachinterfaces
github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/volumeattach
github.com/gophercloud/gophercloud/openstack/compute/v2/flavors
github.com/gophercloud/gophercloud/openstack/compute/v2/images
github.com/gophercloud/gophercloud/openstack/compute/v2/servers
github.com/gophercloud/gophercloud/openstack/identity/v2/tenants
github.com/gophercloud/gophercloud/openstack/identity/v2/tokens
github.com/gophercloud/gophercloud/openstack/identity/v3/extensions/trusts
github.com/gophercloud/gophercloud/openstack/identity/v3/tokens
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/external
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/l7policies
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/listeners
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/loadbalancers
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/monitors
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/pools
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/security/groups
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/security/rules
github.com/gophercloud/gophercloud/openstack/networking/v2/networks
github.com/gophercloud/gophercloud/openstack/networking/v2/ports
github.com/gophercloud/gophercloud/openstack/utils
github.com/gophercloud/gophercloud/pagination
# github.com/gorilla/mux v1.8.0 => github.com/gorilla/mux v1.8.0
## explicit; go 1.12
# github.com/gorilla/websocket v1.4.2 => github.com/gorilla/websocket v1.4.2
@ -1962,7 +1929,6 @@ k8s.io/client-go/plugin/pkg/client/auth/azure
k8s.io/client-go/plugin/pkg/client/auth/exec
k8s.io/client-go/plugin/pkg/client/auth/gcp
k8s.io/client-go/plugin/pkg/client/auth/oidc
k8s.io/client-go/plugin/pkg/client/auth/openstack
k8s.io/client-go/rest
k8s.io/client-go/rest/fake
k8s.io/client-go/rest/watch
@ -2359,7 +2325,6 @@ k8s.io/legacy-cloud-providers/azure/metrics
k8s.io/legacy-cloud-providers/azure/retry
k8s.io/legacy-cloud-providers/gce
k8s.io/legacy-cloud-providers/gce/gcpcredential
k8s.io/legacy-cloud-providers/openstack
k8s.io/legacy-cloud-providers/vsphere
k8s.io/legacy-cloud-providers/vsphere/testing
k8s.io/legacy-cloud-providers/vsphere/vclib
@ -2696,7 +2661,6 @@ sigs.k8s.io/yaml
# github.com/google/shlex => github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
# github.com/google/uuid => github.com/google/uuid v1.1.2
# github.com/googleapis/gax-go/v2 => github.com/googleapis/gax-go/v2 v2.1.1
# github.com/gophercloud/gophercloud => github.com/gophercloud/gophercloud v0.1.0
# github.com/gopherjs/gopherjs => github.com/gopherjs/gopherjs v0.0.0-20200217142428-fce0ec30dd00
# github.com/gorilla/mux => github.com/gorilla/mux v1.8.0
# github.com/gorilla/websocket => github.com/gorilla/websocket v1.4.2