Update Kubernetes version.

Signed-off-by: Lantao Liu <lantaol@google.com>
Lantao Liu 2017-11-17 06:00:43 +00:00
parent 2143959895
commit cb0d97e74c
60 changed files with 3109 additions and 7030 deletions


@ -1,5 +1,5 @@
RUNC_VERSION=74a17296470088de3805e138d3d87c62e613dfc4
CNI_VERSION=v0.6.0
CONTAINERD_VERSION=v1.0.0-beta.3
CRITOOL_VERSION=4e3c99777477277030734ee2c9253d44a6216955
CRITOOL_VERSION=4cd2b047a26a2ef01bbd02ee55f7d70d8825ebb5
KUBERNETES_VERSION=b958430ec2654bac10f74abbeab402c71cf5fa3b


@ -63,10 +63,10 @@ google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
google.golang.org/grpc v1.3.0
gopkg.in/inf.v0 3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4
gopkg.in/yaml.v2 53feefa2559fb8dfa8d81baad31be332c97d6c77
k8s.io/api 3b04a4d688b57f29fb040d71db43f332f22bc7c4
k8s.io/apimachinery e9a29eff7d472df0f7da9ead5ab59b74e74a07ec
k8s.io/apiserver d3f753a815b5b1a7f4ecc1d2fdf497dc715c98c7
k8s.io/client-go c80a7b81424d5cf943cc8b7f7c17c892470a6303
k8s.io/kube-openapi 89ae48fe8691077463af5b7fb3b6f194632c5946
k8s.io/kubernetes b958430ec2654bac10f74abbeab402c71cf5fa3b
k8s.io/utils 4fe312863be2155a7b68acd2aff1c9221b24e68c
k8s.io/api 218912509d74a117d05a718bb926d0948e531c20
k8s.io/apimachinery 18a564baac720819100827c16fdebcadb05b2d0d
k8s.io/apiserver 7001bc4df8883d4a0ec84cd4b2117655a0009b6c
k8s.io/client-go 72e1c2a1ef30b3f8da039e92d4a6a1f079f374e8
k8s.io/kube-openapi 39a7bf85c140f972372c2a0d1ee40adbf0c8bfe1
k8s.io/kubernetes 164317879bcd810b97e5ebf1c8df041770f2ff1b
k8s.io/utils bf963466fd3fea33c428098b12a89d8ecd012f25

vendor/k8s.io/api/LICENSE (generated, vendored, new file; 202 lines)

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/k8s.io/api/README.md (generated, vendored, new file; 1 line)

@ -0,0 +1 @@
This repo is still in the experimental stage. Shortly it will contain the schema of the API that are served by the Kubernetes apiserver.

File diff suppressed because it is too large.


@ -565,7 +565,7 @@ message Container {
// Security options the pod should run with.
// More info: https://kubernetes.io/docs/concepts/policy/security-context/
// More info: https://git.k8s.io/community/contributors/design-proposals/security_context.md
// More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
// +optional
optional SecurityContext securityContext = 15;
@ -1442,7 +1442,7 @@ message LimitRangeList {
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
// Items is a list of LimitRange objects.
// More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md
// More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
repeated LimitRange items = 2;
}
@ -1593,7 +1593,7 @@ message NamespaceList {
// NamespaceSpec describes the attributes on a Namespace.
message NamespaceSpec {
// Finalizers is an opaque list of values that must be empty to permanently remove object from storage.
// More info: https://git.k8s.io/community/contributors/design-proposals/namespaces.md#finalizers
// More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
// +optional
repeated string finalizers = 1;
}
@ -1601,7 +1601,7 @@ message NamespaceSpec {
// NamespaceStatus is information about the current status of a Namespace.
message NamespaceStatus {
// Phase is the current lifecycle phase of the namespace.
// More info: https://git.k8s.io/community/contributors/design-proposals/namespaces.md#phases
// More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
// +optional
optional string phase = 1;
}
@ -2287,7 +2287,7 @@ message PersistentVolumeSource {
// RBD represents a Rados Block Device mount on the host that shares a pod's lifetime.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md
// +optional
optional RBDVolumeSource rbd = 6;
optional RBDPersistentVolumeSource rbd = 6;
// ISCSI represents an ISCSI Disk resource that is attached to a
// kubelet's host machine and then exposed to the pod. Provisioned by an admin.
@ -2342,7 +2342,7 @@ message PersistentVolumeSource {
// ScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
// +optional
optional ScaleIOVolumeSource scaleIO = 19;
optional ScaleIOPersistentVolumeSource scaleIO = 19;
// Local represents directly-attached storage with node affinity
// +optional
@ -2474,7 +2474,7 @@ message PodAffinity {
// relative to the given namespace(s)) that this pod should be
// co-located (affinity) or not co-located (anti-affinity) with,
// where co-located is defined as running on a node whose value of
// the label with key <topologyKey> tches that of any node on which
// the label with key <topologyKey> matches that of any node on which
// a pod of the set of pods is running
message PodAffinityTerm {
// A label query over a set of resources, in this case pods.
@ -3157,6 +3157,57 @@ message QuobyteVolumeSource {
optional string group = 5;
}
// Represents a Rados Block Device mount that lasts the lifetime of a pod.
// RBD volumes support ownership management and SELinux relabeling.
message RBDPersistentVolumeSource {
// A collection of Ceph monitors.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
repeated string monitors = 1;
// The rados image name.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
optional string image = 2;
// Filesystem type of the volume that you want to mount.
// Tip: Ensure that the filesystem type is supported by the host operating system.
// Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd
// TODO: how do we prevent errors in the filesystem from compromising the machine
// +optional
optional string fsType = 3;
// The rados pool name.
// Default is rbd.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
optional string pool = 4;
// The rados user name.
// Default is admin.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
optional string user = 5;
// Keyring is the path to key ring for RBDUser.
// Default is /etc/ceph/keyring.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
optional string keyring = 6;
// SecretRef is name of the authentication secret for RBDUser. If provided
// overrides keyring.
// Default is nil.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
optional SecretReference secretRef = 7;
// ReadOnly here will force the ReadOnly setting in VolumeMounts.
// Defaults to false.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
optional bool readOnly = 8;
}
// Represents a Rados Block Device mount that lasts the lifetime of a pod.
// RBD volumes support ownership management and SELinux relabeling.
message RBDVolumeSource {
@ -3377,14 +3428,14 @@ message ResourceQuotaList {
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
// Items is a list of ResourceQuota objects.
// More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md
// More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
repeated ResourceQuota items = 2;
}
// ResourceQuotaSpec defines the desired hard limits to enforce for Quota.
message ResourceQuotaSpec {
// Hard is the set of desired hard limits for each named resource.
// More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md
// More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
// +optional
map<string, k8s.io.apimachinery.pkg.api.resource.Quantity> hard = 1;
@ -3397,7 +3448,7 @@ message ResourceQuotaSpec {
// ResourceQuotaStatus defines the enforced hard limits and observed use.
message ResourceQuotaStatus {
// Hard is the set of enforced hard limits for each named resource.
// More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md
// More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
// +optional
map<string, k8s.io.apimachinery.pkg.api.resource.Quantity> hard = 1;
@ -3440,6 +3491,50 @@ message SELinuxOptions {
optional string level = 4;
}
// ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume
message ScaleIOPersistentVolumeSource {
// The host address of the ScaleIO API Gateway.
optional string gateway = 1;
// The name of the storage system as configured in ScaleIO.
optional string system = 2;
// SecretRef references to the secret for ScaleIO user and other
// sensitive information. If this is not provided, Login operation will fail.
optional SecretReference secretRef = 3;
// Flag to enable/disable SSL communication with Gateway, default false
// +optional
optional bool sslEnabled = 4;
// The name of the ScaleIO Protection Domain for the configured storage.
// +optional
optional string protectionDomain = 5;
// The ScaleIO Storage Pool associated with the protection domain.
// +optional
optional string storagePool = 6;
// Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.
// +optional
optional string storageMode = 7;
// The name of a volume already created in the ScaleIO system
// that is associated with this volume source.
optional string volumeName = 8;
// Filesystem type to mount.
// Must be a filesystem type supported by the host operating system.
// Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// +optional
optional string fsType = 9;
// Defaults to false (read/write). ReadOnly here will force
// the ReadOnly setting in VolumeMounts.
// +optional
optional bool readOnly = 10;
}
// ScaleIOVolumeSource represents a persistent ScaleIO volume
message ScaleIOVolumeSource {
// The host address of the ScaleIO API Gateway.
@ -3456,15 +3551,15 @@ message ScaleIOVolumeSource {
// +optional
optional bool sslEnabled = 4;
// The name of the Protection Domain for the configured storage (defaults to "default").
// The name of the ScaleIO Protection Domain for the configured storage.
// +optional
optional string protectionDomain = 5;
// The Storage Pool associated with the protection domain (defaults to "default").
// The ScaleIO Storage Pool associated with the protection domain.
// +optional
optional string storagePool = 6;
// Indicates whether the storage for a volume should be thick or thin (defaults to "thin").
// Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.
// +optional
optional string storageMode = 7;

vendor/k8s.io/api/core/v1/types.go (generated, vendored; 105 changed lines)

@ -398,7 +398,7 @@ type PersistentVolumeSource struct {
// RBD represents a Rados Block Device mount on the host that shares a pod's lifetime.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md
// +optional
RBD *RBDVolumeSource `json:"rbd,omitempty" protobuf:"bytes,6,opt,name=rbd"`
RBD *RBDPersistentVolumeSource `json:"rbd,omitempty" protobuf:"bytes,6,opt,name=rbd"`
// ISCSI represents an ISCSI Disk resource that is attached to a
// kubelet's host machine and then exposed to the pod. Provisioned by an admin.
// +optional
@ -440,7 +440,7 @@ type PersistentVolumeSource struct {
PortworxVolume *PortworxVolumeSource `json:"portworxVolume,omitempty" protobuf:"bytes,18,opt,name=portworxVolume"`
// ScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
// +optional
ScaleIO *ScaleIOVolumeSource `json:"scaleIO,omitempty" protobuf:"bytes,19,opt,name=scaleIO"`
ScaleIO *ScaleIOPersistentVolumeSource `json:"scaleIO,omitempty" protobuf:"bytes,19,opt,name=scaleIO"`
// Local represents directly-attached storage with node affinity
// +optional
Local *LocalVolumeSource `json:"local,omitempty" protobuf:"bytes,20,opt,name=local"`
@ -838,6 +838,50 @@ type RBDVolumeSource struct {
ReadOnly bool `json:"readOnly,omitempty" protobuf:"varint,8,opt,name=readOnly"`
}
// Represents a Rados Block Device mount that lasts the lifetime of a pod.
// RBD volumes support ownership management and SELinux relabeling.
type RBDPersistentVolumeSource struct {
// A collection of Ceph monitors.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
CephMonitors []string `json:"monitors" protobuf:"bytes,1,rep,name=monitors"`
// The rados image name.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
RBDImage string `json:"image" protobuf:"bytes,2,opt,name=image"`
// Filesystem type of the volume that you want to mount.
// Tip: Ensure that the filesystem type is supported by the host operating system.
// Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd
// TODO: how do we prevent errors in the filesystem from compromising the machine
// +optional
FSType string `json:"fsType,omitempty" protobuf:"bytes,3,opt,name=fsType"`
// The rados pool name.
// Default is rbd.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
RBDPool string `json:"pool,omitempty" protobuf:"bytes,4,opt,name=pool"`
// The rados user name.
// Default is admin.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
RadosUser string `json:"user,omitempty" protobuf:"bytes,5,opt,name=user"`
// Keyring is the path to key ring for RBDUser.
// Default is /etc/ceph/keyring.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
Keyring string `json:"keyring,omitempty" protobuf:"bytes,6,opt,name=keyring"`
// SecretRef is name of the authentication secret for RBDUser. If provided
// overrides keyring.
// Default is nil.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
SecretRef *SecretReference `json:"secretRef,omitempty" protobuf:"bytes,7,opt,name=secretRef"`
// ReadOnly here will force the ReadOnly setting in VolumeMounts.
// Defaults to false.
// More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it
// +optional
ReadOnly bool `json:"readOnly,omitempty" protobuf:"varint,8,opt,name=readOnly"`
}
// Represents a cinder volume resource in Openstack.
// A Cinder volume must exist before mounting to a container.
// The volume must also be in the same region as the kubelet.
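
For illustration only (this code is not part of the commit): a minimal sketch of how a persistent volume source could populate the new RBDPersistentVolumeSource added above, assuming the vendored k8s.io/api/core/v1 package; the monitor address, image name, and secret name are hypothetical.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// PersistentVolumeSource.RBD now takes *RBDPersistentVolumeSource
	// instead of *RBDVolumeSource.
	src := v1.PersistentVolumeSource{
		RBD: &v1.RBDPersistentVolumeSource{
			CephMonitors: []string{"10.16.154.78:6789"}, // hypothetical monitor address
			RBDImage:     "foo",                         // hypothetical image name
			RBDPool:      "rbd",                         // default pool
			RadosUser:    "admin",                       // default user
			FSType:       "ext4",
			SecretRef:    &v1.SecretReference{Name: "ceph-secret"}, // hypothetical secret name
			ReadOnly:     true,
		},
	}
	fmt.Println(src.RBD.RBDImage)
}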
@ -1352,13 +1396,48 @@ type ScaleIOVolumeSource struct {
// Flag to enable/disable SSL communication with Gateway, default false
// +optional
SSLEnabled bool `json:"sslEnabled,omitempty" protobuf:"varint,4,opt,name=sslEnabled"`
// The name of the Protection Domain for the configured storage (defaults to "default").
// The name of the ScaleIO Protection Domain for the configured storage.
// +optional
ProtectionDomain string `json:"protectionDomain,omitempty" protobuf:"bytes,5,opt,name=protectionDomain"`
// The Storage Pool associated with the protection domain (defaults to "default").
// The ScaleIO Storage Pool associated with the protection domain.
// +optional
StoragePool string `json:"storagePool,omitempty" protobuf:"bytes,6,opt,name=storagePool"`
// Indicates whether the storage for a volume should be thick or thin (defaults to "thin").
// Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.
// +optional
StorageMode string `json:"storageMode,omitempty" protobuf:"bytes,7,opt,name=storageMode"`
// The name of a volume already created in the ScaleIO system
// that is associated with this volume source.
VolumeName string `json:"volumeName,omitempty" protobuf:"bytes,8,opt,name=volumeName"`
// Filesystem type to mount.
// Must be a filesystem type supported by the host operating system.
// Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// +optional
FSType string `json:"fsType,omitempty" protobuf:"bytes,9,opt,name=fsType"`
// Defaults to false (read/write). ReadOnly here will force
// the ReadOnly setting in VolumeMounts.
// +optional
ReadOnly bool `json:"readOnly,omitempty" protobuf:"varint,10,opt,name=readOnly"`
}
// ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume
type ScaleIOPersistentVolumeSource struct {
// The host address of the ScaleIO API Gateway.
Gateway string `json:"gateway" protobuf:"bytes,1,opt,name=gateway"`
// The name of the storage system as configured in ScaleIO.
System string `json:"system" protobuf:"bytes,2,opt,name=system"`
// SecretRef references to the secret for ScaleIO user and other
// sensitive information. If this is not provided, Login operation will fail.
SecretRef *SecretReference `json:"secretRef" protobuf:"bytes,3,opt,name=secretRef"`
// Flag to enable/disable SSL communication with Gateway, default false
// +optional
SSLEnabled bool `json:"sslEnabled,omitempty" protobuf:"varint,4,opt,name=sslEnabled"`
// The name of the ScaleIO Protection Domain for the configured storage.
// +optional
ProtectionDomain string `json:"protectionDomain,omitempty" protobuf:"bytes,5,opt,name=protectionDomain"`
// The ScaleIO Storage Pool associated with the protection domain.
// +optional
StoragePool string `json:"storagePool,omitempty" protobuf:"bytes,6,opt,name=storagePool"`
// Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.
// +optional
StorageMode string `json:"storageMode,omitempty" protobuf:"bytes,7,opt,name=storageMode"`
// The name of a volume already created in the ScaleIO system
@ -1996,7 +2075,7 @@ type Container struct {
ImagePullPolicy PullPolicy `json:"imagePullPolicy,omitempty" protobuf:"bytes,14,opt,name=imagePullPolicy,casttype=PullPolicy"`
// Security options the pod should run with.
// More info: https://kubernetes.io/docs/concepts/policy/security-context/
// More info: https://git.k8s.io/community/contributors/design-proposals/security_context.md
// More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
// +optional
SecurityContext *SecurityContext `json:"securityContext,omitempty" protobuf:"bytes,15,opt,name=securityContext"`
@ -2394,7 +2473,7 @@ type WeightedPodAffinityTerm struct {
// relative to the given namespace(s)) that this pod should be
// co-located (affinity) or not co-located (anti-affinity) with,
// where co-located is defined as running on a node whose value of
// the label with key <topologyKey> tches that of any node on which
// the label with key <topologyKey> matches that of any node on which
// a pod of the set of pods is running
type PodAffinityTerm struct {
// A label query over a set of resources, in this case pods.
@ -3825,7 +3904,7 @@ const (
// NamespaceSpec describes the attributes on a Namespace.
type NamespaceSpec struct {
// Finalizers is an opaque list of values that must be empty to permanently remove object from storage.
// More info: https://git.k8s.io/community/contributors/design-proposals/namespaces.md#finalizers
// More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
// +optional
Finalizers []FinalizerName `json:"finalizers,omitempty" protobuf:"bytes,1,rep,name=finalizers,casttype=FinalizerName"`
}
@ -3833,7 +3912,7 @@ type NamespaceSpec struct {
// NamespaceStatus is information about the current status of a Namespace.
type NamespaceStatus struct {
// Phase is the current lifecycle phase of the namespace.
// More info: https://git.k8s.io/community/contributors/design-proposals/namespaces.md#phases
// More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
// +optional
Phase NamespacePhase `json:"phase,omitempty" protobuf:"bytes,1,opt,name=phase,casttype=NamespacePhase"`
}
@ -4376,7 +4455,7 @@ type LimitRangeList struct {
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Items is a list of LimitRange objects.
// More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md
// More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
Items []LimitRange `json:"items" protobuf:"bytes,2,rep,name=items"`
}
@ -4433,7 +4512,7 @@ const (
// ResourceQuotaSpec defines the desired hard limits to enforce for Quota.
type ResourceQuotaSpec struct {
// Hard is the set of desired hard limits for each named resource.
// More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md
// More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
// +optional
Hard ResourceList `json:"hard,omitempty" protobuf:"bytes,1,rep,name=hard,casttype=ResourceList,castkey=ResourceName"`
// A collection of filters that must match each object tracked by a quota.
@ -4445,7 +4524,7 @@ type ResourceQuotaSpec struct {
// ResourceQuotaStatus defines the enforced hard limits and observed use.
type ResourceQuotaStatus struct {
// Hard is the set of enforced hard limits for each named resource.
// More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md
// More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
// +optional
Hard ResourceList `json:"hard,omitempty" protobuf:"bytes,1,rep,name=hard,casttype=ResourceList,castkey=ResourceName"`
// Used is the current observed total usage of the resource in the namespace.
@ -4486,7 +4565,7 @@ type ResourceQuotaList struct {
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Items is a list of ResourceQuota objects.
// More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md
// More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
Items []ResourceQuota `json:"items" protobuf:"bytes,2,rep,name=items"`
}
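
Again for illustration only (not part of the commit) and under the same assumption, a sketch using the new ScaleIOPersistentVolumeSource added above; all values are hypothetical.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// PersistentVolumeSource.ScaleIO now takes *ScaleIOPersistentVolumeSource
	// instead of *ScaleIOVolumeSource.
	src := v1.PersistentVolumeSource{
		ScaleIO: &v1.ScaleIOPersistentVolumeSource{
			Gateway:          "https://gateway.example:443",           // hypothetical gateway address
			System:           "scaleio",                               // hypothetical storage system name
			SecretRef:        &v1.SecretReference{Name: "sio-secret"}, // hypothetical secret name
			ProtectionDomain: "pd01",
			StoragePool:      "sp01",
			StorageMode:      "ThinProvisioned",
			VolumeName:       "vol01",
			FSType:           "xfs",
		},
	}
	fmt.Println(src.ScaleIO.VolumeName)
}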


@ -284,7 +284,7 @@ var map_Container = map[string]string{
"terminationMessagePath": "Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.",
"terminationMessagePolicy": "Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.",
"imagePullPolicy": "Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images",
"securityContext": "Security options the pod should run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/security_context.md",
"securityContext": "Security options the pod should run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/ More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/",
"stdin": "Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.",
"stdinOnce": "Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false",
"tty": "Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.",
@ -765,7 +765,7 @@ func (LimitRangeItem) SwaggerDoc() map[string]string {
var map_LimitRangeList = map[string]string{
"": "LimitRangeList is a list of LimitRange items.",
"metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
"items": "Items is a list of LimitRange objects. More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md",
"items": "Items is a list of LimitRange objects. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/",
}
func (LimitRangeList) SwaggerDoc() map[string]string {
@ -866,7 +866,7 @@ func (NamespaceList) SwaggerDoc() map[string]string {
var map_NamespaceSpec = map[string]string{
"": "NamespaceSpec describes the attributes on a Namespace.",
"finalizers": "Finalizers is an opaque list of values that must be empty to permanently remove object from storage. More info: https://git.k8s.io/community/contributors/design-proposals/namespaces.md#finalizers",
"finalizers": "Finalizers is an opaque list of values that must be empty to permanently remove object from storage. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/",
}
func (NamespaceSpec) SwaggerDoc() map[string]string {
@ -875,7 +875,7 @@ func (NamespaceSpec) SwaggerDoc() map[string]string {
var map_NamespaceStatus = map[string]string{
"": "NamespaceStatus is information about the current status of a Namespace.",
"phase": "Phase is the current lifecycle phase of the namespace. More info: https://git.k8s.io/community/contributors/design-proposals/namespaces.md#phases",
"phase": "Phase is the current lifecycle phase of the namespace. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/",
}
func (NamespaceStatus) SwaggerDoc() map[string]string {
@ -1275,7 +1275,7 @@ func (PodAffinity) SwaggerDoc() map[string]string {
}
var map_PodAffinityTerm = map[string]string{
"": "Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> tches that of any node on which a pod of the set of pods is running",
"": "Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running",
"labelSelector": "A label query over a set of resources, in this case pods.",
"namespaces": "namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means \"this pod's namespace\"",
"topologyKey": "This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. For PreferredDuringScheduling pod anti-affinity, empty topologyKey is interpreted as \"all topologies\" (\"all topologies\" here means all the topologyKeys indicated by scheduler command-line argument --failure-domains); for affinity and for RequiredDuringScheduling pod anti-affinity, empty topologyKey is not allowed.",
@ -1571,6 +1571,22 @@ func (QuobyteVolumeSource) SwaggerDoc() map[string]string {
return map_QuobyteVolumeSource
}
var map_RBDPersistentVolumeSource = map[string]string{
"": "Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.",
"monitors": "A collection of Ceph monitors. More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it",
"image": "The rados image name. More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it",
"fsType": "Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd",
"pool": "The rados pool name. Default is rbd. More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it",
"user": "The rados user name. Default is admin. More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it",
"keyring": "Keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it",
"secretRef": "SecretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it",
"readOnly": "ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it",
}
func (RBDPersistentVolumeSource) SwaggerDoc() map[string]string {
return map_RBDPersistentVolumeSource
}
var map_RBDVolumeSource = map[string]string{
"": "Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.",
"monitors": "A collection of Ceph monitors. More info: https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md#how-to-use-it",
@ -1683,7 +1699,7 @@ func (ResourceQuota) SwaggerDoc() map[string]string {
var map_ResourceQuotaList = map[string]string{
"": "ResourceQuotaList is a list of ResourceQuota items.",
"metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
"items": "Items is a list of ResourceQuota objects. More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md",
"items": "Items is a list of ResourceQuota objects. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/",
}
func (ResourceQuotaList) SwaggerDoc() map[string]string {
@ -1692,7 +1708,7 @@ func (ResourceQuotaList) SwaggerDoc() map[string]string {
var map_ResourceQuotaSpec = map[string]string{
"": "ResourceQuotaSpec defines the desired hard limits to enforce for Quota.",
"hard": "Hard is the set of desired hard limits for each named resource. More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md",
"hard": "Hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/",
"scopes": "A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects.",
}
@ -1702,7 +1718,7 @@ func (ResourceQuotaSpec) SwaggerDoc() map[string]string {
var map_ResourceQuotaStatus = map[string]string{
"": "ResourceQuotaStatus defines the enforced hard limits and observed use.",
"hard": "Hard is the set of enforced hard limits for each named resource. More info: https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md",
"hard": "Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/",
"used": "Used is the current observed total usage of the resource in the namespace.",
}
@ -1732,15 +1748,33 @@ func (SELinuxOptions) SwaggerDoc() map[string]string {
return map_SELinuxOptions
}
var map_ScaleIOPersistentVolumeSource = map[string]string{
"": "ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume",
"gateway": "The host address of the ScaleIO API Gateway.",
"system": "The name of the storage system as configured in ScaleIO.",
"secretRef": "SecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.",
"sslEnabled": "Flag to enable/disable SSL communication with Gateway, default false",
"protectionDomain": "The name of the ScaleIO Protection Domain for the configured storage.",
"storagePool": "The ScaleIO Storage Pool associated with the protection domain.",
"storageMode": "Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.",
"volumeName": "The name of a volume already created in the ScaleIO system that is associated with this volume source.",
"fsType": "Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.",
"readOnly": "Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.",
}
func (ScaleIOPersistentVolumeSource) SwaggerDoc() map[string]string {
return map_ScaleIOPersistentVolumeSource
}
var map_ScaleIOVolumeSource = map[string]string{
"": "ScaleIOVolumeSource represents a persistent ScaleIO volume",
"gateway": "The host address of the ScaleIO API Gateway.",
"system": "The name of the storage system as configured in ScaleIO.",
"secretRef": "SecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.",
"sslEnabled": "Flag to enable/disable SSL communication with Gateway, default false",
"protectionDomain": "The name of the Protection Domain for the configured storage (defaults to \"default\").",
"storagePool": "The Storage Pool associated with the protection domain (defaults to \"default\").",
"storageMode": "Indicates whether the storage for a volume should be thick or thin (defaults to \"thin\").",
"protectionDomain": "The name of the ScaleIO Protection Domain for the configured storage.",
"storagePool": "The ScaleIO Storage Pool associated with the protection domain.",
"storageMode": "Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.",
"volumeName": "The name of a volume already created in the ScaleIO system that is associated with this volume source.",
"fsType": "Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.",
"readOnly": "Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.",


@ -571,6 +571,10 @@ func RegisterDeepCopies(scheme *runtime.Scheme) error {
in.(*QuobyteVolumeSource).DeepCopyInto(out.(*QuobyteVolumeSource))
return nil
}, InType: reflect.TypeOf(&QuobyteVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*RBDPersistentVolumeSource).DeepCopyInto(out.(*RBDPersistentVolumeSource))
return nil
}, InType: reflect.TypeOf(&RBDPersistentVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*RBDVolumeSource).DeepCopyInto(out.(*RBDVolumeSource))
return nil
@ -627,6 +631,10 @@ func RegisterDeepCopies(scheme *runtime.Scheme) error {
in.(*SELinuxOptions).DeepCopyInto(out.(*SELinuxOptions))
return nil
}, InType: reflect.TypeOf(&SELinuxOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ScaleIOPersistentVolumeSource).DeepCopyInto(out.(*ScaleIOPersistentVolumeSource))
return nil
}, InType: reflect.TypeOf(&ScaleIOPersistentVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ScaleIOVolumeSource).DeepCopyInto(out.(*ScaleIOVolumeSource))
return nil
@ -3733,7 +3741,7 @@ func (in *PersistentVolumeSource) DeepCopyInto(out *PersistentVolumeSource) {
if *in == nil {
*out = nil
} else {
*out = new(RBDVolumeSource)
*out = new(RBDPersistentVolumeSource)
(*in).DeepCopyInto(*out)
}
}
@ -3850,7 +3858,7 @@ func (in *PersistentVolumeSource) DeepCopyInto(out *PersistentVolumeSource) {
if *in == nil {
*out = nil
} else {
*out = new(ScaleIOVolumeSource)
*out = new(ScaleIOPersistentVolumeSource)
(*in).DeepCopyInto(*out)
}
}
@ -4801,6 +4809,36 @@ func (in *QuobyteVolumeSource) DeepCopy() *QuobyteVolumeSource {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RBDPersistentVolumeSource) DeepCopyInto(out *RBDPersistentVolumeSource) {
*out = *in
if in.CephMonitors != nil {
in, out := &in.CephMonitors, &out.CephMonitors
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(SecretReference)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RBDPersistentVolumeSource.
func (in *RBDPersistentVolumeSource) DeepCopy() *RBDPersistentVolumeSource {
if in == nil {
return nil
}
out := new(RBDPersistentVolumeSource)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RBDVolumeSource) DeepCopyInto(out *RBDVolumeSource) {
*out = *in
@ -5191,6 +5229,31 @@ func (in *SELinuxOptions) DeepCopy() *SELinuxOptions {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScaleIOPersistentVolumeSource) DeepCopyInto(out *ScaleIOPersistentVolumeSource) {
*out = *in
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(SecretReference)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaleIOPersistentVolumeSource.
func (in *ScaleIOPersistentVolumeSource) DeepCopy() *ScaleIOPersistentVolumeSource {
if in == nil {
return nil
}
out := new(ScaleIOPersistentVolumeSource)
in.DeepCopyInto(out)
return out
}
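
As a side note (not part of the commit), the generated DeepCopy helpers shown here are what callers typically use to clone an API object before mutating it. A minimal sketch, assuming the vendored package, with hypothetical values:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	orig := &v1.RBDPersistentVolumeSource{
		CephMonitors: []string{"10.16.154.78:6789"}, // hypothetical monitor address
		RBDImage:     "foo",                         // hypothetical image name
	}
	clone := orig.DeepCopy()
	// DeepCopyInto allocates a fresh monitors slice, so mutating the clone
	// does not change the original.
	clone.CephMonitors[0] = "10.16.154.82:6789"
	fmt.Println(orig.CephMonitors[0]) // still prints 10.16.154.78:6789
}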
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScaleIOVolumeSource) DeepCopyInto(out *ScaleIOVolumeSource) {
*out = *in


@ -1,49 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package equality
import (
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/conversion"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
)
// Semantic can do semantic deep equality checks for api objects.
// Example: apiequality.Semantic.DeepEqual(aPod, aPodWithNonNilButEmptyMaps) == true
var Semantic = conversion.EqualitiesOrDie(
func(a, b resource.Quantity) bool {
// Ignore formatting, only care that numeric value stayed the same.
// TODO: if we decide it's important, it should be safe to start comparing the format.
//
// Uninitialized quantities are equivalent to 0 quantities.
return a.Cmp(b) == 0
},
func(a, b metav1.MicroTime) bool {
return a.UTC() == b.UTC()
},
func(a, b metav1.Time) bool {
return a.UTC() == b.UTC()
},
func(a, b labels.Selector) bool {
return a.String() == b.String()
},
func(a, b fields.Selector) bool {
return a.String() == b.String()
},
)


@ -1,19 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package meta provides functions for retrieving API metadata from objects
// belonging to the Kubernetes API
package meta // import "k8s.io/apimachinery/pkg/api/meta"


@ -1,105 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package meta
import (
"fmt"
"k8s.io/apimachinery/pkg/runtime/schema"
)
// AmbiguousResourceError is returned if the RESTMapper finds multiple matches for a resource
type AmbiguousResourceError struct {
PartialResource schema.GroupVersionResource
MatchingResources []schema.GroupVersionResource
MatchingKinds []schema.GroupVersionKind
}
func (e *AmbiguousResourceError) Error() string {
switch {
case len(e.MatchingKinds) > 0 && len(e.MatchingResources) > 0:
return fmt.Sprintf("%v matches multiple resources %v and kinds %v", e.PartialResource, e.MatchingResources, e.MatchingKinds)
case len(e.MatchingKinds) > 0:
return fmt.Sprintf("%v matches multiple kinds %v", e.PartialResource, e.MatchingKinds)
case len(e.MatchingResources) > 0:
return fmt.Sprintf("%v matches multiple resources %v", e.PartialResource, e.MatchingResources)
}
return fmt.Sprintf("%v matches multiple resources or kinds", e.PartialResource)
}
// AmbiguousKindError is returned if the RESTMapper finds multiple matches for a kind
type AmbiguousKindError struct {
PartialKind schema.GroupVersionKind
MatchingResources []schema.GroupVersionResource
MatchingKinds []schema.GroupVersionKind
}
func (e *AmbiguousKindError) Error() string {
switch {
case len(e.MatchingKinds) > 0 && len(e.MatchingResources) > 0:
return fmt.Sprintf("%v matches multiple resources %v and kinds %v", e.PartialKind, e.MatchingResources, e.MatchingKinds)
case len(e.MatchingKinds) > 0:
return fmt.Sprintf("%v matches multiple kinds %v", e.PartialKind, e.MatchingKinds)
case len(e.MatchingResources) > 0:
return fmt.Sprintf("%v matches multiple resources %v", e.PartialKind, e.MatchingResources)
}
return fmt.Sprintf("%v matches multiple resources or kinds", e.PartialKind)
}
func IsAmbiguousError(err error) bool {
if err == nil {
return false
}
switch err.(type) {
case *AmbiguousResourceError, *AmbiguousKindError:
return true
default:
return false
}
}
// NoResourceMatchError is returned if the RESTMapper can't find any match for a resource
type NoResourceMatchError struct {
PartialResource schema.GroupVersionResource
}
func (e *NoResourceMatchError) Error() string {
return fmt.Sprintf("no matches for %v", e.PartialResource)
}
// NoKindMatchError is returned if the RESTMapper can't find any match for a kind
type NoKindMatchError struct {
PartialKind schema.GroupVersionKind
}
func (e *NoKindMatchError) Error() string {
return fmt.Sprintf("no matches for %v", e.PartialKind)
}
func IsNoMatchError(err error) bool {
if err == nil {
return false
}
switch err.(type) {
case *NoResourceMatchError, *NoKindMatchError:
return true
default:
return false
}
}


@ -1,97 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package meta
import (
"fmt"
"k8s.io/apimachinery/pkg/runtime/schema"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
)
// FirstHitRESTMapper is a wrapper for multiple RESTMappers which returns the
// first successful result for the singular requests
type FirstHitRESTMapper struct {
MultiRESTMapper
}
func (m FirstHitRESTMapper) String() string {
return fmt.Sprintf("FirstHitRESTMapper{\n\t%v\n}", m.MultiRESTMapper)
}
func (m FirstHitRESTMapper) ResourceFor(resource schema.GroupVersionResource) (schema.GroupVersionResource, error) {
errors := []error{}
for _, t := range m.MultiRESTMapper {
ret, err := t.ResourceFor(resource)
if err == nil {
return ret, nil
}
errors = append(errors, err)
}
return schema.GroupVersionResource{}, collapseAggregateErrors(errors)
}
func (m FirstHitRESTMapper) KindFor(resource schema.GroupVersionResource) (schema.GroupVersionKind, error) {
errors := []error{}
for _, t := range m.MultiRESTMapper {
ret, err := t.KindFor(resource)
if err == nil {
return ret, nil
}
errors = append(errors, err)
}
return schema.GroupVersionKind{}, collapseAggregateErrors(errors)
}
// RESTMapping provides the REST mapping for the resource based on the
// kind and version. This implementation supports multiple REST schemas and
// return the first match.
func (m FirstHitRESTMapper) RESTMapping(gk schema.GroupKind, versions ...string) (*RESTMapping, error) {
errors := []error{}
for _, t := range m.MultiRESTMapper {
ret, err := t.RESTMapping(gk, versions...)
if err == nil {
return ret, nil
}
errors = append(errors, err)
}
return nil, collapseAggregateErrors(errors)
}
// collapseAggregateErrors returns the minimal error set. It handles an empty list as nil, handles a single-item list
// by returning that item, and collapses all NoMatchErrors into a single one (since they should all be the same).
func collapseAggregateErrors(errors []error) error {
if len(errors) == 0 {
return nil
}
if len(errors) == 1 {
return errors[0]
}
allNoMatchErrors := true
for _, err := range errors {
allNoMatchErrors = allNoMatchErrors && IsNoMatchError(err)
}
if allNoMatchErrors {
return errors[0]
}
return utilerrors.NewAggregate(errors)
}
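// Illustrative sketch (not part of the upstream file): composing a
// FirstHitRESTMapper from two already-populated mappers so that the first one
// able to answer wins; primary and fallback are hypothetical arguments.
func firstHitKindFor(primary, fallback RESTMapper, gvr schema.GroupVersionResource) (schema.GroupVersionKind, error) {
	m := FirstHitRESTMapper{MultiRESTMapper: MultiRESTMapper{primary, fallback}}
	return m.KindFor(gvr)
}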

View File

@ -1,205 +0,0 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package meta
import (
"fmt"
"reflect"
"k8s.io/apimachinery/pkg/conversion"
"k8s.io/apimachinery/pkg/runtime"
)
// IsListType returns true if the provided Object has a slice called Items
func IsListType(obj runtime.Object) bool {
// if we're a runtime.Unstructured, check whether this is a list.
// TODO: refactor GetItemsPtr to use an interface that returns []runtime.Object
if unstructured, ok := obj.(runtime.Unstructured); ok {
return unstructured.IsList()
}
_, err := GetItemsPtr(obj)
return err == nil
}
// GetItemsPtr returns a pointer to the list object's Items member.
// If 'list' doesn't have an Items member, it's not really a list type
// and an error will be returned.
// This function will either return a pointer to a slice, or an error, but not both.
func GetItemsPtr(list runtime.Object) (interface{}, error) {
v, err := conversion.EnforcePtr(list)
if err != nil {
return nil, err
}
items := v.FieldByName("Items")
if !items.IsValid() {
return nil, fmt.Errorf("no Items field in %#v", list)
}
switch items.Kind() {
case reflect.Interface, reflect.Ptr:
target := reflect.TypeOf(items.Interface()).Elem()
if target.Kind() != reflect.Slice {
return nil, fmt.Errorf("items: Expected slice, got %s", target.Kind())
}
return items.Interface(), nil
case reflect.Slice:
return items.Addr().Interface(), nil
default:
return nil, fmt.Errorf("items: Expected slice, got %s", items.Kind())
}
}
// EachListItem invokes fn on each runtime.Object in the list. Any error immediately terminates
// the loop.
func EachListItem(obj runtime.Object, fn func(runtime.Object) error) error {
if unstructured, ok := obj.(runtime.Unstructured); ok {
return unstructured.EachListItem(fn)
}
// TODO: Change to an interface call?
itemsPtr, err := GetItemsPtr(obj)
if err != nil {
return err
}
items, err := conversion.EnforcePtr(itemsPtr)
if err != nil {
return err
}
len := items.Len()
if len == 0 {
return nil
}
takeAddr := false
if elemType := items.Type().Elem(); elemType.Kind() != reflect.Ptr && elemType.Kind() != reflect.Interface {
if !items.Index(0).CanAddr() {
return fmt.Errorf("unable to take address of items in %T for EachListItem", obj)
}
takeAddr = true
}
for i := 0; i < len; i++ {
raw := items.Index(i)
if takeAddr {
raw = raw.Addr()
}
switch item := raw.Interface().(type) {
case *runtime.RawExtension:
if err := fn(item.Object); err != nil {
return err
}
case runtime.Object:
if err := fn(item); err != nil {
return err
}
default:
obj, ok := item.(runtime.Object)
if !ok {
return fmt.Errorf("%v: item[%v]: Expected object, got %#v(%s)", obj, i, raw.Interface(), raw.Kind())
}
if err := fn(obj); err != nil {
return err
}
}
}
return nil
}
// ExtractList returns obj's Items element as an array of runtime.Objects.
// Returns an error if obj is not a List type (does not have an Items member).
func ExtractList(obj runtime.Object) ([]runtime.Object, error) {
itemsPtr, err := GetItemsPtr(obj)
if err != nil {
return nil, err
}
items, err := conversion.EnforcePtr(itemsPtr)
if err != nil {
return nil, err
}
list := make([]runtime.Object, items.Len())
for i := range list {
raw := items.Index(i)
switch item := raw.Interface().(type) {
case runtime.RawExtension:
switch {
case item.Object != nil:
list[i] = item.Object
case item.Raw != nil:
// TODO: Set ContentEncoding and ContentType correctly.
list[i] = &runtime.Unknown{Raw: item.Raw}
default:
list[i] = nil
}
case runtime.Object:
list[i] = item
default:
var found bool
if list[i], found = raw.Addr().Interface().(runtime.Object); !found {
return nil, fmt.Errorf("%v: item[%v]: Expected object, got %#v(%s)", obj, i, raw.Interface(), raw.Kind())
}
}
}
return list, nil
}
// objectSliceType is the type of a slice of Objects
var objectSliceType = reflect.TypeOf([]runtime.Object{})
// SetList sets the given list object's Items member to have the elements given in
// objects.
// Returns an error if list is not a List type (does not have an Items member),
// or if any of the objects are not of the right type.
func SetList(list runtime.Object, objects []runtime.Object) error {
itemsPtr, err := GetItemsPtr(list)
if err != nil {
return err
}
items, err := conversion.EnforcePtr(itemsPtr)
if err != nil {
return err
}
if items.Type() == objectSliceType {
items.Set(reflect.ValueOf(objects))
return nil
}
slice := reflect.MakeSlice(items.Type(), len(objects), len(objects))
for i := range objects {
dest := slice.Index(i)
if dest.Type() == reflect.TypeOf(runtime.RawExtension{}) {
dest = dest.FieldByName("Object")
}
// check to see if you're directly assignable
if reflect.TypeOf(objects[i]).AssignableTo(dest.Type()) {
dest.Set(reflect.ValueOf(objects[i]))
continue
}
src, err := conversion.EnforcePtr(objects[i])
if err != nil {
return err
}
if src.Type().AssignableTo(dest.Type()) {
dest.Set(src)
} else if src.Type().ConvertibleTo(dest.Type()) {
dest.Set(src.Convert(dest.Type()))
} else {
return fmt.Errorf("item[%d]: can't assign or convert %v into %v", i, src.Type(), dest.Type())
}
}
items.Set(slice)
return nil
}
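// filterList is an illustrative sketch (not part of the upstream file) that
// ties ExtractList and SetList together: extract the items, keep only those
// accepted by the predicate, and write the survivors back into the same list.
func filterList(list runtime.Object, keep func(runtime.Object) bool) error {
	items, err := ExtractList(list)
	if err != nil {
		return err
	}
	kept := make([]runtime.Object, 0, len(items))
	for _, item := range items {
		if keep(item) {
			kept = append(kept, item)
		}
	}
	return SetList(list, kept)
}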

View File

@ -1,146 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package meta
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
)
// VersionInterfaces contains the interfaces one should use for dealing with types of a particular version.
type VersionInterfaces struct {
runtime.ObjectConvertor
MetadataAccessor
}
type ListMetaAccessor interface {
GetListMeta() List
}
// List lets you work with list metadata from any of the versioned or
// internal API objects. Attempting to set or retrieve a field on an object that does
// not support that field will be a no-op and return a default value.
type List metav1.ListInterface
// Type exposes the type and APIVersion of versioned or internal API objects.
type Type metav1.Type
// MetadataAccessor lets you work with object and list metadata from any of the versioned or
// internal API objects. Attempting to set or retrieve a field on an object that does
// not support that field (Name, UID, Namespace on lists) will be a no-op and return
// a default value.
//
// MetadataAccessor exposes Interface in a way that can be used with multiple objects.
type MetadataAccessor interface {
APIVersion(obj runtime.Object) (string, error)
SetAPIVersion(obj runtime.Object, version string) error
Kind(obj runtime.Object) (string, error)
SetKind(obj runtime.Object, kind string) error
Namespace(obj runtime.Object) (string, error)
SetNamespace(obj runtime.Object, namespace string) error
Name(obj runtime.Object) (string, error)
SetName(obj runtime.Object, name string) error
GenerateName(obj runtime.Object) (string, error)
SetGenerateName(obj runtime.Object, name string) error
UID(obj runtime.Object) (types.UID, error)
SetUID(obj runtime.Object, uid types.UID) error
SelfLink(obj runtime.Object) (string, error)
SetSelfLink(obj runtime.Object, selfLink string) error
Labels(obj runtime.Object) (map[string]string, error)
SetLabels(obj runtime.Object, labels map[string]string) error
Annotations(obj runtime.Object) (map[string]string, error)
SetAnnotations(obj runtime.Object, annotations map[string]string) error
runtime.ResourceVersioner
}
type RESTScopeName string
const (
RESTScopeNameNamespace RESTScopeName = "namespace"
RESTScopeNameRoot RESTScopeName = "root"
)
// RESTScope contains the information needed to deal with REST resources that are in a resource hierarchy
type RESTScope interface {
// Name of the scope
Name() RESTScopeName
// ParamName is the optional name of the parameter that should be inserted in the resource url
// If empty, no param will be inserted
ParamName() string
// ArgumentName is the optional name that should be used for the variable holding the value.
ArgumentName() string
// ParamDescription is the optional description to use to document the parameter in api documentation
ParamDescription() string
}
// RESTMapping contains the information needed to deal with objects of a specific
// resource and kind in a RESTful manner.
type RESTMapping struct {
// Resource is a string representing the name of this resource as a REST client would see it
Resource string
GroupVersionKind schema.GroupVersionKind
// Scope contains the information needed to deal with REST Resources that are in a resource hierarchy
Scope RESTScope
runtime.ObjectConvertor
MetadataAccessor
}
// RESTMapper allows clients to map resources to kind, and map kind and version
// to interfaces for manipulating those objects. It is primarily intended for
// consumers of Kubernetes compatible REST APIs as defined in docs/devel/api-conventions.md.
//
// The Kubernetes API provides versioned resources and object kinds which are scoped
// to API groups. In other words, kinds and resources should not be assumed to be
// unique across groups.
//
// TODO: split into sub-interfaces
type RESTMapper interface {
// KindFor takes a partial resource and returns the single match. Returns an error if there are multiple matches
KindFor(resource schema.GroupVersionResource) (schema.GroupVersionKind, error)
// KindsFor takes a partial resource and returns the list of potential kinds in priority order
KindsFor(resource schema.GroupVersionResource) ([]schema.GroupVersionKind, error)
// ResourceFor takes a partial resource and returns the single match. Returns an error if there are multiple matches
ResourceFor(input schema.GroupVersionResource) (schema.GroupVersionResource, error)
// ResourcesFor takes a partial resource and returns the list of potential resource in priority order
ResourcesFor(input schema.GroupVersionResource) ([]schema.GroupVersionResource, error)
// RESTMapping identifies a preferred resource mapping for the provided group kind.
RESTMapping(gk schema.GroupKind, versions ...string) (*RESTMapping, error)
// RESTMappings returns all resource mappings for the provided group kind if no
// version search is provided. Otherwise identifies a preferred resource mapping for
// the provided version(s).
RESTMappings(gk schema.GroupKind, versions ...string) ([]*RESTMapping, error)
ResourceSingularizer(resource string) (singular string, err error)
}
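// resourcePath is a rough, illustrative sketch (not part of the upstream file)
// of how a client might combine RESTMapping and RESTScope to build a request
// path. Real clients (e.g. client-go) also handle the legacy "/api" prefix and
// subresources; that is deliberately omitted here.
func resourcePath(m *RESTMapping, namespace, name string) string {
	p := "/apis/" + m.GroupVersionKind.GroupVersion().String()
	if m.Scope.Name() == RESTScopeNameNamespace && namespace != "" {
		p += "/" + m.Scope.ParamName() + "/" + namespace
	}
	p += "/" + m.Resource
	if name != "" {
		p += "/" + name
	}
	return p
}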

View File

@ -1,636 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package meta
import (
"fmt"
"reflect"
"github.com/golang/glog"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metav1alpha1 "k8s.io/apimachinery/pkg/apis/meta/v1alpha1"
"k8s.io/apimachinery/pkg/conversion"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
)
// errNotList is returned when an object implements the Object style interfaces but not the List style
// interfaces.
var errNotList = fmt.Errorf("object does not implement the List interfaces")
var errNotCommon = fmt.Errorf("object does not implement the common interface for accessing the SelfLink")
// CommonAccessor returns a Common interface for the provided object or an error if the object does
// not provide List.
// TODO: return bool instead of error
func CommonAccessor(obj interface{}) (metav1.Common, error) {
switch t := obj.(type) {
case List:
return t, nil
case metav1.ListInterface:
return t, nil
case ListMetaAccessor:
if m := t.GetListMeta(); m != nil {
return m, nil
}
return nil, errNotCommon
case metav1.ListMetaAccessor:
if m := t.GetListMeta(); m != nil {
return m, nil
}
return nil, errNotCommon
case metav1.Object:
return t, nil
case metav1.ObjectMetaAccessor:
if m := t.GetObjectMeta(); m != nil {
return m, nil
}
return nil, errNotCommon
default:
return nil, errNotCommon
}
}
// ListAccessor returns a List interface for the provided object or an error if the object does
// not provide List.
// IMPORTANT: Objects are NOT a superset of lists. Do not use this check to determine whether an
// object *is* a List.
// TODO: return bool instead of error
func ListAccessor(obj interface{}) (List, error) {
switch t := obj.(type) {
case List:
return t, nil
case metav1.ListInterface:
return t, nil
case ListMetaAccessor:
if m := t.GetListMeta(); m != nil {
return m, nil
}
return nil, errNotList
case metav1.ListMetaAccessor:
if m := t.GetListMeta(); m != nil {
return m, nil
}
return nil, errNotList
default:
panic(fmt.Errorf("%T does not implement the List interface", obj))
}
}
// errNotObject is returned when an object implements the List style interfaces but not the Object style
// interfaces.
var errNotObject = fmt.Errorf("object does not implement the Object interfaces")
// Accessor takes an arbitrary object pointer and returns meta.Interface.
// obj must be a pointer to an API type. An error is returned if the minimum
// required fields are missing. Fields that are not required return the default
// value and are a no-op if set.
// TODO: return bool instead of error
func Accessor(obj interface{}) (metav1.Object, error) {
switch t := obj.(type) {
case metav1.Object:
return t, nil
case metav1.ObjectMetaAccessor:
if m := t.GetObjectMeta(); m != nil {
return m, nil
}
return nil, errNotObject
default:
return nil, errNotObject
}
}
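// ensureLabel is a small illustrative sketch (not part of the upstream file):
// it uses Accessor to add a label to any object that carries ObjectMeta, and
// simply propagates errNotObject for values that do not.
func ensureLabel(obj interface{}, key, value string) error {
	m, err := Accessor(obj)
	if err != nil {
		return err
	}
	labels := m.GetLabels()
	if labels == nil {
		labels = map[string]string{}
	}
	labels[key] = value
	m.SetLabels(labels)
	return nil
}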
// AsPartialObjectMetadata takes the metav1 interface and returns a partial object.
// TODO: consider making this solely a conversion action.
func AsPartialObjectMetadata(m metav1.Object) *metav1alpha1.PartialObjectMetadata {
switch t := m.(type) {
case *metav1.ObjectMeta:
return &metav1alpha1.PartialObjectMetadata{ObjectMeta: *t}
default:
return &metav1alpha1.PartialObjectMetadata{
ObjectMeta: metav1.ObjectMeta{
Name: m.GetName(),
GenerateName: m.GetGenerateName(),
Namespace: m.GetNamespace(),
SelfLink: m.GetSelfLink(),
UID: m.GetUID(),
ResourceVersion: m.GetResourceVersion(),
Generation: m.GetGeneration(),
CreationTimestamp: m.GetCreationTimestamp(),
DeletionTimestamp: m.GetDeletionTimestamp(),
DeletionGracePeriodSeconds: m.GetDeletionGracePeriodSeconds(),
Labels: m.GetLabels(),
Annotations: m.GetAnnotations(),
OwnerReferences: m.GetOwnerReferences(),
Finalizers: m.GetFinalizers(),
ClusterName: m.GetClusterName(),
Initializers: m.GetInitializers(),
},
}
}
}
// TypeAccessor returns an interface that allows retrieving and modifying the APIVersion
// and Kind of an in-memory internal object.
// TODO: this interface is used to test code that does not have ObjectMeta or ListMeta
// in round tripping (objects which can use apiVersion/kind, but do not fit the Kube
// api conventions).
func TypeAccessor(obj interface{}) (Type, error) {
if typed, ok := obj.(runtime.Object); ok {
return objectAccessor{typed}, nil
}
v, err := conversion.EnforcePtr(obj)
if err != nil {
return nil, err
}
t := v.Type()
if v.Kind() != reflect.Struct {
return nil, fmt.Errorf("expected struct, but got %v: %v (%#v)", v.Kind(), t, v.Interface())
}
typeMeta := v.FieldByName("TypeMeta")
if !typeMeta.IsValid() {
return nil, fmt.Errorf("struct %v lacks embedded TypeMeta type", t)
}
a := &genericAccessor{}
if err := extractFromTypeMeta(typeMeta, a); err != nil {
return nil, fmt.Errorf("unable to find type fields on %#v: %v", typeMeta, err)
}
return a, nil
}
type objectAccessor struct {
runtime.Object
}
func (obj objectAccessor) GetKind() string {
return obj.GetObjectKind().GroupVersionKind().Kind
}
func (obj objectAccessor) SetKind(kind string) {
gvk := obj.GetObjectKind().GroupVersionKind()
gvk.Kind = kind
obj.GetObjectKind().SetGroupVersionKind(gvk)
}
func (obj objectAccessor) GetAPIVersion() string {
return obj.GetObjectKind().GroupVersionKind().GroupVersion().String()
}
func (obj objectAccessor) SetAPIVersion(version string) {
gvk := obj.GetObjectKind().GroupVersionKind()
gv, err := schema.ParseGroupVersion(version)
if err != nil {
gv = schema.GroupVersion{Version: version}
}
gvk.Group, gvk.Version = gv.Group, gv.Version
obj.GetObjectKind().SetGroupVersionKind(gvk)
}
// NewAccessor returns a MetadataAccessor that can retrieve
// or manipulate resource version on objects derived from core API
// metadata concepts.
func NewAccessor() MetadataAccessor {
return resourceAccessor{}
}
// resourceAccessor implements ResourceVersioner and SelfLinker.
type resourceAccessor struct{}
func (resourceAccessor) Kind(obj runtime.Object) (string, error) {
return objectAccessor{obj}.GetKind(), nil
}
func (resourceAccessor) SetKind(obj runtime.Object, kind string) error {
objectAccessor{obj}.SetKind(kind)
return nil
}
func (resourceAccessor) APIVersion(obj runtime.Object) (string, error) {
return objectAccessor{obj}.GetAPIVersion(), nil
}
func (resourceAccessor) SetAPIVersion(obj runtime.Object, version string) error {
objectAccessor{obj}.SetAPIVersion(version)
return nil
}
func (resourceAccessor) Namespace(obj runtime.Object) (string, error) {
accessor, err := Accessor(obj)
if err != nil {
return "", err
}
return accessor.GetNamespace(), nil
}
func (resourceAccessor) SetNamespace(obj runtime.Object, namespace string) error {
accessor, err := Accessor(obj)
if err != nil {
return err
}
accessor.SetNamespace(namespace)
return nil
}
func (resourceAccessor) Name(obj runtime.Object) (string, error) {
accessor, err := Accessor(obj)
if err != nil {
return "", err
}
return accessor.GetName(), nil
}
func (resourceAccessor) SetName(obj runtime.Object, name string) error {
accessor, err := Accessor(obj)
if err != nil {
return err
}
accessor.SetName(name)
return nil
}
func (resourceAccessor) GenerateName(obj runtime.Object) (string, error) {
accessor, err := Accessor(obj)
if err != nil {
return "", err
}
return accessor.GetGenerateName(), nil
}
func (resourceAccessor) SetGenerateName(obj runtime.Object, name string) error {
accessor, err := Accessor(obj)
if err != nil {
return err
}
accessor.SetGenerateName(name)
return nil
}
func (resourceAccessor) UID(obj runtime.Object) (types.UID, error) {
accessor, err := Accessor(obj)
if err != nil {
return "", err
}
return accessor.GetUID(), nil
}
func (resourceAccessor) SetUID(obj runtime.Object, uid types.UID) error {
accessor, err := Accessor(obj)
if err != nil {
return err
}
accessor.SetUID(uid)
return nil
}
func (resourceAccessor) SelfLink(obj runtime.Object) (string, error) {
accessor, err := CommonAccessor(obj)
if err != nil {
return "", err
}
return accessor.GetSelfLink(), nil
}
func (resourceAccessor) SetSelfLink(obj runtime.Object, selfLink string) error {
accessor, err := CommonAccessor(obj)
if err != nil {
return err
}
accessor.SetSelfLink(selfLink)
return nil
}
func (resourceAccessor) Labels(obj runtime.Object) (map[string]string, error) {
accessor, err := Accessor(obj)
if err != nil {
return nil, err
}
return accessor.GetLabels(), nil
}
func (resourceAccessor) SetLabels(obj runtime.Object, labels map[string]string) error {
accessor, err := Accessor(obj)
if err != nil {
return err
}
accessor.SetLabels(labels)
return nil
}
func (resourceAccessor) Annotations(obj runtime.Object) (map[string]string, error) {
accessor, err := Accessor(obj)
if err != nil {
return nil, err
}
return accessor.GetAnnotations(), nil
}
func (resourceAccessor) SetAnnotations(obj runtime.Object, annotations map[string]string) error {
accessor, err := Accessor(obj)
if err != nil {
return err
}
accessor.SetAnnotations(annotations)
return nil
}
func (resourceAccessor) ResourceVersion(obj runtime.Object) (string, error) {
accessor, err := CommonAccessor(obj)
if err != nil {
return "", err
}
return accessor.GetResourceVersion(), nil
}
func (resourceAccessor) SetResourceVersion(obj runtime.Object, version string) error {
accessor, err := CommonAccessor(obj)
if err != nil {
return err
}
accessor.SetResourceVersion(version)
return nil
}
// extractFromOwnerReference extracts v to o. v is the OwnerReferences field of an object.
func extractFromOwnerReference(v reflect.Value, o *metav1.OwnerReference) error {
if err := runtime.Field(v, "APIVersion", &o.APIVersion); err != nil {
return err
}
if err := runtime.Field(v, "Kind", &o.Kind); err != nil {
return err
}
if err := runtime.Field(v, "Name", &o.Name); err != nil {
return err
}
if err := runtime.Field(v, "UID", &o.UID); err != nil {
return err
}
var controllerPtr *bool
if err := runtime.Field(v, "Controller", &controllerPtr); err != nil {
return err
}
if controllerPtr != nil {
controller := *controllerPtr
o.Controller = &controller
}
var blockOwnerDeletionPtr *bool
if err := runtime.Field(v, "BlockOwnerDeletion", &blockOwnerDeletionPtr); err != nil {
return err
}
if blockOwnerDeletionPtr != nil {
block := *blockOwnerDeletionPtr
o.BlockOwnerDeletion = &block
}
return nil
}
// setOwnerReference sets v to o. v is the OwnerReferences field of an object.
func setOwnerReference(v reflect.Value, o *metav1.OwnerReference) error {
if err := runtime.SetField(o.APIVersion, v, "APIVersion"); err != nil {
return err
}
if err := runtime.SetField(o.Kind, v, "Kind"); err != nil {
return err
}
if err := runtime.SetField(o.Name, v, "Name"); err != nil {
return err
}
if err := runtime.SetField(o.UID, v, "UID"); err != nil {
return err
}
if o.Controller != nil {
controller := *(o.Controller)
if err := runtime.SetField(&controller, v, "Controller"); err != nil {
return err
}
}
if o.BlockOwnerDeletion != nil {
block := *(o.BlockOwnerDeletion)
if err := runtime.SetField(&block, v, "BlockOwnerDeletion"); err != nil {
return err
}
}
return nil
}
// genericAccessor contains pointers to strings that can modify an arbitrary
// struct and implements the Accessor interface.
type genericAccessor struct {
namespace *string
name *string
generateName *string
uid *types.UID
apiVersion *string
kind *string
resourceVersion *string
selfLink *string
creationTimestamp *metav1.Time
deletionTimestamp **metav1.Time
labels *map[string]string
annotations *map[string]string
ownerReferences reflect.Value
finalizers *[]string
}
func (a genericAccessor) GetNamespace() string {
if a.namespace == nil {
return ""
}
return *a.namespace
}
func (a genericAccessor) SetNamespace(namespace string) {
if a.namespace == nil {
return
}
*a.namespace = namespace
}
func (a genericAccessor) GetName() string {
if a.name == nil {
return ""
}
return *a.name
}
func (a genericAccessor) SetName(name string) {
if a.name == nil {
return
}
*a.name = name
}
func (a genericAccessor) GetGenerateName() string {
if a.generateName == nil {
return ""
}
return *a.generateName
}
func (a genericAccessor) SetGenerateName(generateName string) {
if a.generateName == nil {
return
}
*a.generateName = generateName
}
func (a genericAccessor) GetUID() types.UID {
if a.uid == nil {
return ""
}
return *a.uid
}
func (a genericAccessor) SetUID(uid types.UID) {
if a.uid == nil {
return
}
*a.uid = uid
}
func (a genericAccessor) GetAPIVersion() string {
return *a.apiVersion
}
func (a genericAccessor) SetAPIVersion(version string) {
*a.apiVersion = version
}
func (a genericAccessor) GetKind() string {
return *a.kind
}
func (a genericAccessor) SetKind(kind string) {
*a.kind = kind
}
func (a genericAccessor) GetResourceVersion() string {
return *a.resourceVersion
}
func (a genericAccessor) SetResourceVersion(version string) {
*a.resourceVersion = version
}
func (a genericAccessor) GetSelfLink() string {
return *a.selfLink
}
func (a genericAccessor) SetSelfLink(selfLink string) {
*a.selfLink = selfLink
}
func (a genericAccessor) GetCreationTimestamp() metav1.Time {
return *a.creationTimestamp
}
func (a genericAccessor) SetCreationTimestamp(timestamp metav1.Time) {
*a.creationTimestamp = timestamp
}
func (a genericAccessor) GetDeletionTimestamp() *metav1.Time {
return *a.deletionTimestamp
}
func (a genericAccessor) SetDeletionTimestamp(timestamp *metav1.Time) {
*a.deletionTimestamp = timestamp
}
func (a genericAccessor) GetLabels() map[string]string {
if a.labels == nil {
return nil
}
return *a.labels
}
func (a genericAccessor) SetLabels(labels map[string]string) {
*a.labels = labels
}
func (a genericAccessor) GetAnnotations() map[string]string {
if a.annotations == nil {
return nil
}
return *a.annotations
}
func (a genericAccessor) SetAnnotations(annotations map[string]string) {
if a.annotations == nil {
emptyAnnotations := make(map[string]string)
a.annotations = &emptyAnnotations
}
*a.annotations = annotations
}
func (a genericAccessor) GetFinalizers() []string {
if a.finalizers == nil {
return nil
}
return *a.finalizers
}
func (a genericAccessor) SetFinalizers(finalizers []string) {
*a.finalizers = finalizers
}
func (a genericAccessor) GetOwnerReferences() []metav1.OwnerReference {
var ret []metav1.OwnerReference
s := a.ownerReferences
if s.Kind() != reflect.Ptr || s.Elem().Kind() != reflect.Slice {
glog.Errorf("expect %v to be a pointer to slice", s)
return ret
}
s = s.Elem()
// Set the capacity to one element greater to avoid a copy if the caller later appends an element.
ret = make([]metav1.OwnerReference, s.Len(), s.Len()+1)
for i := 0; i < s.Len(); i++ {
if err := extractFromOwnerReference(s.Index(i), &ret[i]); err != nil {
glog.Errorf("extractFromOwnerReference failed: %v", err)
return ret
}
}
return ret
}
func (a genericAccessor) SetOwnerReferences(references []metav1.OwnerReference) {
s := a.ownerReferences
if s.Kind() != reflect.Ptr || s.Elem().Kind() != reflect.Slice {
glog.Errorf("expect %v to be a pointer to slice", s)
}
s = s.Elem()
newReferences := reflect.MakeSlice(s.Type(), len(references), len(references))
for i := 0; i < len(references); i++ {
if err := setOwnerReference(newReferences.Index(i), &references[i]); err != nil {
glog.Errorf("setOwnerReference failed: %v", err)
return
}
}
s.Set(newReferences)
}
// extractFromTypeMeta extracts pointers to version and kind fields from an object
func extractFromTypeMeta(v reflect.Value, a *genericAccessor) error {
if err := runtime.FieldPtr(v, "APIVersion", &a.apiVersion); err != nil {
return err
}
if err := runtime.FieldPtr(v, "Kind", &a.kind); err != nil {
return err
}
return nil
}
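// stampType is an illustrative sketch (not part of the upstream file) of the
// TypeAccessor path above: given any pointer to a struct with an embedded
// TypeMeta, set its apiVersion and kind through the generic accessor.
func stampType(obj interface{}, apiVersion, kind string) error {
	t, err := TypeAccessor(obj)
	if err != nil {
		return err
	}
	t.SetAPIVersion(apiVersion)
	t.SetKind(kind)
	return nil
}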

View File

@ -1,210 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package meta
import (
"fmt"
"strings"
"k8s.io/apimachinery/pkg/runtime/schema"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
)
// MultiRESTMapper is a wrapper for multiple RESTMappers.
type MultiRESTMapper []RESTMapper
func (m MultiRESTMapper) String() string {
nested := []string{}
for _, t := range m {
currString := fmt.Sprintf("%v", t)
splitStrings := strings.Split(currString, "\n")
nested = append(nested, strings.Join(splitStrings, "\n\t"))
}
return fmt.Sprintf("MultiRESTMapper{\n\t%s\n}", strings.Join(nested, "\n\t"))
}
// ResourceSingularizer converts a REST resource name from plural to singular (e.g., from pods to pod)
// This implementation supports multiple REST schemas and returns the first match.
func (m MultiRESTMapper) ResourceSingularizer(resource string) (singular string, err error) {
for _, t := range m {
singular, err = t.ResourceSingularizer(resource)
if err == nil {
return
}
}
return
}
func (m MultiRESTMapper) ResourcesFor(resource schema.GroupVersionResource) ([]schema.GroupVersionResource, error) {
allGVRs := []schema.GroupVersionResource{}
for _, t := range m {
gvrs, err := t.ResourcesFor(resource)
// ignore "no match" errors, but any other error percolates back up
if IsNoMatchError(err) {
continue
}
if err != nil {
return nil, err
}
// walk the existing values to de-dup
for _, curr := range gvrs {
found := false
for _, existing := range allGVRs {
if curr == existing {
found = true
break
}
}
if !found {
allGVRs = append(allGVRs, curr)
}
}
}
if len(allGVRs) == 0 {
return nil, &NoResourceMatchError{PartialResource: resource}
}
return allGVRs, nil
}
func (m MultiRESTMapper) KindsFor(resource schema.GroupVersionResource) (gvk []schema.GroupVersionKind, err error) {
allGVKs := []schema.GroupVersionKind{}
for _, t := range m {
gvks, err := t.KindsFor(resource)
// ignore "no match" errors, but any other error percolates back up
if IsNoMatchError(err) {
continue
}
if err != nil {
return nil, err
}
// walk the existing values to de-dup
for _, curr := range gvks {
found := false
for _, existing := range allGVKs {
if curr == existing {
found = true
break
}
}
if !found {
allGVKs = append(allGVKs, curr)
}
}
}
if len(allGVKs) == 0 {
return nil, &NoResourceMatchError{PartialResource: resource}
}
return allGVKs, nil
}
func (m MultiRESTMapper) ResourceFor(resource schema.GroupVersionResource) (schema.GroupVersionResource, error) {
resources, err := m.ResourcesFor(resource)
if err != nil {
return schema.GroupVersionResource{}, err
}
if len(resources) == 1 {
return resources[0], nil
}
return schema.GroupVersionResource{}, &AmbiguousResourceError{PartialResource: resource, MatchingResources: resources}
}
func (m MultiRESTMapper) KindFor(resource schema.GroupVersionResource) (schema.GroupVersionKind, error) {
kinds, err := m.KindsFor(resource)
if err != nil {
return schema.GroupVersionKind{}, err
}
if len(kinds) == 1 {
return kinds[0], nil
}
return schema.GroupVersionKind{}, &AmbiguousResourceError{PartialResource: resource, MatchingKinds: kinds}
}
// RESTMapping provides the REST mapping for the resource based on the
// kind and version. This implementation supports multiple REST schemas and
// returns the first match.
func (m MultiRESTMapper) RESTMapping(gk schema.GroupKind, versions ...string) (*RESTMapping, error) {
allMappings := []*RESTMapping{}
errors := []error{}
for _, t := range m {
currMapping, err := t.RESTMapping(gk, versions...)
// ignore "no match" errors, but any other error percolates back up
if IsNoMatchError(err) {
continue
}
if err != nil {
errors = append(errors, err)
continue
}
allMappings = append(allMappings, currMapping)
}
// if we got exactly one mapping, then use it even if other requests failed
if len(allMappings) == 1 {
return allMappings[0], nil
}
if len(allMappings) > 1 {
var kinds []schema.GroupVersionKind
for _, m := range allMappings {
kinds = append(kinds, m.GroupVersionKind)
}
return nil, &AmbiguousKindError{PartialKind: gk.WithVersion(""), MatchingKinds: kinds}
}
if len(errors) > 0 {
return nil, utilerrors.NewAggregate(errors)
}
return nil, &NoKindMatchError{PartialKind: gk.WithVersion("")}
}
// RESTMappings returns all possible RESTMappings for the provided group kind, or an error
// if the type is not recognized.
func (m MultiRESTMapper) RESTMappings(gk schema.GroupKind, versions ...string) ([]*RESTMapping, error) {
var allMappings []*RESTMapping
var errors []error
for _, t := range m {
currMappings, err := t.RESTMappings(gk, versions...)
// ignore "no match" errors, but any other error percolates back up
if IsNoMatchError(err) {
continue
}
if err != nil {
errors = append(errors, err)
continue
}
allMappings = append(allMappings, currMappings...)
}
if len(errors) > 0 {
return nil, utilerrors.NewAggregate(errors)
}
if len(allMappings) == 0 {
return nil, &NoKindMatchError{PartialKind: gk.WithVersion("")}
}
return allMappings, nil
}
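// combinedResourcesFor is a tiny illustrative sketch (not part of the upstream
// file): wrapping independent mappers in a MultiRESTMapper yields the
// de-duplicated union of their ResourcesFor answers, as implemented above.
func combinedResourcesFor(gvr schema.GroupVersionResource, mappers ...RESTMapper) ([]schema.GroupVersionResource, error) {
	return MultiRESTMapper(mappers).ResourcesFor(gvr)
}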

View File

@ -1,222 +0,0 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package meta
import (
"fmt"
"k8s.io/apimachinery/pkg/runtime/schema"
)
const (
AnyGroup = "*"
AnyVersion = "*"
AnyResource = "*"
AnyKind = "*"
)
// PriorityRESTMapper is a wrapper for automatically choosing a particular Resource or Kind
// when multiple matches are possible
type PriorityRESTMapper struct {
// Delegate is the RESTMapper to use to locate all the Kind and Resource matches
Delegate RESTMapper
// ResourcePriority is a list of priority patterns to apply to matching resources.
// The list of all matching resources is narrowed based on the patterns until only one remains.
// A pattern with no matches is skipped. A pattern with more than one match uses its
// matches as the list to continue matching against.
ResourcePriority []schema.GroupVersionResource
// KindPriority is a list of priority patterns to apply to matching kinds.
// The list of all matching kinds is narrowed based on the patterns until only one remains.
// A pattern with no matches is skipped. A pattern with more than one match uses its
// matches as the list to continue matching against.
KindPriority []schema.GroupVersionKind
}
func (m PriorityRESTMapper) String() string {
return fmt.Sprintf("PriorityRESTMapper{\n\t%v\n\t%v\n\t%v\n}", m.ResourcePriority, m.KindPriority, m.Delegate)
}
// ResourceFor finds all resources, then passes them through the ResourcePriority patterns to find a single matching hit.
func (m PriorityRESTMapper) ResourceFor(partiallySpecifiedResource schema.GroupVersionResource) (schema.GroupVersionResource, error) {
originalGVRs, err := m.Delegate.ResourcesFor(partiallySpecifiedResource)
if err != nil {
return schema.GroupVersionResource{}, err
}
if len(originalGVRs) == 1 {
return originalGVRs[0], nil
}
remainingGVRs := append([]schema.GroupVersionResource{}, originalGVRs...)
for _, pattern := range m.ResourcePriority {
matchedGVRs := []schema.GroupVersionResource{}
for _, gvr := range remainingGVRs {
if resourceMatches(pattern, gvr) {
matchedGVRs = append(matchedGVRs, gvr)
}
}
switch len(matchedGVRs) {
case 0:
// if you have no matches, then nothing matched this pattern; just move to the next
continue
case 1:
// one match, return
return matchedGVRs[0], nil
default:
// more than one match, use the matched hits as the list moving to the next pattern.
// this way you can have a series of selection criteria
remainingGVRs = matchedGVRs
}
}
return schema.GroupVersionResource{}, &AmbiguousResourceError{PartialResource: partiallySpecifiedResource, MatchingResources: originalGVRs}
}
// KindFor finds all kinds, then passes them through the KindPriority patterns to find a single matching hit.
func (m PriorityRESTMapper) KindFor(partiallySpecifiedResource schema.GroupVersionResource) (schema.GroupVersionKind, error) {
originalGVKs, err := m.Delegate.KindsFor(partiallySpecifiedResource)
if err != nil {
return schema.GroupVersionKind{}, err
}
if len(originalGVKs) == 1 {
return originalGVKs[0], nil
}
remainingGVKs := append([]schema.GroupVersionKind{}, originalGVKs...)
for _, pattern := range m.KindPriority {
matchedGVKs := []schema.GroupVersionKind{}
for _, gvr := range remainingGVKs {
if kindMatches(pattern, gvr) {
matchedGVKs = append(matchedGVKs, gvr)
}
}
switch len(matchedGVKs) {
case 0:
// if you have no matches, then nothing matched this pattern; just move to the next
continue
case 1:
// one match, return
return matchedGVKs[0], nil
default:
// more than one match, use the matched hits as the list moving to the next pattern.
// this way you can have a series of selection criteria
remainingGVKs = matchedGVKs
}
}
return schema.GroupVersionKind{}, &AmbiguousResourceError{PartialResource: partiallySpecifiedResource, MatchingKinds: originalGVKs}
}
func resourceMatches(pattern schema.GroupVersionResource, resource schema.GroupVersionResource) bool {
if pattern.Group != AnyGroup && pattern.Group != resource.Group {
return false
}
if pattern.Version != AnyVersion && pattern.Version != resource.Version {
return false
}
if pattern.Resource != AnyResource && pattern.Resource != resource.Resource {
return false
}
return true
}
func kindMatches(pattern schema.GroupVersionKind, kind schema.GroupVersionKind) bool {
if pattern.Group != AnyGroup && pattern.Group != kind.Group {
return false
}
if pattern.Version != AnyVersion && pattern.Version != kind.Version {
return false
}
if pattern.Kind != AnyKind && pattern.Kind != kind.Kind {
return false
}
return true
}
func (m PriorityRESTMapper) RESTMapping(gk schema.GroupKind, versions ...string) (mapping *RESTMapping, err error) {
mappings, err := m.Delegate.RESTMappings(gk)
if err != nil {
return nil, err
}
// any versions the user provides take priority
priorities := m.KindPriority
if len(versions) > 0 {
priorities = make([]schema.GroupVersionKind, 0, len(m.KindPriority)+len(versions))
for _, version := range versions {
gv := schema.GroupVersion{
Version: version,
Group: gk.Group,
}
priorities = append(priorities, gv.WithKind(AnyKind))
}
priorities = append(priorities, m.KindPriority...)
}
remaining := append([]*RESTMapping{}, mappings...)
for _, pattern := range priorities {
var matching []*RESTMapping
for _, m := range remaining {
if kindMatches(pattern, m.GroupVersionKind) {
matching = append(matching, m)
}
}
switch len(matching) {
case 0:
// if you have no matches, then nothing matched this pattern; just move to the next
continue
case 1:
// one match, return
return matching[0], nil
default:
// more than one match, use the matched hits as the list moving to the next pattern.
// this way you can have a series of selection criteria
remaining = matching
}
}
if len(remaining) == 1 {
return remaining[0], nil
}
var kinds []schema.GroupVersionKind
for _, m := range mappings {
kinds = append(kinds, m.GroupVersionKind)
}
return nil, &AmbiguousKindError{PartialKind: gk.WithVersion(""), MatchingKinds: kinds}
}
func (m PriorityRESTMapper) RESTMappings(gk schema.GroupKind, versions ...string) ([]*RESTMapping, error) {
return m.Delegate.RESTMappings(gk, versions...)
}
func (m PriorityRESTMapper) ResourceSingularizer(resource string) (singular string, err error) {
return m.Delegate.ResourceSingularizer(resource)
}
func (m PriorityRESTMapper) ResourcesFor(partiallySpecifiedResource schema.GroupVersionResource) ([]schema.GroupVersionResource, error) {
return m.Delegate.ResourcesFor(partiallySpecifiedResource)
}
func (m PriorityRESTMapper) KindsFor(partiallySpecifiedResource schema.GroupVersionResource) (gvk []schema.GroupVersionKind, err error) {
return m.Delegate.KindsFor(partiallySpecifiedResource)
}
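// newPreferringMapper is an illustrative sketch (not part of the upstream
// file) of wiring a PriorityRESTMapper: when a bare resource such as
// "deployments" matches several groups, the "apps" group wins first, then any
// "v1" version. The group and version choices here are examples, not a policy.
func newPreferringMapper(delegate RESTMapper) PriorityRESTMapper {
	return PriorityRESTMapper{
		Delegate: delegate,
		ResourcePriority: []schema.GroupVersionResource{
			{Group: "apps", Version: AnyVersion, Resource: AnyResource},
			{Group: AnyGroup, Version: "v1", Resource: AnyResource},
		},
		KindPriority: []schema.GroupVersionKind{
			{Group: "apps", Version: AnyVersion, Kind: AnyKind},
			{Group: AnyGroup, Version: "v1", Kind: AnyKind},
		},
	}
}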

View File

@ -1,548 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// TODO: move everything in this file to pkg/api/rest
package meta
import (
"fmt"
"sort"
"strings"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
// Implements RESTScope interface
type restScope struct {
name RESTScopeName
paramName string
argumentName string
paramDescription string
}
func (r *restScope) Name() RESTScopeName {
return r.name
}
func (r *restScope) ParamName() string {
return r.paramName
}
func (r *restScope) ArgumentName() string {
return r.argumentName
}
func (r *restScope) ParamDescription() string {
return r.paramDescription
}
var RESTScopeNamespace = &restScope{
name: RESTScopeNameNamespace,
paramName: "namespaces",
argumentName: "namespace",
paramDescription: "object name and auth scope, such as for teams and projects",
}
var RESTScopeRoot = &restScope{
name: RESTScopeNameRoot,
}
// DefaultRESTMapper exposes mappings between the types defined in a
// runtime.Scheme. It assumes that all types defined in the provided scheme
// can be mapped with the provided MetadataAccessor and Codec interfaces.
//
// The resource name of a Kind is defined as the lowercase,
// English-plural version of the Kind string.
// When converting from resource to Kind, the singular version of the
// resource name is also accepted for convenience.
//
// TODO: Only accept plural for some operations for increased control?
// (`get pod bar` vs `get pods bar`)
type DefaultRESTMapper struct {
defaultGroupVersions []schema.GroupVersion
resourceToKind map[schema.GroupVersionResource]schema.GroupVersionKind
kindToPluralResource map[schema.GroupVersionKind]schema.GroupVersionResource
kindToScope map[schema.GroupVersionKind]RESTScope
singularToPlural map[schema.GroupVersionResource]schema.GroupVersionResource
pluralToSingular map[schema.GroupVersionResource]schema.GroupVersionResource
interfacesFunc VersionInterfacesFunc
}
func (m *DefaultRESTMapper) String() string {
return fmt.Sprintf("DefaultRESTMapper{kindToPluralResource=%v}", m.kindToPluralResource)
}
var _ RESTMapper = &DefaultRESTMapper{}
// VersionInterfacesFunc returns the appropriate typer, and metadata accessor for a
// given api version, or an error if no such api version exists.
type VersionInterfacesFunc func(version schema.GroupVersion) (*VersionInterfaces, error)
// NewDefaultRESTMapper initializes a mapping between Kind and APIVersion
// to a resource name and back based on the objects in a runtime.Scheme
// and the Kubernetes API conventions. Takes a priority list of the versions
// to search when an object has no default version (set empty to return an error),
// and a function that retrieves the correct metadata for a given version.
func NewDefaultRESTMapper(defaultGroupVersions []schema.GroupVersion, f VersionInterfacesFunc) *DefaultRESTMapper {
resourceToKind := make(map[schema.GroupVersionResource]schema.GroupVersionKind)
kindToPluralResource := make(map[schema.GroupVersionKind]schema.GroupVersionResource)
kindToScope := make(map[schema.GroupVersionKind]RESTScope)
singularToPlural := make(map[schema.GroupVersionResource]schema.GroupVersionResource)
pluralToSingular := make(map[schema.GroupVersionResource]schema.GroupVersionResource)
// TODO: verify name mappings work correctly when versions differ
return &DefaultRESTMapper{
resourceToKind: resourceToKind,
kindToPluralResource: kindToPluralResource,
kindToScope: kindToScope,
defaultGroupVersions: defaultGroupVersions,
singularToPlural: singularToPlural,
pluralToSingular: pluralToSingular,
interfacesFunc: f,
}
}
func (m *DefaultRESTMapper) Add(kind schema.GroupVersionKind, scope RESTScope) {
plural, singular := UnsafeGuessKindToResource(kind)
m.AddSpecific(kind, plural, singular, scope)
}
func (m *DefaultRESTMapper) AddSpecific(kind schema.GroupVersionKind, plural, singular schema.GroupVersionResource, scope RESTScope) {
m.singularToPlural[singular] = plural
m.pluralToSingular[plural] = singular
m.resourceToKind[singular] = kind
m.resourceToKind[plural] = kind
m.kindToPluralResource[kind] = plural
m.kindToScope[kind] = scope
}
// unpluralizedSuffixes is a list of resource suffixes that are the same plural and singular
// This is only necessary because some bits of code are lazy and don't actually use the RESTMapper like they should.
// TODO eliminate this so that different callers can correctly map to resources. This probably means updating all
// callers to use the RESTMapper they mean.
var unpluralizedSuffixes = []string{
"endpoints",
}
// UnsafeGuessKindToResource converts Kind to a resource name.
// Broken. This method only "sort of" works when used outside of this package. It assumes that Kinds and Resources match
// and they aren't guaranteed to do so.
func UnsafeGuessKindToResource(kind schema.GroupVersionKind) ( /*plural*/ schema.GroupVersionResource /*singular*/, schema.GroupVersionResource) {
kindName := kind.Kind
if len(kindName) == 0 {
return schema.GroupVersionResource{}, schema.GroupVersionResource{}
}
singularName := strings.ToLower(kindName)
singular := kind.GroupVersion().WithResource(singularName)
for _, skip := range unpluralizedSuffixes {
if strings.HasSuffix(singularName, skip) {
return singular, singular
}
}
switch string(singularName[len(singularName)-1]) {
case "s":
return kind.GroupVersion().WithResource(singularName + "es"), singular
case "y":
return kind.GroupVersion().WithResource(strings.TrimSuffix(singularName, "y") + "ies"), singular
}
return kind.GroupVersion().WithResource(singularName + "s"), singular
}
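// guessedResources is a small illustration (not part of the upstream file) of
// the guessing rules above: "Pod" becomes pods/pod, "Policy" becomes
// policies/policy, and "Endpoints" is left as endpoints/endpoints.
func guessedResources() map[string]string {
	out := map[string]string{}
	for _, kind := range []string{"Pod", "Policy", "Endpoints"} {
		plural, singular := UnsafeGuessKindToResource(schema.GroupVersionKind{Version: "v1", Kind: kind})
		out[kind] = plural.Resource + "/" + singular.Resource
	}
	return out
}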
// ResourceSingularizer implements RESTMapper
// It converts a resource name from plural to singular (e.g., from pods to pod)
func (m *DefaultRESTMapper) ResourceSingularizer(resourceType string) (string, error) {
partialResource := schema.GroupVersionResource{Resource: resourceType}
resources, err := m.ResourcesFor(partialResource)
if err != nil {
return resourceType, err
}
singular := schema.GroupVersionResource{}
for _, curr := range resources {
currSingular, ok := m.pluralToSingular[curr]
if !ok {
continue
}
if singular.Empty() {
singular = currSingular
continue
}
if currSingular.Resource != singular.Resource {
return resourceType, fmt.Errorf("multiple possible singular resources (%v) found for %v", resources, resourceType)
}
}
if singular.Empty() {
return resourceType, fmt.Errorf("no singular of resource %v has been defined", resourceType)
}
return singular.Resource, nil
}
// coerceResourceForMatching makes the resource lower case and converts internal versions to unspecified (legacy behavior)
func coerceResourceForMatching(resource schema.GroupVersionResource) schema.GroupVersionResource {
resource.Resource = strings.ToLower(resource.Resource)
if resource.Version == runtime.APIVersionInternal {
resource.Version = ""
}
return resource
}
func (m *DefaultRESTMapper) ResourcesFor(input schema.GroupVersionResource) ([]schema.GroupVersionResource, error) {
resource := coerceResourceForMatching(input)
hasResource := len(resource.Resource) > 0
hasGroup := len(resource.Group) > 0
hasVersion := len(resource.Version) > 0
if !hasResource {
return nil, fmt.Errorf("a resource must be present, got: %v", resource)
}
ret := []schema.GroupVersionResource{}
switch {
case hasGroup && hasVersion:
// fully qualified. Find the exact match
for plural, singular := range m.pluralToSingular {
if singular == resource {
ret = append(ret, plural)
break
}
if plural == resource {
ret = append(ret, plural)
break
}
}
case hasGroup:
// given a group, prefer an exact match. If you don't find one, resort to a prefix match on group
foundExactMatch := false
requestedGroupResource := resource.GroupResource()
for plural, singular := range m.pluralToSingular {
if singular.GroupResource() == requestedGroupResource {
foundExactMatch = true
ret = append(ret, plural)
}
if plural.GroupResource() == requestedGroupResource {
foundExactMatch = true
ret = append(ret, plural)
}
}
// if you didn't find an exact match, match on group prefixing. This allows storageclass.storage to match
// storageclass.storage.k8s.io
if !foundExactMatch {
for plural, singular := range m.pluralToSingular {
if !strings.HasPrefix(plural.Group, requestedGroupResource.Group) {
continue
}
if singular.Resource == requestedGroupResource.Resource {
ret = append(ret, plural)
}
if plural.Resource == requestedGroupResource.Resource {
ret = append(ret, plural)
}
}
}
case hasVersion:
for plural, singular := range m.pluralToSingular {
if singular.Version == resource.Version && singular.Resource == resource.Resource {
ret = append(ret, plural)
}
if plural.Version == resource.Version && plural.Resource == resource.Resource {
ret = append(ret, plural)
}
}
default:
for plural, singular := range m.pluralToSingular {
if singular.Resource == resource.Resource {
ret = append(ret, plural)
}
if plural.Resource == resource.Resource {
ret = append(ret, plural)
}
}
}
if len(ret) == 0 {
return nil, &NoResourceMatchError{PartialResource: resource}
}
sort.Sort(resourceByPreferredGroupVersion{ret, m.defaultGroupVersions})
return ret, nil
}
func (m *DefaultRESTMapper) ResourceFor(resource schema.GroupVersionResource) (schema.GroupVersionResource, error) {
resources, err := m.ResourcesFor(resource)
if err != nil {
return schema.GroupVersionResource{}, err
}
if len(resources) == 1 {
return resources[0], nil
}
return schema.GroupVersionResource{}, &AmbiguousResourceError{PartialResource: resource, MatchingResources: resources}
}
func (m *DefaultRESTMapper) KindsFor(input schema.GroupVersionResource) ([]schema.GroupVersionKind, error) {
resource := coerceResourceForMatching(input)
hasResource := len(resource.Resource) > 0
hasGroup := len(resource.Group) > 0
hasVersion := len(resource.Version) > 0
if !hasResource {
return nil, fmt.Errorf("a resource must be present, got: %v", resource)
}
ret := []schema.GroupVersionKind{}
switch {
// fully qualified. Find the exact match
case hasGroup && hasVersion:
kind, exists := m.resourceToKind[resource]
if exists {
ret = append(ret, kind)
}
case hasGroup:
foundExactMatch := false
requestedGroupResource := resource.GroupResource()
for currResource, currKind := range m.resourceToKind {
if currResource.GroupResource() == requestedGroupResource {
foundExactMatch = true
ret = append(ret, currKind)
}
}
// if you didn't find an exact match, match on group prefixing. This allows storageclass.storage to match
// storageclass.storage.k8s.io
if !foundExactMatch {
for currResource, currKind := range m.resourceToKind {
if !strings.HasPrefix(currResource.Group, requestedGroupResource.Group) {
continue
}
if currResource.Resource == requestedGroupResource.Resource {
ret = append(ret, currKind)
}
}
}
case hasVersion:
for currResource, currKind := range m.resourceToKind {
if currResource.Version == resource.Version && currResource.Resource == resource.Resource {
ret = append(ret, currKind)
}
}
default:
for currResource, currKind := range m.resourceToKind {
if currResource.Resource == resource.Resource {
ret = append(ret, currKind)
}
}
}
if len(ret) == 0 {
return nil, &NoResourceMatchError{PartialResource: input}
}
sort.Sort(kindByPreferredGroupVersion{ret, m.defaultGroupVersions})
return ret, nil
}
func (m *DefaultRESTMapper) KindFor(resource schema.GroupVersionResource) (schema.GroupVersionKind, error) {
kinds, err := m.KindsFor(resource)
if err != nil {
return schema.GroupVersionKind{}, err
}
if len(kinds) == 1 {
return kinds[0], nil
}
return schema.GroupVersionKind{}, &AmbiguousResourceError{PartialResource: resource, MatchingKinds: kinds}
}
type kindByPreferredGroupVersion struct {
list []schema.GroupVersionKind
sortOrder []schema.GroupVersion
}
func (o kindByPreferredGroupVersion) Len() int { return len(o.list) }
func (o kindByPreferredGroupVersion) Swap(i, j int) { o.list[i], o.list[j] = o.list[j], o.list[i] }
func (o kindByPreferredGroupVersion) Less(i, j int) bool {
lhs := o.list[i]
rhs := o.list[j]
if lhs == rhs {
return false
}
if lhs.GroupVersion() == rhs.GroupVersion() {
return lhs.Kind < rhs.Kind
}
// otherwise, the difference is in the GroupVersion, so we need to sort with respect to the preferred order
lhsIndex := -1
rhsIndex := -1
for i := range o.sortOrder {
if o.sortOrder[i] == lhs.GroupVersion() {
lhsIndex = i
}
if o.sortOrder[i] == rhs.GroupVersion() {
rhsIndex = i
}
}
if rhsIndex == -1 {
return true
}
return lhsIndex < rhsIndex
}
type resourceByPreferredGroupVersion struct {
list []schema.GroupVersionResource
sortOrder []schema.GroupVersion
}
func (o resourceByPreferredGroupVersion) Len() int { return len(o.list) }
func (o resourceByPreferredGroupVersion) Swap(i, j int) { o.list[i], o.list[j] = o.list[j], o.list[i] }
func (o resourceByPreferredGroupVersion) Less(i, j int) bool {
lhs := o.list[i]
rhs := o.list[j]
if lhs == rhs {
return false
}
if lhs.GroupVersion() == rhs.GroupVersion() {
return lhs.Resource < rhs.Resource
}
// otherwise, the difference is in the GroupVersion, so we need to sort with respect to the preferred order
lhsIndex := -1
rhsIndex := -1
for i := range o.sortOrder {
if o.sortOrder[i] == lhs.GroupVersion() {
lhsIndex = i
}
if o.sortOrder[i] == rhs.GroupVersion() {
rhsIndex = i
}
}
if rhsIndex == -1 {
return true
}
return lhsIndex < rhsIndex
}
// RESTMapping returns a struct representing the resource path and conversion interfaces a
// RESTClient should use to operate on the provided group/kind in order of versions. If a version search
// order is not provided, the search order provided to DefaultRESTMapper will be used to resolve which
// version should be used to access the named group/kind.
func (m *DefaultRESTMapper) RESTMapping(gk schema.GroupKind, versions ...string) (*RESTMapping, error) {
mappings, err := m.RESTMappings(gk, versions...)
if err != nil {
return nil, err
}
if len(mappings) == 0 {
return nil, &NoKindMatchError{PartialKind: gk.WithVersion("")}
}
// Since we rely on the RESTMappings method, take the first match and return it
// to the caller, as this preserves the existing behavior.
return mappings[0], nil
}
// RESTMappings returns the RESTMappings for the provided group kind. If a version search order
// is not provided, the search order provided to DefaultRESTMapper will be used.
func (m *DefaultRESTMapper) RESTMappings(gk schema.GroupKind, versions ...string) ([]*RESTMapping, error) {
mappings := make([]*RESTMapping, 0)
potentialGVK := make([]schema.GroupVersionKind, 0)
hadVersion := false
// Pick an appropriate version
for _, version := range versions {
if len(version) == 0 || version == runtime.APIVersionInternal {
continue
}
currGVK := gk.WithVersion(version)
hadVersion = true
if _, ok := m.kindToPluralResource[currGVK]; ok {
potentialGVK = append(potentialGVK, currGVK)
break
}
}
// Use the default preferred versions
if !hadVersion && len(potentialGVK) == 0 {
for _, gv := range m.defaultGroupVersions {
if gv.Group != gk.Group {
continue
}
potentialGVK = append(potentialGVK, gk.WithVersion(gv.Version))
}
}
if len(potentialGVK) == 0 {
return nil, &NoKindMatchError{PartialKind: gk.WithVersion("")}
}
for _, gvk := range potentialGVK {
// Ensure we have a REST mapping
res, ok := m.kindToPluralResource[gvk]
if !ok {
continue
}
// Ensure we have a REST scope
scope, ok := m.kindToScope[gvk]
if !ok {
return nil, fmt.Errorf("the provided version %q and kind %q cannot be mapped to a supported scope", gvk.GroupVersion(), gvk.Kind)
}
interfaces, err := m.interfacesFunc(gvk.GroupVersion())
if err != nil {
return nil, fmt.Errorf("the provided version %q has no relevant versions: %v", gvk.GroupVersion().String(), err)
}
mappings = append(mappings, &RESTMapping{
Resource: res.Resource,
GroupVersionKind: gvk,
Scope: scope,
ObjectConvertor: interfaces.ObjectConvertor,
MetadataAccessor: interfaces.MetadataAccessor,
})
}
if len(mappings) == 0 {
return nil, &NoResourceMatchError{PartialResource: schema.GroupVersionResource{Group: gk.Group, Resource: gk.Kind}}
}
return mappings, nil
}
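// newScratchMapping is an end-to-end illustrative sketch (not part of the
// upstream file): register a single made-up kind in a DefaultRESTMapper, using
// InterfacesForUnstructured from this package as the VersionInterfacesFunc,
// and resolve it back to a RESTMapping. "example.dev"/"Widget" are fabricated
// names for the example only.
func newScratchMapping() (*RESTMapping, error) {
	gv := schema.GroupVersion{Group: "example.dev", Version: "v1"}
	m := NewDefaultRESTMapper([]schema.GroupVersion{gv}, InterfacesForUnstructured)
	m.Add(gv.WithKind("Widget"), RESTScopeNamespace)
	return m.RESTMapping(schema.GroupKind{Group: "example.dev", Kind: "Widget"})
}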

View File

@ -1,31 +0,0 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package meta
import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
)
// InterfacesForUnstructured returns VersionInterfaces suitable for
// dealing with unstructured.Unstructured objects.
func InterfacesForUnstructured(schema.GroupVersion) (*VersionInterfaces, error) {
return &VersionInterfaces{
ObjectConvertor: &unstructured.UnstructuredObjectConverter{},
MetadataAccessor: NewAccessor(),
}, nil
}
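A hedged sketch of where InterfacesForUnstructured plugs in: it can serve as the VersionInterfacesFunc of a DefaultRESTMapper that only handles unstructured objects. The group and kind below are made up.
// Assumed imports: "k8s.io/apimachinery/pkg/api/meta", "k8s.io/apimachinery/pkg/runtime/schema"
func exampleUnstructuredMapper() (*meta.RESTMapping, error) {
gv := schema.GroupVersion{Group: "example.k8s.io", Version: "v1"}
mapper := meta.NewDefaultRESTMapper([]schema.GroupVersion{gv}, meta.InterfacesForUnstructured)
mapper.Add(gv.WithKind("Widget"), meta.RESTScopeNamespace)
return mapper.RESTMapping(schema.GroupKind{Group: "example.k8s.io", Kind: "Widget"})
}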

View File

@ -1,99 +0,0 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package announced contains tools for announcing API group factories. This is
// distinct from registration (in the 'registered' package) in that it's safe
// to announce every possible group linked in, but only groups requested at
// runtime should be registered. This package contains both a registry, and
// factory code (which was formerly copy-pasta in every install package).
package announced
import (
"fmt"
"k8s.io/apimachinery/pkg/apimachinery/registered"
"k8s.io/apimachinery/pkg/runtime"
)
// APIGroupFactoryRegistry allows for groups and versions to announce themselves,
// which simply makes them available and doesn't take other actions. Later,
// users of the registry can select which groups and versions they'd actually
// like to register with an APIRegistrationManager.
//
// (Right now APIRegistrationManager has separate 'registration' and 'enabled'
// concepts-- APIGroupFactory is going to take over the former function;
// they will overlap until the refactoring is finished.)
//
// The key is the group name. After initialization, this should be treated as
// read-only. It is implemented as a map from group name to group factory, and
// it is safe to use this knowledge to manually pick out groups to register
// (e.g., for testing).
type APIGroupFactoryRegistry map[string]*GroupMetaFactory
func (gar APIGroupFactoryRegistry) group(groupName string) *GroupMetaFactory {
gmf, ok := gar[groupName]
if !ok {
gmf = &GroupMetaFactory{VersionArgs: map[string]*GroupVersionFactoryArgs{}}
gar[groupName] = gmf
}
return gmf
}
// AnnounceGroupVersion adds the particular arguments for this group version to the group factory.
func (gar APIGroupFactoryRegistry) AnnounceGroupVersion(gvf *GroupVersionFactoryArgs) error {
gmf := gar.group(gvf.GroupName)
if _, ok := gmf.VersionArgs[gvf.VersionName]; ok {
return fmt.Errorf("version %q in group %q has already been announced", gvf.VersionName, gvf.GroupName)
}
gmf.VersionArgs[gvf.VersionName] = gvf
return nil
}
// AnnounceGroup adds the group-wide arguments to the group factory.
func (gar APIGroupFactoryRegistry) AnnounceGroup(args *GroupMetaFactoryArgs) error {
gmf := gar.group(args.GroupName)
if gmf.GroupArgs != nil {
return fmt.Errorf("group %q has already been announced", args.GroupName)
}
gmf.GroupArgs = args
return nil
}
// RegisterAndEnableAll throws every factory at the specified API registration
// manager, and lets it decide which to register. (If you want to do this à la
// carte, you may look through gar itself-- it's just a map.)
func (gar APIGroupFactoryRegistry) RegisterAndEnableAll(m *registered.APIRegistrationManager, scheme *runtime.Scheme) error {
for groupName, gmf := range gar {
if err := gmf.Register(m); err != nil {
return fmt.Errorf("error registering %v: %v", groupName, err)
}
if err := gmf.Enable(m, scheme); err != nil {
return fmt.Errorf("error enabling %v: %v", groupName, err)
}
}
return nil
}
// AnnouncePreconstructedFactory announces a factory which you've manually assembled.
// You may call this instead of calling AnnounceGroup and AnnounceGroupVersion.
func (gar APIGroupFactoryRegistry) AnnouncePreconstructedFactory(gmf *GroupMetaFactory) error {
name := gmf.GroupArgs.GroupName
if _, exists := gar[name]; exists {
return fmt.Errorf("the group %q has already been announced.", name)
}
gar[name] = gmf
return nil
}
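A rough sketch (not from the diff) of the announce-then-register flow this registry supports. The example.k8s.io group and the addV1ToScheme function are hypothetical.
// Assumed imports:
//   "k8s.io/apimachinery/pkg/apimachinery/announced"
//   "k8s.io/apimachinery/pkg/apimachinery/registered"
//   "k8s.io/apimachinery/pkg/runtime"
func exampleAnnounce(addV1ToScheme announced.SchemeFunc, m *registered.APIRegistrationManager, scheme *runtime.Scheme) error {
registry := make(announced.APIGroupFactoryRegistry)
if err := registry.AnnounceGroupVersion(&announced.GroupVersionFactoryArgs{
GroupName:   "example.k8s.io",
VersionName: "v1",
AddToScheme: addV1ToScheme,
}); err != nil {
return err
}
if err := registry.AnnounceGroup(&announced.GroupMetaFactoryArgs{
GroupName:              "example.k8s.io",
VersionPreferenceOrder: []string{"v1"},
}); err != nil {
return err
}
// Announcing alone takes no action; registration and enabling happen here.
return registry.RegisterAndEnableAll(m, scheme)
}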

View File

@ -1,255 +0,0 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package announced
import (
"fmt"
"github.com/golang/glog"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/apimachinery"
"k8s.io/apimachinery/pkg/apimachinery/registered"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/sets"
)
type SchemeFunc func(*runtime.Scheme) error
type VersionToSchemeFunc map[string]SchemeFunc
// GroupVersionFactoryArgs contains all the per-version parts of a GroupMetaFactory.
type GroupVersionFactoryArgs struct {
GroupName string
VersionName string
AddToScheme SchemeFunc
}
// GroupMetaFactoryArgs contains the group-level args of a GroupMetaFactory.
type GroupMetaFactoryArgs struct {
// GroupName is the name of the API-Group
//
// example: 'servicecatalog.k8s.io'
GroupName string
VersionPreferenceOrder []string
// RootScopedKinds are resources that are not namespaced.
RootScopedKinds sets.String // nil is allowed
IgnoredKinds sets.String // nil is allowed
// May be nil if there are no internal objects.
AddInternalObjectsToScheme SchemeFunc
}
// NewGroupMetaFactory builds the args for you. This is for when you're
// constructing a factory all at once and not using the registry.
func NewGroupMetaFactory(groupArgs *GroupMetaFactoryArgs, versions VersionToSchemeFunc) *GroupMetaFactory {
gmf := &GroupMetaFactory{
GroupArgs: groupArgs,
VersionArgs: map[string]*GroupVersionFactoryArgs{},
}
for v, f := range versions {
gmf.VersionArgs[v] = &GroupVersionFactoryArgs{
GroupName: groupArgs.GroupName,
VersionName: v,
AddToScheme: f,
}
}
return gmf
}
// Announce adds this Group factory to the global factory registry. It should
// only be called if you constructed the GroupMetaFactory yourself via
// NewGroupMetaFactory.
// Note that this will panic on an error, since it's expected that you'll be
// calling this at initialization time and any error is a result of a
// programmer importing the wrong set of packages. If this assumption doesn't
// work for you, just call DefaultGroupFactoryRegistry.AnnouncePreconstructedFactory
// yourself.
func (gmf *GroupMetaFactory) Announce(groupFactoryRegistry APIGroupFactoryRegistry) *GroupMetaFactory {
if err := groupFactoryRegistry.AnnouncePreconstructedFactory(gmf); err != nil {
panic(err)
}
return gmf
}
// GroupMetaFactory has the logic for actually assembling and registering a group.
//
// There are two ways of obtaining one of these.
// 1. You can announce your group and versions separately, and then let the
// GroupFactoryRegistry assemble this object for you. (This allows group and
// versions to be imported separately, without referencing each other, to
// keep import trees small.)
// 2. You can call NewGroupMetaFactory(), which is mostly a drop-in replacement
// for the old, bad way of doing things. You can then call .Announce() to
// announce your constructed factory to any code that would like to do
// things the new, better way.
//
// Note that GroupMetaFactory actually does construct GroupMeta objects, but
// currently it does so in a way that's very entangled with an
// APIRegistrationManager. It's a TODO item to cleanly separate that interface.
type GroupMetaFactory struct {
GroupArgs *GroupMetaFactoryArgs
// map of version name to version factory
VersionArgs map[string]*GroupVersionFactoryArgs
// assembled by Register()
prioritizedVersionList []schema.GroupVersion
}
// Register constructs the finalized prioritized version list and sanity checks
// the announced group & versions. Then it calls register.
func (gmf *GroupMetaFactory) Register(m *registered.APIRegistrationManager) error {
if gmf.GroupArgs == nil {
return fmt.Errorf("partially announced groups are not allowed, only got versions: %#v", gmf.VersionArgs)
}
if len(gmf.VersionArgs) == 0 {
return fmt.Errorf("group %v announced but no versions announced", gmf.GroupArgs.GroupName)
}
pvSet := sets.NewString(gmf.GroupArgs.VersionPreferenceOrder...)
if pvSet.Len() != len(gmf.GroupArgs.VersionPreferenceOrder) {
return fmt.Errorf("preference order for group %v has duplicates: %v", gmf.GroupArgs.GroupName, gmf.GroupArgs.VersionPreferenceOrder)
}
prioritizedVersions := []schema.GroupVersion{}
for _, v := range gmf.GroupArgs.VersionPreferenceOrder {
prioritizedVersions = append(
prioritizedVersions,
schema.GroupVersion{
Group: gmf.GroupArgs.GroupName,
Version: v,
},
)
}
// Go through versions that weren't explicitly prioritized.
unprioritizedVersions := []schema.GroupVersion{}
for _, v := range gmf.VersionArgs {
if v.GroupName != gmf.GroupArgs.GroupName {
return fmt.Errorf("found %v/%v in group %v?", v.GroupName, v.VersionName, gmf.GroupArgs.GroupName)
}
if pvSet.Has(v.VersionName) {
pvSet.Delete(v.VersionName)
continue
}
unprioritizedVersions = append(unprioritizedVersions, schema.GroupVersion{Group: v.GroupName, Version: v.VersionName})
}
if len(unprioritizedVersions) > 1 {
glog.Warningf("group %v has multiple unprioritized versions: %#v. They will have an arbitrary preference order!", gmf.GroupArgs.GroupName, unprioritizedVersions)
}
if pvSet.Len() != 0 {
return fmt.Errorf("group %v has versions in the priority list that were never announced: %s", gmf.GroupArgs.GroupName, pvSet)
}
prioritizedVersions = append(prioritizedVersions, unprioritizedVersions...)
m.RegisterVersions(prioritizedVersions)
gmf.prioritizedVersionList = prioritizedVersions
return nil
}
func (gmf *GroupMetaFactory) newRESTMapper(scheme *runtime.Scheme, externalVersions []schema.GroupVersion, groupMeta *apimachinery.GroupMeta) meta.RESTMapper {
// the list of kinds that are scoped at the root of the api hierarchy
// if a kind is not enumerated here, it is assumed to have a namespace scope
rootScoped := sets.NewString()
if gmf.GroupArgs.RootScopedKinds != nil {
rootScoped = gmf.GroupArgs.RootScopedKinds
}
ignoredKinds := sets.NewString()
if gmf.GroupArgs.IgnoredKinds != nil {
ignoredKinds = gmf.GroupArgs.IgnoredKinds
}
mapper := meta.NewDefaultRESTMapper(externalVersions, groupMeta.InterfacesFor)
for _, gv := range externalVersions {
for kind := range scheme.KnownTypes(gv) {
if ignoredKinds.Has(kind) {
continue
}
scope := meta.RESTScopeNamespace
if rootScoped.Has(kind) {
scope = meta.RESTScopeRoot
}
mapper.Add(gv.WithKind(kind), scope)
}
}
return mapper
}
// Enable enables group versions that are allowed, adds methods to the scheme, etc.
func (gmf *GroupMetaFactory) Enable(m *registered.APIRegistrationManager, scheme *runtime.Scheme) error {
externalVersions := []schema.GroupVersion{}
for _, v := range gmf.prioritizedVersionList {
if !m.IsAllowedVersion(v) {
continue
}
externalVersions = append(externalVersions, v)
if err := m.EnableVersions(v); err != nil {
return err
}
gmf.VersionArgs[v.Version].AddToScheme(scheme)
}
if len(externalVersions) == 0 {
glog.V(4).Infof("No version is registered for group %v", gmf.GroupArgs.GroupName)
return nil
}
if gmf.GroupArgs.AddInternalObjectsToScheme != nil {
gmf.GroupArgs.AddInternalObjectsToScheme(scheme)
}
preferredExternalVersion := externalVersions[0]
accessor := meta.NewAccessor()
groupMeta := &apimachinery.GroupMeta{
GroupVersion: preferredExternalVersion,
GroupVersions: externalVersions,
SelfLinker: runtime.SelfLinker(accessor),
}
for _, v := range externalVersions {
gvf := gmf.VersionArgs[v.Version]
if err := groupMeta.AddVersionInterfaces(
schema.GroupVersion{Group: gvf.GroupName, Version: gvf.VersionName},
&meta.VersionInterfaces{
ObjectConvertor: scheme,
MetadataAccessor: accessor,
},
); err != nil {
return err
}
}
groupMeta.InterfacesFor = groupMeta.DefaultInterfacesFor
groupMeta.RESTMapper = gmf.newRESTMapper(scheme, externalVersions, groupMeta)
if err := m.RegisterGroup(*groupMeta); err != nil {
return err
}
return nil
}
// RegisterAndEnable is provided only to allow this code to get added in multiple steps.
// It's really bad that this is called in init() methods, but supporting this
// temporarily lets us do the change incrementally.
func (gmf *GroupMetaFactory) RegisterAndEnable(registry *registered.APIRegistrationManager, scheme *runtime.Scheme) error {
if err := gmf.Register(registry); err != nil {
return err
}
if err := gmf.Enable(registry, scheme); err != nil {
return err
}
return nil
}
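A hedged sketch of the second path described above (NewGroupMetaFactory followed by RegisterAndEnable). The group name, root-scoped kind, and addToScheme function are hypothetical.
// Assumed imports:
//   "k8s.io/apimachinery/pkg/apimachinery/announced"
//   "k8s.io/apimachinery/pkg/apimachinery/registered"
//   "k8s.io/apimachinery/pkg/runtime"
//   "k8s.io/apimachinery/pkg/util/sets"
func exampleGroupMetaFactory(addToScheme announced.SchemeFunc, m *registered.APIRegistrationManager, scheme *runtime.Scheme) error {
gmf := announced.NewGroupMetaFactory(
&announced.GroupMetaFactoryArgs{
GroupName:              "example.k8s.io",
VersionPreferenceOrder: []string{"v1"},
RootScopedKinds:        sets.NewString("ClusterWidget"),
},
announced.VersionToSchemeFunc{"v1": addToScheme},
)
return gmf.RegisterAndEnable(m, scheme)
}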

View File

@ -1,20 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package apimachinery contains the generic API machinery code that
// is common to both server and clients.
// This package should never import specific API objects.
package apimachinery // import "k8s.io/apimachinery/pkg/apimachinery"

View File

@ -1,336 +0,0 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package to keep track of API Versions that can be registered and are enabled in api.Scheme.
package registered
import (
"fmt"
"sort"
"strings"
"github.com/golang/glog"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/apimachinery"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/sets"
)
// APIRegistrationManager provides the concept of what API groups are enabled.
//
// TODO: currently, it also provides a "registered" concept. But it's wrong to
// have both concepts in the same object. Therefore the "announced" package is
// going to take over the registered concept. After all the install packages
// are switched to using the announce package instead of this package, then we
// can combine the registered/enabled concepts in this object. Simplifying this
// isn't easy right now because there are so many callers of this package.
type APIRegistrationManager struct {
// registeredGroupVersions stores all API group versions for which RegisterGroup is called.
registeredVersions map[schema.GroupVersion]struct{}
// enabledVersions represents all enabled API versions. It should be a
// subset of registeredVersions. Please call EnableVersions() to add
// enabled versions.
enabledVersions map[schema.GroupVersion]struct{}
// map of group meta for all groups.
groupMetaMap map[string]*apimachinery.GroupMeta
// envRequestedVersions represents the versions requested via the
// KUBE_API_VERSIONS environment variable. The install package of each group
// checks this list before adding their versions to the latest package and
// Scheme. This list is small and order matters, so represent as a slice
envRequestedVersions []schema.GroupVersion
}
// NewAPIRegistrationManager constructs a new manager. The argument ought to be
// the value of the KUBE_API_VERSIONS env var, or whatever value you wish to
// test with.
func NewAPIRegistrationManager(kubeAPIVersions string) (*APIRegistrationManager, error) {
m := &APIRegistrationManager{
registeredVersions: map[schema.GroupVersion]struct{}{},
enabledVersions: map[schema.GroupVersion]struct{}{},
groupMetaMap: map[string]*apimachinery.GroupMeta{},
envRequestedVersions: []schema.GroupVersion{},
}
if len(kubeAPIVersions) != 0 {
for _, version := range strings.Split(kubeAPIVersions, ",") {
gv, err := schema.ParseGroupVersion(version)
if err != nil {
return nil, fmt.Errorf("invalid api version: %s in KUBE_API_VERSIONS: %s.",
version, kubeAPIVersions)
}
m.envRequestedVersions = append(m.envRequestedVersions, gv)
}
}
return m, nil
}
func NewOrDie(kubeAPIVersions string) *APIRegistrationManager {
m, err := NewAPIRegistrationManager(kubeAPIVersions)
if err != nil {
glog.Fatalf("Could not construct version manager: %v (KUBE_API_VERSIONS=%q)", err, kubeAPIVersions)
}
return m
}
// RegisterVersions adds the given group versions to the list of registered group versions.
func (m *APIRegistrationManager) RegisterVersions(availableVersions []schema.GroupVersion) {
for _, v := range availableVersions {
m.registeredVersions[v] = struct{}{}
}
}
// RegisterGroup adds the given group to the list of registered groups.
func (m *APIRegistrationManager) RegisterGroup(groupMeta apimachinery.GroupMeta) error {
groupName := groupMeta.GroupVersion.Group
if _, found := m.groupMetaMap[groupName]; found {
return fmt.Errorf("group %q is already registered in groupsMap: %v", groupName, m.groupMetaMap)
}
m.groupMetaMap[groupName] = &groupMeta
return nil
}
// EnableVersions adds the versions for the given group to the list of enabled versions.
// Note that the caller should call RegisterGroup before calling this method.
// The caller of this function is responsible to add the versions to scheme and RESTMapper.
func (m *APIRegistrationManager) EnableVersions(versions ...schema.GroupVersion) error {
var unregisteredVersions []schema.GroupVersion
for _, v := range versions {
if _, found := m.registeredVersions[v]; !found {
unregisteredVersions = append(unregisteredVersions, v)
}
m.enabledVersions[v] = struct{}{}
}
if len(unregisteredVersions) != 0 {
return fmt.Errorf("Please register versions before enabling them: %v", unregisteredVersions)
}
return nil
}
// IsAllowedVersion returns whether the version is allowed by the KUBE_API_VERSIONS
// environment variable. If the environment variable is empty, then it always
// returns true.
func (m *APIRegistrationManager) IsAllowedVersion(v schema.GroupVersion) bool {
if len(m.envRequestedVersions) == 0 {
return true
}
for _, envGV := range m.envRequestedVersions {
if v == envGV {
return true
}
}
return false
}
// IsEnabledVersion returns whether a version is enabled.
func (m *APIRegistrationManager) IsEnabledVersion(v schema.GroupVersion) bool {
_, found := m.enabledVersions[v]
return found
}
// EnabledVersions returns all enabled versions. Groups are randomly ordered, but versions within groups
// are in priority order from best to worst.
func (m *APIRegistrationManager) EnabledVersions() []schema.GroupVersion {
ret := []schema.GroupVersion{}
for _, groupMeta := range m.groupMetaMap {
for _, version := range groupMeta.GroupVersions {
if m.IsEnabledVersion(version) {
ret = append(ret, version)
}
}
}
return ret
}
// EnabledVersionsForGroup returns all enabled versions for a group in order of best to worst
func (m *APIRegistrationManager) EnabledVersionsForGroup(group string) []schema.GroupVersion {
groupMeta, ok := m.groupMetaMap[group]
if !ok {
return []schema.GroupVersion{}
}
ret := []schema.GroupVersion{}
for _, version := range groupMeta.GroupVersions {
if m.IsEnabledVersion(version) {
ret = append(ret, version)
}
}
return ret
}
// Group returns the metadata of a group if the group is registered, otherwise
// an error is returned.
func (m *APIRegistrationManager) Group(group string) (*apimachinery.GroupMeta, error) {
groupMeta, found := m.groupMetaMap[group]
if !found {
return nil, fmt.Errorf("group %v has not been registered", group)
}
groupMetaCopy := *groupMeta
return &groupMetaCopy, nil
}
// IsRegistered takes a string and determines if it's one of the registered groups
func (m *APIRegistrationManager) IsRegistered(group string) bool {
_, found := m.groupMetaMap[group]
return found
}
// IsRegisteredVersion returns whether a version is registered.
func (m *APIRegistrationManager) IsRegisteredVersion(v schema.GroupVersion) bool {
_, found := m.registeredVersions[v]
return found
}
// RegisteredGroupVersions returns all registered group versions.
func (m *APIRegistrationManager) RegisteredGroupVersions() []schema.GroupVersion {
ret := []schema.GroupVersion{}
for groupVersion := range m.registeredVersions {
ret = append(ret, groupVersion)
}
return ret
}
// InterfacesFor is a union meta.VersionInterfacesFunc func for all registered types
func (m *APIRegistrationManager) InterfacesFor(version schema.GroupVersion) (*meta.VersionInterfaces, error) {
groupMeta, err := m.Group(version.Group)
if err != nil {
return nil, err
}
return groupMeta.InterfacesFor(version)
}
// TODO: This is an expedient function, because we don't check if a Group is
// supported throughout the code base. We will abandon this function and
// check the error returned by the Group() function instead.
func (m *APIRegistrationManager) GroupOrDie(group string) *apimachinery.GroupMeta {
groupMeta, found := m.groupMetaMap[group]
if !found {
if group == "" {
panic("The legacy v1 API is not registered.")
} else {
panic(fmt.Sprintf("Group %s is not registered.", group))
}
}
groupMetaCopy := *groupMeta
return &groupMetaCopy
}
// RESTMapper returns a union RESTMapper of all known types with priorities chosen in the following order:
// 1. if KUBE_API_VERSIONS is specified, then KUBE_API_VERSIONS in order, OR
// 1. legacy kube group preferred version, extensions preferred version, metrics preferred version, legacy
// kube any version, extensions any version, metrics any version, all other groups alphabetical preferred version,
// all other groups alphabetical.
func (m *APIRegistrationManager) RESTMapper(versionPatterns ...schema.GroupVersion) meta.RESTMapper {
unionMapper := meta.MultiRESTMapper{}
unionedGroups := sets.NewString()
for enabledVersion := range m.enabledVersions {
if !unionedGroups.Has(enabledVersion.Group) {
unionedGroups.Insert(enabledVersion.Group)
groupMeta := m.groupMetaMap[enabledVersion.Group]
unionMapper = append(unionMapper, groupMeta.RESTMapper)
}
}
if len(versionPatterns) != 0 {
resourcePriority := []schema.GroupVersionResource{}
kindPriority := []schema.GroupVersionKind{}
for _, versionPriority := range versionPatterns {
resourcePriority = append(resourcePriority, versionPriority.WithResource(meta.AnyResource))
kindPriority = append(kindPriority, versionPriority.WithKind(meta.AnyKind))
}
return meta.PriorityRESTMapper{Delegate: unionMapper, ResourcePriority: resourcePriority, KindPriority: kindPriority}
}
if len(m.envRequestedVersions) != 0 {
resourcePriority := []schema.GroupVersionResource{}
kindPriority := []schema.GroupVersionKind{}
for _, versionPriority := range m.envRequestedVersions {
resourcePriority = append(resourcePriority, versionPriority.WithResource(meta.AnyResource))
kindPriority = append(kindPriority, versionPriority.WithKind(meta.AnyKind))
}
return meta.PriorityRESTMapper{Delegate: unionMapper, ResourcePriority: resourcePriority, KindPriority: kindPriority}
}
prioritizedGroups := []string{"", "extensions", "metrics"}
resourcePriority, kindPriority := m.prioritiesForGroups(prioritizedGroups...)
prioritizedGroupsSet := sets.NewString(prioritizedGroups...)
remainingGroups := sets.String{}
for enabledVersion := range m.enabledVersions {
if !prioritizedGroupsSet.Has(enabledVersion.Group) {
remainingGroups.Insert(enabledVersion.Group)
}
}
remainingResourcePriority, remainingKindPriority := m.prioritiesForGroups(remainingGroups.List()...)
resourcePriority = append(resourcePriority, remainingResourcePriority...)
kindPriority = append(kindPriority, remainingKindPriority...)
return meta.PriorityRESTMapper{Delegate: unionMapper, ResourcePriority: resourcePriority, KindPriority: kindPriority}
}
// prioritiesForGroups returns the resource and kind priorities for a PriorityRESTMapper, preferring the preferred version of each group first,
// then any non-preferred version of the group second.
func (m *APIRegistrationManager) prioritiesForGroups(groups ...string) ([]schema.GroupVersionResource, []schema.GroupVersionKind) {
resourcePriority := []schema.GroupVersionResource{}
kindPriority := []schema.GroupVersionKind{}
for _, group := range groups {
availableVersions := m.EnabledVersionsForGroup(group)
if len(availableVersions) > 0 {
resourcePriority = append(resourcePriority, availableVersions[0].WithResource(meta.AnyResource))
kindPriority = append(kindPriority, availableVersions[0].WithKind(meta.AnyKind))
}
}
for _, group := range groups {
resourcePriority = append(resourcePriority, schema.GroupVersionResource{Group: group, Version: meta.AnyVersion, Resource: meta.AnyResource})
kindPriority = append(kindPriority, schema.GroupVersionKind{Group: group, Version: meta.AnyVersion, Kind: meta.AnyKind})
}
return resourcePriority, kindPriority
}
// AllPreferredGroupVersions returns the preferred versions of all registered
// groups in the form of "group1/version1,group2/version2,..."
func (m *APIRegistrationManager) AllPreferredGroupVersions() string {
if len(m.groupMetaMap) == 0 {
return ""
}
var defaults []string
for _, groupMeta := range m.groupMetaMap {
defaults = append(defaults, groupMeta.GroupVersion.String())
}
sort.Strings(defaults)
return strings.Join(defaults, ",")
}
// ValidateEnvRequestedVersions returns a list of versions that are requested in
// the KUBE_API_VERSIONS environment variable, but not enabled.
func (m *APIRegistrationManager) ValidateEnvRequestedVersions() []schema.GroupVersion {
var missingVersions []schema.GroupVersion
for _, v := range m.envRequestedVersions {
if _, found := m.enabledVersions[v]; !found {
missingVersions = append(missingVersions, v)
}
}
return missingVersions
}
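A minimal sketch of the registration manager lifecycle, assuming a hypothetical example.k8s.io/v1 group version and an empty KUBE_API_VERSIONS.
// Assumed imports:
//   "k8s.io/apimachinery/pkg/apimachinery/registered"
//   "k8s.io/apimachinery/pkg/runtime/schema"
func exampleRegistrationManager() error {
m, err := registered.NewAPIRegistrationManager("") // empty KUBE_API_VERSIONS allows every version
if err != nil {
return err
}
gv := schema.GroupVersion{Group: "example.k8s.io", Version: "v1"}
m.RegisterVersions([]schema.GroupVersion{gv})
if m.IsAllowedVersion(gv) {
if err := m.EnableVersions(gv); err != nil { // errors if gv was never registered
return err
}
}
_ = m.IsEnabledVersion(gv) // true at this point
// A full setup would also call m.RegisterGroup with the group's GroupMeta.
return nil
}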

View File

@ -1,87 +0,0 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apimachinery
import (
"fmt"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
// GroupMeta stores the metadata of a group.
type GroupMeta struct {
// GroupVersion represents the preferred version of the group.
GroupVersion schema.GroupVersion
// GroupVersions is Group + all versions in that group.
GroupVersions []schema.GroupVersion
// SelfLinker can set or get the SelfLink field of all API types.
// TODO: when versioning changes, make this part of each API definition.
// TODO(lavalamp): Combine SelfLinker & ResourceVersioner interfaces, force all uses
// to go through the InterfacesFor method below.
SelfLinker runtime.SelfLinker
// RESTMapper provides the default mapping between REST paths and the objects declared in api.Scheme and all known
// versions.
RESTMapper meta.RESTMapper
// InterfacesFor returns the default Codec and ResourceVersioner for a given version
// string, or an error if the version is not known.
// TODO: make this stop being a func pointer and always use the default
// function provided below once every place that populates this field has been changed.
InterfacesFor func(version schema.GroupVersion) (*meta.VersionInterfaces, error)
// InterfacesByVersion stores the per-version interfaces.
InterfacesByVersion map[schema.GroupVersion]*meta.VersionInterfaces
}
// DefaultInterfacesFor returns the default Codec and ResourceVersioner for a given version
// string, or an error if the version is not known.
// TODO: Remove the "Default" prefix.
func (gm *GroupMeta) DefaultInterfacesFor(version schema.GroupVersion) (*meta.VersionInterfaces, error) {
if v, ok := gm.InterfacesByVersion[version]; ok {
return v, nil
}
return nil, fmt.Errorf("unsupported storage version: %s (valid: %v)", version, gm.GroupVersions)
}
// AddVersionInterfaces adds the given version to the group. Only call during
// init; after that, GroupMeta objects should be immutable. Not thread safe.
// (If you use this, be sure to set .InterfacesFor = .DefaultInterfacesFor)
// TODO: remove the "Interfaces" suffix and make this also maintain the
// .GroupVersions member.
func (gm *GroupMeta) AddVersionInterfaces(version schema.GroupVersion, interfaces *meta.VersionInterfaces) error {
if e, a := gm.GroupVersion.Group, version.Group; a != e {
return fmt.Errorf("got a version in group %v, but am in group %v", a, e)
}
if gm.InterfacesByVersion == nil {
gm.InterfacesByVersion = make(map[schema.GroupVersion]*meta.VersionInterfaces)
}
gm.InterfacesByVersion[version] = interfaces
// TODO: refactor to make the below error not possible, this function
// should *set* GroupVersions rather than depend on it.
for _, v := range gm.GroupVersions {
if v == version {
return nil
}
}
return fmt.Errorf("added a version interface without the corresponding version %v being in the list %#v", version, gm.GroupVersions)
}
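A hedged sketch of assembling a GroupMeta by hand, mirroring what GroupMetaFactory.Enable does earlier in this diff; the group version is made up.
// Assumed imports:
//   "k8s.io/apimachinery/pkg/api/meta"
//   "k8s.io/apimachinery/pkg/apimachinery"
//   "k8s.io/apimachinery/pkg/runtime"
//   "k8s.io/apimachinery/pkg/runtime/schema"
func exampleGroupMeta(scheme *runtime.Scheme) (*apimachinery.GroupMeta, error) {
gv := schema.GroupVersion{Group: "example.k8s.io", Version: "v1"}
accessor := meta.NewAccessor()
gm := &apimachinery.GroupMeta{
GroupVersion:  gv,
GroupVersions: []schema.GroupVersion{gv},
SelfLinker:    runtime.SelfLinker(accessor),
}
if err := gm.AddVersionInterfaces(gv, &meta.VersionInterfaces{
ObjectConvertor:  scheme,
MetadataAccessor: accessor,
}); err != nil {
return nil, err
}
gm.InterfacesFor = gm.DefaultInterfacesFor // as the comment on AddVersionInterfaces advises
return gm, nil
}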

View File

@ -252,7 +252,6 @@ func Convert_map_to_unversioned_LabelSelector(in *map[string]string, out *LabelS
if in == nil {
return nil
}
out = new(LabelSelector)
for labelKey, labelValue := range *in {
AddLabelToSelector(out, labelKey, labelValue)
}

View File

@ -80,7 +80,13 @@ func (t *Time) Before(u *Time) bool {
// Equal reports whether the time instant t is equal to u.
func (t *Time) Equal(u *Time) bool {
return t.Time.Equal(u.Time)
if t == nil && u == nil {
return true
}
if t != nil && u != nil {
return t.Time.Equal(u.Time)
}
return false
}
// Unix returns the local time corresponding to the given Unix time
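The hunk above makes Equal nil-safe. A tiny illustration (not part of the diff) of the behavioral difference, assuming metav1 refers to k8s.io/apimachinery/pkg/apis/meta/v1:
func exampleTimeEqual() {
var a, b *metav1.Time
_ = a.Equal(b) // true with the nil checks; the old single-line body would dereference a nil receiver
now := metav1.Now()
_ = now.Equal(nil) // false, because only one side is nil
}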

View File

@ -1,842 +0,0 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package unstructured
import (
"bytes"
gojson "encoding/json"
"errors"
"fmt"
"io"
"strings"
"github.com/golang/glog"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/conversion/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/json"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)
// Unstructured allows objects that do not have Golang structs registered to be manipulated
// generically. This can be used to deal with the API objects from a plug-in. Unstructured
// objects still have functioning TypeMeta features-- kind, version, etc.
//
// WARNING: This object has accessors for the v1 standard metadata. You *MUST NOT* use this
// type if you are dealing with objects that are not in the server meta v1 schema.
//
// TODO: make the serialization part of this type distinct from the field accessors.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +k8s:deepcopy-gen=true
type Unstructured struct {
// Object is a JSON compatible map with string, float, int, bool, []interface{}, or
// map[string]interface{}
// children.
Object map[string]interface{}
}
var _ metav1.Object = &Unstructured{}
var _ runtime.Unstructured = &Unstructured{}
var _ runtime.Unstructured = &UnstructuredList{}
func (obj *Unstructured) GetObjectKind() schema.ObjectKind { return obj }
func (obj *UnstructuredList) GetObjectKind() schema.ObjectKind { return obj }
func (obj *Unstructured) IsUnstructuredObject() {}
func (obj *UnstructuredList) IsUnstructuredObject() {}
func (obj *Unstructured) IsList() bool {
if obj.Object != nil {
_, ok := obj.Object["items"]
return ok
}
return false
}
func (obj *UnstructuredList) IsList() bool { return true }
func (obj *Unstructured) EachListItem(fn func(runtime.Object) error) error {
if obj.Object == nil {
return fmt.Errorf("content is not a list")
}
field, ok := obj.Object["items"]
if !ok {
return fmt.Errorf("content is not a list")
}
items, ok := field.([]interface{})
if !ok {
return nil
}
for _, item := range items {
child, ok := item.(map[string]interface{})
if !ok {
return fmt.Errorf("items member is not an object")
}
if err := fn(&Unstructured{Object: child}); err != nil {
return err
}
}
return nil
}
func (obj *UnstructuredList) EachListItem(fn func(runtime.Object) error) error {
for i := range obj.Items {
if err := fn(&obj.Items[i]); err != nil {
return err
}
}
return nil
}
func (obj *Unstructured) UnstructuredContent() map[string]interface{} {
if obj.Object == nil {
obj.Object = make(map[string]interface{})
}
return obj.Object
}
// UnstructuredContent returns a map containing an overlay of the Items field onto
// the Object field. Items always overwrites the overlay. Changing "items" in the
// returned object will affect items in the underlying Items field, but changing
// the "items" slice itself will have no effect.
// TODO: expose SetUnstructuredContent on runtime.Unstructured that allows
// items to be changed.
func (obj *UnstructuredList) UnstructuredContent() map[string]interface{} {
out := obj.Object
if out == nil {
out = make(map[string]interface{})
}
items := make([]interface{}, len(obj.Items))
for i, item := range obj.Items {
items[i] = item.Object
}
out["items"] = items
return out
}
// MarshalJSON ensures that the unstructured object produces proper
// JSON when passed to Go's standard JSON library.
func (u *Unstructured) MarshalJSON() ([]byte, error) {
var buf bytes.Buffer
err := UnstructuredJSONScheme.Encode(u, &buf)
return buf.Bytes(), err
}
// UnmarshalJSON ensures that the unstructured object properly decodes
// JSON when passed to Go's standard JSON library.
func (u *Unstructured) UnmarshalJSON(b []byte) error {
_, _, err := UnstructuredJSONScheme.Decode(b, nil, u)
return err
}
func (in *Unstructured) DeepCopy() *Unstructured {
if in == nil {
return nil
}
out := new(Unstructured)
*out = *in
out.Object = unstructured.DeepCopyJSON(in.Object)
return out
}
func (in *UnstructuredList) DeepCopy() *UnstructuredList {
if in == nil {
return nil
}
out := new(UnstructuredList)
*out = *in
out.Object = unstructured.DeepCopyJSON(in.Object)
out.Items = make([]Unstructured, len(in.Items))
for i := range in.Items {
in.Items[i].DeepCopyInto(&out.Items[i])
}
return out
}
func getNestedField(obj map[string]interface{}, fields ...string) interface{} {
var val interface{} = obj
for _, field := range fields {
if _, ok := val.(map[string]interface{}); !ok {
return nil
}
val = val.(map[string]interface{})[field]
}
return val
}
func getNestedString(obj map[string]interface{}, fields ...string) string {
if str, ok := getNestedField(obj, fields...).(string); ok {
return str
}
return ""
}
func getNestedInt64(obj map[string]interface{}, fields ...string) int64 {
if str, ok := getNestedField(obj, fields...).(int64); ok {
return str
}
return 0
}
func getNestedInt64Pointer(obj map[string]interface{}, fields ...string) *int64 {
nested := getNestedField(obj, fields...)
switch n := nested.(type) {
case int64:
return &n
case *int64:
return n
default:
return nil
}
}
func getNestedSlice(obj map[string]interface{}, fields ...string) []string {
if m, ok := getNestedField(obj, fields...).([]interface{}); ok {
strSlice := make([]string, 0, len(m))
for _, v := range m {
if str, ok := v.(string); ok {
strSlice = append(strSlice, str)
}
}
return strSlice
}
return nil
}
func getNestedMap(obj map[string]interface{}, fields ...string) map[string]string {
if m, ok := getNestedField(obj, fields...).(map[string]interface{}); ok {
strMap := make(map[string]string, len(m))
for k, v := range m {
if str, ok := v.(string); ok {
strMap[k] = str
}
}
return strMap
}
return nil
}
func setNestedField(obj map[string]interface{}, value interface{}, fields ...string) {
m := obj
if len(fields) > 1 {
for _, field := range fields[0 : len(fields)-1] {
if _, ok := m[field].(map[string]interface{}); !ok {
m[field] = make(map[string]interface{})
}
m = m[field].(map[string]interface{})
}
}
m[fields[len(fields)-1]] = value
}
func setNestedSlice(obj map[string]interface{}, value []string, fields ...string) {
m := make([]interface{}, 0, len(value))
for _, v := range value {
m = append(m, v)
}
setNestedField(obj, m, fields...)
}
func setNestedMap(obj map[string]interface{}, value map[string]string, fields ...string) {
m := make(map[string]interface{}, len(value))
for k, v := range value {
m[k] = v
}
setNestedField(obj, m, fields...)
}
func (u *Unstructured) setNestedField(value interface{}, fields ...string) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
setNestedField(u.Object, value, fields...)
}
func (u *Unstructured) setNestedSlice(value []string, fields ...string) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
setNestedSlice(u.Object, value, fields...)
}
func (u *Unstructured) setNestedMap(value map[string]string, fields ...string) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
setNestedMap(u.Object, value, fields...)
}
func extractOwnerReference(src interface{}) metav1.OwnerReference {
v := src.(map[string]interface{})
// though this field is a *bool, when decoded from JSON it's
// unmarshalled as a bool.
var controllerPtr *bool
controller, ok := (getNestedField(v, "controller")).(bool)
if !ok {
controllerPtr = nil
} else {
controllerCopy := controller
controllerPtr = &controllerCopy
}
var blockOwnerDeletionPtr *bool
blockOwnerDeletion, ok := (getNestedField(v, "blockOwnerDeletion")).(bool)
if !ok {
blockOwnerDeletionPtr = nil
} else {
blockOwnerDeletionCopy := blockOwnerDeletion
blockOwnerDeletionPtr = &blockOwnerDeletionCopy
}
return metav1.OwnerReference{
Kind: getNestedString(v, "kind"),
Name: getNestedString(v, "name"),
APIVersion: getNestedString(v, "apiVersion"),
UID: (types.UID)(getNestedString(v, "uid")),
Controller: controllerPtr,
BlockOwnerDeletion: blockOwnerDeletionPtr,
}
}
func setOwnerReference(src metav1.OwnerReference) map[string]interface{} {
ret := make(map[string]interface{})
setNestedField(ret, src.Kind, "kind")
setNestedField(ret, src.Name, "name")
setNestedField(ret, src.APIVersion, "apiVersion")
setNestedField(ret, string(src.UID), "uid")
// json.Unmarshal() extracts boolean json fields as bool, not as *bool, and hence extractOwnerReference()
// expects bool or a missing field, not *bool. So if the pointer is nil, the fields are omitted from the ret object.
// If the pointer is non-nil, they are set to the referenced value.
if src.Controller != nil {
setNestedField(ret, *src.Controller, "controller")
}
if src.BlockOwnerDeletion != nil {
setNestedField(ret, *src.BlockOwnerDeletion, "blockOwnerDeletion")
}
return ret
}
func getOwnerReferences(object map[string]interface{}) ([]map[string]interface{}, error) {
field := getNestedField(object, "metadata", "ownerReferences")
if field == nil {
return nil, fmt.Errorf("cannot find field metadata.ownerReferences in %v", object)
}
ownerReferences, ok := field.([]map[string]interface{})
if ok {
return ownerReferences, nil
}
// TODO: This is hacky...
interfaces, ok := field.([]interface{})
if !ok {
return nil, fmt.Errorf("expect metadata.ownerReferences to be a slice in %#v", object)
}
ownerReferences = make([]map[string]interface{}, 0, len(interfaces))
for i := 0; i < len(interfaces); i++ {
r, ok := interfaces[i].(map[string]interface{})
if !ok {
return nil, fmt.Errorf("expect element metadata.ownerReferences to be a map[string]interface{} in %#v", object)
}
ownerReferences = append(ownerReferences, r)
}
return ownerReferences, nil
}
func (u *Unstructured) GetOwnerReferences() []metav1.OwnerReference {
original, err := getOwnerReferences(u.Object)
if err != nil {
glog.V(6).Info(err)
return nil
}
ret := make([]metav1.OwnerReference, 0, len(original))
for i := 0; i < len(original); i++ {
ret = append(ret, extractOwnerReference(original[i]))
}
return ret
}
func (u *Unstructured) SetOwnerReferences(references []metav1.OwnerReference) {
var newReferences = make([]map[string]interface{}, 0, len(references))
for i := 0; i < len(references); i++ {
newReferences = append(newReferences, setOwnerReference(references[i]))
}
u.setNestedField(newReferences, "metadata", "ownerReferences")
}
func (u *Unstructured) GetAPIVersion() string {
return getNestedString(u.Object, "apiVersion")
}
func (u *Unstructured) SetAPIVersion(version string) {
u.setNestedField(version, "apiVersion")
}
func (u *Unstructured) GetKind() string {
return getNestedString(u.Object, "kind")
}
func (u *Unstructured) SetKind(kind string) {
u.setNestedField(kind, "kind")
}
func (u *Unstructured) GetNamespace() string {
return getNestedString(u.Object, "metadata", "namespace")
}
func (u *Unstructured) SetNamespace(namespace string) {
u.setNestedField(namespace, "metadata", "namespace")
}
func (u *Unstructured) GetName() string {
return getNestedString(u.Object, "metadata", "name")
}
func (u *Unstructured) SetName(name string) {
u.setNestedField(name, "metadata", "name")
}
func (u *Unstructured) GetGenerateName() string {
return getNestedString(u.Object, "metadata", "generateName")
}
func (u *Unstructured) SetGenerateName(name string) {
u.setNestedField(name, "metadata", "generateName")
}
func (u *Unstructured) GetUID() types.UID {
return types.UID(getNestedString(u.Object, "metadata", "uid"))
}
func (u *Unstructured) SetUID(uid types.UID) {
u.setNestedField(string(uid), "metadata", "uid")
}
func (u *Unstructured) GetResourceVersion() string {
return getNestedString(u.Object, "metadata", "resourceVersion")
}
func (u *Unstructured) SetResourceVersion(version string) {
u.setNestedField(version, "metadata", "resourceVersion")
}
func (u *Unstructured) GetGeneration() int64 {
return getNestedInt64(u.Object, "metadata", "generation")
}
func (u *Unstructured) SetGeneration(generation int64) {
u.setNestedField(generation, "metadata", "generation")
}
func (u *Unstructured) GetSelfLink() string {
return getNestedString(u.Object, "metadata", "selfLink")
}
func (u *Unstructured) SetSelfLink(selfLink string) {
u.setNestedField(selfLink, "metadata", "selfLink")
}
func (u *Unstructured) GetContinue() string {
return getNestedString(u.Object, "metadata", "continue")
}
func (u *Unstructured) SetContinue(c string) {
u.setNestedField(c, "metadata", "continue")
}
func (u *Unstructured) GetCreationTimestamp() metav1.Time {
var timestamp metav1.Time
timestamp.UnmarshalQueryParameter(getNestedString(u.Object, "metadata", "creationTimestamp"))
return timestamp
}
func (u *Unstructured) SetCreationTimestamp(timestamp metav1.Time) {
ts, _ := timestamp.MarshalQueryParameter()
u.setNestedField(ts, "metadata", "creationTimestamp")
}
func (u *Unstructured) GetDeletionTimestamp() *metav1.Time {
var timestamp metav1.Time
timestamp.UnmarshalQueryParameter(getNestedString(u.Object, "metadata", "deletionTimestamp"))
if timestamp.IsZero() {
return nil
}
return &timestamp
}
func (u *Unstructured) SetDeletionTimestamp(timestamp *metav1.Time) {
if timestamp == nil {
u.setNestedField(nil, "metadata", "deletionTimestamp")
return
}
ts, _ := timestamp.MarshalQueryParameter()
u.setNestedField(ts, "metadata", "deletionTimestamp")
}
func (u *Unstructured) GetDeletionGracePeriodSeconds() *int64 {
return getNestedInt64Pointer(u.Object, "metadata", "deletionGracePeriodSeconds")
}
func (u *Unstructured) SetDeletionGracePeriodSeconds(deletionGracePeriodSeconds *int64) {
u.setNestedField(deletionGracePeriodSeconds, "metadata", "deletionGracePeriodSeconds")
}
func (u *Unstructured) GetLabels() map[string]string {
return getNestedMap(u.Object, "metadata", "labels")
}
func (u *Unstructured) SetLabels(labels map[string]string) {
u.setNestedMap(labels, "metadata", "labels")
}
func (u *Unstructured) GetAnnotations() map[string]string {
return getNestedMap(u.Object, "metadata", "annotations")
}
func (u *Unstructured) SetAnnotations(annotations map[string]string) {
u.setNestedMap(annotations, "metadata", "annotations")
}
func (u *Unstructured) SetGroupVersionKind(gvk schema.GroupVersionKind) {
u.SetAPIVersion(gvk.GroupVersion().String())
u.SetKind(gvk.Kind)
}
func (u *Unstructured) GroupVersionKind() schema.GroupVersionKind {
gv, err := schema.ParseGroupVersion(u.GetAPIVersion())
if err != nil {
return schema.GroupVersionKind{}
}
gvk := gv.WithKind(u.GetKind())
return gvk
}
var converter = unstructured.NewConverter(false)
func (u *Unstructured) GetInitializers() *metav1.Initializers {
field := getNestedField(u.Object, "metadata", "initializers")
if field == nil {
return nil
}
obj, ok := field.(map[string]interface{})
if !ok {
return nil
}
out := &metav1.Initializers{}
if err := converter.FromUnstructured(obj, out); err != nil {
utilruntime.HandleError(fmt.Errorf("unable to retrieve initializers for object: %v", err))
}
return out
}
func (u *Unstructured) SetInitializers(initializers *metav1.Initializers) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
if initializers == nil {
setNestedField(u.Object, nil, "metadata", "initializers")
return
}
out, err := converter.ToUnstructured(initializers)
if err != nil {
utilruntime.HandleError(fmt.Errorf("unable to retrieve initializers for object: %v", err))
}
setNestedField(u.Object, out, "metadata", "initializers")
}
func (u *Unstructured) GetFinalizers() []string {
return getNestedSlice(u.Object, "metadata", "finalizers")
}
func (u *Unstructured) SetFinalizers(finalizers []string) {
u.setNestedSlice(finalizers, "metadata", "finalizers")
}
func (u *Unstructured) GetClusterName() string {
return getNestedString(u.Object, "metadata", "clusterName")
}
func (u *Unstructured) SetClusterName(clusterName string) {
u.setNestedField(clusterName, "metadata", "clusterName")
}
// UnstructuredList allows lists that do not have Golang structs
// registered to be manipulated generically. This can be used to deal
// with the API lists from a plug-in.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +k8s:deepcopy-gen=true
type UnstructuredList struct {
Object map[string]interface{}
// Items is a list of unstructured objects.
Items []Unstructured `json:"items"`
}
var _ metav1.ListInterface = &UnstructuredList{}
// MarshalJSON ensures that the unstructured list object produces proper
// JSON when passed to Go's standard JSON library.
func (u *UnstructuredList) MarshalJSON() ([]byte, error) {
var buf bytes.Buffer
err := UnstructuredJSONScheme.Encode(u, &buf)
return buf.Bytes(), err
}
// UnmarshalJSON ensures that the unstructured list object properly
// decodes JSON when passed to Go's standard JSON library.
func (u *UnstructuredList) UnmarshalJSON(b []byte) error {
_, _, err := UnstructuredJSONScheme.Decode(b, nil, u)
return err
}
func (u *UnstructuredList) setNestedField(value interface{}, fields ...string) {
if u.Object == nil {
u.Object = make(map[string]interface{})
}
setNestedField(u.Object, value, fields...)
}
func (u *UnstructuredList) GetAPIVersion() string {
return getNestedString(u.Object, "apiVersion")
}
func (u *UnstructuredList) SetAPIVersion(version string) {
u.setNestedField(version, "apiVersion")
}
func (u *UnstructuredList) GetKind() string {
return getNestedString(u.Object, "kind")
}
func (u *UnstructuredList) SetKind(kind string) {
u.setNestedField(kind, "kind")
}
func (u *UnstructuredList) GetResourceVersion() string {
return getNestedString(u.Object, "metadata", "resourceVersion")
}
func (u *UnstructuredList) SetResourceVersion(version string) {
u.setNestedField(version, "metadata", "resourceVersion")
}
func (u *UnstructuredList) GetSelfLink() string {
return getNestedString(u.Object, "metadata", "selfLink")
}
func (u *UnstructuredList) SetSelfLink(selfLink string) {
u.setNestedField(selfLink, "metadata", "selfLink")
}
func (u *UnstructuredList) GetContinue() string {
return getNestedString(u.Object, "metadata", "continue")
}
func (u *UnstructuredList) SetContinue(c string) {
u.setNestedField(c, "metadata", "continue")
}
func (u *UnstructuredList) SetGroupVersionKind(gvk schema.GroupVersionKind) {
u.SetAPIVersion(gvk.GroupVersion().String())
u.SetKind(gvk.Kind)
}
func (u *UnstructuredList) GroupVersionKind() schema.GroupVersionKind {
gv, err := schema.ParseGroupVersion(u.GetAPIVersion())
if err != nil {
return schema.GroupVersionKind{}
}
gvk := gv.WithKind(u.GetKind())
return gvk
}
// UnstructuredJSONScheme is capable of converting JSON data into the Unstructured
// type, which can be used for generic access to objects without a predefined scheme.
// TODO: move into serializer/json.
var UnstructuredJSONScheme runtime.Codec = unstructuredJSONScheme{}
type unstructuredJSONScheme struct{}
func (s unstructuredJSONScheme) Decode(data []byte, _ *schema.GroupVersionKind, obj runtime.Object) (runtime.Object, *schema.GroupVersionKind, error) {
var err error
if obj != nil {
err = s.decodeInto(data, obj)
} else {
obj, err = s.decode(data)
}
if err != nil {
return nil, nil, err
}
gvk := obj.GetObjectKind().GroupVersionKind()
if len(gvk.Kind) == 0 {
return nil, &gvk, runtime.NewMissingKindErr(string(data))
}
return obj, &gvk, nil
}
func (unstructuredJSONScheme) Encode(obj runtime.Object, w io.Writer) error {
switch t := obj.(type) {
case *Unstructured:
return json.NewEncoder(w).Encode(t.Object)
case *UnstructuredList:
items := make([]map[string]interface{}, 0, len(t.Items))
for _, i := range t.Items {
items = append(items, i.Object)
}
listObj := make(map[string]interface{}, len(t.Object)+1)
for k, v := range t.Object { // Make a shallow copy
listObj[k] = v
}
listObj["items"] = items
return json.NewEncoder(w).Encode(listObj)
case *runtime.Unknown:
// TODO: Unstructured needs to deal with ContentType.
_, err := w.Write(t.Raw)
return err
default:
return json.NewEncoder(w).Encode(t)
}
}
func (s unstructuredJSONScheme) decode(data []byte) (runtime.Object, error) {
type detector struct {
Items gojson.RawMessage
}
var det detector
if err := json.Unmarshal(data, &det); err != nil {
return nil, err
}
if det.Items != nil {
list := &UnstructuredList{}
err := s.decodeToList(data, list)
return list, err
}
// No Items field, so it wasn't a list.
unstruct := &Unstructured{}
err := s.decodeToUnstructured(data, unstruct)
return unstruct, err
}
func (s unstructuredJSONScheme) decodeInto(data []byte, obj runtime.Object) error {
switch x := obj.(type) {
case *Unstructured:
return s.decodeToUnstructured(data, x)
case *UnstructuredList:
return s.decodeToList(data, x)
case *runtime.VersionedObjects:
o, err := s.decode(data)
if err == nil {
x.Objects = []runtime.Object{o}
}
return err
default:
return json.Unmarshal(data, x)
}
}
func (unstructuredJSONScheme) decodeToUnstructured(data []byte, unstruct *Unstructured) error {
m := make(map[string]interface{})
if err := json.Unmarshal(data, &m); err != nil {
return err
}
unstruct.Object = m
return nil
}
func (s unstructuredJSONScheme) decodeToList(data []byte, list *UnstructuredList) error {
type decodeList struct {
Items []gojson.RawMessage
}
var dList decodeList
if err := json.Unmarshal(data, &dList); err != nil {
return err
}
if err := json.Unmarshal(data, &list.Object); err != nil {
return err
}
// For typed lists, e.g., a PodList, the API server doesn't set each item's
// APIVersion and Kind, so we set them here.
listAPIVersion := list.GetAPIVersion()
listKind := list.GetKind()
itemKind := strings.TrimSuffix(listKind, "List")
delete(list.Object, "items")
list.Items = nil
for _, i := range dList.Items {
unstruct := &Unstructured{}
if err := s.decodeToUnstructured([]byte(i), unstruct); err != nil {
return err
}
// This is hacky. Set the item's Kind and APIVersion to those inferred
// from the List.
if len(unstruct.GetKind()) == 0 && len(unstruct.GetAPIVersion()) == 0 {
unstruct.SetKind(itemKind)
unstruct.SetAPIVersion(listAPIVersion)
}
list.Items = append(list.Items, *unstruct)
}
return nil
}
// UnstructuredObjectConverter is an ObjectConverter for use with
// Unstructured objects. Since it has no schema or type information,
// it will only succeed for no-op conversions. This is provided as a
// sane implementation for APIs that require an object converter.
type UnstructuredObjectConverter struct{}
func (UnstructuredObjectConverter) Convert(in, out, context interface{}) error {
unstructIn, ok := in.(*Unstructured)
if !ok {
return fmt.Errorf("input type %T in not valid for unstructured conversion", in)
}
unstructOut, ok := out.(*Unstructured)
if !ok {
return fmt.Errorf("output type %T in not valid for unstructured conversion", out)
}
// maybe deep copy the map? It is documented in the
// ObjectConverter interface that this function is not
// guaranteed to not mutate the input. Or maybe set the input
// object to nil.
unstructOut.Object = unstructIn.Object
return nil
}
func (UnstructuredObjectConverter) ConvertToVersion(in runtime.Object, target runtime.GroupVersioner) (runtime.Object, error) {
if kind := in.GetObjectKind().GroupVersionKind(); !kind.Empty() {
gvk, ok := target.KindForGroupVersionKinds([]schema.GroupVersionKind{kind})
if !ok {
// TODO: should this be a typed error?
return nil, fmt.Errorf("%v is unstructured and is not suitable for converting to %q", kind, target)
}
in.GetObjectKind().SetGroupVersionKind(gvk)
}
return in, nil
}
func (UnstructuredObjectConverter) ConvertFieldLabel(version, kind, label, value string) (string, string, error) {
return "", "", errors.New("unstructured cannot convert field labels")
}
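A rough sketch of working with Unstructured and UnstructuredJSONScheme; the group, kind, and object names are made up.
// Assumed import: "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
func exampleUnstructured() error {
u := &unstructured.Unstructured{}
u.SetAPIVersion("example.k8s.io/v1")
u.SetKind("Widget")
u.SetNamespace("default")
u.SetName("demo")
u.SetLabels(map[string]string{"app": "demo"})
data, err := u.MarshalJSON() // routed through UnstructuredJSONScheme.Encode
if err != nil {
return err
}
obj, gvk, err := unstructured.UnstructuredJSONScheme.Decode(data, nil, nil)
if err != nil {
return err
}
_ = obj // *Unstructured here; a payload with an "items" field decodes to *UnstructuredList
_ = gvk // example.k8s.io/v1, Kind=Widget
return nil
}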

View File

@ -1,75 +0,0 @@
// +build !ignore_autogenerated
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// This file was autogenerated by deepcopy-gen. Do not edit it manually!
package unstructured
import (
conversion "k8s.io/apimachinery/pkg/conversion"
runtime "k8s.io/apimachinery/pkg/runtime"
reflect "reflect"
)
// GetGeneratedDeepCopyFuncs returns the generated funcs, since we aren't registering them.
//
// Deprecated: deepcopy registration will go away when static deepcopy is fully implemented.
func GetGeneratedDeepCopyFuncs() []conversion.GeneratedDeepCopyFunc {
return []conversion.GeneratedDeepCopyFunc{
{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Unstructured).DeepCopyInto(out.(*Unstructured))
return nil
}, InType: reflect.TypeOf(&Unstructured{})},
{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*UnstructuredList).DeepCopyInto(out.(*UnstructuredList))
return nil
}, InType: reflect.TypeOf(&UnstructuredList{})},
}
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Unstructured) DeepCopyInto(out *Unstructured) {
clone := in.DeepCopy()
*out = *clone
return
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Unstructured) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
} else {
return nil
}
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *UnstructuredList) DeepCopyInto(out *UnstructuredList) {
clone := in.DeepCopy()
*out = *clone
return
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *UnstructuredList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
} else {
return nil
}
}

View File

@ -1,776 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package unstructured
import (
"bytes"
encodingjson "encoding/json"
"fmt"
"math"
"os"
"reflect"
"strconv"
"strings"
"sync"
"sync/atomic"
apiequality "k8s.io/apimachinery/pkg/api/equality"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/diff"
"k8s.io/apimachinery/pkg/util/json"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"github.com/golang/glog"
)
// Converter is an interface for converting between interface{}
// and map[string]interface representation.
type Converter interface {
ToUnstructured(obj interface{}) (map[string]interface{}, error)
FromUnstructured(u map[string]interface{}, obj interface{}) error
}
type structField struct {
structType reflect.Type
field int
}
type fieldInfo struct {
name string
nameValue reflect.Value
omitempty bool
}
type fieldsCacheMap map[structField]*fieldInfo
type fieldsCache struct {
sync.Mutex
value atomic.Value
}
func newFieldsCache() *fieldsCache {
cache := &fieldsCache{}
cache.value.Store(make(fieldsCacheMap))
return cache
}
var (
marshalerType = reflect.TypeOf(new(encodingjson.Marshaler)).Elem()
unmarshalerType = reflect.TypeOf(new(encodingjson.Unmarshaler)).Elem()
mapStringInterfaceType = reflect.TypeOf(map[string]interface{}{})
stringType = reflect.TypeOf(string(""))
int64Type = reflect.TypeOf(int64(0))
uint64Type = reflect.TypeOf(uint64(0))
float64Type = reflect.TypeOf(float64(0))
boolType = reflect.TypeOf(bool(false))
fieldCache = newFieldsCache()
DefaultConverter = NewConverter(parseBool(os.Getenv("KUBE_PATCH_CONVERSION_DETECTOR")))
)
func parseBool(key string) bool {
if len(key) == 0 {
return false
}
value, err := strconv.ParseBool(key)
if err != nil {
utilruntime.HandleError(fmt.Errorf("Couldn't parse '%s' as bool for unstructured mismatch detection", key))
}
return value
}
// converterImpl knows how to convert between interface{} and
// Unstructured in both directions.
type converterImpl struct {
// If true, we will be additionally running conversion via json
// to ensure that the result is correct.
// This is supposed to be set only in tests.
mismatchDetection bool
}
func NewConverter(mismatchDetection bool) Converter {
return &converterImpl{
mismatchDetection: mismatchDetection,
}
}
// FromUnstructured converts an object from map[string]interface{} representation into a concrete type.
// It uses encoding/json/Unmarshaler if object implements it or reflection if not.
func (c *converterImpl) FromUnstructured(u map[string]interface{}, obj interface{}) error {
t := reflect.TypeOf(obj)
value := reflect.ValueOf(obj)
if t.Kind() != reflect.Ptr || value.IsNil() {
return fmt.Errorf("FromUnstructured requires a non-nil pointer to an object, got %v", t)
}
err := fromUnstructured(reflect.ValueOf(u), value.Elem())
if c.mismatchDetection {
newObj := reflect.New(t.Elem()).Interface()
newErr := fromUnstructuredViaJSON(u, newObj)
if (err != nil) != (newErr != nil) {
glog.Fatalf("FromUnstructured unexpected error for %v: error: %v", u, err)
}
if err == nil && !apiequality.Semantic.DeepEqual(obj, newObj) {
glog.Fatalf("FromUnstructured mismatch for %#v, diff: %v", obj, diff.ObjectReflectDiff(obj, newObj))
}
}
return err
}
func fromUnstructuredViaJSON(u map[string]interface{}, obj interface{}) error {
data, err := json.Marshal(u)
if err != nil {
return err
}
return json.Unmarshal(data, obj)
}
func fromUnstructured(sv, dv reflect.Value) error {
sv = unwrapInterface(sv)
if !sv.IsValid() {
dv.Set(reflect.Zero(dv.Type()))
return nil
}
st, dt := sv.Type(), dv.Type()
switch dt.Kind() {
case reflect.Map, reflect.Slice, reflect.Ptr, reflect.Struct, reflect.Interface:
// Those require non-trivial conversion.
default:
// This should handle all simple types.
if st.AssignableTo(dt) {
dv.Set(sv)
return nil
}
// We cannot simply use "ConvertibleTo", as JSON doesn't support conversions
// between those four groups: bools, integers, floats and string. We need to
// do the same.
if st.ConvertibleTo(dt) {
switch st.Kind() {
case reflect.String:
switch dt.Kind() {
case reflect.String:
dv.Set(sv.Convert(dt))
return nil
}
case reflect.Bool:
switch dt.Kind() {
case reflect.Bool:
dv.Set(sv.Convert(dt))
return nil
}
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch dt.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
dv.Set(sv.Convert(dt))
return nil
}
case reflect.Float32, reflect.Float64:
switch dt.Kind() {
case reflect.Float32, reflect.Float64:
dv.Set(sv.Convert(dt))
return nil
}
if sv.Float() == math.Trunc(sv.Float()) {
dv.Set(sv.Convert(dt))
return nil
}
}
return fmt.Errorf("cannot convert %s to %s", st.String(), dt.String())
}
}
// Check if the object has a custom JSON marshaller/unmarshaller.
if reflect.PtrTo(dt).Implements(unmarshalerType) {
data, err := json.Marshal(sv.Interface())
if err != nil {
return fmt.Errorf("error encoding %s to json: %v", st.String(), err)
}
unmarshaler := dv.Addr().Interface().(encodingjson.Unmarshaler)
return unmarshaler.UnmarshalJSON(data)
}
switch dt.Kind() {
case reflect.Map:
return mapFromUnstructured(sv, dv)
case reflect.Slice:
return sliceFromUnstructured(sv, dv)
case reflect.Ptr:
return pointerFromUnstructured(sv, dv)
case reflect.Struct:
return structFromUnstructured(sv, dv)
case reflect.Interface:
return interfaceFromUnstructured(sv, dv)
default:
return fmt.Errorf("unrecognized type: %v", dt.Kind())
}
}
func fieldInfoFromField(structType reflect.Type, field int) *fieldInfo {
fieldCacheMap := fieldCache.value.Load().(fieldsCacheMap)
if info, ok := fieldCacheMap[structField{structType, field}]; ok {
return info
}
// Cache miss - we need to compute the field name.
info := &fieldInfo{}
typeField := structType.Field(field)
jsonTag := typeField.Tag.Get("json")
if len(jsonTag) == 0 {
// Make the first character lowercase.
if typeField.Name == "" {
info.name = typeField.Name
} else {
info.name = strings.ToLower(typeField.Name[:1]) + typeField.Name[1:]
}
} else {
items := strings.Split(jsonTag, ",")
info.name = items[0]
for i := range items {
if items[i] == "omitempty" {
info.omitempty = true
}
}
}
info.nameValue = reflect.ValueOf(info.name)
fieldCache.Lock()
defer fieldCache.Unlock()
fieldCacheMap = fieldCache.value.Load().(fieldsCacheMap)
newFieldCacheMap := make(fieldsCacheMap)
for k, v := range fieldCacheMap {
newFieldCacheMap[k] = v
}
newFieldCacheMap[structField{structType, field}] = info
fieldCache.value.Store(newFieldCacheMap)
return info
}
func unwrapInterface(v reflect.Value) reflect.Value {
for v.Kind() == reflect.Interface {
v = v.Elem()
}
return v
}
func mapFromUnstructured(sv, dv reflect.Value) error {
st, dt := sv.Type(), dv.Type()
if st.Kind() != reflect.Map {
return fmt.Errorf("cannot restore map from %v", st.Kind())
}
if !st.Key().AssignableTo(dt.Key()) && !st.Key().ConvertibleTo(dt.Key()) {
return fmt.Errorf("cannot copy map with non-assignable keys: %v %v", st.Key(), dt.Key())
}
if sv.IsNil() {
dv.Set(reflect.Zero(dt))
return nil
}
dv.Set(reflect.MakeMap(dt))
for _, key := range sv.MapKeys() {
value := reflect.New(dt.Elem()).Elem()
if val := unwrapInterface(sv.MapIndex(key)); val.IsValid() {
if err := fromUnstructured(val, value); err != nil {
return err
}
} else {
value.Set(reflect.Zero(dt.Elem()))
}
if st.Key().AssignableTo(dt.Key()) {
dv.SetMapIndex(key, value)
} else {
dv.SetMapIndex(key.Convert(dt.Key()), value)
}
}
return nil
}
func sliceFromUnstructured(sv, dv reflect.Value) error {
st, dt := sv.Type(), dv.Type()
if st.Kind() == reflect.String && dt.Elem().Kind() == reflect.Uint8 {
// We store original []byte representation as string.
// This conversion is allowed, but we need to be careful about
// marshaling data appropriately.
if len(sv.Interface().(string)) > 0 {
marshalled, err := json.Marshal(sv.Interface())
if err != nil {
return fmt.Errorf("error encoding %s to json: %v", st, err)
}
// TODO: Is this Unmarshal needed?
var data []byte
err = json.Unmarshal(marshalled, &data)
if err != nil {
return fmt.Errorf("error decoding from json: %v", err)
}
dv.SetBytes(data)
} else {
dv.Set(reflect.Zero(dt))
}
return nil
}
if st.Kind() != reflect.Slice {
return fmt.Errorf("cannot restore slice from %v", st.Kind())
}
if sv.IsNil() {
dv.Set(reflect.Zero(dt))
return nil
}
dv.Set(reflect.MakeSlice(dt, sv.Len(), sv.Cap()))
for i := 0; i < sv.Len(); i++ {
if err := fromUnstructured(sv.Index(i), dv.Index(i)); err != nil {
return err
}
}
return nil
}
func pointerFromUnstructured(sv, dv reflect.Value) error {
st, dt := sv.Type(), dv.Type()
if st.Kind() == reflect.Ptr && sv.IsNil() {
dv.Set(reflect.Zero(dt))
return nil
}
dv.Set(reflect.New(dt.Elem()))
switch st.Kind() {
case reflect.Ptr, reflect.Interface:
return fromUnstructured(sv.Elem(), dv.Elem())
default:
return fromUnstructured(sv, dv.Elem())
}
}
func structFromUnstructured(sv, dv reflect.Value) error {
st, dt := sv.Type(), dv.Type()
if st.Kind() != reflect.Map {
return fmt.Errorf("cannot restore struct from: %v", st.Kind())
}
for i := 0; i < dt.NumField(); i++ {
fieldInfo := fieldInfoFromField(dt, i)
fv := dv.Field(i)
if len(fieldInfo.name) == 0 {
// This field is inlined.
if err := fromUnstructured(sv, fv); err != nil {
return err
}
} else {
value := unwrapInterface(sv.MapIndex(fieldInfo.nameValue))
if value.IsValid() {
if err := fromUnstructured(value, fv); err != nil {
return err
}
} else {
fv.Set(reflect.Zero(fv.Type()))
}
}
}
return nil
}
func interfaceFromUnstructured(sv, dv reflect.Value) error {
// TODO: Is this conversion safe?
dv.Set(sv)
return nil
}
// ToUnstructured converts an object into map[string]interface{} representation.
// It uses encoding/json/Marshaler if object implements it or reflection if not.
func (c *converterImpl) ToUnstructured(obj interface{}) (map[string]interface{}, error) {
var u map[string]interface{}
var err error
if unstr, ok := obj.(runtime.Unstructured); ok {
u = DeepCopyJSON(unstr.UnstructuredContent())
} else {
t := reflect.TypeOf(obj)
value := reflect.ValueOf(obj)
if t.Kind() != reflect.Ptr || value.IsNil() {
return nil, fmt.Errorf("ToUnstructured requires a non-nil pointer to an object, got %v", t)
}
u = map[string]interface{}{}
err = toUnstructured(value.Elem(), reflect.ValueOf(&u).Elem())
}
if c.mismatchDetection {
newUnstr := map[string]interface{}{}
newErr := toUnstructuredViaJSON(obj, &newUnstr)
if (err != nil) != (newErr != nil) {
glog.Fatalf("ToUnstructured unexpected error for %v: error: %v; newErr: %v", obj, err, newErr)
}
if err == nil && !apiequality.Semantic.DeepEqual(u, newUnstr) {
glog.Fatalf("ToUnstructured mismatch for %#v, diff: %v", u, diff.ObjectReflectDiff(u, newUnstr))
}
}
if err != nil {
return nil, err
}
return u, nil
}
// DeepCopyJSON deep copies the passed value, assuming it is a valid JSON representation i.e. only contains
// types produced by json.Unmarshal().
func DeepCopyJSON(x map[string]interface{}) map[string]interface{} {
return deepCopyJSON(x).(map[string]interface{})
}
func deepCopyJSON(x interface{}) interface{} {
switch x := x.(type) {
case map[string]interface{}:
clone := make(map[string]interface{}, len(x))
for k, v := range x {
clone[k] = deepCopyJSON(v)
}
return clone
case []interface{}:
clone := make([]interface{}, len(x))
for i, v := range x {
clone[i] = deepCopyJSON(v)
}
return clone
case string, int64, bool, float64, nil, encodingjson.Number:
return x
default:
panic(fmt.Errorf("cannot deep copy %T", x))
}
}
func toUnstructuredViaJSON(obj interface{}, u *map[string]interface{}) error {
data, err := json.Marshal(obj)
if err != nil {
return err
}
return json.Unmarshal(data, u)
}
var (
nullBytes = []byte("null")
trueBytes = []byte("true")
falseBytes = []byte("false")
)
func getMarshaler(v reflect.Value) (encodingjson.Marshaler, bool) {
// Check value receivers if v is not a pointer and pointer receivers if v is a pointer
if v.Type().Implements(marshalerType) {
return v.Interface().(encodingjson.Marshaler), true
}
// Check pointer receivers if v is not a pointer
if v.Kind() != reflect.Ptr && v.CanAddr() {
v = v.Addr()
if v.Type().Implements(marshalerType) {
return v.Interface().(encodingjson.Marshaler), true
}
}
return nil, false
}
func toUnstructured(sv, dv reflect.Value) error {
// Check if the object has a custom JSON marshaller/unmarshaller.
if marshaler, ok := getMarshaler(sv); ok {
if sv.Kind() == reflect.Ptr && sv.IsNil() {
// We're done - we don't need to store anything.
return nil
}
data, err := marshaler.MarshalJSON()
if err != nil {
return err
}
switch {
case len(data) == 0:
return fmt.Errorf("error decoding from json: empty value")
case bytes.Equal(data, nullBytes):
// We're done - we don't need to store anything.
case bytes.Equal(data, trueBytes):
dv.Set(reflect.ValueOf(true))
case bytes.Equal(data, falseBytes):
dv.Set(reflect.ValueOf(false))
case data[0] == '"':
var result string
err := json.Unmarshal(data, &result)
if err != nil {
return fmt.Errorf("error decoding string from json: %v", err)
}
dv.Set(reflect.ValueOf(result))
case data[0] == '{':
result := make(map[string]interface{})
err := json.Unmarshal(data, &result)
if err != nil {
return fmt.Errorf("error decoding object from json: %v", err)
}
dv.Set(reflect.ValueOf(result))
case data[0] == '[':
result := make([]interface{}, 0)
err := json.Unmarshal(data, &result)
if err != nil {
return fmt.Errorf("error decoding array from json: %v", err)
}
dv.Set(reflect.ValueOf(result))
default:
var (
resultInt int64
resultFloat float64
err error
)
if err = json.Unmarshal(data, &resultInt); err == nil {
dv.Set(reflect.ValueOf(resultInt))
} else if err = json.Unmarshal(data, &resultFloat); err == nil {
dv.Set(reflect.ValueOf(resultFloat))
} else {
return fmt.Errorf("error decoding number from json: %v", err)
}
}
return nil
}
st, dt := sv.Type(), dv.Type()
switch st.Kind() {
case reflect.String:
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
dv.Set(reflect.New(stringType))
}
dv.Set(reflect.ValueOf(sv.String()))
return nil
case reflect.Bool:
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
dv.Set(reflect.New(boolType))
}
dv.Set(reflect.ValueOf(sv.Bool()))
return nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
dv.Set(reflect.New(int64Type))
}
dv.Set(reflect.ValueOf(sv.Int()))
return nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
dv.Set(reflect.New(uint64Type))
}
dv.Set(reflect.ValueOf(sv.Uint()))
return nil
case reflect.Float32, reflect.Float64:
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
dv.Set(reflect.New(float64Type))
}
dv.Set(reflect.ValueOf(sv.Float()))
return nil
case reflect.Map:
return mapToUnstructured(sv, dv)
case reflect.Slice:
return sliceToUnstructured(sv, dv)
case reflect.Ptr:
return pointerToUnstructured(sv, dv)
case reflect.Struct:
return structToUnstructured(sv, dv)
case reflect.Interface:
return interfaceToUnstructured(sv, dv)
default:
return fmt.Errorf("unrecognized type: %v", st.Kind())
}
}
func mapToUnstructured(sv, dv reflect.Value) error {
st, dt := sv.Type(), dv.Type()
if sv.IsNil() {
dv.Set(reflect.Zero(dt))
return nil
}
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
if st.Key().Kind() == reflect.String {
switch st.Elem().Kind() {
// TODO It should be possible to reuse the slice for primitive types.
// However, it is panicking in the following form.
// case reflect.String, reflect.Bool,
// reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
// reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
// sv.Set(sv)
// return nil
default:
// We need to do a proper conversion.
}
}
dv.Set(reflect.MakeMap(mapStringInterfaceType))
dv = dv.Elem()
dt = dv.Type()
}
if dt.Kind() != reflect.Map {
return fmt.Errorf("cannot convert struct to: %v", dt.Kind())
}
if !st.Key().AssignableTo(dt.Key()) && !st.Key().ConvertibleTo(dt.Key()) {
return fmt.Errorf("cannot copy map with non-assignable keys: %v %v", st.Key(), dt.Key())
}
for _, key := range sv.MapKeys() {
value := reflect.New(dt.Elem()).Elem()
if err := toUnstructured(sv.MapIndex(key), value); err != nil {
return err
}
if st.Key().AssignableTo(dt.Key()) {
dv.SetMapIndex(key, value)
} else {
dv.SetMapIndex(key.Convert(dt.Key()), value)
}
}
return nil
}
func sliceToUnstructured(sv, dv reflect.Value) error {
st, dt := sv.Type(), dv.Type()
if sv.IsNil() {
dv.Set(reflect.Zero(dt))
return nil
}
if st.Elem().Kind() == reflect.Uint8 {
dv.Set(reflect.New(stringType))
data, err := json.Marshal(sv.Bytes())
if err != nil {
return err
}
var result string
if err = json.Unmarshal(data, &result); err != nil {
return err
}
dv.Set(reflect.ValueOf(result))
return nil
}
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
switch st.Elem().Kind() {
// TODO It should be possible to reuse the slice for primitive types.
// However, it is panicking in the following form.
// case reflect.String, reflect.Bool,
// reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
// reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
// sv.Set(sv)
// return nil
default:
// We need to do a proper conversion.
dv.Set(reflect.MakeSlice(reflect.SliceOf(dt), sv.Len(), sv.Cap()))
dv = dv.Elem()
dt = dv.Type()
}
}
if dt.Kind() != reflect.Slice {
return fmt.Errorf("cannot convert slice to: %v", dt.Kind())
}
for i := 0; i < sv.Len(); i++ {
if err := toUnstructured(sv.Index(i), dv.Index(i)); err != nil {
return err
}
}
return nil
}
func pointerToUnstructured(sv, dv reflect.Value) error {
if sv.IsNil() {
// We're done - we don't need to store anything.
return nil
}
return toUnstructured(sv.Elem(), dv)
}
func isZero(v reflect.Value) bool {
switch v.Kind() {
case reflect.Array, reflect.String:
return v.Len() == 0
case reflect.Bool:
return !v.Bool()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return v.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return v.Uint() == 0
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.Map, reflect.Slice:
// TODO: It seems that 0-len maps are ignored in it.
return v.IsNil() || v.Len() == 0
case reflect.Ptr, reflect.Interface:
return v.IsNil()
}
return false
}
func structToUnstructured(sv, dv reflect.Value) error {
st, dt := sv.Type(), dv.Type()
if dt.Kind() == reflect.Interface && dv.NumMethod() == 0 {
dv.Set(reflect.MakeMap(mapStringInterfaceType))
dv = dv.Elem()
dt = dv.Type()
}
if dt.Kind() != reflect.Map {
return fmt.Errorf("cannot convert struct to: %v", dt.Kind())
}
realMap := dv.Interface().(map[string]interface{})
for i := 0; i < st.NumField(); i++ {
fieldInfo := fieldInfoFromField(st, i)
fv := sv.Field(i)
if fieldInfo.name == "-" {
// This field should be skipped.
continue
}
if fieldInfo.omitempty && isZero(fv) {
// omitempty fields should be ignored.
continue
}
if len(fieldInfo.name) == 0 {
// This field is inlined.
if err := toUnstructured(fv, dv); err != nil {
return err
}
continue
}
switch fv.Type().Kind() {
case reflect.String:
realMap[fieldInfo.name] = fv.String()
case reflect.Bool:
realMap[fieldInfo.name] = fv.Bool()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
realMap[fieldInfo.name] = fv.Int()
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
realMap[fieldInfo.name] = fv.Uint()
case reflect.Float32, reflect.Float64:
realMap[fieldInfo.name] = fv.Float()
default:
subv := reflect.New(dt.Elem()).Elem()
if err := toUnstructured(fv, subv); err != nil {
return err
}
dv.SetMapIndex(fieldInfo.nameValue, subv)
}
}
return nil
}
func interfaceToUnstructured(sv, dv reflect.Value) error {
if !sv.IsValid() || sv.IsNil() {
dv.Set(reflect.Zero(dv.Type()))
return nil
}
return toUnstructured(sv.Elem(), dv)
}
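As a reading aid for the converter above (the k8s.io/apimachinery/pkg/conversion/unstructured package that this vendor update drops), a minimal round-trip sketch; the widget struct is hypothetical and only illustrates the ToUnstructured/FromUnstructured symmetry and the JSON-style value types produced.
package main
import (
	"fmt"
	unstructuredconversion "k8s.io/apimachinery/pkg/conversion/unstructured"
)
// widget is a hypothetical stand-in for a typed API object.
type widget struct {
	Name     string   `json:"name"`
	Replicas int64    `json:"replicas,omitempty"`
	Tags     []string `json:"tags,omitempty"`
}
func main() {
	in := &widget{Name: "demo", Replicas: 3}
	// ToUnstructured needs a non-nil pointer and emits only JSON-style values:
	// string, bool, int64, float64, []interface{} and map[string]interface{}.
	u, err := unstructuredconversion.DefaultConverter.ToUnstructured(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(u["name"], u["replicas"]) // demo 3
	// FromUnstructured reverses the mapping back into the typed struct,
	// matching fields by their json tags.
	out := &widget{}
	if err := unstructuredconversion.DefaultConverter.FromUnstructured(u, out); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", *out) // {Name:demo Replicas:3 Tags:[]}
}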

View File

@ -1,19 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package unstructured provides conversion from runtime objects
// to map[string]interface{} representation.
package unstructured // import "k8s.io/apimachinery/pkg/conversion/unstructured"

View File

@ -550,7 +550,7 @@ func (p *Parser) lookahead(context ParserContext) (Token, string) {
return tok, lit
}
// consume returns current token and string. Increments the the position
// consume returns current token and string. Increments the position
func (p *Parser) consume(context ParserContext) (Token, string) {
p.position++
tok, lit := p.scannedItems[p.position-1].tok, p.scannedItems[p.position-1].literal

View File

@ -1,273 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package diff
import (
"bytes"
"encoding/json"
"fmt"
"reflect"
"sort"
"strings"
"text/tabwriter"
"github.com/davecgh/go-spew/spew"
"k8s.io/apimachinery/pkg/util/validation/field"
)
// StringDiff diffs a and b and returns a human readable diff.
func StringDiff(a, b string) string {
ba := []byte(a)
bb := []byte(b)
out := []byte{}
i := 0
for ; i < len(ba) && i < len(bb); i++ {
if ba[i] != bb[i] {
break
}
out = append(out, ba[i])
}
out = append(out, []byte("\n\nA: ")...)
out = append(out, ba[i:]...)
out = append(out, []byte("\n\nB: ")...)
out = append(out, bb[i:]...)
out = append(out, []byte("\n\n")...)
return string(out)
}
// ObjectDiff writes the two objects out as JSON and prints out the identical part of
// the objects followed by the remaining part of 'a' and finally the remaining part of 'b'.
// For debugging tests.
func ObjectDiff(a, b interface{}) string {
ab, err := json.Marshal(a)
if err != nil {
panic(fmt.Sprintf("a: %v", err))
}
bb, err := json.Marshal(b)
if err != nil {
panic(fmt.Sprintf("b: %v", err))
}
return StringDiff(string(ab), string(bb))
}
// ObjectGoPrintDiff is like ObjectDiff, but uses go-spew to print the objects,
// which shows absolutely everything by recursing into every single pointer
// (go's %#v formatters OTOH stop at a certain point). This is needed when you
// can't figure out why reflect.DeepEqual is returning false and nothing is
// showing you differences. This will.
func ObjectGoPrintDiff(a, b interface{}) string {
s := spew.ConfigState{DisableMethods: true}
return StringDiff(
s.Sprintf("%#v", a),
s.Sprintf("%#v", b),
)
}
func ObjectReflectDiff(a, b interface{}) string {
vA, vB := reflect.ValueOf(a), reflect.ValueOf(b)
if vA.Type() != vB.Type() {
return fmt.Sprintf("type A %T and type B %T do not match", a, b)
}
diffs := objectReflectDiff(field.NewPath("object"), vA, vB)
if len(diffs) == 0 {
return "<no diffs>"
}
out := []string{""}
for _, d := range diffs {
out = append(out,
fmt.Sprintf("%s:", d.path),
limit(fmt.Sprintf(" a: %#v", d.a), 80),
limit(fmt.Sprintf(" b: %#v", d.b), 80),
)
}
return strings.Join(out, "\n")
}
func limit(s string, max int) string {
if len(s) > max {
return s[:max]
}
return s
}
func public(s string) bool {
if len(s) == 0 {
return false
}
return s[:1] == strings.ToUpper(s[:1])
}
type diff struct {
path *field.Path
a, b interface{}
}
type orderedDiffs []diff
func (d orderedDiffs) Len() int { return len(d) }
func (d orderedDiffs) Swap(i, j int) { d[i], d[j] = d[j], d[i] }
func (d orderedDiffs) Less(i, j int) bool {
a, b := d[i].path.String(), d[j].path.String()
if a < b {
return true
}
return false
}
func objectReflectDiff(path *field.Path, a, b reflect.Value) []diff {
switch a.Type().Kind() {
case reflect.Struct:
var changes []diff
for i := 0; i < a.Type().NumField(); i++ {
if !public(a.Type().Field(i).Name) {
if reflect.DeepEqual(a.Interface(), b.Interface()) {
continue
}
return []diff{{path: path, a: fmt.Sprintf("%#v", a), b: fmt.Sprintf("%#v", b)}}
}
if sub := objectReflectDiff(path.Child(a.Type().Field(i).Name), a.Field(i), b.Field(i)); len(sub) > 0 {
changes = append(changes, sub...)
}
}
return changes
case reflect.Ptr, reflect.Interface:
if a.IsNil() || b.IsNil() {
switch {
case a.IsNil() && b.IsNil():
return nil
case a.IsNil():
return []diff{{path: path, a: nil, b: b.Interface()}}
default:
return []diff{{path: path, a: a.Interface(), b: nil}}
}
}
return objectReflectDiff(path, a.Elem(), b.Elem())
case reflect.Chan:
if !reflect.DeepEqual(a.Interface(), b.Interface()) {
return []diff{{path: path, a: a.Interface(), b: b.Interface()}}
}
return nil
case reflect.Slice:
lA, lB := a.Len(), b.Len()
l := lA
if lB < lA {
l = lB
}
if lA == lB && lA == 0 {
if a.IsNil() != b.IsNil() {
return []diff{{path: path, a: a.Interface(), b: b.Interface()}}
}
return nil
}
var diffs []diff
for i := 0; i < l; i++ {
if !reflect.DeepEqual(a.Index(i), b.Index(i)) {
diffs = append(diffs, objectReflectDiff(path.Index(i), a.Index(i), b.Index(i))...)
}
}
for i := l; i < lA; i++ {
diffs = append(diffs, diff{path: path.Index(i), a: a.Index(i), b: nil})
}
for i := l; i < lB; i++ {
diffs = append(diffs, diff{path: path.Index(i), a: nil, b: b.Index(i)})
}
return diffs
case reflect.Map:
if reflect.DeepEqual(a.Interface(), b.Interface()) {
return nil
}
aKeys := make(map[interface{}]interface{})
for _, key := range a.MapKeys() {
aKeys[key.Interface()] = a.MapIndex(key).Interface()
}
var missing []diff
for _, key := range b.MapKeys() {
if _, ok := aKeys[key.Interface()]; ok {
delete(aKeys, key.Interface())
if reflect.DeepEqual(a.MapIndex(key).Interface(), b.MapIndex(key).Interface()) {
continue
}
missing = append(missing, objectReflectDiff(path.Key(fmt.Sprintf("%s", key.Interface())), a.MapIndex(key), b.MapIndex(key))...)
continue
}
missing = append(missing, diff{path: path.Key(fmt.Sprintf("%s", key.Interface())), a: nil, b: b.MapIndex(key).Interface()})
}
for key, value := range aKeys {
missing = append(missing, diff{path: path.Key(fmt.Sprintf("%s", key)), a: value, b: nil})
}
if len(missing) == 0 {
missing = append(missing, diff{path: path, a: a.Interface(), b: b.Interface()})
}
sort.Sort(orderedDiffs(missing))
return missing
default:
if reflect.DeepEqual(a.Interface(), b.Interface()) {
return nil
}
if !a.CanInterface() {
return []diff{{path: path, a: fmt.Sprintf("%#v", a), b: fmt.Sprintf("%#v", b)}}
}
return []diff{{path: path, a: a.Interface(), b: b.Interface()}}
}
}
// ObjectGoPrintSideBySide prints a and b as textual dumps side by side,
// enabling easy visual scanning for mismatches.
func ObjectGoPrintSideBySide(a, b interface{}) string {
s := spew.ConfigState{
Indent: " ",
// Extra deep spew.
DisableMethods: true,
}
sA := s.Sdump(a)
sB := s.Sdump(b)
linesA := strings.Split(sA, "\n")
linesB := strings.Split(sB, "\n")
width := 0
for _, s := range linesA {
l := len(s)
if l > width {
width = l
}
}
for _, s := range linesB {
l := len(s)
if l > width {
width = l
}
}
buf := &bytes.Buffer{}
w := tabwriter.NewWriter(buf, width, 0, 1, ' ', 0)
max := len(linesA)
if len(linesB) > max {
max = len(linesB)
}
for i := 0; i < max; i++ {
var a, b string
if i < len(linesA) {
a = linesA[i]
}
if i < len(linesB) {
b = linesB[i]
}
fmt.Fprintf(w, "%s\t%s\n", a, b)
}
w.Flush()
return buf.String()
}
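For context, a small sketch of how the helpers above might be used in a test (the spec struct and values are illustrative; the package path is assumed to be k8s.io/apimachinery/pkg/util/diff).
package main
import (
	"fmt"
	"k8s.io/apimachinery/pkg/util/diff"
)
// spec is an illustrative struct; any two values of the same type work.
type spec struct {
	Replicas int
	Labels   map[string]string
}
func main() {
	a := spec{Replicas: 1, Labels: map[string]string{"app": "web"}}
	b := spec{Replicas: 2, Labels: map[string]string{"app": "web"}}
	// ObjectReflectDiff walks the fields and reports per-path differences,
	// e.g. object.Replicas with a: 1 and b: 2.
	fmt.Println(diff.ObjectReflectDiff(a, b))
	// ObjectGoPrintSideBySide dumps both values in two columns for eyeballing.
	fmt.Println(diff.ObjectGoPrintSideBySide(a, b))
}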

View File

@ -110,7 +110,7 @@ func (s *SpdyRoundTripper) Dial(req *http.Request) (net.Conn, error) {
func (s *SpdyRoundTripper) dial(req *http.Request) (net.Conn, error) {
proxier := s.proxier
if proxier == nil {
proxier = http.ProxyFromEnvironment
proxier = utilnet.NewProxierWithNoProxyCIDR(http.ProxyFromEnvironment)
}
proxyURL, err := proxier(req)
if err != nil {

View File

@ -1,119 +0,0 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package json
import (
"bytes"
"encoding/json"
"io"
)
// NewEncoder delegates to json.NewEncoder
// It is only here so this package can be a drop-in for common encoding/json uses
func NewEncoder(w io.Writer) *json.Encoder {
return json.NewEncoder(w)
}
// Marshal delegates to json.Marshal
// It is only here so this package can be a drop-in for common encoding/json uses
func Marshal(v interface{}) ([]byte, error) {
return json.Marshal(v)
}
// Unmarshal unmarshals the given data
// If v is a *map[string]interface{}, numbers are converted to int64 or float64
func Unmarshal(data []byte, v interface{}) error {
switch v := v.(type) {
case *map[string]interface{}:
// Build a decoder from the given data
decoder := json.NewDecoder(bytes.NewBuffer(data))
// Preserve numbers, rather than casting to float64 automatically
decoder.UseNumber()
// Run the decode
if err := decoder.Decode(v); err != nil {
return err
}
// If the decode succeeds, post-process the map to convert json.Number objects to int64 or float64
return convertMapNumbers(*v)
case *[]interface{}:
// Build a decoder from the given data
decoder := json.NewDecoder(bytes.NewBuffer(data))
// Preserve numbers, rather than casting to float64 automatically
decoder.UseNumber()
// Run the decode
if err := decoder.Decode(v); err != nil {
return err
}
// If the decode succeeds, post-process the map to convert json.Number objects to int64 or float64
return convertSliceNumbers(*v)
default:
return json.Unmarshal(data, v)
}
}
// convertMapNumbers traverses the map, converting any json.Number values to int64 or float64.
// values which are map[string]interface{} or []interface{} are recursively visited
func convertMapNumbers(m map[string]interface{}) error {
var err error
for k, v := range m {
switch v := v.(type) {
case json.Number:
m[k], err = convertNumber(v)
case map[string]interface{}:
err = convertMapNumbers(v)
case []interface{}:
err = convertSliceNumbers(v)
}
if err != nil {
return err
}
}
return nil
}
// convertSliceNumbers traverses the slice, converting any json.Number values to int64 or float64.
// values which are map[string]interface{} or []interface{} are recursively visited
func convertSliceNumbers(s []interface{}) error {
var err error
for i, v := range s {
switch v := v.(type) {
case json.Number:
s[i], err = convertNumber(v)
case map[string]interface{}:
err = convertMapNumbers(v)
case []interface{}:
err = convertSliceNumbers(v)
}
if err != nil {
return err
}
}
return nil
}
// convertNumber converts a json.Number to an int64 or float64, or returns an error
func convertNumber(n json.Number) (interface{}, error) {
// Attempt to convert to an int64 first
if i, err := n.Int64(); err == nil {
return i, nil
}
// Return a float64 (default json.Decode() behavior)
// An overflow will return an error
return n.Float64()
}
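A short sketch of what this wrapper buys over plain encoding/json when decoding into *map[string]interface{}: whole numbers stay int64 instead of becoming float64 (assuming the package path k8s.io/apimachinery/pkg/util/json; the payload is illustrative).
package main
import (
	"fmt"
	utiljson "k8s.io/apimachinery/pkg/util/json"
)
func main() {
	data := []byte(`{"replicas": 3, "ratio": 0.5}`)
	out := map[string]interface{}{}
	if err := utiljson.Unmarshal(data, &out); err != nil {
		panic(err)
	}
	// With encoding/json both values would be float64; here the whole number
	// is converted to int64 and only the fractional value stays a float64.
	fmt.Printf("%T %T\n", out["replicas"], out["ratio"]) // int64 float64
}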

View File

@ -128,7 +128,9 @@ func (r *rudimentaryErrorBackoff) OnError(error) {
r.lastErrorTimeLock.Lock()
defer r.lastErrorTimeLock.Unlock()
d := time.Since(r.lastErrorTime)
if d < r.minPeriod {
if d < r.minPeriod && d >= 0 {
// If the time moves backwards for any reason, do nothing
// TODO: remove check "d >= 0" after go 1.8 is no longer supported
time.Sleep(r.minPeriod - d)
}
r.lastErrorTime = time.Now()

202
vendor/k8s.io/apiserver/LICENSE generated vendored Normal file
View File

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

30
vendor/k8s.io/apiserver/README.md generated vendored Normal file
View File

@ -0,0 +1,30 @@
# apiserver
Generic library for building a Kubernetes aggregated API server.
## Purpose
This library contains code to create a Kubernetes aggregation server complete with delegated authentication and authorization,
`kubectl`-compatible discovery information, an optional admission chain, and versioned types. Its first consumers are
`k8s.io/kubernetes`, `k8s.io/kube-aggregator`, and `github.com/kubernetes-incubator/service-catalog`.
## Compatibility
There are *NO compatibility guarantees* for this repository, yet. It is in direct support of Kubernetes, so branches
will track Kubernetes and be compatible with that repo. As we more cleanly separate the layers, we will review the
compatibility guarantee. We have a goal to make this easier to use in 2017.
## Where does it come from?
`apiserver` is synced from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver.
Code changes are made in that location, merged into `k8s.io/kubernetes` and later synced here.
## Things you should *NOT* do
1. Directly modify any files under `pkg` in this repo. Those are driven from `k8s.io/kubernetes/staging/src/k8s.io/apiserver`.
2. Expect compatibility. This repo is changing quickly in direct support of
Kubernetes and the API isn't yet stable enough for API guarantees.

51
vendor/k8s.io/client-go/README.md generated vendored
View File

@ -17,6 +17,7 @@ will give you head and doesn't handle the dependencies well.
- [Compatibility: client-go <-> Kubernetes clusters](#compatibility-client-go---kubernetes-clusters)
- [Compatibility matrix](#compatibility-matrix)
- [Why do the 1.4 and 1.5 branch contain top-level folder named after the version?](#why-do-the-14-and-15-branch-contain-top-level-folder-named-after-the-version)
- [Kubernetes tags](#kubernetes-tags)
- [How to get it](#how-to-get-it)
- [How to use it](#how-to-use-it)
- [Dependency management](#dependency-management)
@ -80,27 +81,29 @@ We will backport bugfixes--but not new features--into older versions of
#### Compatibility matrix
| | Kubernetes 1.3 | Kubernetes 1.4 | Kubernetes 1.5 | Kubernetes 1.6 | Kubernetes 1.7 |
| | Kubernetes 1.4 | Kubernetes 1.5 | Kubernetes 1.6 | Kubernetes 1.7 | Kubernetes 1.8 |
|---------------------|----------------|----------------|----------------|----------------|----------------|
| client-go 1.4 | + | ✓ | - | - | - |
| client-go 1.5 | + | + | - | - | - |
| client-go 2.0 | + | + | ✓ | - | - |
| client-go 3.0 | † | † | † | ✓ | - |
| client-go 4.0 | † | † | † | + | ✓ |
| client-go HEAD | † | † | † | + | + |
| client-go 1.4 | ✓ | - | - | - | - |
| client-go 1.5 | + | - | - | - | - |
| client-go 2.0 | +- | ✓ | +- | +- | +- |
| client-go 3.0 | +- | +- | ✓ | - | +- |
| client-go 4.0 | +- | +- | +- | ✓ | +- |
| client-go 5.0 | +- | +- | +- | +- | ✓ |
| client-go HEAD | +- | +- | +- | +- | + |
Key:
* `✓` Exactly the same features / API objects in both client-go and the Kubernetes
version.
* `+` client-go has features or api objects that may not be present in the
Kubernetes cluster, but everything they have in common will work. Please
note that alpha APIs may vanish or change significantly in a single release.
* `†` client-go has new features or api objects, and some APIs running in the
cluster may have been deprecated and removed from client-go. But everything
they share in common (i.e., most APIs) will work.
* `-` The Kubernetes cluster has features the client-go library can't use
(additional API objects, etc).
* `+` client-go has features or API objects that may not be present in the
Kubernetes cluster, either because client-go has additional new APIs or
because the server has removed old APIs. However, everything they have in
common (i.e., most APIs) will work. Please note that alpha APIs may vanish or
change significantly in a single release.
* `-` The Kubernetes cluster has features the client-go library can't use,
either because the server has additional new APIs or because client-go has
removed old APIs. However, everything they share in common (i.e., most APIs)
will work.
See the [CHANGELOG](./CHANGELOG.md) for a detailed description of changes
between client-go versions.
@ -112,6 +115,7 @@ between client-go versions.
| client-go 2.0 | Kubernetes main repo, 1.5 branch | ✓ |
| client-go 3.0 | Kubernetes main repo, 1.6 branch | ✓ |
| client-go 4.0 | Kubernetes main repo, 1.7 branch | ✓ |
| client-go 5.0 | Kubernetes main repo, 1.8 branch | ✓ |
| client-go HEAD | Kubernetes main repo, master branch | ✓ |
Key:
@ -134,6 +138,23 @@ separate directories for each minor version. That soon proved to be a mistake.
We are keeping the top-level folders in the 1.4 and 1.5 branches so that
existing users won't be broken.
### Kubernetes tags
As of October 2017, client-go is still a mirror of
[k8s.io/kubernetes/staging/src/k8s.io/client-go](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go);
code development is still done in the staging area. Since the Kubernetes 1.8
release, when syncing the code from the staging area, we also sync the Kubernetes
version tags to client-go, prefixed with "kubernetes-". For example, if you check
out the `kubernetes-v1.8.0` tag in client-go, the code you get is exactly the
same as if you check out the `v1.8.0` tag in kubernetes, and change directory to
`staging/src/k8s.io/client-go`. The purpose is to let users quickly find matching
commits among published repos, like
[sample-apiserver](https://github.com/kubernetes/sample-apiserver),
[apiextension-apiserver](https://github.com/kubernetes/apiextensions-apiserver),
etc. The Kubernetes version tag does NOT claim any backwards compatibility
guarantees for client-go. Please check the [semantic versions](#versioning) if
you care about backwards compatibility.
### How to get it
You can use `go get k8s.io/client-go/...` to get client-go, but **you will get

View File

@ -420,5 +420,45 @@ func AnonymousClientConfig(config *Config) *Config {
QPS: config.QPS,
Burst: config.Burst,
Timeout: config.Timeout,
Dial: config.Dial,
}
}
// CopyConfig returns a copy of the given config
func CopyConfig(config *Config) *Config {
return &Config{
Host: config.Host,
APIPath: config.APIPath,
Prefix: config.Prefix,
ContentConfig: config.ContentConfig,
Username: config.Username,
Password: config.Password,
BearerToken: config.BearerToken,
CacheDir: config.CacheDir,
Impersonate: ImpersonationConfig{
Groups: config.Impersonate.Groups,
Extra: config.Impersonate.Extra,
UserName: config.Impersonate.UserName,
},
AuthProvider: config.AuthProvider,
AuthConfigPersister: config.AuthConfigPersister,
TLSClientConfig: TLSClientConfig{
Insecure: config.TLSClientConfig.Insecure,
ServerName: config.TLSClientConfig.ServerName,
CertFile: config.TLSClientConfig.CertFile,
KeyFile: config.TLSClientConfig.KeyFile,
CAFile: config.TLSClientConfig.CAFile,
CertData: config.TLSClientConfig.CertData,
KeyData: config.TLSClientConfig.KeyData,
CAData: config.TLSClientConfig.CAData,
},
UserAgent: config.UserAgent,
Transport: config.Transport,
WrapTransport: config.WrapTransport,
QPS: config.QPS,
Burst: config.Burst,
RateLimiter: config.RateLimiter,
Timeout: config.Timeout,
Dial: config.Dial,
}
}
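A minimal sketch of the intended use of the new CopyConfig helper, assuming the surrounding package is k8s.io/client-go/rest (field values are illustrative): take an independent copy of a rest.Config so per-client tweaks do not leak into the shared original.
package main
import (
	"fmt"
	"time"
	"k8s.io/client-go/rest"
)
func main() {
	// base stands in for a config obtained from rest.InClusterConfig or a kubeconfig loader.
	base := &rest.Config{Host: "https://example.invalid", Timeout: 10 * time.Second}
	// CopyConfig returns a copy, so adjusting it leaves base untouched.
	cfg := rest.CopyConfig(base)
	cfg.Timeout = 30 * time.Second
	fmt.Println(base.Timeout, cfg.Timeout) // 10s 30s
}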

View File

@ -56,6 +56,14 @@ func DefaultServerURL(host, apiPath string, groupVersion schema.GroupVersion, de
// hostURL.Path should be blank.
//
// versionedAPIPath, a path relative to baseURL.Path, points to a versioned API base
versionedAPIPath := DefaultVersionedAPIPath(apiPath, groupVersion)
return hostURL, versionedAPIPath, nil
}
// DefaultVersionedAPIPath constructs the default path for the given group version, assuming the given
// API path, following the standard conventions of the Kubernetes API.
func DefaultVersionedAPIPath(apiPath string, groupVersion schema.GroupVersion) string {
versionedAPIPath := path.Join("/", apiPath)
// Add the version to the end of the path
@ -64,10 +72,9 @@ func DefaultServerURL(host, apiPath string, groupVersion schema.GroupVersion, de
} else {
versionedAPIPath = path.Join(versionedAPIPath, groupVersion.Version)
}
return hostURL, versionedAPIPath, nil
return versionedAPIPath
}
// defaultServerUrlFor is shared between IsConfigTransportTLS and RESTClientFor. It
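A quick sketch of what the newly extracted DefaultVersionedAPIPath returns for the legacy core group versus a named group, assuming it lives in k8s.io/client-go/rest alongside DefaultServerURL (group/version values are illustrative).
package main
import (
	"fmt"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/rest"
)
func main() {
	// Legacy core API: there is no group, so the version is appended directly.
	fmt.Println(rest.DefaultVersionedAPIPath("/api", schema.GroupVersion{Version: "v1"}))
	// -> /api/v1
	// Named group: the group sits between the API path and the version.
	fmt.Println(rest.DefaultVersionedAPIPath("/apis", schema.GroupVersion{Group: "apps", Version: "v1beta2"}))
	// -> /apis/apps/v1beta2
}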

View File

@ -16,7 +16,7 @@ limitations under the License.
// This file should be consistent with pkg/api/v1/annotation_key_constants.go.
package api
package core
const (
// ImagePolicyFailedOpenKey is added to pods created by failing open when the image policy
@ -45,12 +45,6 @@ const (
// to one container of a pod.
SeccompContainerAnnotationKeyPrefix string = "container.seccomp.security.alpha.kubernetes.io/"
// CreatedByAnnotation represents the key used to store the spec(json)
// used to create the resource.
// This field is deprecated in favor of ControllerRef (see #44407).
// TODO(#50720): Remove this field in v1.9.
CreatedByAnnotation = "kubernetes.io/created-by"
// PreferAvoidPodsAnnotationKey represents the key of preferAvoidPods data (json serialized)
// in the Annotations of a Node.
PreferAvoidPodsAnnotationKey string = "scheduler.alpha.kubernetes.io/preferAvoidPods"

View File

@ -14,11 +14,11 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
// +k8s:deepcopy-gen=package,register
// +k8s:deepcopy-gen=package
// Package api contains the latest (or "internal") version of the
// Kubernetes API objects. This is the API objects as represented in memory.
// The contract presented to clients is located in the versioned packages,
// which are sub-directories. The first one is "v1". Those packages
// describe how a particular version is serialized to storage/network.
package api // import "k8s.io/kubernetes/pkg/api"
package core // import "k8s.io/kubernetes/pkg/apis/core"

View File

@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package api
package core
// Field path constants that are specific to the internal API
// representation.

View File

@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package api
package core
import "encoding/json"

View File

@ -17,7 +17,7 @@ limitations under the License.
//TODO: consider making these methods functions, because we don't want helper
//functions in the k8s.io/api repo.
package api
package core
import (
"k8s.io/apimachinery/pkg/runtime/schema"

View File

@ -14,45 +14,20 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package api
package core
import (
"os"
"k8s.io/apimachinery/pkg/apimachinery/announced"
"k8s.io/apimachinery/pkg/apimachinery/registered"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/runtime/serializer"
)
// GroupFactoryRegistry is the APIGroupFactoryRegistry (overlaps a bit with Registry, see comments in package for details)
var GroupFactoryRegistry = make(announced.APIGroupFactoryRegistry)
// Registry is an instance of an API registry. This is an interim step to start removing the idea of a global
// API registry.
var Registry = registered.NewOrDie(os.Getenv("KUBE_API_VERSIONS"))
// Scheme is the default instance of runtime.Scheme to which types in the Kubernetes API are already registered.
// NOTE: If you are copying this file to start a new api group, STOP! Copy the
// extensions group instead. This Scheme is special and should appear ONLY in
// the api group, unless you really know what you're doing.
// TODO(lavalamp): make the above error impossible.
var Scheme = runtime.NewScheme()
// Codecs provides access to encoding and decoding for the scheme
var Codecs = serializer.NewCodecFactory(Scheme)
// GroupName is the group name use in this package
const GroupName = ""
// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: runtime.APIVersionInternal}
// ParameterCodec handles versioning of objects that are converted to query parameters.
var ParameterCodec = runtime.NewParameterCodec(Scheme)
// Kind takes an unqualified kind and returns a Group qualified GroupKind
func Kind(kind string) schema.GroupKind {
return SchemeGroupVersion.WithKind(kind).GroupKind()

View File

@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package api
package core
import (
"k8s.io/apimachinery/pkg/api/resource"

View File

@ -17,7 +17,7 @@ limitations under the License.
//TODO: consider making these methods functions, because we don't want helper
//functions in the k8s.io/api repo.
package api
package core
import "fmt"

View File

@ -17,7 +17,7 @@ limitations under the License.
//TODO: consider making these methods functions, because we don't want helper
//functions in the k8s.io/api repo.
package api
package core
// MatchToleration checks if the toleration matches tolerationToMatch. Tolerations are unique by <key,effect,operator,value>,
// if the two tolerations have the same <key,effect,operator,value> combination, they are regarded as matching.

View File

@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package api
package core
import (
"k8s.io/apimachinery/pkg/api/resource"
@ -343,14 +343,14 @@ type PersistentVolumeSource struct {
NFS *NFSVolumeSource
// RBD represents a Rados Block Device mount on the host that shares a pod's lifetime
// +optional
RBD *RBDVolumeSource
RBD *RBDPersistentVolumeSource
// Quobyte represents a Quobyte mount on the host that shares a pod's lifetime
// +optional
Quobyte *QuobyteVolumeSource
// ISCSIVolumeSource represents an ISCSI resource that is attached to a
// ISCSIPersistentVolumeSource represents an ISCSI resource that is attached to a
// kubelet's host machine and then exposed to the pod.
// +optional
ISCSI *ISCSIVolumeSource
ISCSI *ISCSIPersistentVolumeSource
// FlexVolume represents a generic volume resource that is
// provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.
// +optional
@ -383,7 +383,7 @@ type PersistentVolumeSource struct {
PortworxVolume *PortworxVolumeSource
// ScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
// +optional
ScaleIO *ScaleIOVolumeSource
ScaleIO *ScaleIOPersistentVolumeSource
// Local represents directly-attached storage with node affinity
// +optional
Local *LocalVolumeSource
@ -391,6 +391,9 @@ type PersistentVolumeSource struct {
// More info: https://releases.k8s.io/HEAD/examples/volumes/storageos/README.md
// +optional
StorageOS *StorageOSPersistentVolumeSource
// CSI (Container Storage Interface) represents storage that is handled by an external CSI driver
// +optional
CSI *CSIPersistentVolumeSource
}
type PersistentVolumeClaimVolumeSource struct {
@ -404,7 +407,7 @@ type PersistentVolumeClaimVolumeSource struct {
const (
// BetaStorageClassAnnotation represents the beta/previous StorageClass annotation.
// It's currently still used and will be held for backwards compatibility
// It's deprecated and will be removed in a future release. (#51440)
BetaStorageClassAnnotation = "volume.beta.kubernetes.io/storage-class"
// MountOptionAnnotation defines mount option annotation used in PVs
@ -459,6 +462,11 @@ type PersistentVolumeSpec struct {
// simply fail if one is invalid.
// +optional
MountOptions []string
// volumeMode defines if a volume is intended to be used with a formatted filesystem
// or to remain in raw block state. Value of Filesystem is implied when not included in spec.
// This is an alpha feature and may change in the future.
// +optional
VolumeMode *PersistentVolumeMode
}
// PersistentVolumeReclaimPolicy describes a policy for end-of-life maintenance of persistent volumes
@ -476,6 +484,16 @@ const (
PersistentVolumeReclaimRetain PersistentVolumeReclaimPolicy = "Retain"
)
// PersistentVolumeMode describes how a volume is intended to be consumed, either Block or Filesystem.
type PersistentVolumeMode string
const (
// PersistentVolumeBlock means the volume will not be formatted with a filesystem and will remain a raw block device.
PersistentVolumeBlock PersistentVolumeMode = "Block"
// PersistentVolumeFilesystem means the volume will be or is formatted with a filesystem.
PersistentVolumeFilesystem PersistentVolumeMode = "Filesystem"
)
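// examplePersistentVolumeBlockMode is an illustrative sketch, not part of the
// vendored source: a spec opts into raw block mode by pointing VolumeMode at
// PersistentVolumeBlock; leaving the field nil implies Filesystem. The function
// name and values are made up.
func examplePersistentVolumeBlockMode() PersistentVolumeSpec {
	blockMode := PersistentVolumeBlock
	return PersistentVolumeSpec{
		VolumeMode: &blockMode, // serve the volume as a raw block device
	}
}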
type PersistentVolumeStatus struct {
// Phase indicates if a volume is available, bound to a claim, or released by a claim
// +optional
@ -545,6 +563,11 @@ type PersistentVolumeClaimSpec struct {
// More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
// +optional
StorageClassName *string
// volumeMode defines what type of volume is required by the claim.
// Value of Filesystem is implied when not included in claim spec.
// This is an alpha feature and may change in the future.
// +optional
VolumeMode *PersistentVolumeMode
}
type PersistentVolumeClaimConditionType string
@ -770,6 +793,54 @@ type ISCSIVolumeSource struct {
InitiatorName *string
}
// ISCSIPersistentVolumeSource represents an ISCSI disk.
// ISCSI volumes can only be mounted as read/write once.
// ISCSI volumes support ownership management and SELinux relabeling.
type ISCSIPersistentVolumeSource struct {
// Required: iSCSI target portal
// the portal is either an IP or ip_addr:port if port is other than default (typically TCP ports 860 and 3260)
// +optional
TargetPortal string
// Required: target iSCSI Qualified Name
// +optional
IQN string
// Required: iSCSI target lun number
// +optional
Lun int32
// Optional: Defaults to 'default' (tcp). iSCSI interface name that uses an iSCSI transport.
// +optional
ISCSIInterface string
// Filesystem type to mount.
// Must be a filesystem type supported by the host operating system.
// Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// TODO: how do we prevent errors in the filesystem from compromising the machine
// +optional
FSType string
// Optional: Defaults to false (read/write). ReadOnly here will force
// the ReadOnly setting in VolumeMounts.
// +optional
ReadOnly bool
// Optional: list of iSCSI target portal IPs for high availability.
// the portal is either an IP or ip_addr:port if port is other than default (typically TCP ports 860 and 3260)
// +optional
Portals []string
// Optional: whether to support iSCSI Discovery CHAP authentication
// +optional
DiscoveryCHAPAuth bool
// Optional: whether to support iSCSI Session CHAP authentication
// +optional
SessionCHAPAuth bool
// Optional: CHAP secret for iSCSI target and initiator authentication.
// The secret is used if either DiscoveryCHAPAuth or SessionCHAPAuth is true
// +optional
SecretRef *SecretReference
// Optional: Custom initiator name per volume.
// If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface
// <target portal>:<volume name> will be created for the connection.
// +optional
InitiatorName *string
}
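// exampleISCSIPersistentVolumeSource is an illustrative sketch, not part of the
// vendored source: an admin-facing iSCSI source with session CHAP enabled. The
// portal, IQN and secret name are made up, and SecretReference is assumed to
// expose Name/Namespace fields as in the v1 API.
func exampleISCSIPersistentVolumeSource() ISCSIPersistentVolumeSource {
	return ISCSIPersistentVolumeSource{
		TargetPortal:    "10.0.0.1:3260",
		IQN:             "iqn.2017-11.io.example:pv",
		Lun:             0,
		SessionCHAPAuth: true,
		SecretRef:       &SecretReference{Name: "chap-secret", Namespace: "default"},
	}
}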
// Represents a Fibre Channel volume.
// Fibre Channel volumes can only be mounted as read/write once.
// Fibre Channel volumes support ownership management and SELinux relabeling.
@ -1006,6 +1077,37 @@ type RBDVolumeSource struct {
ReadOnly bool
}
// Represents a Rados Block Device mount that lasts the lifetime of a pod.
// RBD volumes support ownership management and SELinux relabeling.
type RBDPersistentVolumeSource struct {
// Required: CephMonitors is a collection of Ceph monitors
CephMonitors []string
// Required: RBDImage is the rados image name
RBDImage string
// Filesystem type to mount.
// Must be a filesystem type supported by the host operating system.
// Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// TODO: how do we prevent errors in the filesystem from compromising the machine
// +optional
FSType string
// Optional: RadosPool is the rados pool name, default is rbd
// +optional
RBDPool string
// Optional: RBDUser is the rados user name, default is admin
// +optional
RadosUser string
// Optional: Keyring is the path to key ring for RBDUser, default is /etc/ceph/keyring
// +optional
Keyring string
// Optional: SecretRef is reference to the authentication secret for User, default is empty.
// +optional
SecretRef *SecretReference
// Optional: Defaults to false (read/write). ReadOnly here will force
// the ReadOnly setting in VolumeMounts.
// +optional
ReadOnly bool
}
// Represents a cinder volume resource in Openstack. A Cinder volume
// must exist before mounting to a container. The volume must also be
// in the same region as the kubelet. Cinder volumes support ownership
@ -1254,13 +1356,13 @@ type ScaleIOVolumeSource struct {
// Flag to enable/disable SSL communication with Gateway, default false
// +optional
SSLEnabled bool
// The name of the Protection Domain for the configured storage (defaults to "default").
// The name of the ScaleIO Protection Domain for the configured storage.
// +optional
ProtectionDomain string
// The Storage Pool associated with the protection domain (defaults to "default").
// The ScaleIO Storage Pool associated with the protection domain.
// +optional
StoragePool string
// Indicates whether the storage for a volume should be thick or thin (defaults to "thin").
// Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.
// +optional
StorageMode string
// The name of a volume already created in the ScaleIO system
@ -1277,6 +1379,42 @@ type ScaleIOVolumeSource struct {
ReadOnly bool
}
// ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume that can be defined
// by an admin via a storage class, for instance.
type ScaleIOPersistentVolumeSource struct {
// The host address of the ScaleIO API Gateway.
Gateway string
// The name of the storage system as configured in ScaleIO.
System string
// SecretRef references the secret for the ScaleIO user and other
// sensitive information. If this is not provided, the Login operation will fail.
SecretRef *SecretReference
// Flag to enable/disable SSL communication with Gateway, default false
// +optional
SSLEnabled bool
// The name of the ScaleIO Protection Domain for the configured storage.
// +optional
ProtectionDomain string
// The ScaleIO Storage Pool associated with the protection domain.
// +optional
StoragePool string
// Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.
// +optional
StorageMode string
// The name of a volume created in the ScaleIO system
// that is associated with this volume source.
VolumeName string
// Filesystem type to mount.
// Must be a filesystem type supported by the host operating system.
// Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
// +optional
FSType string
// Defaults to false (read/write). ReadOnly here will force
// the ReadOnly setting in VolumeMounts.
// +optional
ReadOnly bool
}
// Represents a StorageOS persistent volume resource.
type StorageOSVolumeSource struct {
// VolumeName is the human-readable name of the StorageOS volume. Volume
@ -1436,6 +1574,23 @@ type LocalVolumeSource struct {
Path string
}
// Represents storage that is managed by an external CSI volume driver
type CSIPersistentVolumeSource struct {
// Driver is the name of the driver to use for this volume.
// Required.
Driver string
// VolumeHandle is the unique volume name returned by the CSI volume
// plugin's CreateVolume to refer to the volume on all subsequent calls.
// Required.
VolumeHandle string
// Optional: The value to pass to ControllerPublishVolumeRequest.
// Defaults to false (read/write).
// +optional
ReadOnly bool
}
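// exampleCSIPersistentVolumeSource is an illustrative sketch, not part of the
// vendored source: wiring a CSI-backed volume into a PersistentVolumeSource.
// The driver name is hypothetical, and the handle stands in for whatever the
// driver's CreateVolume call returned.
func exampleCSIPersistentVolumeSource() PersistentVolumeSource {
	return PersistentVolumeSource{
		CSI: &CSIPersistentVolumeSource{
			Driver:       "csi.example.com",
			VolumeHandle: "vol-0123456789",
		},
	}
}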
// ContainerPort represents a network port in a single container
type ContainerPort struct {
// Optional: If specified, this must be an IANA_SVC_NAME Each named port
@ -1463,7 +1618,9 @@ type VolumeMount struct {
// Optional: Defaults to false (read-write).
// +optional
ReadOnly bool
// Required. Must not contain ':'.
// Required. If the path is not an absolute path (e.g. some/path) it
// will be prepended with the appropriate root prefix for the operating
// system. On Linux this is '/', on Windows this is 'C:\'.
MountPath string
// Path within the volume from which the container's volume should be mounted.
// Defaults to "" (volume's root).
@ -1497,6 +1654,14 @@ const (
MountPropagationBidirectional MountPropagationMode = "Bidirectional"
)
// VolumeDevice describes a mapping of a raw block device within a container.
type VolumeDevice struct {
// name must match the name of a persistentVolumeClaim in the pod
Name string
// devicePath is the path inside of the container that the device will be mapped to.
DevicePath string
}
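// exampleBlockDeviceContainer is an illustrative sketch, not part of the
// vendored source: the Container type later in this file gains a matching
// VolumeDevices slice, so a raw block claim can be mapped to a device path
// instead of a filesystem mount. The names and path are made up.
func exampleBlockDeviceContainer() Container {
	return Container{
		Name:  "consumer",
		Image: "example/image:latest",
		VolumeDevices: []VolumeDevice{
			{Name: "raw-pvc", DevicePath: "/dev/xvda"}, // Name must match a claim used by the pod
		},
	}
}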
// EnvVar represents an environment variable present in a Container.
type EnvVar struct {
// Required: This must be a C_IDENTIFIER.
@ -1790,6 +1955,10 @@ type Container struct {
Resources ResourceRequirements
// +optional
VolumeMounts []VolumeMount
// volumeDevices is the list of block devices to be used by the container.
// This is an alpha feature and may change in the future.
// +optional
VolumeDevices []VolumeDevice
// +optional
LivenessProbe *Probe
// +optional
@ -2658,8 +2827,8 @@ type ReplicationControllerCondition struct {
}
// +genclient
// +genclient:method=GetScale,verb=get,subresource=scale,result=k8s.io/kubernetes/pkg/apis/extensions.Scale
// +genclient:method=UpdateScale,verb=update,subresource=scale,input=k8s.io/kubernetes/pkg/apis/extensions.Scale,result=k8s.io/kubernetes/pkg/apis/extensions.Scale
// +genclient:method=GetScale,verb=get,subresource=scale,result=k8s.io/kubernetes/pkg/apis/autoscaling.Scale
// +genclient:method=UpdateScale,verb=update,subresource=scale,input=k8s.io/kubernetes/pkg/apis/autoscaling.Scale,result=k8s.io/kubernetes/pkg/apis/autoscaling.Scale
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ReplicationController represents the configuration of a replication controller.
@ -2849,7 +3018,8 @@ type ServiceSpec struct {
// ExternalName is the external reference that kubedns or equivalent will
// return as a CNAME record for this service. No proxying will be involved.
// Must be a valid DNS name and requires Type to be ExternalName.
// Must be a valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123)
// and requires Type to be ExternalName.
ExternalName string
// ExternalIPs are used by external load balancers, or can be set by
@ -3508,6 +3678,10 @@ type DeleteOptions struct {
// Either this field or OrphanDependents may be set, but not both.
// The default policy is decided by the existing finalizer set in the
// metadata.finalizers and the resource-specific default policy.
// Acceptable values are: 'Orphan' - orphan the dependents; 'Background' -
// allow the garbage collector to delete the dependents in the background;
// 'Foreground' - a cascading policy that deletes all dependents in the
// foreground.
// +optional
PropagationPolicy *DeletionPropagation
}
@ -3897,6 +4071,13 @@ const (
ResourceLimitsEphemeralStorage ResourceName = "limits.ephemeral-storage"
)
// The following identify resource prefix for Kubernetes object types
const (
// HugePages request, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024)
// As burst is not supported for HugePages, only the request is quotaed and the limit is ignored.
ResourceRequestsHugePagesPrefix = "requests.hugepages-"
)
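// exampleHugePagesQuotaName is an illustrative sketch, not part of the vendored
// source: the prefix is combined with a page size to form the full quota
// resource name; "2Mi" is an example size.
func exampleHugePagesQuotaName() ResourceName {
	return ResourceName(ResourceRequestsHugePagesPrefix + "2Mi") // "requests.hugepages-2Mi"
}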
// A ResourceQuotaScope defines a filter that must match each object tracked by a quota
type ResourceQuotaScope string
@ -4283,5 +4464,5 @@ const (
// corresponding to every RequiredDuringScheduling affinity rule.
// When the --hard-pod-affinity-weight scheduler flag is not specified,
// DefaultHardPodAffinityWeight defines the weight of the implicit PreferredDuringScheduling affinity rule.
DefaultHardPodAffinitySymmetricWeight int = 1
DefaultHardPodAffinitySymmetricWeight int32 = 1
)

View File

@ -18,742 +18,15 @@ limitations under the License.
// This file was autogenerated by deepcopy-gen. Do not edit it manually!
package api
package core
import (
resource "k8s.io/apimachinery/pkg/api/resource"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
conversion "k8s.io/apimachinery/pkg/conversion"
runtime "k8s.io/apimachinery/pkg/runtime"
types "k8s.io/apimachinery/pkg/types"
reflect "reflect"
)
func init() {
SchemeBuilder.Register(RegisterDeepCopies)
}
// RegisterDeepCopies adds deep-copy functions to the given scheme. Public
// to allow building arbitrary schemes.
//
// Deprecated: deepcopy registration will go away when static deepcopy is fully implemented.
func RegisterDeepCopies(scheme *runtime.Scheme) error {
return scheme.AddGeneratedDeepCopyFuncs(
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*AWSElasticBlockStoreVolumeSource).DeepCopyInto(out.(*AWSElasticBlockStoreVolumeSource))
return nil
}, InType: reflect.TypeOf(&AWSElasticBlockStoreVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Affinity).DeepCopyInto(out.(*Affinity))
return nil
}, InType: reflect.TypeOf(&Affinity{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*AttachedVolume).DeepCopyInto(out.(*AttachedVolume))
return nil
}, InType: reflect.TypeOf(&AttachedVolume{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*AvoidPods).DeepCopyInto(out.(*AvoidPods))
return nil
}, InType: reflect.TypeOf(&AvoidPods{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*AzureDiskVolumeSource).DeepCopyInto(out.(*AzureDiskVolumeSource))
return nil
}, InType: reflect.TypeOf(&AzureDiskVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*AzureFilePersistentVolumeSource).DeepCopyInto(out.(*AzureFilePersistentVolumeSource))
return nil
}, InType: reflect.TypeOf(&AzureFilePersistentVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*AzureFileVolumeSource).DeepCopyInto(out.(*AzureFileVolumeSource))
return nil
}, InType: reflect.TypeOf(&AzureFileVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Binding).DeepCopyInto(out.(*Binding))
return nil
}, InType: reflect.TypeOf(&Binding{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Capabilities).DeepCopyInto(out.(*Capabilities))
return nil
}, InType: reflect.TypeOf(&Capabilities{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*CephFSPersistentVolumeSource).DeepCopyInto(out.(*CephFSPersistentVolumeSource))
return nil
}, InType: reflect.TypeOf(&CephFSPersistentVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*CephFSVolumeSource).DeepCopyInto(out.(*CephFSVolumeSource))
return nil
}, InType: reflect.TypeOf(&CephFSVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*CinderVolumeSource).DeepCopyInto(out.(*CinderVolumeSource))
return nil
}, InType: reflect.TypeOf(&CinderVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ClientIPConfig).DeepCopyInto(out.(*ClientIPConfig))
return nil
}, InType: reflect.TypeOf(&ClientIPConfig{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ComponentCondition).DeepCopyInto(out.(*ComponentCondition))
return nil
}, InType: reflect.TypeOf(&ComponentCondition{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ComponentStatus).DeepCopyInto(out.(*ComponentStatus))
return nil
}, InType: reflect.TypeOf(&ComponentStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ComponentStatusList).DeepCopyInto(out.(*ComponentStatusList))
return nil
}, InType: reflect.TypeOf(&ComponentStatusList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ConfigMap).DeepCopyInto(out.(*ConfigMap))
return nil
}, InType: reflect.TypeOf(&ConfigMap{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ConfigMapEnvSource).DeepCopyInto(out.(*ConfigMapEnvSource))
return nil
}, InType: reflect.TypeOf(&ConfigMapEnvSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ConfigMapKeySelector).DeepCopyInto(out.(*ConfigMapKeySelector))
return nil
}, InType: reflect.TypeOf(&ConfigMapKeySelector{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ConfigMapList).DeepCopyInto(out.(*ConfigMapList))
return nil
}, InType: reflect.TypeOf(&ConfigMapList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ConfigMapProjection).DeepCopyInto(out.(*ConfigMapProjection))
return nil
}, InType: reflect.TypeOf(&ConfigMapProjection{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ConfigMapVolumeSource).DeepCopyInto(out.(*ConfigMapVolumeSource))
return nil
}, InType: reflect.TypeOf(&ConfigMapVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Container).DeepCopyInto(out.(*Container))
return nil
}, InType: reflect.TypeOf(&Container{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ContainerImage).DeepCopyInto(out.(*ContainerImage))
return nil
}, InType: reflect.TypeOf(&ContainerImage{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ContainerPort).DeepCopyInto(out.(*ContainerPort))
return nil
}, InType: reflect.TypeOf(&ContainerPort{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ContainerState).DeepCopyInto(out.(*ContainerState))
return nil
}, InType: reflect.TypeOf(&ContainerState{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ContainerStateRunning).DeepCopyInto(out.(*ContainerStateRunning))
return nil
}, InType: reflect.TypeOf(&ContainerStateRunning{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ContainerStateTerminated).DeepCopyInto(out.(*ContainerStateTerminated))
return nil
}, InType: reflect.TypeOf(&ContainerStateTerminated{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ContainerStateWaiting).DeepCopyInto(out.(*ContainerStateWaiting))
return nil
}, InType: reflect.TypeOf(&ContainerStateWaiting{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ContainerStatus).DeepCopyInto(out.(*ContainerStatus))
return nil
}, InType: reflect.TypeOf(&ContainerStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*DaemonEndpoint).DeepCopyInto(out.(*DaemonEndpoint))
return nil
}, InType: reflect.TypeOf(&DaemonEndpoint{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*DeleteOptions).DeepCopyInto(out.(*DeleteOptions))
return nil
}, InType: reflect.TypeOf(&DeleteOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*DownwardAPIProjection).DeepCopyInto(out.(*DownwardAPIProjection))
return nil
}, InType: reflect.TypeOf(&DownwardAPIProjection{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*DownwardAPIVolumeFile).DeepCopyInto(out.(*DownwardAPIVolumeFile))
return nil
}, InType: reflect.TypeOf(&DownwardAPIVolumeFile{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*DownwardAPIVolumeSource).DeepCopyInto(out.(*DownwardAPIVolumeSource))
return nil
}, InType: reflect.TypeOf(&DownwardAPIVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EmptyDirVolumeSource).DeepCopyInto(out.(*EmptyDirVolumeSource))
return nil
}, InType: reflect.TypeOf(&EmptyDirVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EndpointAddress).DeepCopyInto(out.(*EndpointAddress))
return nil
}, InType: reflect.TypeOf(&EndpointAddress{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EndpointPort).DeepCopyInto(out.(*EndpointPort))
return nil
}, InType: reflect.TypeOf(&EndpointPort{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EndpointSubset).DeepCopyInto(out.(*EndpointSubset))
return nil
}, InType: reflect.TypeOf(&EndpointSubset{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Endpoints).DeepCopyInto(out.(*Endpoints))
return nil
}, InType: reflect.TypeOf(&Endpoints{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EndpointsList).DeepCopyInto(out.(*EndpointsList))
return nil
}, InType: reflect.TypeOf(&EndpointsList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EnvFromSource).DeepCopyInto(out.(*EnvFromSource))
return nil
}, InType: reflect.TypeOf(&EnvFromSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EnvVar).DeepCopyInto(out.(*EnvVar))
return nil
}, InType: reflect.TypeOf(&EnvVar{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EnvVarSource).DeepCopyInto(out.(*EnvVarSource))
return nil
}, InType: reflect.TypeOf(&EnvVarSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Event).DeepCopyInto(out.(*Event))
return nil
}, InType: reflect.TypeOf(&Event{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EventList).DeepCopyInto(out.(*EventList))
return nil
}, InType: reflect.TypeOf(&EventList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*EventSource).DeepCopyInto(out.(*EventSource))
return nil
}, InType: reflect.TypeOf(&EventSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ExecAction).DeepCopyInto(out.(*ExecAction))
return nil
}, InType: reflect.TypeOf(&ExecAction{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*FCVolumeSource).DeepCopyInto(out.(*FCVolumeSource))
return nil
}, InType: reflect.TypeOf(&FCVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*FlexVolumeSource).DeepCopyInto(out.(*FlexVolumeSource))
return nil
}, InType: reflect.TypeOf(&FlexVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*FlockerVolumeSource).DeepCopyInto(out.(*FlockerVolumeSource))
return nil
}, InType: reflect.TypeOf(&FlockerVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*GCEPersistentDiskVolumeSource).DeepCopyInto(out.(*GCEPersistentDiskVolumeSource))
return nil
}, InType: reflect.TypeOf(&GCEPersistentDiskVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*GitRepoVolumeSource).DeepCopyInto(out.(*GitRepoVolumeSource))
return nil
}, InType: reflect.TypeOf(&GitRepoVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*GlusterfsVolumeSource).DeepCopyInto(out.(*GlusterfsVolumeSource))
return nil
}, InType: reflect.TypeOf(&GlusterfsVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*HTTPGetAction).DeepCopyInto(out.(*HTTPGetAction))
return nil
}, InType: reflect.TypeOf(&HTTPGetAction{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*HTTPHeader).DeepCopyInto(out.(*HTTPHeader))
return nil
}, InType: reflect.TypeOf(&HTTPHeader{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Handler).DeepCopyInto(out.(*Handler))
return nil
}, InType: reflect.TypeOf(&Handler{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*HostAlias).DeepCopyInto(out.(*HostAlias))
return nil
}, InType: reflect.TypeOf(&HostAlias{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*HostPathVolumeSource).DeepCopyInto(out.(*HostPathVolumeSource))
return nil
}, InType: reflect.TypeOf(&HostPathVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ISCSIVolumeSource).DeepCopyInto(out.(*ISCSIVolumeSource))
return nil
}, InType: reflect.TypeOf(&ISCSIVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*KeyToPath).DeepCopyInto(out.(*KeyToPath))
return nil
}, InType: reflect.TypeOf(&KeyToPath{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Lifecycle).DeepCopyInto(out.(*Lifecycle))
return nil
}, InType: reflect.TypeOf(&Lifecycle{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*LimitRange).DeepCopyInto(out.(*LimitRange))
return nil
}, InType: reflect.TypeOf(&LimitRange{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*LimitRangeItem).DeepCopyInto(out.(*LimitRangeItem))
return nil
}, InType: reflect.TypeOf(&LimitRangeItem{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*LimitRangeList).DeepCopyInto(out.(*LimitRangeList))
return nil
}, InType: reflect.TypeOf(&LimitRangeList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*LimitRangeSpec).DeepCopyInto(out.(*LimitRangeSpec))
return nil
}, InType: reflect.TypeOf(&LimitRangeSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*List).DeepCopyInto(out.(*List))
return nil
}, InType: reflect.TypeOf(&List{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ListOptions).DeepCopyInto(out.(*ListOptions))
return nil
}, InType: reflect.TypeOf(&ListOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*LoadBalancerIngress).DeepCopyInto(out.(*LoadBalancerIngress))
return nil
}, InType: reflect.TypeOf(&LoadBalancerIngress{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*LoadBalancerStatus).DeepCopyInto(out.(*LoadBalancerStatus))
return nil
}, InType: reflect.TypeOf(&LoadBalancerStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*LocalObjectReference).DeepCopyInto(out.(*LocalObjectReference))
return nil
}, InType: reflect.TypeOf(&LocalObjectReference{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*LocalVolumeSource).DeepCopyInto(out.(*LocalVolumeSource))
return nil
}, InType: reflect.TypeOf(&LocalVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NFSVolumeSource).DeepCopyInto(out.(*NFSVolumeSource))
return nil
}, InType: reflect.TypeOf(&NFSVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Namespace).DeepCopyInto(out.(*Namespace))
return nil
}, InType: reflect.TypeOf(&Namespace{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NamespaceList).DeepCopyInto(out.(*NamespaceList))
return nil
}, InType: reflect.TypeOf(&NamespaceList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NamespaceSpec).DeepCopyInto(out.(*NamespaceSpec))
return nil
}, InType: reflect.TypeOf(&NamespaceSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NamespaceStatus).DeepCopyInto(out.(*NamespaceStatus))
return nil
}, InType: reflect.TypeOf(&NamespaceStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Node).DeepCopyInto(out.(*Node))
return nil
}, InType: reflect.TypeOf(&Node{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeAddress).DeepCopyInto(out.(*NodeAddress))
return nil
}, InType: reflect.TypeOf(&NodeAddress{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeAffinity).DeepCopyInto(out.(*NodeAffinity))
return nil
}, InType: reflect.TypeOf(&NodeAffinity{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeCondition).DeepCopyInto(out.(*NodeCondition))
return nil
}, InType: reflect.TypeOf(&NodeCondition{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeConfigSource).DeepCopyInto(out.(*NodeConfigSource))
return nil
}, InType: reflect.TypeOf(&NodeConfigSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeDaemonEndpoints).DeepCopyInto(out.(*NodeDaemonEndpoints))
return nil
}, InType: reflect.TypeOf(&NodeDaemonEndpoints{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeList).DeepCopyInto(out.(*NodeList))
return nil
}, InType: reflect.TypeOf(&NodeList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeProxyOptions).DeepCopyInto(out.(*NodeProxyOptions))
return nil
}, InType: reflect.TypeOf(&NodeProxyOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeResources).DeepCopyInto(out.(*NodeResources))
return nil
}, InType: reflect.TypeOf(&NodeResources{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeSelector).DeepCopyInto(out.(*NodeSelector))
return nil
}, InType: reflect.TypeOf(&NodeSelector{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeSelectorRequirement).DeepCopyInto(out.(*NodeSelectorRequirement))
return nil
}, InType: reflect.TypeOf(&NodeSelectorRequirement{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeSelectorTerm).DeepCopyInto(out.(*NodeSelectorTerm))
return nil
}, InType: reflect.TypeOf(&NodeSelectorTerm{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeSpec).DeepCopyInto(out.(*NodeSpec))
return nil
}, InType: reflect.TypeOf(&NodeSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeStatus).DeepCopyInto(out.(*NodeStatus))
return nil
}, InType: reflect.TypeOf(&NodeStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*NodeSystemInfo).DeepCopyInto(out.(*NodeSystemInfo))
return nil
}, InType: reflect.TypeOf(&NodeSystemInfo{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ObjectFieldSelector).DeepCopyInto(out.(*ObjectFieldSelector))
return nil
}, InType: reflect.TypeOf(&ObjectFieldSelector{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ObjectMeta).DeepCopyInto(out.(*ObjectMeta))
return nil
}, InType: reflect.TypeOf(&ObjectMeta{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ObjectReference).DeepCopyInto(out.(*ObjectReference))
return nil
}, InType: reflect.TypeOf(&ObjectReference{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolume).DeepCopyInto(out.(*PersistentVolume))
return nil
}, InType: reflect.TypeOf(&PersistentVolume{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeClaim).DeepCopyInto(out.(*PersistentVolumeClaim))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeClaim{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeClaimCondition).DeepCopyInto(out.(*PersistentVolumeClaimCondition))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeClaimCondition{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeClaimList).DeepCopyInto(out.(*PersistentVolumeClaimList))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeClaimList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeClaimSpec).DeepCopyInto(out.(*PersistentVolumeClaimSpec))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeClaimSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeClaimStatus).DeepCopyInto(out.(*PersistentVolumeClaimStatus))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeClaimStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeClaimVolumeSource).DeepCopyInto(out.(*PersistentVolumeClaimVolumeSource))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeClaimVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeList).DeepCopyInto(out.(*PersistentVolumeList))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeSource).DeepCopyInto(out.(*PersistentVolumeSource))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeSpec).DeepCopyInto(out.(*PersistentVolumeSpec))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PersistentVolumeStatus).DeepCopyInto(out.(*PersistentVolumeStatus))
return nil
}, InType: reflect.TypeOf(&PersistentVolumeStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PhotonPersistentDiskVolumeSource).DeepCopyInto(out.(*PhotonPersistentDiskVolumeSource))
return nil
}, InType: reflect.TypeOf(&PhotonPersistentDiskVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Pod).DeepCopyInto(out.(*Pod))
return nil
}, InType: reflect.TypeOf(&Pod{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodAffinity).DeepCopyInto(out.(*PodAffinity))
return nil
}, InType: reflect.TypeOf(&PodAffinity{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodAffinityTerm).DeepCopyInto(out.(*PodAffinityTerm))
return nil
}, InType: reflect.TypeOf(&PodAffinityTerm{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodAntiAffinity).DeepCopyInto(out.(*PodAntiAffinity))
return nil
}, InType: reflect.TypeOf(&PodAntiAffinity{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodAttachOptions).DeepCopyInto(out.(*PodAttachOptions))
return nil
}, InType: reflect.TypeOf(&PodAttachOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodCondition).DeepCopyInto(out.(*PodCondition))
return nil
}, InType: reflect.TypeOf(&PodCondition{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodExecOptions).DeepCopyInto(out.(*PodExecOptions))
return nil
}, InType: reflect.TypeOf(&PodExecOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodList).DeepCopyInto(out.(*PodList))
return nil
}, InType: reflect.TypeOf(&PodList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodLogOptions).DeepCopyInto(out.(*PodLogOptions))
return nil
}, InType: reflect.TypeOf(&PodLogOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodPortForwardOptions).DeepCopyInto(out.(*PodPortForwardOptions))
return nil
}, InType: reflect.TypeOf(&PodPortForwardOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodProxyOptions).DeepCopyInto(out.(*PodProxyOptions))
return nil
}, InType: reflect.TypeOf(&PodProxyOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodSecurityContext).DeepCopyInto(out.(*PodSecurityContext))
return nil
}, InType: reflect.TypeOf(&PodSecurityContext{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodSignature).DeepCopyInto(out.(*PodSignature))
return nil
}, InType: reflect.TypeOf(&PodSignature{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodSpec).DeepCopyInto(out.(*PodSpec))
return nil
}, InType: reflect.TypeOf(&PodSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodStatus).DeepCopyInto(out.(*PodStatus))
return nil
}, InType: reflect.TypeOf(&PodStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodStatusResult).DeepCopyInto(out.(*PodStatusResult))
return nil
}, InType: reflect.TypeOf(&PodStatusResult{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodTemplate).DeepCopyInto(out.(*PodTemplate))
return nil
}, InType: reflect.TypeOf(&PodTemplate{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodTemplateList).DeepCopyInto(out.(*PodTemplateList))
return nil
}, InType: reflect.TypeOf(&PodTemplateList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PodTemplateSpec).DeepCopyInto(out.(*PodTemplateSpec))
return nil
}, InType: reflect.TypeOf(&PodTemplateSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PortworxVolumeSource).DeepCopyInto(out.(*PortworxVolumeSource))
return nil
}, InType: reflect.TypeOf(&PortworxVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Preconditions).DeepCopyInto(out.(*Preconditions))
return nil
}, InType: reflect.TypeOf(&Preconditions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PreferAvoidPodsEntry).DeepCopyInto(out.(*PreferAvoidPodsEntry))
return nil
}, InType: reflect.TypeOf(&PreferAvoidPodsEntry{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*PreferredSchedulingTerm).DeepCopyInto(out.(*PreferredSchedulingTerm))
return nil
}, InType: reflect.TypeOf(&PreferredSchedulingTerm{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Probe).DeepCopyInto(out.(*Probe))
return nil
}, InType: reflect.TypeOf(&Probe{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ProjectedVolumeSource).DeepCopyInto(out.(*ProjectedVolumeSource))
return nil
}, InType: reflect.TypeOf(&ProjectedVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*QuobyteVolumeSource).DeepCopyInto(out.(*QuobyteVolumeSource))
return nil
}, InType: reflect.TypeOf(&QuobyteVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*RBDVolumeSource).DeepCopyInto(out.(*RBDVolumeSource))
return nil
}, InType: reflect.TypeOf(&RBDVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*RangeAllocation).DeepCopyInto(out.(*RangeAllocation))
return nil
}, InType: reflect.TypeOf(&RangeAllocation{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ReplicationController).DeepCopyInto(out.(*ReplicationController))
return nil
}, InType: reflect.TypeOf(&ReplicationController{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ReplicationControllerCondition).DeepCopyInto(out.(*ReplicationControllerCondition))
return nil
}, InType: reflect.TypeOf(&ReplicationControllerCondition{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ReplicationControllerList).DeepCopyInto(out.(*ReplicationControllerList))
return nil
}, InType: reflect.TypeOf(&ReplicationControllerList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ReplicationControllerSpec).DeepCopyInto(out.(*ReplicationControllerSpec))
return nil
}, InType: reflect.TypeOf(&ReplicationControllerSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ReplicationControllerStatus).DeepCopyInto(out.(*ReplicationControllerStatus))
return nil
}, InType: reflect.TypeOf(&ReplicationControllerStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ResourceFieldSelector).DeepCopyInto(out.(*ResourceFieldSelector))
return nil
}, InType: reflect.TypeOf(&ResourceFieldSelector{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ResourceQuota).DeepCopyInto(out.(*ResourceQuota))
return nil
}, InType: reflect.TypeOf(&ResourceQuota{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ResourceQuotaList).DeepCopyInto(out.(*ResourceQuotaList))
return nil
}, InType: reflect.TypeOf(&ResourceQuotaList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ResourceQuotaSpec).DeepCopyInto(out.(*ResourceQuotaSpec))
return nil
}, InType: reflect.TypeOf(&ResourceQuotaSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ResourceQuotaStatus).DeepCopyInto(out.(*ResourceQuotaStatus))
return nil
}, InType: reflect.TypeOf(&ResourceQuotaStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ResourceRequirements).DeepCopyInto(out.(*ResourceRequirements))
return nil
}, InType: reflect.TypeOf(&ResourceRequirements{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SELinuxOptions).DeepCopyInto(out.(*SELinuxOptions))
return nil
}, InType: reflect.TypeOf(&SELinuxOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ScaleIOVolumeSource).DeepCopyInto(out.(*ScaleIOVolumeSource))
return nil
}, InType: reflect.TypeOf(&ScaleIOVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Secret).DeepCopyInto(out.(*Secret))
return nil
}, InType: reflect.TypeOf(&Secret{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SecretEnvSource).DeepCopyInto(out.(*SecretEnvSource))
return nil
}, InType: reflect.TypeOf(&SecretEnvSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SecretKeySelector).DeepCopyInto(out.(*SecretKeySelector))
return nil
}, InType: reflect.TypeOf(&SecretKeySelector{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SecretList).DeepCopyInto(out.(*SecretList))
return nil
}, InType: reflect.TypeOf(&SecretList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SecretProjection).DeepCopyInto(out.(*SecretProjection))
return nil
}, InType: reflect.TypeOf(&SecretProjection{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SecretReference).DeepCopyInto(out.(*SecretReference))
return nil
}, InType: reflect.TypeOf(&SecretReference{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SecretVolumeSource).DeepCopyInto(out.(*SecretVolumeSource))
return nil
}, InType: reflect.TypeOf(&SecretVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SecurityContext).DeepCopyInto(out.(*SecurityContext))
return nil
}, InType: reflect.TypeOf(&SecurityContext{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SerializedReference).DeepCopyInto(out.(*SerializedReference))
return nil
}, InType: reflect.TypeOf(&SerializedReference{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Service).DeepCopyInto(out.(*Service))
return nil
}, InType: reflect.TypeOf(&Service{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ServiceAccount).DeepCopyInto(out.(*ServiceAccount))
return nil
}, InType: reflect.TypeOf(&ServiceAccount{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ServiceAccountList).DeepCopyInto(out.(*ServiceAccountList))
return nil
}, InType: reflect.TypeOf(&ServiceAccountList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ServiceList).DeepCopyInto(out.(*ServiceList))
return nil
}, InType: reflect.TypeOf(&ServiceList{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ServicePort).DeepCopyInto(out.(*ServicePort))
return nil
}, InType: reflect.TypeOf(&ServicePort{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ServiceProxyOptions).DeepCopyInto(out.(*ServiceProxyOptions))
return nil
}, InType: reflect.TypeOf(&ServiceProxyOptions{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ServiceSpec).DeepCopyInto(out.(*ServiceSpec))
return nil
}, InType: reflect.TypeOf(&ServiceSpec{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*ServiceStatus).DeepCopyInto(out.(*ServiceStatus))
return nil
}, InType: reflect.TypeOf(&ServiceStatus{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*SessionAffinityConfig).DeepCopyInto(out.(*SessionAffinityConfig))
return nil
}, InType: reflect.TypeOf(&SessionAffinityConfig{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*StorageOSPersistentVolumeSource).DeepCopyInto(out.(*StorageOSPersistentVolumeSource))
return nil
}, InType: reflect.TypeOf(&StorageOSPersistentVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*StorageOSVolumeSource).DeepCopyInto(out.(*StorageOSVolumeSource))
return nil
}, InType: reflect.TypeOf(&StorageOSVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Sysctl).DeepCopyInto(out.(*Sysctl))
return nil
}, InType: reflect.TypeOf(&Sysctl{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*TCPSocketAction).DeepCopyInto(out.(*TCPSocketAction))
return nil
}, InType: reflect.TypeOf(&TCPSocketAction{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Taint).DeepCopyInto(out.(*Taint))
return nil
}, InType: reflect.TypeOf(&Taint{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Toleration).DeepCopyInto(out.(*Toleration))
return nil
}, InType: reflect.TypeOf(&Toleration{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*Volume).DeepCopyInto(out.(*Volume))
return nil
}, InType: reflect.TypeOf(&Volume{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*VolumeMount).DeepCopyInto(out.(*VolumeMount))
return nil
}, InType: reflect.TypeOf(&VolumeMount{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*VolumeProjection).DeepCopyInto(out.(*VolumeProjection))
return nil
}, InType: reflect.TypeOf(&VolumeProjection{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*VolumeSource).DeepCopyInto(out.(*VolumeSource))
return nil
}, InType: reflect.TypeOf(&VolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*VsphereVirtualDiskVolumeSource).DeepCopyInto(out.(*VsphereVirtualDiskVolumeSource))
return nil
}, InType: reflect.TypeOf(&VsphereVirtualDiskVolumeSource{})},
conversion.GeneratedDeepCopyFunc{Fn: func(in interface{}, out interface{}, c *conversion.Cloner) error {
in.(*WeightedPodAffinityTerm).DeepCopyInto(out.(*WeightedPodAffinityTerm))
return nil
}, InType: reflect.TypeOf(&WeightedPodAffinityTerm{})},
)
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AWSElasticBlockStoreVolumeSource) DeepCopyInto(out *AWSElasticBlockStoreVolumeSource) {
*out = *in
@ -973,6 +246,22 @@ func (in *Binding) DeepCopyObject() runtime.Object {
}
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CSIPersistentVolumeSource) DeepCopyInto(out *CSIPersistentVolumeSource) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CSIPersistentVolumeSource.
func (in *CSIPersistentVolumeSource) DeepCopy() *CSIPersistentVolumeSource {
if in == nil {
return nil
}
out := new(CSIPersistentVolumeSource)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Capabilities) DeepCopyInto(out *Capabilities) {
*out = *in
@ -1417,6 +706,11 @@ func (in *Container) DeepCopyInto(out *Container) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.VolumeDevices != nil {
in, out := &in.VolumeDevices, &out.VolumeDevices
*out = make([]VolumeDevice, len(*in))
copy(*out, *in)
}
if in.LivenessProbe != nil {
in, out := &in.LivenessProbe, &out.LivenessProbe
if *in == nil {
@ -2440,6 +1734,45 @@ func (in *HostPathVolumeSource) DeepCopy() *HostPathVolumeSource {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ISCSIPersistentVolumeSource) DeepCopyInto(out *ISCSIPersistentVolumeSource) {
*out = *in
if in.Portals != nil {
in, out := &in.Portals, &out.Portals
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(SecretReference)
**out = **in
}
}
if in.InitiatorName != nil {
in, out := &in.InitiatorName, &out.InitiatorName
if *in == nil {
*out = nil
} else {
*out = new(string)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ISCSIPersistentVolumeSource.
func (in *ISCSIPersistentVolumeSource) DeepCopy() *ISCSIPersistentVolumeSource {
if in == nil {
return nil
}
out := new(ISCSIPersistentVolumeSource)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ISCSIVolumeSource) DeepCopyInto(out *ISCSIVolumeSource) {
*out = *in
@ -3596,6 +2929,15 @@ func (in *PersistentVolumeClaimSpec) DeepCopyInto(out *PersistentVolumeClaimSpec
**out = **in
}
}
if in.VolumeMode != nil {
in, out := &in.VolumeMode, &out.VolumeMode
if *in == nil {
*out = nil
} else {
*out = new(PersistentVolumeMode)
**out = **in
}
}
return
}
@ -3747,7 +3089,7 @@ func (in *PersistentVolumeSource) DeepCopyInto(out *PersistentVolumeSource) {
if *in == nil {
*out = nil
} else {
*out = new(RBDVolumeSource)
*out = new(RBDPersistentVolumeSource)
(*in).DeepCopyInto(*out)
}
}
@ -3765,7 +3107,7 @@ func (in *PersistentVolumeSource) DeepCopyInto(out *PersistentVolumeSource) {
if *in == nil {
*out = nil
} else {
*out = new(ISCSIVolumeSource)
*out = new(ISCSIPersistentVolumeSource)
(*in).DeepCopyInto(*out)
}
}
@ -3864,7 +3206,7 @@ func (in *PersistentVolumeSource) DeepCopyInto(out *PersistentVolumeSource) {
if *in == nil {
*out = nil
} else {
*out = new(ScaleIOVolumeSource)
*out = new(ScaleIOPersistentVolumeSource)
(*in).DeepCopyInto(*out)
}
}
@ -3886,6 +3228,15 @@ func (in *PersistentVolumeSource) DeepCopyInto(out *PersistentVolumeSource) {
(*in).DeepCopyInto(*out)
}
}
if in.CSI != nil {
in, out := &in.CSI, &out.CSI
if *in == nil {
*out = nil
} else {
*out = new(CSIPersistentVolumeSource)
**out = **in
}
}
return
}
@ -3929,6 +3280,15 @@ func (in *PersistentVolumeSpec) DeepCopyInto(out *PersistentVolumeSpec) {
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.VolumeMode != nil {
in, out := &in.VolumeMode, &out.VolumeMode
if *in == nil {
*out = nil
} else {
*out = new(PersistentVolumeMode)
**out = **in
}
}
return
}
@ -4815,6 +4175,36 @@ func (in *QuobyteVolumeSource) DeepCopy() *QuobyteVolumeSource {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RBDPersistentVolumeSource) DeepCopyInto(out *RBDPersistentVolumeSource) {
*out = *in
if in.CephMonitors != nil {
in, out := &in.CephMonitors, &out.CephMonitors
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(SecretReference)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RBDPersistentVolumeSource.
func (in *RBDPersistentVolumeSource) DeepCopy() *RBDPersistentVolumeSource {
if in == nil {
return nil
}
out := new(RBDPersistentVolumeSource)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RBDVolumeSource) DeepCopyInto(out *RBDVolumeSource) {
*out = *in
@ -5196,6 +4586,31 @@ func (in *SELinuxOptions) DeepCopy() *SELinuxOptions {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScaleIOPersistentVolumeSource) DeepCopyInto(out *ScaleIOPersistentVolumeSource) {
*out = *in
if in.SecretRef != nil {
in, out := &in.SecretRef, &out.SecretRef
if *in == nil {
*out = nil
} else {
*out = new(SecretReference)
**out = **in
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaleIOPersistentVolumeSource.
func (in *ScaleIOPersistentVolumeSource) DeepCopy() *ScaleIOPersistentVolumeSource {
if in == nil {
return nil
}
out := new(ScaleIOPersistentVolumeSource)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScaleIOVolumeSource) DeepCopyInto(out *ScaleIOVolumeSource) {
*out = *in
@ -5967,6 +5382,22 @@ func (in *Volume) DeepCopy() *Volume {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VolumeDevice) DeepCopyInto(out *VolumeDevice) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeDevice.
func (in *VolumeDevice) DeepCopy() *VolumeDevice {
if in == nil {
return nil
}
out := new(VolumeDevice)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VolumeMount) DeepCopyInto(out *VolumeMount) {
*out = *in


@ -25,3 +25,31 @@ const (
// NetworkReady means the runtime network is up and ready to accept containers which require network.
NetworkReady = "NetworkReady"
)
// LogStreamType is the type of the stream in CRI container log.
type LogStreamType string
const (
// Stdout is the stream type for stdout.
Stdout LogStreamType = "stdout"
// Stderr is the stream type for stderr.
Stderr LogStreamType = "stderr"
)
// LogTag is the tag of a log line in CRI container log.
// Currently defined log tags:
// * First tag: Partial/Full - P/F.
// The field in the container log format can be extended to include multiple
// tags by using a delimiter, but changes should be rare. If it becomes clear
// that better extensibility is desired, a more extensible format (e.g., json)
// should be adopted as a replacement and/or addition.
type LogTag string
const (
// LogTagPartial means the line is part of multiple lines.
LogTagPartial LogTag = "P"
// LogTagFull means the line is a single full line or the end of multiple lines.
LogTagFull LogTag = "F"
// LogTagDelimiter is the delimiter for different log tags.
LogTagDelimiter = ":"
)
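These constants describe the CRI container log line format, which per the comments above consists of a timestamp, the stream type, the log tag, and the content, separated by single spaces. A minimal parsing sketch under that assumption; parseCRILogLine and the sample line are illustrative and not part of this package:

// Sketch only: splits one CRI log line into its four fields.
// Assumed layout: "<RFC3339Nano timestamp> <stream> <tag> <content>".
package main

import (
	"fmt"
	"strings"
)

func parseCRILogLine(line string) (timestamp, stream, tag, content string, err error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return "", "", "", "", fmt.Errorf("malformed log line: %q", line)
	}
	return parts[0], parts[1], parts[2], parts[3], nil
}

func main() {
	line := "2017-11-17T06:00:43.000000000Z stdout P a partial line with no newline yet"
	ts, stream, tag, content, err := parseCRILogLine(line)
	if err != nil {
		panic(err)
	}
	// The first tag is LogTagPartial ("P") or LogTagFull ("F"); any future
	// extra tags would be joined with LogTagDelimiter (":").
	fmt.Println(ts, stream, tag, content)
}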


@ -28,7 +28,7 @@ import (
"k8s.io/apimachinery/pkg/util/httpstream"
"k8s.io/apimachinery/pkg/util/httpstream/spdy"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/kubernetes/pkg/api"
api "k8s.io/kubernetes/pkg/apis/core"
"github.com/golang/glog"
)


@ -32,7 +32,7 @@ import (
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apiserver/pkg/server/httplog"
"k8s.io/apiserver/pkg/util/wsstream"
"k8s.io/kubernetes/pkg/api"
api "k8s.io/kubernetes/pkg/apis/core"
)
const (


@ -32,7 +32,7 @@ import (
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apiserver/pkg/util/wsstream"
"k8s.io/client-go/tools/remotecommand"
"k8s.io/kubernetes/pkg/api"
api "k8s.io/kubernetes/pkg/apis/core"
"github.com/golang/glog"
)


@ -362,11 +362,11 @@ var _ remotecommandserver.Attacher = &criAdapter{}
var _ portforward.PortForwarder = &criAdapter{}
func (a *criAdapter) ExecInContainer(podName string, podUID types.UID, container string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error {
return a.Exec(container, cmd, in, out, err, tty, resize)
return a.Runtime.Exec(container, cmd, in, out, err, tty, resize)
}
func (a *criAdapter) AttachContainer(podName string, podUID types.UID, container string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error {
return a.Attach(container, in, out, err, tty, resize)
return a.Runtime.Attach(container, in, out, err, tty, resize)
}
func (a *criAdapter) PortForward(podName string, podUID types.UID, port int32, stream io.ReadWriteCloser) error {
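The two changed return statements in this hunk replace the promoted calls a.Exec and a.Attach with explicit selectors through the embedded field, a.Runtime.Exec and a.Runtime.Attach. In Go that kind of qualification becomes necessary when more than one embedded type supplies a method of the same name, because the promoted call is then ambiguous; whether that is the exact motivation here is an assumption, but the mechanism is illustrated by this standalone sketch with invented types:

// Sketch only: why an explicit embedded-field selector can be required.
package main

import "fmt"

type Runtime struct{}

func (Runtime) Exec(cmd string) { fmt.Println("runtime exec:", cmd) }

type Streamer struct{}

func (Streamer) Exec(cmd string) { fmt.Println("streamer exec:", cmd) }

type adapter struct {
	Runtime
	Streamer
}

func main() {
	a := adapter{}
	// a.Exec("id")      // would not compile: ambiguous selector a.Exec
	a.Runtime.Exec("id") // explicit qualification resolves the ambiguity
}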


@ -1,4 +1,4 @@
// +build freebsd linux
// +build freebsd linux darwin
/*
Copyright 2017 The Kubernetes Authors.


@ -1,4 +1,4 @@
// +build !freebsd,!linux,!windows
// +build !freebsd,!linux,!windows,!darwin
/*
Copyright 2017 The Kubernetes Authors.
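The two single-line hunks above add darwin to a pair of complementary build constraints, so the first file now also builds on macOS and its fallback counterpart is excluded there. With the old-style // +build syntax used here, space-separated terms are ORed, comma-separated terms are ANDed, and ! negates; a tiny illustrative file (package name invented) looks like this:

// +build freebsd linux darwin

// Sketch only: this file compiles on freebsd, linux, or darwin. Its sibling
// would carry the negated constraint !freebsd,!linux,!windows,!darwin and
// compile everywhere else.
package osfiles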

vendor/k8s.io/utils/README.md

@ -1,6 +1,6 @@
# Utils
[![Build Status]](https://travis-ci.org/kubernetes/utils)
[![Build Status]](https://travis-ci.org/kubernetes/utils) [![GoDoc](https://godoc.org/k8s.io/utils?status.svg)](https://godoc.org/k8s.io/utils)
A set of Go libraries that provide low-level,
kubernetes-independent packages supplementing the [Go
@ -40,6 +40,9 @@ an existing package to this repository.
to mock and replace in tests, especially with
the [FakeExec](exec/testing/fake_exec.go) struct.
- [Temp](/temp) provides an interface to create temporary directories. It also
provides a [FakeDir](temp/temptesting) implementation to replace in tests.
[Build Status]: https://travis-ci.org/kubernetes/utils.svg?branch=master
[Go standard libs]: https://golang.org/pkg/#stdlib
[api]: https://github.com/kubernetes/api
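The README hunk above also points at the exec package and its FakeExec test double. For orientation, a small sketch of how that wrapper is typically injected so tests can swap in a fake; the exact method names are assumed to mirror os/exec (Command, CombinedOutput) and are not guaranteed by this diff:

// Sketch only: depending on k8s.io/utils/exec instead of os/exec so the
// command runner can be replaced by exec/testing's FakeExec in unit tests.
package main

import (
	"fmt"

	utilexec "k8s.io/utils/exec"
)

func hostname(runner utilexec.Interface) (string, error) {
	out, err := runner.Command("hostname").CombinedOutput()
	if err != nil {
		return "", err
	}
	return string(out), nil
}

func main() {
	name, err := hostname(utilexec.New()) // real implementation in production
	if err != nil {
		panic(err)
	}
	fmt.Print(name)
}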