diff --git a/LICENSES/vendor/github.com/containernetworking/cni/LICENSE b/LICENSES/vendor/github.com/containernetworking/cni/LICENSE
deleted file mode 100644
index 3b1d199f53b..00000000000
--- a/LICENSES/vendor/github.com/containernetworking/cni/LICENSE
+++ /dev/null
@@ -1,206 +0,0 @@
-= vendor/github.com/containernetworking/cni licensed under: =
-
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "{}"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright {yyyy} {name of copyright owner}
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
-
-= vendor/github.com/containernetworking/cni/LICENSE fa818a259cbed7ce8bc2a22d35a464fc
diff --git a/LICENSES/vendor/github.com/docker/docker/LICENSE b/LICENSES/vendor/github.com/docker/docker/LICENSE
deleted file mode 100644
index 48c33574e4c..00000000000
--- a/LICENSES/vendor/github.com/docker/docker/LICENSE
+++ /dev/null
@@ -1,195 +0,0 @@
-= vendor/github.com/docker/docker licensed under: =
-
-
- Apache License
- Version 2.0, January 2004
- https://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- Copyright 2013-2018 Docker, Inc.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- https://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
-= vendor/github.com/docker/docker/LICENSE 4859e97a9c7780e77972d989f0823f28
diff --git a/LICENSES/vendor/github.com/docker/go-connections/LICENSE b/LICENSES/vendor/github.com/docker/go-connections/LICENSE
deleted file mode 100644
index 08061a0926b..00000000000
--- a/LICENSES/vendor/github.com/docker/go-connections/LICENSE
+++ /dev/null
@@ -1,195 +0,0 @@
-= vendor/github.com/docker/go-connections licensed under: =
-
-
- Apache License
- Version 2.0, January 2004
- https://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- Copyright 2015 Docker, Inc.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- https://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
-= vendor/github.com/docker/go-connections/LICENSE 04424bc6f5a5be60691b9824d65c2ad8
diff --git a/LICENSES/vendor/github.com/morikuni/aec/LICENSE b/LICENSES/vendor/github.com/morikuni/aec/LICENSE
deleted file mode 100644
index d710121aa78..00000000000
--- a/LICENSES/vendor/github.com/morikuni/aec/LICENSE
+++ /dev/null
@@ -1,25 +0,0 @@
-= vendor/github.com/morikuni/aec licensed under: =
-
-The MIT License (MIT)
-
-Copyright (c) 2016 Taihei Morikuni
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-
-= vendor/github.com/morikuni/aec/LICENSE 86852eb2df591157c788f3ba889c8aec
diff --git a/LICENSES/vendor/github.com/opencontainers/image-spec/LICENSE b/LICENSES/vendor/github.com/opencontainers/image-spec/LICENSE
deleted file mode 100644
index b4ccc319f02..00000000000
--- a/LICENSES/vendor/github.com/opencontainers/image-spec/LICENSE
+++ /dev/null
@@ -1,195 +0,0 @@
-= vendor/github.com/opencontainers/image-spec licensed under: =
-
-
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- Copyright 2016 The Linux Foundation.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
-= vendor/github.com/opencontainers/image-spec/LICENSE 27ef03aa2da6e424307f102e8b42621d
diff --git a/build/dependencies.yaml b/build/dependencies.yaml
index 882f3638b2c..8e4ed0dc977 100644
--- a/build/dependencies.yaml
+++ b/build/dependencies.yaml
@@ -172,8 +172,6 @@ dependencies:
match: defaultPodSandboxImageVersion\s+=
- path: hack/testdata/pod-with-precision.json
match: k8s.gcr.io\/pause:\d+\.\d+
- - path: pkg/kubelet/dockershim/docker_sandbox.go
- match: k8s.gcr.io\/pause:\d+\.\d+
- path: staging/src/k8s.io/kubectl/testdata/set/multi-resource-yaml.yaml
match: k8s.gcr.io\/pause:\d+\.\d+
- path: staging/src/k8s.io/kubectl/testdata/set/namespaced-resource.yaml
diff --git a/cmd/kubelet/app/options/globalflags_linux.go b/cmd/kubelet/app/options/globalflags_linux.go
index ad3b68628f6..e75e65ec37c 100644
--- a/cmd/kubelet/app/options/globalflags_linux.go
+++ b/cmd/kubelet/app/options/globalflags_linux.go
@@ -28,7 +28,6 @@ import (
// ensure libs have a chance to globally register their flags
_ "github.com/google/cadvisor/container/common"
_ "github.com/google/cadvisor/container/containerd"
- _ "github.com/google/cadvisor/container/docker"
_ "github.com/google/cadvisor/container/raw"
_ "github.com/google/cadvisor/machine"
_ "github.com/google/cadvisor/manager"
@@ -41,9 +40,6 @@ func addCadvisorFlags(fs *pflag.FlagSet) {
global := flag.CommandLine
local := pflag.NewFlagSet(os.Args[0], pflag.ExitOnError)

- // These flags were also implicit from cadvisor, but are actually used by something in the core repo:
- // TODO(mtaufen): This one is stil used by our salt, but for heaven's sake it's even deprecated in cadvisor
- register(global, local, "docker_root")
// e2e node tests rely on this
register(global, local, "housekeeping_interval")

@@ -54,13 +50,6 @@ func addCadvisorFlags(fs *pflag.FlagSet) {
registerDeprecated(global, local, "boot_id_file", deprecated)
registerDeprecated(global, local, "container_hints", deprecated)
registerDeprecated(global, local, "containerd", deprecated)
- registerDeprecated(global, local, "docker", deprecated)
- registerDeprecated(global, local, "docker_env_metadata_whitelist", deprecated)
- registerDeprecated(global, local, "docker_only", deprecated)
- registerDeprecated(global, local, "docker-tls", deprecated)
- registerDeprecated(global, local, "docker-tls-ca", deprecated)
- registerDeprecated(global, local, "docker-tls-cert", deprecated)
- registerDeprecated(global, local, "docker-tls-key", deprecated)
registerDeprecated(global, local, "enable_load_reader", deprecated)
registerDeprecated(global, local, "event_storage_age_limit", deprecated)
registerDeprecated(global, local, "event_storage_event_limit", deprecated)
diff --git a/go.mod b/go.mod
index a17878d68d0..7dccd35bb6b 100644
--- a/go.mod
+++ b/go.mod
@@ -25,15 +25,12 @@ require (
github.com/boltdb/bolt v1.3.1 // indirect
github.com/clusterhq/flocker-go v0.0.0-20160920122132-2b8b7259d313
github.com/container-storage-interface/spec v1.5.0
- github.com/containernetworking/cni v0.8.1
github.com/coredns/corefile-migration v1.0.14
github.com/coreos/go-oidc v2.1.0+incompatible
github.com/coreos/go-systemd/v22 v22.3.2
github.com/cpuguy83/go-md2man/v2 v2.0.0
github.com/davecgh/go-spew v1.1.1
github.com/docker/distribution v2.7.1+incompatible
- github.com/docker/docker v20.10.7+incompatible
- github.com/docker/go-connections v0.4.0
github.com/docker/go-units v0.4.0
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153
github.com/emicklei/go-restful v2.9.5+incompatible
@@ -63,7 +60,6 @@ require (
github.com/mvdan/xurls v1.1.0
github.com/onsi/ginkgo v1.14.0
github.com/onsi/gomega v1.10.1
- github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/runc v1.0.2
github.com/opencontainers/selinux v1.8.2
github.com/pkg/errors v0.9.1
@@ -209,7 +205,6 @@ replace (
github.com/containerd/go-runc => github.com/containerd/go-runc v1.0.0
github.com/containerd/ttrpc => github.com/containerd/ttrpc v1.0.2
github.com/containerd/typeurl => github.com/containerd/typeurl v1.0.2
- github.com/containernetworking/cni => github.com/containernetworking/cni v0.8.1
github.com/coredns/caddy => github.com/coredns/caddy v1.1.0
github.com/coredns/corefile-migration => github.com/coredns/corefile-migration v1.0.14
github.com/coreos/go-oidc => github.com/coreos/go-oidc v2.1.0+incompatible
diff --git a/go.sum b/go.sum
index b458fb06802..9121b2f4ac8 100644
--- a/go.sum
+++ b/go.sum
@@ -116,8 +116,6 @@ github.com/containerd/ttrpc v1.0.2 h1:2/O3oTZN36q2xRolk0a2WWGgh7/Vf/liElg5hFYLX9
github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
github.com/containerd/typeurl v1.0.2 h1:Chlt8zIieDbzQFzXzAeBEF92KhExuE4p9p92/QmY7aY=
github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s=
-github.com/containernetworking/cni v0.8.1 h1:7zpDnQ3T3s4ucOuJ/ZCLrYBxzkg0AELFfII3Epo9TmI=
-github.com/containernetworking/cni v0.8.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/coredns/caddy v1.1.0 h1:ezvsPrT/tA/7pYDBZxu0cT0VmWk75AfIaf6GSYCNMf0=
github.com/coredns/caddy v1.1.0/go.mod h1:A6ntJQlAWuQfFlsd9hvigKbo2WS0VUs2l1e2F+BawD4=
github.com/coredns/corefile-migration v1.0.14 h1:Tz3WZhoj2NdP8drrQH86NgnCng+VrPjNeg2Oe1ALKag=
@@ -353,7 +351,6 @@ github.com/mohae/deepcopy v0.0.0-20170603005431-491d3605edfb h1:e+l77LJOEqXTIQih
github.com/mohae/deepcopy v0.0.0-20170603005431-491d3605edfb/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4=
-github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/mrunalp/fileutils v0.5.0 h1:NKzVxiH7eSk+OQ4M+ZYW1K6h27RUV3MI6NUTsHhU6Z4=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
diff --git a/pkg/kubelet/cadvisor/cadvisor_linux_docker.go b/pkg/kubelet/cadvisor/cadvisor_linux_docker.go
deleted file mode 100644
index cdd975efffd..00000000000
--- a/pkg/kubelet/cadvisor/cadvisor_linux_docker.go
+++ /dev/null
@@ -1,26 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2020 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cadvisor
-
-import (
- // We only want to perform this docker specific cadvisor init when we are not
- // using the `dockerless` build tag.
- _ "github.com/google/cadvisor/container/docker/install"
-)
diff --git a/pkg/kubelet/dockershim/cm/container_manager.go b/pkg/kubelet/dockershim/cm/container_manager.go
deleted file mode 100644
index f2255cd9ec0..00000000000
--- a/pkg/kubelet/dockershim/cm/container_manager.go
+++ /dev/null
@@ -1,26 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cm
-
-// ContainerManager is an interface that abstracts the basic operations of a
-// container manager.
-type ContainerManager interface {
- Start() error
-}
diff --git a/pkg/kubelet/dockershim/cm/container_manager_linux.go b/pkg/kubelet/dockershim/cm/container_manager_linux.go
deleted file mode 100644
index 759e27f26c5..00000000000
--- a/pkg/kubelet/dockershim/cm/container_manager_linux.go
+++ /dev/null
@@ -1,158 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cm
-
-import (
- "fmt"
- "io/ioutil"
- "regexp"
- "strconv"
- "time"
-
- "github.com/opencontainers/runc/libcontainer/cgroups"
- cgroupfs "github.com/opencontainers/runc/libcontainer/cgroups/fs"
- "github.com/opencontainers/runc/libcontainer/configs"
- utilversion "k8s.io/apimachinery/pkg/util/version"
- "k8s.io/apimachinery/pkg/util/wait"
- "k8s.io/klog/v2"
- kubecm "k8s.io/kubernetes/pkg/kubelet/cm"
-
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-const (
- // The percent of the machine memory capacity. The value is used to calculate
- // docker memory resource container's hardlimit to workaround docker memory
- // leakage issue. Please see kubernetes/issues/9881 for more detail.
- dockerMemoryLimitThresholdPercent = 70
-
- // The minimum memory limit allocated to docker container: 150Mi
- minDockerMemoryLimit = 150 * 1024 * 1024
-
- // The OOM score adjustment for the docker process (i.e. the docker
- // daemon). Essentially, makes docker very unlikely to experience an oom
- // kill.
- dockerOOMScoreAdj = -999
-)
-
-var (
- memoryCapacityRegexp = regexp.MustCompile(`MemTotal:\s*([0-9]+) kB`)
-)
-
-// NewContainerManager creates a new instance of ContainerManager
-func NewContainerManager(cgroupsName string, client libdocker.Interface) ContainerManager {
- return &containerManager{
- cgroupsName: cgroupsName,
- client: client,
- }
-}
-
-type containerManager struct {
- // Docker client.
- client libdocker.Interface
- // Name of the cgroups.
- cgroupsName string
- // Manager for the cgroups.
- cgroupsManager cgroups.Manager
-}
-
-func (m *containerManager) Start() error {
- // TODO: check if the required cgroups are mounted.
- if len(m.cgroupsName) != 0 {
- manager, err := createCgroupManager(m.cgroupsName)
- if err != nil {
- return err
- }
- m.cgroupsManager = manager
- }
- go wait.Until(m.doWork, 5*time.Minute, wait.NeverStop)
- return nil
-}
-
-func (m *containerManager) doWork() {
- v, err := m.client.Version()
- if err != nil {
- klog.ErrorS(err, "Unable to get docker version")
- return
- }
- version, err := utilversion.ParseGeneric(v.APIVersion)
- if err != nil {
- klog.ErrorS(err, "Unable to parse docker version", "dockerVersion", v.APIVersion)
- return
- }
- // EnsureDockerInContainer does two things.
- // 1. Ensure processes run in the cgroups if m.cgroupsManager is not nil.
- // 2. Ensure processes have the OOM score applied.
- if err := kubecm.EnsureDockerInContainer(version, dockerOOMScoreAdj, m.cgroupsManager); err != nil {
- klog.ErrorS(err, "Unable to ensure the docker processes run in the desired containers")
- }
-}
-
-func createCgroupManager(name string) (cgroups.Manager, error) {
- var memoryLimit uint64
-
- memoryCapacity, err := getMemoryCapacity()
- if err != nil {
- klog.ErrorS(err, "Failed to get the memory capacity on machine")
- } else {
- memoryLimit = memoryCapacity * dockerMemoryLimitThresholdPercent / 100
- }
-
- if err != nil || memoryLimit < minDockerMemoryLimit {
- memoryLimit = minDockerMemoryLimit
- }
- klog.V(2).InfoS("Configure resource-only container with memory limit", "containerName", name, "memoryLimit", memoryLimit)
-
- cg := &configs.Cgroup{
- Parent: "/",
- Name: name,
- Resources: &configs.Resources{
- Memory: int64(memoryLimit),
- MemorySwap: -1,
- SkipDevices: true,
- },
- }
- return cgroupfs.NewManager(cg, nil, false), nil
-}
-
-// getMemoryCapacity returns the memory capacity on the machine in bytes.
-func getMemoryCapacity() (uint64, error) {
- out, err := ioutil.ReadFile("/proc/meminfo")
- if err != nil {
- return 0, err
- }
- return parseCapacity(out, memoryCapacityRegexp)
-}
-
-// parseCapacity matches a Regexp in a []byte, returning the resulting value in bytes.
-// Assumes that the value matched by the Regexp is in KB.
-func parseCapacity(b []byte, r *regexp.Regexp) (uint64, error) {
- matches := r.FindSubmatch(b)
- if len(matches) != 2 {
- return 0, fmt.Errorf("failed to match regexp in output: %q", string(b))
- }
- m, err := strconv.ParseUint(string(matches[1]), 10, 64)
- if err != nil {
- return 0, err
- }
-
- // Convert to bytes.
- return m * 1024, err
-}
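
For reference, a minimal standalone sketch (not part of the patch; the name computeDockerCgroupMemoryLimit is illustrative) of the memory-limit rule that the removed createCgroupManager applied: the docker resource-only container is capped at 70% of machine memory, never below the 150Mi floor, and the floor also applies when the capacity cannot be read from /proc/meminfo.

package main

import "fmt"

const (
	dockerMemoryLimitThresholdPercent = 70                // 70% of machine memory
	minDockerMemoryLimit              = 150 * 1024 * 1024 // 150Mi floor
)

// computeDockerCgroupMemoryLimit mirrors the removed createCgroupManager logic.
func computeDockerCgroupMemoryLimit(memoryCapacity uint64, capacityKnown bool) uint64 {
	var limit uint64
	if capacityKnown {
		limit = memoryCapacity * dockerMemoryLimitThresholdPercent / 100
	}
	if !capacityKnown || limit < minDockerMemoryLimit {
		limit = minDockerMemoryLimit
	}
	return limit
}

func main() {
	fmt.Println(computeDockerCgroupMemoryLimit(8<<30, true)) // ~5.6GiB on an 8GiB machine
	fmt.Println(computeDockerCgroupMemoryLimit(0, false))    // 157286400 (the 150Mi floor)
}
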
diff --git a/pkg/kubelet/dockershim/cm/container_manager_unsupported.go b/pkg/kubelet/dockershim/cm/container_manager_unsupported.go
deleted file mode 100644
index 6c9bd354446..00000000000
--- a/pkg/kubelet/dockershim/cm/container_manager_unsupported.go
+++ /dev/null
@@ -1,38 +0,0 @@
-//go:build !linux && !windows && !dockerless
-// +build !linux,!windows,!dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cm
-
-import (
- "fmt"
-
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-type unsupportedContainerManager struct {
-}
-
-// NewContainerManager creates a new instance of ContainerManager
-func NewContainerManager(_ string, _ libdocker.Interface) ContainerManager {
- return &unsupportedContainerManager{}
-}
-
-func (m *unsupportedContainerManager) Start() error {
- return fmt.Errorf("Container Manager is unsupported in this build")
-}
diff --git a/pkg/kubelet/dockershim/cm/container_manager_windows.go b/pkg/kubelet/dockershim/cm/container_manager_windows.go
deleted file mode 100644
index 135c20c15d6..00000000000
--- a/pkg/kubelet/dockershim/cm/container_manager_windows.go
+++ /dev/null
@@ -1,37 +0,0 @@
-//go:build windows && !dockerless
-// +build windows,!dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cm
-
-import (
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-// no-op
-type containerManager struct {
-}
-
-// NewContainerManager creates a new instance of ContainerManager
-func NewContainerManager(_ string, _ libdocker.Interface) ContainerManager {
- return &containerManager{}
-}
-
-func (m *containerManager) Start() error {
- return nil
-}
diff --git a/pkg/kubelet/dockershim/convert.go b/pkg/kubelet/dockershim/convert.go
deleted file mode 100644
index dcbe9d07333..00000000000
--- a/pkg/kubelet/dockershim/convert.go
+++ /dev/null
@@ -1,181 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
- "strings"
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-// This file contains helper functions to convert docker API types to runtime
-// API types, or vice versa.
-
-func imageToRuntimeAPIImage(image *dockertypes.ImageSummary) (*runtimeapi.Image, error) {
- if image == nil {
- return nil, fmt.Errorf("unable to convert a nil pointer to a runtime API image")
- }
-
- size := uint64(image.VirtualSize)
- return &runtimeapi.Image{
- Id: image.ID,
- RepoTags: image.RepoTags,
- RepoDigests: image.RepoDigests,
- Size_: size,
- }, nil
-}
-
-func imageInspectToRuntimeAPIImage(image *dockertypes.ImageInspect) (*runtimeapi.Image, error) {
- if image == nil || image.Config == nil {
- return nil, fmt.Errorf("unable to convert a nil pointer to a runtime API image")
- }
-
- size := uint64(image.VirtualSize)
- runtimeImage := &runtimeapi.Image{
- Id: image.ID,
- RepoTags: image.RepoTags,
- RepoDigests: image.RepoDigests,
- Size_: size,
- }
-
- uid, username := getUserFromImageUser(image.Config.User)
- if uid != nil {
- runtimeImage.Uid = &runtimeapi.Int64Value{Value: *uid}
- }
- runtimeImage.Username = username
- return runtimeImage, nil
-}
-
-func toPullableImageID(id string, image *dockertypes.ImageInspect) string {
- // Default to the image ID, but if RepoDigests is not empty, use
- // the first digest instead.
- imageID := DockerImageIDPrefix + id
- if image != nil && len(image.RepoDigests) > 0 {
- imageID = DockerPullableImageIDPrefix + image.RepoDigests[0]
- }
- return imageID
-}
-
-func toRuntimeAPIContainer(c *dockertypes.Container) (*runtimeapi.Container, error) {
- state := toRuntimeAPIContainerState(c.Status)
- if len(c.Names) == 0 {
- return nil, fmt.Errorf("unexpected empty container name: %+v", c)
- }
- metadata, err := parseContainerName(c.Names[0])
- if err != nil {
- return nil, err
- }
- labels, annotations := extractLabels(c.Labels)
- sandboxID := c.Labels[sandboxIDLabelKey]
- // The timestamp in dockertypes.Container is in seconds.
- createdAt := c.Created * int64(time.Second)
- return &runtimeapi.Container{
- Id: c.ID,
- PodSandboxId: sandboxID,
- Metadata: metadata,
- Image: &runtimeapi.ImageSpec{Image: c.Image},
- ImageRef: c.ImageID,
- State: state,
- CreatedAt: createdAt,
- Labels: labels,
- Annotations: annotations,
- }, nil
-}
-
-func toDockerContainerStatus(state runtimeapi.ContainerState) string {
- switch state {
- case runtimeapi.ContainerState_CONTAINER_CREATED:
- return "created"
- case runtimeapi.ContainerState_CONTAINER_RUNNING:
- return "running"
- case runtimeapi.ContainerState_CONTAINER_EXITED:
- return "exited"
- case runtimeapi.ContainerState_CONTAINER_UNKNOWN:
- fallthrough
- default:
- return "unknown"
- }
-}
-
-func toRuntimeAPIContainerState(state string) runtimeapi.ContainerState {
- // Parse the state string in dockertypes.Container. This could break when
- // we upgrade docker.
- switch {
- case strings.HasPrefix(state, libdocker.StatusRunningPrefix):
- return runtimeapi.ContainerState_CONTAINER_RUNNING
- case strings.HasPrefix(state, libdocker.StatusExitedPrefix):
- return runtimeapi.ContainerState_CONTAINER_EXITED
- case strings.HasPrefix(state, libdocker.StatusCreatedPrefix):
- return runtimeapi.ContainerState_CONTAINER_CREATED
- default:
- return runtimeapi.ContainerState_CONTAINER_UNKNOWN
- }
-}
-
-func toRuntimeAPISandboxState(state string) runtimeapi.PodSandboxState {
- // Parse the state string in dockertypes.Container. This could break when
- // we upgrade docker.
- switch {
- case strings.HasPrefix(state, libdocker.StatusRunningPrefix):
- return runtimeapi.PodSandboxState_SANDBOX_READY
- default:
- return runtimeapi.PodSandboxState_SANDBOX_NOTREADY
- }
-}
-
-func containerToRuntimeAPISandbox(c *dockertypes.Container) (*runtimeapi.PodSandbox, error) {
- state := toRuntimeAPISandboxState(c.Status)
- if len(c.Names) == 0 {
- return nil, fmt.Errorf("unexpected empty sandbox name: %+v", c)
- }
- metadata, err := parseSandboxName(c.Names[0])
- if err != nil {
- return nil, err
- }
- labels, annotations := extractLabels(c.Labels)
- // The timestamp in dockertypes.Container is in seconds.
- createdAt := c.Created * int64(time.Second)
- return &runtimeapi.PodSandbox{
- Id: c.ID,
- Metadata: metadata,
- State: state,
- CreatedAt: createdAt,
- Labels: labels,
- Annotations: annotations,
- }, nil
-}
-
-func checkpointToRuntimeAPISandbox(id string, checkpoint ContainerCheckpoint) *runtimeapi.PodSandbox {
- state := runtimeapi.PodSandboxState_SANDBOX_NOTREADY
- _, name, namespace, _, _ := checkpoint.GetData()
- return &runtimeapi.PodSandbox{
- Id: id,
- Metadata: &runtimeapi.PodSandboxMetadata{
- Name: name,
- Namespace: namespace,
- },
- State: state,
- }
-}
diff --git a/pkg/kubelet/dockershim/convert_test.go b/pkg/kubelet/dockershim/convert_test.go
deleted file mode 100644
index f663fe0122d..00000000000
--- a/pkg/kubelet/dockershim/convert_test.go
+++ /dev/null
@@ -1,74 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "testing"
-
- dockertypes "github.com/docker/docker/api/types"
- "github.com/stretchr/testify/assert"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-func TestConvertDockerStatusToRuntimeAPIState(t *testing.T) {
- testCases := []struct {
- input string
- expected runtimeapi.ContainerState
- }{
- {input: "Up 5 hours", expected: runtimeapi.ContainerState_CONTAINER_RUNNING},
- {input: "Exited (0) 2 hours ago", expected: runtimeapi.ContainerState_CONTAINER_EXITED},
- {input: "Created", expected: runtimeapi.ContainerState_CONTAINER_CREATED},
- {input: "Random string", expected: runtimeapi.ContainerState_CONTAINER_UNKNOWN},
- }
-
- for _, test := range testCases {
- actual := toRuntimeAPIContainerState(test.input)
- assert.Equal(t, test.expected, actual)
- }
-}
-
-func TestConvertToPullableImageID(t *testing.T) {
- testCases := []struct {
- id string
- image *dockertypes.ImageInspect
- expected string
- }{
- {
- id: "image-1",
- image: &dockertypes.ImageInspect{
- RepoDigests: []string{"digest-1"},
- },
- expected: DockerPullableImageIDPrefix + "digest-1",
- },
- {
- id: "image-2",
- image: &dockertypes.ImageInspect{
- RepoDigests: []string{},
- },
- expected: DockerImageIDPrefix + "image-2",
- },
- }
-
- for _, test := range testCases {
- actual := toPullableImageID(test.id, test.image)
- assert.Equal(t, test.expected, actual)
- }
-}
diff --git a/pkg/kubelet/dockershim/doc.go b/pkg/kubelet/dockershim/doc.go
deleted file mode 100644
index e630364b2a8..00000000000
--- a/pkg/kubelet/dockershim/doc.go
+++ /dev/null
@@ -1,22 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-// Package dockershim implements the Container Runtime Interface (CRI) integration
-// for Docker, using k8s.io/cri-api/pkg/apis/runtime/v1/api.pb.go
-package dockershim
diff --git a/pkg/kubelet/dockershim/docker_checkpoint.go b/pkg/kubelet/dockershim/docker_checkpoint.go
deleted file mode 100644
index 37ac73050be..00000000000
--- a/pkg/kubelet/dockershim/docker_checkpoint.go
+++ /dev/null
@@ -1,109 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "encoding/json"
-
- "k8s.io/kubernetes/pkg/kubelet/checkpointmanager"
- "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/checksum"
-)
-
-const (
- // default directory to store pod sandbox checkpoint files
- sandboxCheckpointDir = "sandbox"
- protocolTCP = Protocol("tcp")
- protocolUDP = Protocol("udp")
- protocolSCTP = Protocol("sctp")
- schemaVersion = "v1"
-)
-
-// ContainerCheckpoint provides the interface for accessing a container's checkpoint data
-type ContainerCheckpoint interface {
- checkpointmanager.Checkpoint
- GetData() (string, string, string, []*PortMapping, bool)
-}
-
-// Protocol is the type of port mapping protocol
-type Protocol string
-
-// PortMapping is the port mapping configurations of a sandbox.
-type PortMapping struct {
- // Protocol of the port mapping.
- Protocol *Protocol `json:"protocol,omitempty"`
- // Port number within the container.
- ContainerPort *int32 `json:"container_port,omitempty"`
- // Port number on the host.
- HostPort *int32 `json:"host_port,omitempty"`
- // Host ip to expose.
- HostIP string `json:"host_ip,omitempty"`
-}
-
-// CheckpointData contains all types of data that can be stored in the checkpoint.
-type CheckpointData struct {
- PortMappings []*PortMapping `json:"port_mappings,omitempty"`
- HostNetwork bool `json:"host_network,omitempty"`
-}
-
-// PodSandboxCheckpoint is the checkpoint structure for a sandbox
-type PodSandboxCheckpoint struct {
- // Version of the pod sandbox checkpoint schema.
- Version string `json:"version"`
- // Pod name of the sandbox. Same as the pod name in the Pod ObjectMeta.
- Name string `json:"name"`
- // Pod namespace of the sandbox. Same as the pod namespace in the Pod ObjectMeta.
- Namespace string `json:"namespace"`
- // Data to checkpoint for pod sandbox.
- Data *CheckpointData `json:"data,omitempty"`
- // Checksum is the fnv hash of the checkpoint object, computed with the checksum field set to zero
- Checksum checksum.Checksum `json:"checksum"`
-}
-
-// NewPodSandboxCheckpoint inits a PodSandboxCheckpoint with the given args
-func NewPodSandboxCheckpoint(namespace, name string, data *CheckpointData) ContainerCheckpoint {
- return &PodSandboxCheckpoint{
- Version: schemaVersion,
- Namespace: namespace,
- Name: name,
- Data: data,
- }
-}
-
-// MarshalCheckpoint encodes the PodSandboxCheckpoint instance to a json object
-func (cp *PodSandboxCheckpoint) MarshalCheckpoint() ([]byte, error) {
- cp.Checksum = checksum.New(*cp.Data)
- return json.Marshal(*cp)
-}
-
-// UnmarshalCheckpoint decodes the blob data to the PodSandboxCheckpoint instance
-func (cp *PodSandboxCheckpoint) UnmarshalCheckpoint(blob []byte) error {
- return json.Unmarshal(blob, cp)
-}
-
-// VerifyChecksum verifies whether the PodSandboxCheckpoint's data checksum is
-// the same as the calculated checksum
-func (cp *PodSandboxCheckpoint) VerifyChecksum() error {
- return cp.Checksum.Verify(*cp.Data)
-}
-
-// GetData returns the PodSandboxCheckpoint's version, name, namespace, port mappings, and host-network setting
-func (cp *PodSandboxCheckpoint) GetData() (string, string, string, []*PortMapping, bool) {
- return cp.Version, cp.Name, cp.Namespace, cp.Data.PortMappings, cp.Data.HostNetwork
-}
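
A brief usage sketch of the checkpoint round trip defined above (the helper name exampleCheckpointRoundTrip is hypothetical, and the snippet assumes it lives alongside the pre-removal dockershim package): MarshalCheckpoint stamps the fnv checksum of Data before encoding, and VerifyChecksum recomputes it after decoding.

package dockershim

// exampleCheckpointRoundTrip shows the intended lifecycle: build a checkpoint,
// marshal it (which computes the checksum of Data), unmarshal it elsewhere,
// and verify the checksum before trusting its contents.
func exampleCheckpointRoundTrip() error {
	proto := protocolTCP
	containerPort, hostPort := int32(80), int32(8080)
	data := &CheckpointData{
		PortMappings: []*PortMapping{{
			Protocol:      &proto,
			ContainerPort: &containerPort,
			HostPort:      &hostPort,
		}},
		HostNetwork: false,
	}
	cp := NewPodSandboxCheckpoint("default", "nginx", data)

	blob, err := cp.MarshalCheckpoint() // checksum is stamped here
	if err != nil {
		return err
	}

	restored := NewPodSandboxCheckpoint("", "", &CheckpointData{})
	if err := restored.UnmarshalCheckpoint(blob); err != nil {
		return err
	}
	return restored.VerifyChecksum() // recomputes the checksum over Data
}
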
diff --git a/pkg/kubelet/dockershim/docker_checkpoint_test.go b/pkg/kubelet/dockershim/docker_checkpoint_test.go
deleted file mode 100644
index 0649e78b0d3..00000000000
--- a/pkg/kubelet/dockershim/docker_checkpoint_test.go
+++ /dev/null
@@ -1,36 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "testing"
-
- "github.com/stretchr/testify/assert"
-)
-
-func TestPodSandboxCheckpoint(t *testing.T) {
- data := &CheckpointData{HostNetwork: true}
- checkpoint := NewPodSandboxCheckpoint("ns1", "sandbox1", data)
- version, name, namespace, _, hostNetwork := checkpoint.GetData()
- assert.Equal(t, schemaVersion, version)
- assert.Equal(t, "ns1", namespace)
- assert.Equal(t, "sandbox1", name)
- assert.Equal(t, true, hostNetwork)
-}
diff --git a/pkg/kubelet/dockershim/docker_container.go b/pkg/kubelet/dockershim/docker_container.go
deleted file mode 100644
index 8cc686d9885..00000000000
--- a/pkg/kubelet/dockershim/docker_container.go
+++ /dev/null
@@ -1,508 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "fmt"
- "os"
- "path/filepath"
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockerfilters "github.com/docker/docker/api/types/filters"
- dockerstrslice "github.com/docker/docker/api/types/strslice"
- "k8s.io/klog/v2"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-// ListContainers lists all containers matching the filter.
-func (ds *dockerService) ListContainers(_ context.Context, r *runtimeapi.ListContainersRequest) (*runtimeapi.ListContainersResponse, error) {
- filter := r.GetFilter()
- opts := dockertypes.ContainerListOptions{All: true}
-
- opts.Filters = dockerfilters.NewArgs()
- f := newDockerFilter(&opts.Filters)
- // Add filter to get *only* (non-sandbox) containers.
- f.AddLabel(containerTypeLabelKey, containerTypeLabelContainer)
-
- if filter != nil {
- if filter.Id != "" {
- f.Add("id", filter.Id)
- }
- if filter.State != nil {
- f.Add("status", toDockerContainerStatus(filter.GetState().State))
- }
- if filter.PodSandboxId != "" {
- f.AddLabel(sandboxIDLabelKey, filter.PodSandboxId)
- }
-
- if filter.LabelSelector != nil {
- for k, v := range filter.LabelSelector {
- f.AddLabel(k, v)
- }
- }
- }
- containers, err := ds.client.ListContainers(opts)
- if err != nil {
- return nil, err
- }
- // Convert docker to runtime api containers.
- result := []*runtimeapi.Container{}
- for i := range containers {
- c := containers[i]
-
- converted, err := toRuntimeAPIContainer(&c)
- if err != nil {
- klog.V(4).InfoS("Unable to convert docker to runtime API container", "err", err)
- continue
- }
-
- result = append(result, converted)
- }
-
- return &runtimeapi.ListContainersResponse{Containers: result}, nil
-}
-
-func (ds *dockerService) getContainerCleanupInfo(containerID string) (*containerCleanupInfo, bool) {
- ds.cleanupInfosLock.RLock()
- defer ds.cleanupInfosLock.RUnlock()
- info, ok := ds.containerCleanupInfos[containerID]
- return info, ok
-}
-
-func (ds *dockerService) setContainerCleanupInfo(containerID string, info *containerCleanupInfo) {
- ds.cleanupInfosLock.Lock()
- defer ds.cleanupInfosLock.Unlock()
- ds.containerCleanupInfos[containerID] = info
-}
-
-func (ds *dockerService) clearContainerCleanupInfo(containerID string) {
- ds.cleanupInfosLock.Lock()
- defer ds.cleanupInfosLock.Unlock()
- delete(ds.containerCleanupInfos, containerID)
-}
-
-// CreateContainer creates a new container in the given PodSandbox.
-// Docker cannot store the log at an arbitrary location (yet), so we create a
-// symlink at LogPath, linking to the actual path of the log.
-// TODO: check if the default values returned by the runtime API are ok.
-func (ds *dockerService) CreateContainer(_ context.Context, r *runtimeapi.CreateContainerRequest) (*runtimeapi.CreateContainerResponse, error) {
- podSandboxID := r.PodSandboxId
- config := r.GetConfig()
- sandboxConfig := r.GetSandboxConfig()
-
- if config == nil {
- return nil, fmt.Errorf("container config is nil")
- }
- if sandboxConfig == nil {
- return nil, fmt.Errorf("sandbox config is nil for container %q", config.Metadata.Name)
- }
-
- labels := makeLabels(config.GetLabels(), config.GetAnnotations())
- // Apply the container type label.
- labels[containerTypeLabelKey] = containerTypeLabelContainer
- // Write the container log path in the labels.
- labels[containerLogPathLabelKey] = filepath.Join(sandboxConfig.LogDirectory, config.LogPath)
- // Write the sandbox ID in the labels.
- labels[sandboxIDLabelKey] = podSandboxID
-
- apiVersion, err := ds.getDockerAPIVersion()
- if err != nil {
- return nil, fmt.Errorf("unable to get the docker API version: %v", err)
- }
-
- image := ""
- if iSpec := config.GetImage(); iSpec != nil {
- image = iSpec.Image
- }
- containerName := makeContainerName(sandboxConfig, config)
- createConfig := dockertypes.ContainerCreateConfig{
- Name: containerName,
- Config: &dockercontainer.Config{
- // TODO: set User.
- Entrypoint: dockerstrslice.StrSlice(config.Command),
- Cmd: dockerstrslice.StrSlice(config.Args),
- Env: generateEnvList(config.GetEnvs()),
- Image: image,
- WorkingDir: config.WorkingDir,
- Labels: labels,
- // Interactive containers:
- OpenStdin: config.Stdin,
- StdinOnce: config.StdinOnce,
- Tty: config.Tty,
- // Disable Docker's health check until we officially support it
- // (https://github.com/kubernetes/kubernetes/issues/25829).
- Healthcheck: &dockercontainer.HealthConfig{
- Test: []string{"NONE"},
- },
- },
- HostConfig: &dockercontainer.HostConfig{
- Binds: generateMountBindings(config.GetMounts()),
- RestartPolicy: dockercontainer.RestartPolicy{
- Name: "no",
- },
- },
- }
-
- hc := createConfig.HostConfig
- err = ds.updateCreateConfig(&createConfig, config, sandboxConfig, podSandboxID, securityOptSeparator, apiVersion)
- if err != nil {
- return nil, fmt.Errorf("failed to update container create config: %v", err)
- }
- // Set devices for container.
- devices := make([]dockercontainer.DeviceMapping, len(config.Devices))
- for i, device := range config.Devices {
- devices[i] = dockercontainer.DeviceMapping{
- PathOnHost: device.HostPath,
- PathInContainer: device.ContainerPath,
- CgroupPermissions: device.Permissions,
- }
- }
- hc.Resources.Devices = devices
-
- //nolint:staticcheck // SA1019 backwards compatibility
- securityOpts, err := ds.getSecurityOpts(config.GetLinux().GetSecurityContext().GetSeccompProfilePath(), securityOptSeparator)
- if err != nil {
- return nil, fmt.Errorf("failed to generate security options for container %q: %v", config.Metadata.Name, err)
- }
-
- hc.SecurityOpt = append(hc.SecurityOpt, securityOpts...)
-
- cleanupInfo, err := ds.applyPlatformSpecificDockerConfig(r, &createConfig)
- if err != nil {
- return nil, err
- }
-
- createResp, createErr := ds.client.CreateContainer(createConfig)
- if createErr != nil {
- createResp, createErr = recoverFromCreationConflictIfNeeded(ds.client, createConfig, createErr)
- }
-
- if createResp != nil {
- containerID := createResp.ID
-
- if cleanupInfo != nil {
- // we don't perform the cleanup just yet, as that could destroy information
- // needed for the container to start (e.g. Windows credentials stored in
- // registry keys); instead, we'll clean up when the container gets removed
- ds.setContainerCleanupInfo(containerID, cleanupInfo)
- }
- return &runtimeapi.CreateContainerResponse{ContainerId: containerID}, nil
- }
-
- // the creation failed, so clean up right away; we ignore any errors though,
- // as this is best effort
- ds.performPlatformSpecificContainerCleanupAndLogErrors(containerName, cleanupInfo)
-
- return nil, createErr
-}
-
-// getContainerLogPath returns the container log path specified by kubelet and the real
-// path where docker stores the container log.
-func (ds *dockerService) getContainerLogPath(containerID string) (string, string, error) {
- info, err := ds.client.InspectContainer(containerID)
- if err != nil {
- return "", "", fmt.Errorf("failed to inspect container %q: %v", containerID, err)
- }
- return info.Config.Labels[containerLogPathLabelKey], info.LogPath, nil
-}
-
-// createContainerLogSymlink creates the symlink for docker container log.
-func (ds *dockerService) createContainerLogSymlink(containerID string) error {
- path, realPath, err := ds.getContainerLogPath(containerID)
- if err != nil {
- return fmt.Errorf("failed to get container %q log path: %v", containerID, err)
- }
-
- if path == "" {
- klog.V(5).InfoS("Container log path isn't specified, will not create the symlink", "containerID", containerID)
- return nil
- }
-
- if realPath != "" {
- // Only create the symlink when container log path is specified and log file exists.
- // Delete possibly existing file first
- if err = ds.os.Remove(path); err == nil {
- klog.InfoS("Deleted previously existing symlink file", "path", path)
- }
- if err = ds.os.Symlink(realPath, path); err != nil {
- return fmt.Errorf("failed to create symbolic link %q to the container log file %q for container %q: %v",
- path, realPath, containerID, err)
- }
- } else {
- supported, err := ds.IsCRISupportedLogDriver()
- if err != nil {
- klog.InfoS("Failed to check supported logging driver by CRI", "err", err)
- return nil
- }
-
- if supported {
- klog.InfoS("Cannot create symbolic link because container log file doesn't exist!")
- } else {
- klog.V(5).InfoS("Unsupported logging driver by CRI")
- }
- }
-
- return nil
-}
-
-// removeContainerLogSymlink removes the symlink for docker container log.
-func (ds *dockerService) removeContainerLogSymlink(containerID string) error {
- path, _, err := ds.getContainerLogPath(containerID)
- if err != nil {
- return fmt.Errorf("failed to get container %q log path: %v", containerID, err)
- }
- if path != "" {
- // Only remove the symlink when container log path is specified.
- err := ds.os.Remove(path)
- if err != nil && !os.IsNotExist(err) {
- return fmt.Errorf("failed to remove container %q log symlink %q: %v", containerID, path, err)
- }
- }
- return nil
-}
-
-// StartContainer starts the container.
-func (ds *dockerService) StartContainer(_ context.Context, r *runtimeapi.StartContainerRequest) (*runtimeapi.StartContainerResponse, error) {
- err := ds.client.StartContainer(r.ContainerId)
-
- // Create container log symlink for all containers (including failed ones).
- if linkError := ds.createContainerLogSymlink(r.ContainerId); linkError != nil {
- // Do not stop the container if we failed to create the symlink because:
- // 1. This is not a critical failure.
- // 2. We don't have enough information to properly stop the container here.
- // Kubelet will surface this error to the user via an event.
- return nil, linkError
- }
-
- if err != nil {
- err = transformStartContainerError(err)
- return nil, fmt.Errorf("failed to start container %q: %v", r.ContainerId, err)
- }
-
- return &runtimeapi.StartContainerResponse{}, nil
-}
-
-// StopContainer stops a running container with a grace period (i.e., timeout).
-func (ds *dockerService) StopContainer(_ context.Context, r *runtimeapi.StopContainerRequest) (*runtimeapi.StopContainerResponse, error) {
- err := ds.client.StopContainer(r.ContainerId, time.Duration(r.Timeout)*time.Second)
- if err != nil {
- return nil, err
- }
- return &runtimeapi.StopContainerResponse{}, nil
-}
-
-// RemoveContainer removes the container.
-func (ds *dockerService) RemoveContainer(_ context.Context, r *runtimeapi.RemoveContainerRequest) (*runtimeapi.RemoveContainerResponse, error) {
- // Ideally, log lifecycle should be independent of container lifecycle.
- // However, docker removes the container log once the container is removed,
- // and we can't prevent that for now, so we also clean up the symlink here.
- err := ds.removeContainerLogSymlink(r.ContainerId)
- if err != nil {
- return nil, err
- }
- errors := ds.performPlatformSpecificContainerForContainer(r.ContainerId)
- if len(errors) != 0 {
- return nil, fmt.Errorf("failed to run platform-specific clean ups for container %q: %v", r.ContainerId, errors)
- }
- err = ds.client.RemoveContainer(r.ContainerId, dockertypes.ContainerRemoveOptions{RemoveVolumes: true, Force: true})
- if err != nil {
- return nil, fmt.Errorf("failed to remove container %q: %v", r.ContainerId, err)
- }
-
- return &runtimeapi.RemoveContainerResponse{}, nil
-}
-
-func getContainerTimestamps(r *dockertypes.ContainerJSON) (time.Time, time.Time, time.Time, error) {
- var createdAt, startedAt, finishedAt time.Time
- var err error
-
- createdAt, err = libdocker.ParseDockerTimestamp(r.Created)
- if err != nil {
- return createdAt, startedAt, finishedAt, err
- }
- startedAt, err = libdocker.ParseDockerTimestamp(r.State.StartedAt)
- if err != nil {
- return createdAt, startedAt, finishedAt, err
- }
- finishedAt, err = libdocker.ParseDockerTimestamp(r.State.FinishedAt)
- if err != nil {
- return createdAt, startedAt, finishedAt, err
- }
- return createdAt, startedAt, finishedAt, nil
-}
-
-// ContainerStatus inspects the docker container and returns the status.
-func (ds *dockerService) ContainerStatus(_ context.Context, req *runtimeapi.ContainerStatusRequest) (*runtimeapi.ContainerStatusResponse, error) {
- containerID := req.ContainerId
- r, err := ds.client.InspectContainer(containerID)
- if err != nil {
- return nil, err
- }
-
- // Parse the timestamps.
- createdAt, startedAt, finishedAt, err := getContainerTimestamps(r)
- if err != nil {
- return nil, fmt.Errorf("failed to parse timestamp for container %q: %v", containerID, err)
- }
-
- // Convert the image id to a pullable id.
- ir, err := ds.client.InspectImageByID(r.Image)
- if err != nil {
- if !libdocker.IsImageNotFoundError(err) {
- return nil, fmt.Errorf("unable to inspect docker image %q while inspecting docker container %q: %v", r.Image, containerID, err)
- }
- klog.InfoS("Ignore error image not found while inspecting docker container", "containerID", containerID, "image", r.Image, "err", err)
- }
- imageID := toPullableImageID(r.Image, ir)
-
- // Convert the mounts.
- mounts := make([]*runtimeapi.Mount, 0, len(r.Mounts))
- for i := range r.Mounts {
- m := r.Mounts[i]
- readonly := !m.RW
- mounts = append(mounts, &runtimeapi.Mount{
- HostPath: m.Source,
- ContainerPath: m.Destination,
- Readonly: readonly,
- // Note: Can't set SeLinuxRelabel
- })
- }
- // Interpret container states and convert time to unix timestamps.
- var state runtimeapi.ContainerState
- var reason, message string
- ct, st, ft := createdAt.UnixNano(), int64(0), int64(0)
- if r.State.Running {
- // Container is running.
- state = runtimeapi.ContainerState_CONTAINER_RUNNING
- // If the container is not in the exited state, do not set the finished timestamp
- st = startedAt.UnixNano()
- } else {
- // Container is *not* running. We need to get more details.
- // * Case 1: container has run and exited with non-zero finishedAt
- // time.
- // * Case 2: container has failed to start; it has a zero finishedAt
- // time, but a non-zero exit code.
- // * Case 3: container has been created, but not started (yet).
- if !finishedAt.IsZero() { // Case 1
- state = runtimeapi.ContainerState_CONTAINER_EXITED
- st, ft = startedAt.UnixNano(), finishedAt.UnixNano()
- switch {
- case r.State.OOMKilled:
- // TODO: consider exposing OOMKilled via the runtimeAPI.
- // Note: if an application handles OOMKilled gracefully, the
- // exit code could be zero.
- reason = "OOMKilled"
- case r.State.ExitCode == 0:
- reason = "Completed"
- default:
- reason = "Error"
- }
- } else if r.State.ExitCode != 0 { // Case 2
- state = runtimeapi.ContainerState_CONTAINER_EXITED
- // Adjust the finished and started timestamps to the createdAt time to avoid
- // confusion.
- st, ft = createdAt.UnixNano(), createdAt.UnixNano()
- reason = "ContainerCannotRun"
- } else { // Case 3
- state = runtimeapi.ContainerState_CONTAINER_CREATED
- }
- message = r.State.Error
- }
- exitCode := int32(r.State.ExitCode)
-
- metadata, err := parseContainerName(r.Name)
- if err != nil {
- return nil, err
- }
-
- labels, annotations := extractLabels(r.Config.Labels)
- imageName := r.Config.Image
- if ir != nil && len(ir.RepoTags) > 0 {
- imageName = ir.RepoTags[0]
- }
- status := &runtimeapi.ContainerStatus{
- Id: r.ID,
- Metadata: metadata,
- Image: &runtimeapi.ImageSpec{Image: imageName},
- ImageRef: imageID,
- Mounts: mounts,
- ExitCode: exitCode,
- State: state,
- CreatedAt: ct,
- StartedAt: st,
- FinishedAt: ft,
- Reason: reason,
- Message: message,
- Labels: labels,
- Annotations: annotations,
- LogPath: r.Config.Labels[containerLogPathLabelKey],
- }
- return &runtimeapi.ContainerStatusResponse{Status: status}, nil
-}
-
-func (ds *dockerService) UpdateContainerResources(_ context.Context, r *runtimeapi.UpdateContainerResourcesRequest) (*runtimeapi.UpdateContainerResourcesResponse, error) {
- resources := r.Linux
- updateConfig := dockercontainer.UpdateConfig{
- Resources: dockercontainer.Resources{
- CPUPeriod: resources.CpuPeriod,
- CPUQuota: resources.CpuQuota,
- CPUShares: resources.CpuShares,
- Memory: resources.MemoryLimitInBytes,
- CpusetCpus: resources.CpusetCpus,
- CpusetMems: resources.CpusetMems,
- },
- }
-
- err := ds.client.UpdateContainerResources(r.ContainerId, updateConfig)
- if err != nil {
- return nil, fmt.Errorf("failed to update container %q: %v", r.ContainerId, err)
- }
- return &runtimeapi.UpdateContainerResourcesResponse{}, nil
-}
-
-func (ds *dockerService) performPlatformSpecificContainerForContainer(containerID string) (errors []error) {
- if cleanupInfo, present := ds.getContainerCleanupInfo(containerID); present {
- errors = ds.performPlatformSpecificContainerCleanupAndLogErrors(containerID, cleanupInfo)
-
- if len(errors) == 0 {
- ds.clearContainerCleanupInfo(containerID)
- }
- }
-
- return
-}
-
-func (ds *dockerService) performPlatformSpecificContainerCleanupAndLogErrors(containerNameOrID string, cleanupInfo *containerCleanupInfo) []error {
- if cleanupInfo == nil {
- return nil
- }
-
- errors := ds.performPlatformSpecificContainerCleanup(cleanupInfo)
- for _, err := range errors {
- klog.InfoS("Error when cleaning up after container", "containerNameOrID", containerNameOrID, "err", err)
- }
-
- return errors
-}
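
As a side note, the exited-state interpretation in ContainerStatus above (Cases 1 through 3) can be restated as a small standalone helper. This is only an illustrative sketch using plain inputs instead of the docker inspect result; exitReason is not a real dockershim function.

package main

import (
	"fmt"
	"time"
)

// exitReason mirrors the case analysis in the removed ContainerStatus:
//   - a non-zero finish time means the container ran and exited
//     (OOMKilled / Completed / Error),
//   - a zero finish time with a non-zero exit code means it never started
//     (ContainerCannotRun),
//   - otherwise the container was only created.
func exitReason(finishedAt time.Time, exitCode int32, oomKilled bool) string {
	switch {
	case !finishedAt.IsZero() && oomKilled:
		return "OOMKilled"
	case !finishedAt.IsZero() && exitCode == 0:
		return "Completed"
	case !finishedAt.IsZero():
		return "Error"
	case exitCode != 0:
		return "ContainerCannotRun"
	default:
		return "" // still in the created state
	}
}

func main() {
	fmt.Println(exitReason(time.Now(), 0, false))  // Completed
	fmt.Println(exitReason(time.Time{}, 1, false)) // ContainerCannotRun
}
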
diff --git a/pkg/kubelet/dockershim/docker_container_test.go b/pkg/kubelet/dockershim/docker_container_test.go
deleted file mode 100644
index 071864cbe1b..00000000000
--- a/pkg/kubelet/dockershim/docker_container_test.go
+++ /dev/null
@@ -1,370 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "fmt"
- "path/filepath"
- "strings"
- "sync"
- "testing"
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- containertest "k8s.io/kubernetes/pkg/kubelet/container/testing"
-)
-
-const (
- sandboxID = "sandboxid"
- containerID = "containerid"
-)
-
-// A helper to create a basic config.
-func makeContainerConfig(sConfig *runtimeapi.PodSandboxConfig, name, image string, attempt uint32, labels, annotations map[string]string) *runtimeapi.ContainerConfig {
- return &runtimeapi.ContainerConfig{
- Metadata: &runtimeapi.ContainerMetadata{
- Name: name,
- Attempt: attempt,
- },
- Image: &runtimeapi.ImageSpec{Image: image},
- Labels: labels,
- Annotations: annotations,
- }
-}
-
-func getTestCTX() context.Context {
- return context.Background()
-}
-
-// TestConcurrentlyCreateAndDeleteContainers is a regression test for #93771, which ensures
-// kubelet would not panic on concurrent writes to `dockerService.containerCleanupInfos`.
-func TestConcurrentlyCreateAndDeleteContainers(t *testing.T) {
- ds, _, _ := newTestDockerService()
- podName, namespace := "foo", "bar"
- containerName, image := "sidecar", "logger"
-
- const count = 20
- configs := make([]*runtimeapi.ContainerConfig, 0, count)
- sConfigs := make([]*runtimeapi.PodSandboxConfig, 0, count)
- for i := 0; i < count; i++ {
- s := makeSandboxConfig(fmt.Sprintf("%s%d", podName, i),
- fmt.Sprintf("%s%d", namespace, i), fmt.Sprintf("%d", i), 0)
- labels := map[string]string{"concurrent-test": fmt.Sprintf("label%d", i)}
- c := makeContainerConfig(s, fmt.Sprintf("%s%d", containerName, i),
- fmt.Sprintf("%s:v%d", image, i), uint32(i), labels, nil)
- sConfigs = append(sConfigs, s)
- configs = append(configs, c)
- }
-
- containerIDs := make(chan string, len(configs)) // make the channel non-blocking to simulate concurrent container creation
-
- var (
- creationWg sync.WaitGroup
- deletionWg sync.WaitGroup
- )
-
- creationWg.Add(len(configs))
-
- go func() {
- creationWg.Wait()
- close(containerIDs)
- }()
- for i := range configs {
- go func(i int) {
- defer creationWg.Done()
- // We don't care about the sandbox id; pass a bogus one.
- sandboxID := fmt.Sprintf("sandboxid%d", i)
- req := &runtimeapi.CreateContainerRequest{PodSandboxId: sandboxID, Config: configs[i], SandboxConfig: sConfigs[i]}
- createResp, err := ds.CreateContainer(getTestCTX(), req)
- if err != nil {
- t.Errorf("CreateContainer: %v", err)
- return
- }
- containerIDs <- createResp.ContainerId
- }(i)
- }
-
- for containerID := range containerIDs {
- deletionWg.Add(1)
- go func(id string) {
- defer deletionWg.Done()
- _, err := ds.RemoveContainer(getTestCTX(), &runtimeapi.RemoveContainerRequest{ContainerId: id})
- if err != nil {
- t.Errorf("RemoveContainer: %v", err)
- }
- }(containerID)
- }
- deletionWg.Wait()
-}
-
-// TestListContainers creates several containers and then lists them to check
-// whether the correct metadata, states, and labels are returned.
-func TestListContainers(t *testing.T) {
- ds, _, fakeClock := newTestDockerService()
- podName, namespace := "foo", "bar"
- containerName, image := "sidecar", "logger"
-
- configs := []*runtimeapi.ContainerConfig{}
- sConfigs := []*runtimeapi.PodSandboxConfig{}
- for i := 0; i < 3; i++ {
- s := makeSandboxConfig(fmt.Sprintf("%s%d", podName, i),
- fmt.Sprintf("%s%d", namespace, i), fmt.Sprintf("%d", i), 0)
- labels := map[string]string{"abc.xyz": fmt.Sprintf("label%d", i)}
- annotations := map[string]string{"foo.bar.baz": fmt.Sprintf("annotation%d", i)}
- c := makeContainerConfig(s, fmt.Sprintf("%s%d", containerName, i),
- fmt.Sprintf("%s:v%d", image, i), uint32(i), labels, annotations)
- sConfigs = append(sConfigs, s)
- configs = append(configs, c)
- }
-
- expected := []*runtimeapi.Container{}
- state := runtimeapi.ContainerState_CONTAINER_RUNNING
- var createdAt int64 = fakeClock.Now().UnixNano()
- for i := range configs {
- // We don't care about the sandbox id; pass a bogus one.
- sandboxID := fmt.Sprintf("sandboxid%d", i)
- req := &runtimeapi.CreateContainerRequest{PodSandboxId: sandboxID, Config: configs[i], SandboxConfig: sConfigs[i]}
- createResp, err := ds.CreateContainer(getTestCTX(), req)
- require.NoError(t, err)
- id := createResp.ContainerId
- _, err = ds.StartContainer(getTestCTX(), &runtimeapi.StartContainerRequest{ContainerId: id})
- require.NoError(t, err)
-
- imageRef := "" // FakeDockerClient doesn't populate ImageRef yet.
- // Prepend to the expected list because ListContainers returns
- // the most recent containers first.
- expected = append([]*runtimeapi.Container{{
- Metadata: configs[i].Metadata,
- Id: id,
- PodSandboxId: sandboxID,
- State: state,
- CreatedAt: createdAt,
- Image: configs[i].Image,
- ImageRef: imageRef,
- Labels: configs[i].Labels,
- Annotations: configs[i].Annotations,
- }}, expected...)
- }
- listResp, err := ds.ListContainers(getTestCTX(), &runtimeapi.ListContainersRequest{})
- require.NoError(t, err)
- assert.Len(t, listResp.Containers, len(expected))
- assert.Equal(t, expected, listResp.Containers)
-}
-
-// TestContainerStatus tests the basic lifecycle operations and verifies that
-// the status returned reflects the operations performed.
-func TestContainerStatus(t *testing.T) {
- ds, fDocker, fClock := newTestDockerService()
- sConfig := makeSandboxConfig("foo", "bar", "1", 0)
- labels := map[string]string{"abc.xyz": "foo"}
- annotations := map[string]string{"foo.bar.baz": "abc"}
- imageName := "iamimage"
- config := makeContainerConfig(sConfig, "pause", imageName, 0, labels, annotations)
-
- state := runtimeapi.ContainerState_CONTAINER_CREATED
- imageRef := DockerImageIDPrefix + imageName
- // The following variables are not set in FakeDockerClient.
- exitCode := int32(0)
- var reason, message string
-
- expected := &runtimeapi.ContainerStatus{
- State: state,
- Metadata: config.Metadata,
- Image: config.Image,
- ImageRef: imageRef,
- ExitCode: exitCode,
- Reason: reason,
- Message: message,
- Mounts: []*runtimeapi.Mount{},
- Labels: config.Labels,
- Annotations: config.Annotations,
- }
-
- fDocker.InjectImages([]dockertypes.ImageSummary{{ID: imageName}})
-
- // Create the container.
- fClock.SetTime(time.Now().Add(-1 * time.Hour))
- expected.CreatedAt = fClock.Now().UnixNano()
-
- req := &runtimeapi.CreateContainerRequest{PodSandboxId: sandboxID, Config: config, SandboxConfig: sConfig}
- createResp, err := ds.CreateContainer(getTestCTX(), req)
- require.NoError(t, err)
- id := createResp.ContainerId
-
- // Check internal labels
- c, err := fDocker.InspectContainer(id)
- require.NoError(t, err)
- assert.Equal(t, c.Config.Labels[containerTypeLabelKey], containerTypeLabelContainer)
- assert.Equal(t, c.Config.Labels[sandboxIDLabelKey], sandboxID)
-
- // Set the id manually since we don't know the id until it's created.
- expected.Id = id
- assert.NoError(t, err)
- resp, err := ds.ContainerStatus(getTestCTX(), &runtimeapi.ContainerStatusRequest{ContainerId: id})
- require.NoError(t, err)
- assert.Equal(t, expected, resp.Status)
-
- // Advance the clock and start the container.
- fClock.SetTime(time.Now())
- expected.StartedAt = fClock.Now().UnixNano()
- expected.State = runtimeapi.ContainerState_CONTAINER_RUNNING
-
- _, err = ds.StartContainer(getTestCTX(), &runtimeapi.StartContainerRequest{ContainerId: id})
- require.NoError(t, err)
-
- resp, err = ds.ContainerStatus(getTestCTX(), &runtimeapi.ContainerStatusRequest{ContainerId: id})
- require.NoError(t, err)
- assert.Equal(t, expected, resp.Status)
-
- // Advance the clock and stop the container.
- fClock.SetTime(time.Now().Add(1 * time.Hour))
- expected.FinishedAt = fClock.Now().UnixNano()
- expected.State = runtimeapi.ContainerState_CONTAINER_EXITED
- expected.Reason = "Completed"
-
- _, err = ds.StopContainer(getTestCTX(), &runtimeapi.StopContainerRequest{ContainerId: id, Timeout: int64(0)})
- assert.NoError(t, err)
- resp, err = ds.ContainerStatus(getTestCTX(), &runtimeapi.ContainerStatusRequest{ContainerId: id})
- require.NoError(t, err)
- assert.Equal(t, expected, resp.Status)
-
- // Remove the container.
- _, err = ds.RemoveContainer(getTestCTX(), &runtimeapi.RemoveContainerRequest{ContainerId: id})
- require.NoError(t, err)
- resp, err = ds.ContainerStatus(getTestCTX(), &runtimeapi.ContainerStatusRequest{ContainerId: id})
- assert.Error(t, err, fmt.Sprintf("status of container: %+v", resp))
-}
-
-// TestContainerLogPath tests the container log creation logic.
-func TestContainerLogPath(t *testing.T) {
- ds, fDocker, _ := newTestDockerService()
- podLogPath := "/pod/1"
- containerLogPath := "0"
- kubeletContainerLogPath := filepath.Join(podLogPath, containerLogPath)
- sConfig := makeSandboxConfig("foo", "bar", "1", 0)
- sConfig.LogDirectory = podLogPath
- config := makeContainerConfig(sConfig, "pause", "iamimage", 0, nil, nil)
- config.LogPath = containerLogPath
-
- req := &runtimeapi.CreateContainerRequest{PodSandboxId: sandboxID, Config: config, SandboxConfig: sConfig}
- createResp, err := ds.CreateContainer(getTestCTX(), req)
- require.NoError(t, err)
- id := createResp.ContainerId
-
- // Check internal container log label
- c, err := fDocker.InspectContainer(id)
- assert.NoError(t, err)
- assert.Equal(t, c.Config.Labels[containerLogPathLabelKey], kubeletContainerLogPath)
-
- // Set docker container log path
- dockerContainerLogPath := "/docker/container/log"
- c.LogPath = dockerContainerLogPath
-
- // Verify container log symlink creation
- fakeOS := ds.os.(*containertest.FakeOS)
- fakeOS.SymlinkFn = func(oldname, newname string) error {
- assert.Equal(t, dockerContainerLogPath, oldname)
- assert.Equal(t, kubeletContainerLogPath, newname)
- return nil
- }
- _, err = ds.StartContainer(getTestCTX(), &runtimeapi.StartContainerRequest{ContainerId: id})
- require.NoError(t, err)
-
- _, err = ds.StopContainer(getTestCTX(), &runtimeapi.StopContainerRequest{ContainerId: id, Timeout: int64(0)})
- require.NoError(t, err)
-
- // Verify container log symlink deletion.
- // The symlink is also tentatively deleted when the container starts.
- _, err = ds.RemoveContainer(getTestCTX(), &runtimeapi.RemoveContainerRequest{ContainerId: id})
- require.NoError(t, err)
- assert.Equal(t, []string{kubeletContainerLogPath, kubeletContainerLogPath}, fakeOS.Removes)
-}
-
-// TestContainerCreationConflict tests the logic that works around the docker
-// container-creation naming conflict bug.
-func TestContainerCreationConflict(t *testing.T) {
- sConfig := makeSandboxConfig("foo", "bar", "1", 0)
- config := makeContainerConfig(sConfig, "pause", "iamimage", 0, map[string]string{}, map[string]string{})
- containerName := makeContainerName(sConfig, config)
- conflictError := fmt.Errorf("Error response from daemon: Conflict. The name \"/%s\" is already in use by container %q. You have to remove (or rename) that container to be able to reuse that name",
- containerName, containerID)
- noContainerError := fmt.Errorf("Error response from daemon: No such container: %s", containerID)
- randomError := fmt.Errorf("random error")
-
- for desc, test := range map[string]struct {
- createError error
- removeError error
- expectError error
- expectCalls []string
- expectFields int
- }{
- "no create error": {
- expectCalls: []string{"create"},
- expectFields: 6,
- },
- "random create error": {
- createError: randomError,
- expectError: randomError,
- expectCalls: []string{"create"},
- },
- "conflict create error with successful remove": {
- createError: conflictError,
- expectError: conflictError,
- expectCalls: []string{"create", "remove"},
- },
- "conflict create error with random remove error": {
- createError: conflictError,
- removeError: randomError,
- expectError: conflictError,
- expectCalls: []string{"create", "remove"},
- },
- "conflict create error with no such container remove error": {
- createError: conflictError,
- removeError: noContainerError,
- expectCalls: []string{"create", "remove", "create"},
- expectFields: 7,
- },
- } {
- t.Logf("TestCase: %s", desc)
- ds, fDocker, _ := newTestDockerService()
-
- if test.createError != nil {
- fDocker.InjectError("create", test.createError)
- }
- if test.removeError != nil {
- fDocker.InjectError("remove", test.removeError)
- }
-
- req := &runtimeapi.CreateContainerRequest{PodSandboxId: sandboxID, Config: config, SandboxConfig: sConfig}
- createResp, err := ds.CreateContainer(getTestCTX(), req)
- require.Equal(t, test.expectError, err)
- assert.NoError(t, fDocker.AssertCalls(test.expectCalls))
- if err == nil {
- c, err := fDocker.InspectContainer(createResp.ContainerId)
- assert.NoError(t, err)
- assert.Len(t, strings.Split(c.Name, nameDelimiter), test.expectFields)
- }
- }
-}
diff --git a/pkg/kubelet/dockershim/docker_container_unsupported.go b/pkg/kubelet/dockershim/docker_container_unsupported.go
deleted file mode 100644
index 597002bbe4d..00000000000
--- a/pkg/kubelet/dockershim/docker_container_unsupported.go
+++ /dev/null
@@ -1,48 +0,0 @@
-//go:build !windows && !dockerless
-// +build !windows,!dockerless
-
-/*
-Copyright 2019 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- dockertypes "github.com/docker/docker/api/types"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-type containerCleanupInfo struct{}
-
-// applyPlatformSpecificDockerConfig applies platform-specific configurations to a dockertypes.ContainerCreateConfig struct.
-// The containerCleanupInfo struct it returns will be passed as is to performPlatformSpecificContainerCleanup
-// after either the container creation has failed or the container has been removed.
-func (ds *dockerService) applyPlatformSpecificDockerConfig(*runtimeapi.CreateContainerRequest, *dockertypes.ContainerCreateConfig) (*containerCleanupInfo, error) {
- return nil, nil
-}
-
-// performPlatformSpecificContainerCleanup is responsible for doing any platform-specific cleanup
-// after either the container creation has failed or the container has been removed.
-func (ds *dockerService) performPlatformSpecificContainerCleanup(cleanupInfo *containerCleanupInfo) (errors []error) {
- return
-}
-
-// platformSpecificContainerInitCleanup is called when dockershim
-// is starting, and is meant to clean up any cruft left by previous runs
-// creating containers.
-// Errors are simply logged, but don't prevent dockershim from starting.
-func (ds *dockerService) platformSpecificContainerInitCleanup() (errors []error) {
- return
-}
diff --git a/pkg/kubelet/dockershim/docker_container_windows.go b/pkg/kubelet/dockershim/docker_container_windows.go
deleted file mode 100644
index e76b4fd3080..00000000000
--- a/pkg/kubelet/dockershim/docker_container_windows.go
+++ /dev/null
@@ -1,218 +0,0 @@
-//go:build windows && !dockerless
-// +build windows,!dockerless
-
-/*
-Copyright 2019 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "crypto/rand"
- "encoding/hex"
- "fmt"
- "regexp"
-
- "golang.org/x/sys/windows/registry"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-type containerCleanupInfo struct {
- gMSARegistryValueName string
-}
-
-// applyPlatformSpecificDockerConfig applies platform-specific configurations to a dockertypes.ContainerCreateConfig struct.
-// The containerCleanupInfo struct it returns will be passed as is to performPlatformSpecificContainerCleanup
-// after either the container creation has failed or the container has been removed.
-func (ds *dockerService) applyPlatformSpecificDockerConfig(request *runtimeapi.CreateContainerRequest, createConfig *dockertypes.ContainerCreateConfig) (*containerCleanupInfo, error) {
- cleanupInfo := &containerCleanupInfo{}
-
- if err := applyGMSAConfig(request.GetConfig(), createConfig, cleanupInfo); err != nil {
- return nil, err
- }
-
- return cleanupInfo, nil
-}
-
-// applyGMSAConfig looks at the container's .Windows.SecurityContext.CredentialSpec field; if present,
-// it copies its contents to a unique registry value, and sets a SecurityOpt on the config pointing to that registry value.
-// We use registry values instead of files since their location cannot change - as opposed to credential spec files,
-// whose location could potentially change down the line, or even be unknown (e.g. if docker is not installed on the
-// C: drive).
-// When docker supports passing a credential spec's contents directly, we should switch to using that
-// as it will avoid cluttering the registry - there is a moby PR out for this:
-// https://github.com/moby/moby/pull/38777
-func applyGMSAConfig(config *runtimeapi.ContainerConfig, createConfig *dockertypes.ContainerCreateConfig, cleanupInfo *containerCleanupInfo) error {
- var credSpec string
- if config.Windows != nil && config.Windows.SecurityContext != nil {
- credSpec = config.Windows.SecurityContext.CredentialSpec
- }
- if credSpec == "" {
- return nil
- }
-
- valueName, err := copyGMSACredSpecToRegistryValue(credSpec)
- if err != nil {
- return err
- }
-
- if createConfig.HostConfig == nil {
- createConfig.HostConfig = &dockercontainer.HostConfig{}
- }
-
- createConfig.HostConfig.SecurityOpt = append(createConfig.HostConfig.SecurityOpt, "credentialspec=registry://"+valueName)
- cleanupInfo.gMSARegistryValueName = valueName
-
- return nil
-}
-
-const (
- // same as https://github.com/moby/moby/blob/93d994e29c9cc8d81f1b0477e28d705fa7e2cd72/daemon/oci_windows.go#L23
- credentialSpecRegistryLocation = `SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs`
- // the prefix for the registry values we write GMSA cred specs to
- gMSARegistryValueNamePrefix = "k8s-cred-spec-"
- // the number of random bytes to generate suffixes for registry value names
- gMSARegistryValueNameSuffixRandomBytes = 40
-)
-
-// registryKey is an interface wrapper around `registry.Key`,
-// listing only the methods we care about here.
-// It's mainly useful to easily allow mocking the registry in tests.
-type registryKey interface {
- SetStringValue(name, value string) error
- DeleteValue(name string) error
- ReadValueNames(n int) ([]string, error)
- Close() error
-}
-
-var registryCreateKeyFunc = func(baseKey registry.Key, path string, access uint32) (registryKey, bool, error) {
- return registry.CreateKey(baseKey, path, access)
-}
-
-// randomReader is only meant to be overridden for testing purposes;
-// same idea as for `registryKey` above
-var randomReader = rand.Reader
-
-// gMSARegistryValueNamesRegex is the regex used to detect gMSA cred spec
-// registry values in `removeAllGMSARegistryValues` below.
-var gMSARegistryValueNamesRegex = regexp.MustCompile(fmt.Sprintf("^%s[0-9a-f]{%d}$", gMSARegistryValueNamePrefix, 2*gMSARegistryValueNameSuffixRandomBytes))
-
-// copyGMSACredSpecToRegistryValue copies the credential spec to a unique registry value, and returns its name.
-func copyGMSACredSpecToRegistryValue(credSpec string) (string, error) {
- valueName, err := gMSARegistryValueName()
- if err != nil {
- return "", err
- }
-
- // write to the registry
- key, _, err := registryCreateKeyFunc(registry.LOCAL_MACHINE, credentialSpecRegistryLocation, registry.SET_VALUE)
- if err != nil {
- return "", fmt.Errorf("unable to open registry key %q: %v", credentialSpecRegistryLocation, err)
- }
- defer key.Close()
- if err = key.SetStringValue(valueName, credSpec); err != nil {
- return "", fmt.Errorf("unable to write into registry value %q/%q: %v", credentialSpecRegistryLocation, valueName, err)
- }
-
- return valueName, nil
-}
-
-// gMSARegistryValueName computes the name of the registry value in which to store the GMSA cred spec contents.
-// The value's name is a purely random suffix appended to `gMSARegistryValueNamePrefix`.
-func gMSARegistryValueName() (string, error) {
- randomSuffix, err := randomString(gMSARegistryValueNameSuffixRandomBytes)
-
- if err != nil {
- return "", fmt.Errorf("error when generating gMSA registry value name: %v", err)
- }
-
- return gMSARegistryValueNamePrefix + randomSuffix, nil
-}
-
-// randomString returns a random hex string.
-func randomString(length int) (string, error) {
- randBytes := make([]byte, length)
-
- if n, err := randomReader.Read(randBytes); err != nil || n != length {
- if err == nil {
- err = fmt.Errorf("only got %v random bytes, expected %v", n, length)
- }
- return "", fmt.Errorf("unable to generate random string: %v", err)
- }
-
- return hex.EncodeToString(randBytes), nil
-}
-
-// performPlatformSpecificContainerCleanup is responsible for doing any platform-specific cleanup
-// after either the container creation has failed or the container has been removed.
-func (ds *dockerService) performPlatformSpecificContainerCleanup(cleanupInfo *containerCleanupInfo) (errors []error) {
- if err := removeGMSARegistryValue(cleanupInfo); err != nil {
- errors = append(errors, err)
- }
-
- return
-}
-
-func removeGMSARegistryValue(cleanupInfo *containerCleanupInfo) error {
- if cleanupInfo == nil || cleanupInfo.gMSARegistryValueName == "" {
- return nil
- }
-
- key, _, err := registryCreateKeyFunc(registry.LOCAL_MACHINE, credentialSpecRegistryLocation, registry.SET_VALUE)
- if err != nil {
- return fmt.Errorf("unable to open registry key %q: %v", credentialSpecRegistryLocation, err)
- }
- defer key.Close()
- if err = key.DeleteValue(cleanupInfo.gMSARegistryValueName); err != nil {
- return fmt.Errorf("unable to remove registry value %q/%q: %v", credentialSpecRegistryLocation, cleanupInfo.gMSARegistryValueName, err)
- }
-
- return nil
-}
-
-// platformSpecificContainerInitCleanup is called when dockershim
-// is starting, and is meant to clean up any cruft left by previous runs
-// creating containers.
-// Errors are simply logged, but don't prevent dockershim from starting.
-func (ds *dockerService) platformSpecificContainerInitCleanup() (errors []error) {
- return removeAllGMSARegistryValues()
-}
-
-func removeAllGMSARegistryValues() (errors []error) {
- key, _, err := registryCreateKeyFunc(registry.LOCAL_MACHINE, credentialSpecRegistryLocation, registry.SET_VALUE)
- if err != nil {
- return []error{fmt.Errorf("unable to open registry key %q: %v", credentialSpecRegistryLocation, err)}
- }
- defer key.Close()
-
- valueNames, err := key.ReadValueNames(0)
- if err != nil {
- return []error{fmt.Errorf("unable to list values under registry key %q: %v", credentialSpecRegistryLocation, err)}
- }
-
- for _, valueName := range valueNames {
- if gMSARegistryValueNamesRegex.MatchString(valueName) {
- if err = key.DeleteValue(valueName); err != nil {
- errors = append(errors, fmt.Errorf("unable to remove registry value %q/%q: %v", credentialSpecRegistryLocation, valueName, err))
- }
- }
- }
-
- return
-}
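
Before moving on to the tests, it may help to spell out the one contract the removed code above relies on: value names are the fixed prefix plus the hex encoding of 40 random bytes (so 80 hex characters), and that exact shape is what the startup cleanup regex matches. A minimal standalone Go sketch of the scheme follows; it is illustrative only and not part of the deleted file.

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"regexp"
)

const (
	prefix      = "k8s-cred-spec-" // same prefix as the deleted code
	suffixBytes = 40               // 40 random bytes -> 80 hex characters
)

// newValueName builds a registry value name the way applyGMSAConfig did:
// fixed prefix followed by a random hex suffix.
func newValueName() (string, error) {
	b := make([]byte, suffixBytes)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return prefix + hex.EncodeToString(b), nil
}

func main() {
	name, err := newValueName()
	if err != nil {
		panic(err)
	}
	// Startup cleanup only deletes values matching this pattern, so other
	// credential specs stored under the same registry key are left alone.
	pattern := regexp.MustCompile(fmt.Sprintf("^%s[0-9a-f]{%d}$", prefix, 2*suffixBytes))
	fmt.Println(pattern.MatchString(name)) // true
	// The container is then pointed at the value via
	// HostConfig.SecurityOpt = ["credentialspec=registry://" + name].
	fmt.Println("credentialspec=registry://" + name)
}
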
diff --git a/pkg/kubelet/dockershim/docker_container_windows_test.go b/pkg/kubelet/dockershim/docker_container_windows_test.go
deleted file mode 100644
index c7af2818774..00000000000
--- a/pkg/kubelet/dockershim/docker_container_windows_test.go
+++ /dev/null
@@ -1,311 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2019 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "bytes"
- "fmt"
- "regexp"
- "testing"
-
- dockertypes "github.com/docker/docker/api/types"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
- "golang.org/x/sys/windows/registry"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-type dummyRegistryKey struct {
- setStringValueError error
- setStringValueArgs [][]string
-
- deleteValueFunc func(name string) error
- deleteValueArgs []string
-
- readValueNamesError error
- readValueNamesReturn []string
- readValueNamesArgs []int
-
- closed bool
-}
-
-func (k *dummyRegistryKey) SetStringValue(name, value string) error {
- k.setStringValueArgs = append(k.setStringValueArgs, []string{name, value})
- return k.setStringValueError
-}
-
-func (k *dummyRegistryKey) DeleteValue(name string) error {
- k.deleteValueArgs = append(k.deleteValueArgs, name)
- if k.deleteValueFunc == nil {
- return nil
- }
- return k.deleteValueFunc(name)
-}
-
-func (k *dummyRegistryKey) ReadValueNames(n int) ([]string, error) {
- k.readValueNamesArgs = append(k.readValueNamesArgs, n)
- return k.readValueNamesReturn, k.readValueNamesError
-}
-
-func (k *dummyRegistryKey) Close() error {
- k.closed = true
- return nil
-}
-
-func TestApplyGMSAConfig(t *testing.T) {
- dummyCredSpec := "test cred spec contents"
- randomBytes := []byte{0x19, 0x0, 0x25, 0x45, 0x18, 0x52, 0x9e, 0x2a, 0x3d, 0xed, 0xb8, 0x5c, 0xde, 0xc0, 0x3c, 0xe2, 0x70, 0x55, 0x96, 0x47, 0x45, 0x9a, 0xb5, 0x31, 0xf0, 0x7a, 0xf5, 0xeb, 0x1c, 0x54, 0x95, 0xfd, 0xa7, 0x9, 0x43, 0x5c, 0xe8, 0x2a, 0xb8, 0x9c}
- expectedHex := "1900254518529e2a3dedb85cdec03ce270559647459ab531f07af5eb1c5495fda709435ce82ab89c"
- expectedValueName := "k8s-cred-spec-" + expectedHex
-
- containerConfigWithGMSAAnnotation := &runtimeapi.ContainerConfig{
- Windows: &runtimeapi.WindowsContainerConfig{
- SecurityContext: &runtimeapi.WindowsContainerSecurityContext{
- CredentialSpec: dummyCredSpec,
- },
- },
- }
-
- t.Run("happy path", func(t *testing.T) {
- key := &dummyRegistryKey{}
- defer setRegistryCreateKeyFunc(t, key)()
- defer setRandomReader(randomBytes)()
-
- createConfig := &dockertypes.ContainerCreateConfig{}
- cleanupInfo := &containerCleanupInfo{}
- err := applyGMSAConfig(containerConfigWithGMSAAnnotation, createConfig, cleanupInfo)
-
- assert.NoError(t, err)
-
- // the registry key should have been properly created
- if assert.Equal(t, 1, len(key.setStringValueArgs)) {
- assert.Equal(t, []string{expectedValueName, dummyCredSpec}, key.setStringValueArgs[0])
- }
- assert.True(t, key.closed)
-
- // the create config's security opt should have been populated
- if assert.NotNil(t, createConfig.HostConfig) {
- assert.Equal(t, createConfig.HostConfig.SecurityOpt, []string{"credentialspec=registry://" + expectedValueName})
- }
-
- // and the name of that value should have been saved to the cleanup info
- assert.Equal(t, expectedValueName, cleanupInfo.gMSARegistryValueName)
- })
- t.Run("happy path with a truly random string", func(t *testing.T) {
- defer setRegistryCreateKeyFunc(t, &dummyRegistryKey{})()
-
- createConfig := &dockertypes.ContainerCreateConfig{}
- cleanupInfo := &containerCleanupInfo{}
- err := applyGMSAConfig(containerConfigWithGMSAAnnotation, createConfig, cleanupInfo)
-
- assert.NoError(t, err)
-
- if assert.NotNil(t, createConfig.HostConfig) && assert.Equal(t, 1, len(createConfig.HostConfig.SecurityOpt)) {
- secOpt := createConfig.HostConfig.SecurityOpt[0]
-
- expectedPrefix := "credentialspec=registry://k8s-cred-spec-"
- assert.Equal(t, expectedPrefix, secOpt[:len(expectedPrefix)])
-
- hex := secOpt[len(expectedPrefix):]
- hexRegex := regexp.MustCompile("^[0-9a-f]{80}$")
- assert.True(t, hexRegex.MatchString(hex))
- assert.NotEqual(t, expectedHex, hex)
-
- assert.Equal(t, "k8s-cred-spec-"+hex, cleanupInfo.gMSARegistryValueName)
- }
- })
- t.Run("when there's an error generating the random value name", func(t *testing.T) {
- defer setRandomReader([]byte{})()
-
- err := applyGMSAConfig(containerConfigWithGMSAAnnotation, &dockertypes.ContainerCreateConfig{}, &containerCleanupInfo{})
-
- require.Error(t, err)
- assert.Contains(t, err.Error(), "error when generating gMSA registry value name: unable to generate random string")
- })
- t.Run("if there's an error opening the registry key", func(t *testing.T) {
- defer setRegistryCreateKeyFunc(t, &dummyRegistryKey{}, fmt.Errorf("dummy error"))()
-
- err := applyGMSAConfig(containerConfigWithGMSAAnnotation, &dockertypes.ContainerCreateConfig{}, &containerCleanupInfo{})
-
- require.Error(t, err)
- assert.Contains(t, err.Error(), "unable to open registry key")
- })
- t.Run("if there's an error writing to the registry key", func(t *testing.T) {
- key := &dummyRegistryKey{}
- key.setStringValueError = fmt.Errorf("dummy error")
- defer setRegistryCreateKeyFunc(t, key)()
-
- err := applyGMSAConfig(containerConfigWithGMSAAnnotation, &dockertypes.ContainerCreateConfig{}, &containerCleanupInfo{})
-
- if assert.Error(t, err) {
- assert.Contains(t, err.Error(), "unable to write into registry value")
- }
- assert.True(t, key.closed)
- })
- t.Run("if there is no GMSA annotation", func(t *testing.T) {
- createConfig := &dockertypes.ContainerCreateConfig{}
-
- err := applyGMSAConfig(&runtimeapi.ContainerConfig{}, createConfig, &containerCleanupInfo{})
-
- assert.NoError(t, err)
- assert.Nil(t, createConfig.HostConfig)
- })
-}
-
-func TestRemoveGMSARegistryValue(t *testing.T) {
- valueName := "k8s-cred-spec-1900254518529e2a3dedb85cdec03ce270559647459ab531f07af5eb1c5495fda709435ce82ab89c"
- cleanupInfoWithValue := &containerCleanupInfo{gMSARegistryValueName: valueName}
-
- t.Run("it does remove the registry value", func(t *testing.T) {
- key := &dummyRegistryKey{}
- defer setRegistryCreateKeyFunc(t, key)()
-
- err := removeGMSARegistryValue(cleanupInfoWithValue)
-
- assert.NoError(t, err)
-
- // the registry key should have been properly deleted
- if assert.Equal(t, 1, len(key.deleteValueArgs)) {
- assert.Equal(t, []string{valueName}, key.deleteValueArgs)
- }
- assert.True(t, key.closed)
- })
- t.Run("if there's an error opening the registry key", func(t *testing.T) {
- defer setRegistryCreateKeyFunc(t, &dummyRegistryKey{}, fmt.Errorf("dummy error"))()
-
- err := removeGMSARegistryValue(cleanupInfoWithValue)
-
- require.Error(t, err)
- assert.Contains(t, err.Error(), "unable to open registry key")
- })
- t.Run("if there's an error deleting from the registry key", func(t *testing.T) {
- key := &dummyRegistryKey{}
- key.deleteValueFunc = func(name string) error { return fmt.Errorf("dummy error") }
- defer setRegistryCreateKeyFunc(t, key)()
-
- err := removeGMSARegistryValue(cleanupInfoWithValue)
-
- if assert.Error(t, err) {
- assert.Contains(t, err.Error(), "unable to remove registry value")
- }
- assert.True(t, key.closed)
- })
- t.Run("if there's no registry value to be removed, it does nothing", func(t *testing.T) {
- key := &dummyRegistryKey{}
- defer setRegistryCreateKeyFunc(t, key)()
-
- err := removeGMSARegistryValue(&containerCleanupInfo{})
-
- assert.NoError(t, err)
- assert.Equal(t, 0, len(key.deleteValueArgs))
- })
-}
-
-func TestRemoveAllGMSARegistryValues(t *testing.T) {
- cred1 := "k8s-cred-spec-1900254518529e2a3dedb85cdec03ce270559647459ab531f07af5eb1c5495fda709435ce82ab89c"
- cred2 := "k8s-cred-spec-8891436007c795a904fdf77b5348e94305e4c48c5f01c47e7f65e980dc7edda85f112715891d65fd"
- cred3 := "k8s-cred-spec-2f11f1c9e4f8182fe13caa708bd42b2098c8eefc489d6cc98806c058ccbe4cb3703b9ade61ce59a1"
- cred4 := "k8s-cred-spec-dc532f189598a8220a1e538f79081eee979f94fbdbf8d37e36959485dee57157c03742d691e1fae2"
-
- t.Run("it removes the keys matching the k8s creds pattern", func(t *testing.T) {
- key := &dummyRegistryKey{readValueNamesReturn: []string{cred1, "other_creds", cred2}}
- defer setRegistryCreateKeyFunc(t, key)()
-
- errors := removeAllGMSARegistryValues()
-
- assert.Equal(t, 0, len(errors))
- assert.Equal(t, []string{cred1, cred2}, key.deleteValueArgs)
- assert.Equal(t, []int{0}, key.readValueNamesArgs)
- assert.True(t, key.closed)
- })
- t.Run("it ignores errors and does a best effort at removing all k8s creds", func(t *testing.T) {
- key := &dummyRegistryKey{
- readValueNamesReturn: []string{cred1, cred2, cred3, cred4},
- deleteValueFunc: func(name string) error {
- if name == cred1 || name == cred3 {
- return fmt.Errorf("dummy error")
- }
- return nil
- },
- }
- defer setRegistryCreateKeyFunc(t, key)()
-
- errors := removeAllGMSARegistryValues()
-
- assert.Equal(t, 2, len(errors))
- for _, err := range errors {
- assert.Contains(t, err.Error(), "unable to remove registry value")
- }
- assert.Equal(t, []string{cred1, cred2, cred3, cred4}, key.deleteValueArgs)
- assert.Equal(t, []int{0}, key.readValueNamesArgs)
- assert.True(t, key.closed)
- })
- t.Run("if there's an error opening the registry key", func(t *testing.T) {
- defer setRegistryCreateKeyFunc(t, &dummyRegistryKey{}, fmt.Errorf("dummy error"))()
-
- errors := removeAllGMSARegistryValues()
-
- require.Equal(t, 1, len(errors))
- assert.Contains(t, errors[0].Error(), "unable to open registry key")
- })
- t.Run("if it's unable to list the registry values", func(t *testing.T) {
- key := &dummyRegistryKey{readValueNamesError: fmt.Errorf("dummy error")}
- defer setRegistryCreateKeyFunc(t, key)()
-
- errors := removeAllGMSARegistryValues()
-
- if assert.Equal(t, 1, len(errors)) {
- assert.Contains(t, errors[0].Error(), "unable to list values under registry key")
- }
- assert.True(t, key.closed)
- })
-}
-
-// setRegistryCreateKeyFunc replaces the registryCreateKeyFunc package variable, and returns a function
-// to be called to revert the change when done with testing.
-func setRegistryCreateKeyFunc(t *testing.T, key *dummyRegistryKey, err ...error) func() {
- previousRegistryCreateKeyFunc := registryCreateKeyFunc
-
- registryCreateKeyFunc = func(baseKey registry.Key, path string, access uint32) (registryKey, bool, error) {
- // this should always be called with exactly the same arguments
- assert.Equal(t, registry.LOCAL_MACHINE, baseKey)
- assert.Equal(t, credentialSpecRegistryLocation, path)
- assert.Equal(t, uint32(registry.SET_VALUE), access)
-
- if len(err) > 0 {
- return nil, false, err[0]
- }
- return key, false, nil
- }
-
- return func() {
- registryCreateKeyFunc = previousRegistryCreateKeyFunc
- }
-}
-
-// setRandomReader replaces the randomReader package variable with a dummy reader that returns the provided
-// byte slice, and returns a function to be called to revert the change when done with testing.
-func setRandomReader(b []byte) func() {
- previousRandomReader := randomReader
- randomReader = bytes.NewReader(b)
- return func() {
- randomReader = previousRandomReader
- }
-}
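
The two helpers at the bottom of this test file illustrate the seam pattern the whole file depends on: a package-level variable is swapped for a test double, and the helper returns a closure that restores the original so each subtest can defer the restore. A tiny standalone sketch of the pattern is below; the names are made up for illustration and are not from the deleted code.

package seam

import "testing"

// clock is the package-level seam; production code calls it, tests override it.
var clock = func() int64 { return 0 }

// setClock swaps the seam and returns a closure that puts the original back,
// mirroring setRegistryCreateKeyFunc / setRandomReader above.
func setClock(v int64) func() {
	prev := clock
	clock = func() int64 { return v }
	return func() { clock = prev }
}

func TestClockOverride(t *testing.T) {
	defer setClock(7)() // restore runs when the test returns
	if got := clock(); got != 7 {
		t.Fatalf("expected override to return 7, got %d", got)
	}
}
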
diff --git a/pkg/kubelet/dockershim/docker_image.go b/pkg/kubelet/dockershim/docker_image.go
deleted file mode 100644
index 1982adedbd0..00000000000
--- a/pkg/kubelet/dockershim/docker_image.go
+++ /dev/null
@@ -1,192 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "fmt"
- "net/http"
-
- dockertypes "github.com/docker/docker/api/types"
- dockerfilters "github.com/docker/docker/api/types/filters"
- "github.com/docker/docker/pkg/jsonmessage"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/klog/v2"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-// This file implements methods in ImageManagerService.
-
-// ListImages lists existing images.
-func (ds *dockerService) ListImages(_ context.Context, r *runtimeapi.ListImagesRequest) (*runtimeapi.ListImagesResponse, error) {
- filter := r.GetFilter()
- opts := dockertypes.ImageListOptions{}
- if filter != nil {
- if filter.GetImage().GetImage() != "" {
- opts.Filters = dockerfilters.NewArgs()
- opts.Filters.Add("reference", filter.GetImage().GetImage())
- }
- }
-
- images, err := ds.client.ListImages(opts)
- if err != nil {
- return nil, err
- }
-
- result := make([]*runtimeapi.Image, 0, len(images))
- for _, i := range images {
- apiImage, err := imageToRuntimeAPIImage(&i)
- if err != nil {
- klog.V(5).InfoS("Failed to convert docker API image to runtime API image", "image", i, "err", err)
- continue
- }
- result = append(result, apiImage)
- }
- return &runtimeapi.ListImagesResponse{Images: result}, nil
-}
-
-// ImageStatus returns the status of the image; it returns an empty response if the image is not present.
-func (ds *dockerService) ImageStatus(_ context.Context, r *runtimeapi.ImageStatusRequest) (*runtimeapi.ImageStatusResponse, error) {
- image := r.GetImage()
-
- imageInspect, err := ds.client.InspectImageByRef(image.Image)
- if err != nil {
- if !libdocker.IsImageNotFoundError(err) {
- return nil, err
- }
- imageInspect, err = ds.client.InspectImageByID(image.Image)
- if err != nil {
- if libdocker.IsImageNotFoundError(err) {
- return &runtimeapi.ImageStatusResponse{}, nil
- }
- return nil, err
- }
- }
-
- imageStatus, err := imageInspectToRuntimeAPIImage(imageInspect)
- if err != nil {
- return nil, err
- }
-
- res := runtimeapi.ImageStatusResponse{Image: imageStatus}
- if r.GetVerbose() {
- res.Info = imageInspect.Config.Labels
- }
- return &res, nil
-}
-
-// PullImage pulls an image with authentication config.
-func (ds *dockerService) PullImage(_ context.Context, r *runtimeapi.PullImageRequest) (*runtimeapi.PullImageResponse, error) {
- image := r.GetImage()
- auth := r.GetAuth()
- authConfig := dockertypes.AuthConfig{}
-
- if auth != nil {
- authConfig.Username = auth.Username
- authConfig.Password = auth.Password
- authConfig.ServerAddress = auth.ServerAddress
- authConfig.IdentityToken = auth.IdentityToken
- authConfig.RegistryToken = auth.RegistryToken
- }
- err := ds.client.PullImage(image.Image,
- authConfig,
- dockertypes.ImagePullOptions{},
- )
- if err != nil {
- return nil, filterHTTPError(err, image.Image)
- }
-
- imageRef, err := getImageRef(ds.client, image.Image)
- if err != nil {
- return nil, err
- }
-
- return &runtimeapi.PullImageResponse{ImageRef: imageRef}, nil
-}
-
-// RemoveImage removes the image.
-func (ds *dockerService) RemoveImage(_ context.Context, r *runtimeapi.RemoveImageRequest) (*runtimeapi.RemoveImageResponse, error) {
- image := r.GetImage()
- // If the image has multiple tags, we need to remove all the tags
- // TODO: We assume image.Image is image ID here, which is true in the current implementation
- // of kubelet, but we should still clarify this in CRI.
- imageInspect, err := ds.client.InspectImageByID(image.Image)
-
-	// dockerclient.InspectImageByID doesn't work with digests or repo tags;
-	// it is safe to continue with the removal since there is another existence check below.
- if err != nil && !libdocker.IsImageNotFoundError(err) {
- return nil, err
- }
-
- if imageInspect == nil {
- // image is nil, assuming it doesn't exist.
- return &runtimeapi.RemoveImageResponse{}, nil
- }
-
- // An image can have different numbers of RepoTags and RepoDigests.
- // Iterating over both of them plus the image ID ensures the image really got removed.
- // It also prevents images from being deleted, which actually are deletable using this approach.
- var images []string
- images = append(images, imageInspect.RepoTags...)
- images = append(images, imageInspect.RepoDigests...)
- images = append(images, image.Image)
-
- for _, image := range images {
- if _, err := ds.client.RemoveImage(image, dockertypes.ImageRemoveOptions{PruneChildren: true}); err != nil && !libdocker.IsImageNotFoundError(err) {
- return nil, err
- }
- }
-
- return &runtimeapi.RemoveImageResponse{}, nil
-}
-
-// getImageRef returns the image digest if exists, or else returns the image ID.
-func getImageRef(client libdocker.Interface, image string) (string, error) {
- img, err := client.InspectImageByRef(image)
- if err != nil {
- return "", err
- }
- if img == nil {
- return "", fmt.Errorf("unable to inspect image %s", image)
- }
-
-	// Returns the digest if it exists.
- if len(img.RepoDigests) > 0 {
- return img.RepoDigests[0], nil
- }
-
- return img.ID, nil
-}
-
-func filterHTTPError(err error, image string) error {
- // docker/docker/pull/11314 prints detailed error info for docker pull.
-	// When it hits 502, it returns verbose HTML output including an inline SVG,
-	// which makes the output of kubectl get pods much harder to parse.
-	// This converts such verbose output into a concise one.
- jerr, ok := err.(*jsonmessage.JSONError)
- if ok && (jerr.Code == http.StatusBadGateway ||
- jerr.Code == http.StatusServiceUnavailable ||
- jerr.Code == http.StatusGatewayTimeout) {
- return fmt.Errorf("RegistryUnavailable: %v", err)
- }
- return err
-
-}
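
The 502/503/504 special-casing in filterHTTPError above is the one piece of behaviour worth calling out: only gateway-class registry failures are collapsed into a concise RegistryUnavailable error, and everything else is passed through untouched. A small standalone sketch of that decision follows, using a stand-in error type instead of docker's jsonmessage.JSONError; it is illustrative only.

package main

import (
	"errors"
	"fmt"
	"net/http"
)

// registryError stands in for jsonmessage.JSONError in this sketch.
type registryError struct {
	Code    int
	Message string
}

func (e *registryError) Error() string { return e.Message }

// condense mirrors the filterHTTPError idea: collapse gateway-class
// failures into a short error, pass everything else through unchanged.
func condense(err error) error {
	var rerr *registryError
	if errors.As(err, &rerr) &&
		(rerr.Code == http.StatusBadGateway ||
			rerr.Code == http.StatusServiceUnavailable ||
			rerr.Code == http.StatusGatewayTimeout) {
		return fmt.Errorf("RegistryUnavailable: %v", err)
	}
	return err
}

func main() {
	verbose := &registryError{Code: 502, Message: "<html>...very long error page...</html>"}
	fmt.Println(condense(verbose))                       // RegistryUnavailable: <html>...
	fmt.Println(condense(errors.New("manifest unknown"))) // passed through unchanged
}
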
diff --git a/pkg/kubelet/dockershim/docker_image_linux.go b/pkg/kubelet/dockershim/docker_image_linux.go
deleted file mode 100644
index ecd695d65db..00000000000
--- a/pkg/kubelet/dockershim/docker_image_linux.go
+++ /dev/null
@@ -1,70 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "os"
- "path/filepath"
- "time"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-// ImageFsInfo returns information of the filesystem that is used to store images.
-func (ds *dockerService) ImageFsInfo(_ context.Context, _ *runtimeapi.ImageFsInfoRequest) (*runtimeapi.ImageFsInfoResponse, error) {
- bytes, inodes, err := dirSize(filepath.Join(ds.dockerRootDir, "image"))
- if err != nil {
- return nil, err
- }
-
- return &runtimeapi.ImageFsInfoResponse{
- ImageFilesystems: []*runtimeapi.FilesystemUsage{
- {
- Timestamp: time.Now().Unix(),
- FsId: &runtimeapi.FilesystemIdentifier{
- Mountpoint: ds.dockerRootDir,
- },
- UsedBytes: &runtimeapi.UInt64Value{
- Value: uint64(bytes),
- },
- InodesUsed: &runtimeapi.UInt64Value{
- Value: uint64(inodes),
- },
- },
- },
- }, nil
-}
-
-func dirSize(path string) (int64, int64, error) {
- bytes := int64(0)
- inodes := int64(0)
- err := filepath.Walk(path, func(dir string, info os.FileInfo, err error) error {
- if err != nil {
- return err
- }
- inodes++
- if !info.IsDir() {
- bytes += info.Size()
- }
- return nil
- })
- return bytes, inodes, err
-}
diff --git a/pkg/kubelet/dockershim/docker_image_test.go b/pkg/kubelet/dockershim/docker_image_test.go
deleted file mode 100644
index 5a0e15cc225..00000000000
--- a/pkg/kubelet/dockershim/docker_image_test.go
+++ /dev/null
@@ -1,113 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
- "testing"
-
- dockertypes "github.com/docker/docker/api/types"
- "github.com/docker/docker/pkg/jsonmessage"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-func TestRemoveImage(t *testing.T) {
- tests := map[string]struct {
- image dockertypes.ImageInspect
- calledDetails []libdocker.CalledDetail
- }{
- "single tag": {
- dockertypes.ImageInspect{ID: "1111", RepoTags: []string{"foo"}},
- []libdocker.CalledDetail{
- libdocker.NewCalledDetail("inspect_image", nil),
- libdocker.NewCalledDetail("remove_image", []interface{}{"foo", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- libdocker.NewCalledDetail("remove_image", []interface{}{"1111", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- },
- },
- "multiple tags": {
- dockertypes.ImageInspect{ID: "2222", RepoTags: []string{"foo", "bar"}},
- []libdocker.CalledDetail{
- libdocker.NewCalledDetail("inspect_image", nil),
- libdocker.NewCalledDetail("remove_image", []interface{}{"foo", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- libdocker.NewCalledDetail("remove_image", []interface{}{"bar", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- libdocker.NewCalledDetail("remove_image", []interface{}{"2222", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- },
- },
- "single tag multiple repo digests": {
- dockertypes.ImageInspect{ID: "3333", RepoTags: []string{"foo"}, RepoDigests: []string{"foo@3333", "example.com/foo@3333"}},
- []libdocker.CalledDetail{
- libdocker.NewCalledDetail("inspect_image", nil),
- libdocker.NewCalledDetail("remove_image", []interface{}{"foo", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- libdocker.NewCalledDetail("remove_image", []interface{}{"foo@3333", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- libdocker.NewCalledDetail("remove_image", []interface{}{"example.com/foo@3333", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- libdocker.NewCalledDetail("remove_image", []interface{}{"3333", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- },
- },
- "no tags multiple repo digests": {
- dockertypes.ImageInspect{ID: "4444", RepoTags: []string{}, RepoDigests: []string{"foo@4444", "example.com/foo@4444"}},
- []libdocker.CalledDetail{
- libdocker.NewCalledDetail("inspect_image", nil),
- libdocker.NewCalledDetail("remove_image", []interface{}{"foo@4444", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- libdocker.NewCalledDetail("remove_image", []interface{}{"example.com/foo@4444", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- libdocker.NewCalledDetail("remove_image", []interface{}{"4444", dockertypes.ImageRemoveOptions{PruneChildren: true}}),
- },
- },
- }
-
- for name, test := range tests {
- t.Run(name, func(t *testing.T) {
- ds, fakeDocker, _ := newTestDockerService()
- fakeDocker.InjectImageInspects([]dockertypes.ImageInspect{test.image})
- ds.RemoveImage(getTestCTX(), &runtimeapi.RemoveImageRequest{Image: &runtimeapi.ImageSpec{Image: test.image.ID}})
- err := fakeDocker.AssertCallDetails(test.calledDetails...)
- assert.NoError(t, err)
- })
- }
-}
-
-func TestPullWithJSONError(t *testing.T) {
- ds, fakeDocker, _ := newTestDockerService()
- tests := map[string]struct {
- image *runtimeapi.ImageSpec
- err error
- expectedError string
- }{
- "Json error": {
- &runtimeapi.ImageSpec{Image: "ubuntu"},
- &jsonmessage.JSONError{Code: 50, Message: "Json error"},
- "Json error",
- },
- "Bad gateway": {
- &runtimeapi.ImageSpec{Image: "ubuntu"},
-			&jsonmessage.JSONError{Code: 502, Message: "\n\nOops, there was an error!\nWe have been contacted of this error, feel free to check out status.docker.com\nto see if there is a bigger issue.\n"},
- "RegistryUnavailable",
- },
- }
- for key, test := range tests {
- fakeDocker.InjectError("pull", test.err)
- _, err := ds.PullImage(getTestCTX(), &runtimeapi.PullImageRequest{Image: test.image, Auth: &runtimeapi.AuthConfig{}})
- require.Error(t, err, fmt.Sprintf("TestCase [%s]", key))
- assert.Contains(t, err.Error(), test.expectedError)
- }
-}
diff --git a/pkg/kubelet/dockershim/docker_image_unsupported.go b/pkg/kubelet/dockershim/docker_image_unsupported.go
deleted file mode 100644
index 02420213e54..00000000000
--- a/pkg/kubelet/dockershim/docker_image_unsupported.go
+++ /dev/null
@@ -1,32 +0,0 @@
-//go:build !linux && !windows && !dockerless
-// +build !linux,!windows,!dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "fmt"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-// ImageFsInfo returns information of the filesystem that is used to store images.
-func (ds *dockerService) ImageFsInfo(_ context.Context, r *runtimeapi.ImageFsInfoRequest) (*runtimeapi.ImageFsInfoResponse, error) {
- return nil, fmt.Errorf("not implemented")
-}
diff --git a/pkg/kubelet/dockershim/docker_image_windows.go b/pkg/kubelet/dockershim/docker_image_windows.go
deleted file mode 100644
index ad617116677..00000000000
--- a/pkg/kubelet/dockershim/docker_image_windows.go
+++ /dev/null
@@ -1,52 +0,0 @@
-//go:build windows && !dockerless
-// +build windows,!dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "time"
-
- "k8s.io/klog/v2"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/kubelet/winstats"
-)
-
-// ImageFsInfo returns information of the filesystem that is used to store images.
-func (ds *dockerService) ImageFsInfo(_ context.Context, _ *runtimeapi.ImageFsInfoRequest) (*runtimeapi.ImageFsInfoResponse, error) {
- statsClient := &winstats.StatsClient{}
- fsinfo, err := statsClient.GetDirFsInfo(ds.dockerRootDir)
- if err != nil {
- klog.ErrorS(err, "Failed to get fsInfo for dockerRootDir", "path", ds.dockerRootDir)
- return nil, err
- }
-
- filesystems := []*runtimeapi.FilesystemUsage{
- {
- Timestamp: time.Now().UnixNano(),
- UsedBytes: &runtimeapi.UInt64Value{Value: fsinfo.Usage},
- FsId: &runtimeapi.FilesystemIdentifier{
- Mountpoint: ds.dockerRootDir,
- },
- },
- }
-
- return &runtimeapi.ImageFsInfoResponse{ImageFilesystems: filesystems}, nil
-}
diff --git a/pkg/kubelet/dockershim/docker_legacy_service.go b/pkg/kubelet/dockershim/docker_legacy_service.go
deleted file mode 100644
index b2b6529ad9f..00000000000
--- a/pkg/kubelet/dockershim/docker_legacy_service.go
+++ /dev/null
@@ -1,132 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "errors"
- "fmt"
- "io"
- "strconv"
- "time"
-
- "github.com/armon/circbuf"
- dockertypes "github.com/docker/docker/api/types"
-
- "k8s.io/api/core/v1"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- kubetypes "k8s.io/apimachinery/pkg/types"
- "k8s.io/klog/v2"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
-
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-// We define `DockerLegacyService` in `pkg/kubelet/legacy`, instead of in this
-// file. We make this decision because `pkg/kubelet` depends on
-// `DockerLegacyService`, and we want to be able to build the `kubelet` without
-// relying on `github.com/docker/docker` or `pkg/kubelet/dockershim`.
-//
-// See https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/1547-building-kubelet-without-docker/README.md
-// for details.
-
-// GetContainerLogs gets container logs directly from the docker daemon.
-func (d *dockerService) GetContainerLogs(_ context.Context, pod *v1.Pod, containerID kubecontainer.ContainerID, logOptions *v1.PodLogOptions, stdout, stderr io.Writer) error {
- container, err := d.client.InspectContainer(containerID.ID)
- if err != nil {
- return err
- }
-
- var since int64
- if logOptions.SinceSeconds != nil {
- t := metav1.Now().Add(-time.Duration(*logOptions.SinceSeconds) * time.Second)
- since = t.Unix()
- }
- if logOptions.SinceTime != nil {
- since = logOptions.SinceTime.Unix()
- }
- opts := dockertypes.ContainerLogsOptions{
- ShowStdout: true,
- ShowStderr: true,
- Since: strconv.FormatInt(since, 10),
- Timestamps: logOptions.Timestamps,
- Follow: logOptions.Follow,
- }
- if logOptions.TailLines != nil {
- opts.Tail = strconv.FormatInt(*logOptions.TailLines, 10)
- }
-
- if logOptions.LimitBytes != nil {
- // stdout and stderr share the total write limit
- max := *logOptions.LimitBytes
- stderr = sharedLimitWriter(stderr, &max)
- stdout = sharedLimitWriter(stdout, &max)
- }
- sopts := libdocker.StreamOptions{
- OutputStream: stdout,
- ErrorStream: stderr,
- RawTerminal: container.Config.Tty,
- }
- err = d.client.Logs(containerID.ID, opts, sopts)
- if errors.Is(err, errMaximumWrite) {
- klog.V(2).InfoS("Finished logs, hit byte limit", "byteLimit", *logOptions.LimitBytes)
- err = nil
- }
- return err
-}
-
-// GetContainerLogTail attempts to read up to MaxContainerTerminationMessageLogLength bytes
-// from the end of the log when docker is configured with a log driver other than json-file.
-// It reads up to MaxContainerTerminationMessageLogLines lines.
-func (d *dockerService) GetContainerLogTail(uid kubetypes.UID, name, namespace string, containerID kubecontainer.ContainerID) (string, error) {
- value := int64(kubecontainer.MaxContainerTerminationMessageLogLines)
- buf, _ := circbuf.NewBuffer(kubecontainer.MaxContainerTerminationMessageLogLength)
- // Although this is not a full spec pod, dockerLegacyService.GetContainerLogs() currently completely ignores its pod param
- pod := &v1.Pod{
- ObjectMeta: metav1.ObjectMeta{
- UID: uid,
- Name: name,
- Namespace: namespace,
- },
- }
- err := d.GetContainerLogs(context.Background(), pod, containerID, &v1.PodLogOptions{TailLines: &value}, buf, buf)
- if err != nil {
- return "", err
- }
- return buf.String(), nil
-}
-
-// criSupportedLogDrivers are log drivers supported by native CRI integration.
-var criSupportedLogDrivers = []string{"json-file"}
-
-// IsCRISupportedLogDriver checks whether the logging driver used by docker is
-// supported by native CRI integration.
-func (d *dockerService) IsCRISupportedLogDriver() (bool, error) {
- info, err := d.client.Info()
- if err != nil {
- return false, fmt.Errorf("failed to get docker info: %v", err)
- }
- for _, driver := range criSupportedLogDrivers {
- if info.LoggingDriver == driver {
- return true, nil
- }
- }
- return false, nil
-}
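
GetContainerLogs above leans on sharedLimitWriter and errMaximumWrite, which are defined elsewhere in the dockershim package and therefore not visible in this diff. A rough standalone sketch of the idea follows, assuming the real implementation differs in details (it likely also handles concurrent writes): two writers draw on one shared byte budget, and hitting the budget surfaces as a sentinel error the caller can treat as a clean stop.

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
)

// errMaximumWrite mirrors the sentinel the deleted code checks with errors.Is.
var errMaximumWrite = errors.New("maximum write reached")

// limitWriter lets several writers share a single remaining-byte budget.
type limitWriter struct {
	w      io.Writer
	remain *int64 // shared between stdout and stderr
}

func (l *limitWriter) Write(p []byte) (int, error) {
	if *l.remain <= 0 {
		return 0, errMaximumWrite
	}
	if int64(len(p)) > *l.remain {
		p = p[:*l.remain] // truncate to whatever budget is left
	}
	n, err := l.w.Write(p)
	*l.remain -= int64(n)
	if err == nil && *l.remain <= 0 {
		err = errMaximumWrite
	}
	return n, err
}

func sharedLimitWriter(w io.Writer, limit *int64) io.Writer {
	return &limitWriter{w: w, remain: limit}
}

func main() {
	var stdout, stderr bytes.Buffer
	max := int64(10) // total budget shared by both streams
	outW := sharedLimitWriter(&stdout, &max)
	errW := sharedLimitWriter(&stderr, &max)

	outW.Write([]byte("1234567"))         // uses 7 of the 10 bytes
	_, err := errW.Write([]byte("89012")) // only 3 bytes of budget remain
	fmt.Println(stdout.String(), stderr.String(), errors.Is(err, errMaximumWrite))
}
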
diff --git a/pkg/kubelet/dockershim/docker_logs.go b/pkg/kubelet/dockershim/docker_logs.go
deleted file mode 100644
index d68903ef15c..00000000000
--- a/pkg/kubelet/dockershim/docker_logs.go
+++ /dev/null
@@ -1,32 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2018 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "fmt"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-// ReopenContainerLog reopens the container log file.
-func (ds *dockerService) ReopenContainerLog(_ context.Context, _ *runtimeapi.ReopenContainerLogRequest) (*runtimeapi.ReopenContainerLogResponse, error) {
- return nil, fmt.Errorf("docker does not support reopening container log files")
-}
diff --git a/pkg/kubelet/dockershim/docker_sandbox.go b/pkg/kubelet/dockershim/docker_sandbox.go
deleted file mode 100644
index 8914786a91d..00000000000
--- a/pkg/kubelet/dockershim/docker_sandbox.go
+++ /dev/null
@@ -1,773 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "encoding/json"
- "fmt"
- "os"
- "strings"
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockerfilters "github.com/docker/docker/api/types/filters"
- utilerrors "k8s.io/apimachinery/pkg/util/errors"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/klog/v2"
- "k8s.io/kubernetes/pkg/kubelet/checkpointmanager"
- "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/errors"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
- "k8s.io/kubernetes/pkg/kubelet/types"
-)
-
-const (
- defaultSandboxImage = "k8s.gcr.io/pause:3.6"
-
-	// Various default sandbox resource requests/limits.
- defaultSandboxCPUshares int64 = 2
-
- // defaultSandboxOOMAdj is the oom score adjustment for the docker
- // sandbox container. Using this OOM adj makes it very unlikely, but not
- // impossible, that the defaultSandox will experience an oom kill. -998
-	// impossible, that the default sandbox will experience an oom kill. -998
-	// is chosen to signify that the sandbox should be OOM killed before other more
- defaultSandboxOOMAdj int = -998
-
- // Name of the underlying container runtime
- runtimeName = "docker"
-)
-
-var (
- // Termination grace period
- defaultSandboxGracePeriod = time.Duration(10) * time.Second
-)
-
-// Returns whether the sandbox network is ready, and whether the sandbox is known
-func (ds *dockerService) getNetworkReady(podSandboxID string) (bool, bool) {
- ds.networkReadyLock.Lock()
- defer ds.networkReadyLock.Unlock()
- ready, ok := ds.networkReady[podSandboxID]
- return ready, ok
-}
-
-func (ds *dockerService) setNetworkReady(podSandboxID string, ready bool) {
- ds.networkReadyLock.Lock()
- defer ds.networkReadyLock.Unlock()
- ds.networkReady[podSandboxID] = ready
-}
-
-func (ds *dockerService) clearNetworkReady(podSandboxID string) {
- ds.networkReadyLock.Lock()
- defer ds.networkReadyLock.Unlock()
- delete(ds.networkReady, podSandboxID)
-}
-
-// RunPodSandbox creates and starts a pod-level sandbox. Runtimes should ensure
-// the sandbox is in ready state.
-// For docker, PodSandbox is implemented by a container holding the network
-// namespace for the pod.
-// Note: docker doesn't use LogDirectory (yet).
-func (ds *dockerService) RunPodSandbox(ctx context.Context, r *runtimeapi.RunPodSandboxRequest) (*runtimeapi.RunPodSandboxResponse, error) {
- config := r.GetConfig()
-
- // Step 1: Pull the image for the sandbox.
- image := defaultSandboxImage
- podSandboxImage := ds.podSandboxImage
- if len(podSandboxImage) != 0 {
- image = podSandboxImage
- }
-
- // NOTE: To use a custom sandbox image in a private repository, users need to configure the nodes with credentials properly.
- // see: https://kubernetes.io/docs/user-guide/images/#configuring-nodes-to-authenticate-to-a-private-registry
- // Only pull sandbox image when it's not present - v1.PullIfNotPresent.
- if err := ensureSandboxImageExists(ds.client, image); err != nil {
- return nil, err
- }
-
- // Step 2: Create the sandbox container.
- if r.GetRuntimeHandler() != "" && r.GetRuntimeHandler() != runtimeName {
- return nil, fmt.Errorf("RuntimeHandler %q not supported", r.GetRuntimeHandler())
- }
- createConfig, err := ds.makeSandboxDockerConfig(config, image)
- if err != nil {
- return nil, fmt.Errorf("failed to make sandbox docker config for pod %q: %v", config.Metadata.Name, err)
- }
- createResp, err := ds.client.CreateContainer(*createConfig)
- if err != nil {
- createResp, err = recoverFromCreationConflictIfNeeded(ds.client, *createConfig, err)
- }
-
- if err != nil || createResp == nil {
- return nil, fmt.Errorf("failed to create a sandbox for pod %q: %v", config.Metadata.Name, err)
- }
- resp := &runtimeapi.RunPodSandboxResponse{PodSandboxId: createResp.ID}
-
- ds.setNetworkReady(createResp.ID, false)
- defer func(e *error) {
- // Set networking ready depending on the error return of
- // the parent function
- if *e == nil {
- ds.setNetworkReady(createResp.ID, true)
- }
- }(&err)
-
- // Step 3: Create Sandbox Checkpoint.
- if err = ds.checkpointManager.CreateCheckpoint(createResp.ID, constructPodSandboxCheckpoint(config)); err != nil {
- return nil, err
- }
-
- // Step 4: Start the sandbox container.
- // Assume kubelet's garbage collector would remove the sandbox later, if
- // startContainer failed.
- err = ds.client.StartContainer(createResp.ID)
- if err != nil {
- return nil, fmt.Errorf("failed to start sandbox container for pod %q: %v", config.Metadata.Name, err)
- }
-
- // Rewrite resolv.conf file generated by docker.
-	// NOTE: cluster DNS settings are no longer passed to the docker API for any pod,
-	// not only for pods with host network: the resolver conf will be overwritten
- // after sandbox creation to override docker's behaviour. This resolv.conf
- // file is shared by all containers of the same pod, and needs to be modified
- // only once per pod.
- if dnsConfig := config.GetDnsConfig(); dnsConfig != nil {
- containerInfo, err := ds.client.InspectContainer(createResp.ID)
- if err != nil {
- return nil, fmt.Errorf("failed to inspect sandbox container for pod %q: %v", config.Metadata.Name, err)
- }
-
- if err := rewriteResolvFile(containerInfo.ResolvConfPath, dnsConfig.Servers, dnsConfig.Searches, dnsConfig.Options); err != nil {
- return nil, fmt.Errorf("rewrite resolv.conf failed for pod %q: %v", config.Metadata.Name, err)
- }
- }
-
- // Do not invoke network plugins if in hostNetwork mode.
- if config.GetLinux().GetSecurityContext().GetNamespaceOptions().GetNetwork() == runtimeapi.NamespaceMode_NODE {
- return resp, nil
- }
-
- // Step 5: Setup networking for the sandbox.
- // All pod networking is setup by a CNI plugin discovered at startup time.
- // This plugin assigns the pod ip, sets up routes inside the sandbox,
- // creates interfaces etc. In theory, its jurisdiction ends with pod
- // sandbox networking, but it might insert iptables rules or open ports
- // on the host as well, to satisfy parts of the pod spec that aren't
- // recognized by the CNI standard yet.
- cID := kubecontainer.BuildContainerID(runtimeName, createResp.ID)
- networkOptions := make(map[string]string)
- if dnsConfig := config.GetDnsConfig(); dnsConfig != nil {
- // Build DNS options.
- dnsOption, err := json.Marshal(dnsConfig)
- if err != nil {
- return nil, fmt.Errorf("failed to marshal dns config for pod %q: %v", config.Metadata.Name, err)
- }
- networkOptions["dns"] = string(dnsOption)
- }
- err = ds.network.SetUpPod(config.GetMetadata().Namespace, config.GetMetadata().Name, cID, config.Annotations, networkOptions)
- if err != nil {
- errList := []error{fmt.Errorf("failed to set up sandbox container %q network for pod %q: %v", createResp.ID, config.Metadata.Name, err)}
-
- // Ensure network resources are cleaned up even if the plugin
- // succeeded but an error happened between that success and here.
- err = ds.network.TearDownPod(config.GetMetadata().Namespace, config.GetMetadata().Name, cID)
- if err != nil {
- errList = append(errList, fmt.Errorf("failed to clean up sandbox container %q network for pod %q: %v", createResp.ID, config.Metadata.Name, err))
- }
-
- err = ds.client.StopContainer(createResp.ID, defaultSandboxGracePeriod)
- if err != nil {
- errList = append(errList, fmt.Errorf("failed to stop sandbox container %q for pod %q: %v", createResp.ID, config.Metadata.Name, err))
- }
-
- return resp, utilerrors.NewAggregate(errList)
- }
-
- return resp, nil
-}
-
-// StopPodSandbox stops the sandbox. If there are any running containers in the
-// sandbox, they should be force terminated.
-// TODO: This function blocks sandbox teardown on networking teardown. Is it
-// better to cut our losses assuming an out of band GC routine will cleanup
-// after us?
-func (ds *dockerService) StopPodSandbox(ctx context.Context, r *runtimeapi.StopPodSandboxRequest) (*runtimeapi.StopPodSandboxResponse, error) {
- var namespace, name string
- var hostNetwork bool
-
- podSandboxID := r.PodSandboxId
- resp := &runtimeapi.StopPodSandboxResponse{}
-
- // Try to retrieve minimal sandbox information from docker daemon or sandbox checkpoint.
- inspectResult, metadata, statusErr := ds.getPodSandboxDetails(podSandboxID)
- if statusErr == nil {
- namespace = metadata.Namespace
- name = metadata.Name
- hostNetwork = (networkNamespaceMode(inspectResult) == runtimeapi.NamespaceMode_NODE)
- } else {
- checkpoint := NewPodSandboxCheckpoint("", "", &CheckpointData{})
- checkpointErr := ds.checkpointManager.GetCheckpoint(podSandboxID, checkpoint)
-
-		// Proceed if both the sandbox container and the checkpoint could not be found. This means that the following
-		// actions will only have the sandbox ID and will not have the pod namespace and name information.
-		// Return an error if any unexpected error is encountered.
- if checkpointErr != nil {
- if checkpointErr != errors.ErrCheckpointNotFound {
- err := ds.checkpointManager.RemoveCheckpoint(podSandboxID)
- if err != nil {
- klog.ErrorS(err, "Failed to delete corrupt checkpoint for sandbox", "podSandboxID", podSandboxID)
- }
- }
- if libdocker.IsContainerNotFoundError(statusErr) {
- klog.InfoS("Both sandbox container and checkpoint could not be found. Proceed without further sandbox information.", "podSandboxID", podSandboxID)
- } else {
- return nil, utilerrors.NewAggregate([]error{
- fmt.Errorf("failed to get checkpoint for sandbox %q: %v", podSandboxID, checkpointErr),
- fmt.Errorf("failed to get sandbox status: %v", statusErr)})
- }
- } else {
- _, name, namespace, _, hostNetwork = checkpoint.GetData()
- }
- }
-
-	// WARNING: The following operations make the following assumptions:
- // 1. kubelet will retry on any error returned by StopPodSandbox.
- // 2. tearing down network and stopping sandbox container can succeed in any sequence.
- // This depends on the implementation detail of network plugin and proper error handling.
- // For kubenet, if tearing down network failed and sandbox container is stopped, kubelet
- // will retry. On retry, kubenet will not be able to retrieve network namespace of the sandbox
- // since it is stopped. With empty network namespace, CNI bridge plugin will conduct best
-	// effort cleanup and will not return an error.
- errList := []error{}
- ready, ok := ds.getNetworkReady(podSandboxID)
- if !hostNetwork && (ready || !ok) {
- // Only tear down the pod network if we haven't done so already
- cID := kubecontainer.BuildContainerID(runtimeName, podSandboxID)
- err := ds.network.TearDownPod(namespace, name, cID)
- if err == nil {
- ds.setNetworkReady(podSandboxID, false)
- } else {
- errList = append(errList, err)
- }
- }
- if err := ds.client.StopContainer(podSandboxID, defaultSandboxGracePeriod); err != nil {
- // Do not return error if the container does not exist
- if !libdocker.IsContainerNotFoundError(err) {
- klog.ErrorS(err, "Failed to stop sandbox", "podSandboxID", podSandboxID)
- errList = append(errList, err)
- } else {
- // remove the checkpoint for any sandbox that is not found in the runtime
- ds.checkpointManager.RemoveCheckpoint(podSandboxID)
- }
- }
-
- if len(errList) == 0 {
- return resp, nil
- }
-
- // TODO: Stop all running containers in the sandbox.
- return nil, utilerrors.NewAggregate(errList)
-}
-
-// RemovePodSandbox removes the sandbox. If there are running containers in the
-// sandbox, they should be forcibly removed.
-func (ds *dockerService) RemovePodSandbox(ctx context.Context, r *runtimeapi.RemovePodSandboxRequest) (*runtimeapi.RemovePodSandboxResponse, error) {
- podSandboxID := r.PodSandboxId
- var errs []error
-
- opts := dockertypes.ContainerListOptions{All: true}
-
- opts.Filters = dockerfilters.NewArgs()
- f := newDockerFilter(&opts.Filters)
- f.AddLabel(sandboxIDLabelKey, podSandboxID)
-
- containers, err := ds.client.ListContainers(opts)
- if err != nil {
- errs = append(errs, err)
- }
-
- // Remove all containers in the sandbox.
- for i := range containers {
- if _, err := ds.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: containers[i].ID}); err != nil && !libdocker.IsContainerNotFoundError(err) {
- errs = append(errs, err)
- }
- }
-
- // Remove the sandbox container.
- err = ds.client.RemoveContainer(podSandboxID, dockertypes.ContainerRemoveOptions{RemoveVolumes: true, Force: true})
- if err == nil || libdocker.IsContainerNotFoundError(err) {
- // Only clear network ready when the sandbox has actually been
- // removed from docker or doesn't exist
- ds.clearNetworkReady(podSandboxID)
- } else {
- errs = append(errs, err)
- }
-
- // Remove the checkpoint of the sandbox.
- if err := ds.checkpointManager.RemoveCheckpoint(podSandboxID); err != nil {
- errs = append(errs, err)
- }
- if len(errs) == 0 {
- return &runtimeapi.RemovePodSandboxResponse{}, nil
- }
- return nil, utilerrors.NewAggregate(errs)
-}
-
-// getIPsFromPlugin interrogates the network plugin for sandbox IPs.
-func (ds *dockerService) getIPsFromPlugin(sandbox *dockertypes.ContainerJSON) ([]string, error) {
- metadata, err := parseSandboxName(sandbox.Name)
- if err != nil {
- return nil, err
- }
- msg := fmt.Sprintf("Couldn't find network status for %s/%s through plugin", metadata.Namespace, metadata.Name)
- cID := kubecontainer.BuildContainerID(runtimeName, sandbox.ID)
- networkStatus, err := ds.network.GetPodNetworkStatus(metadata.Namespace, metadata.Name, cID)
- if err != nil {
- return nil, err
- }
- if networkStatus == nil {
-		return nil, fmt.Errorf("%v: invalid network status", msg)
- }
-
- ips := make([]string, 0)
- for _, ip := range networkStatus.IPs {
- ips = append(ips, ip.String())
- }
-	// if we don't have any IPs in our list then CNI is using the classic primary IP only
- if len(ips) == 0 {
- ips = append(ips, networkStatus.IP.String())
- }
- return ips, nil
-}
-
-// getIPs returns the ip given the output of `docker inspect` on a pod sandbox,
-// first interrogating any registered plugins, then simply trusting the ip
-// in the sandbox itself. We look for an ipv4 address before ipv6.
-func (ds *dockerService) getIPs(podSandboxID string, sandbox *dockertypes.ContainerJSON) []string {
- if sandbox.NetworkSettings == nil {
- return nil
- }
- if networkNamespaceMode(sandbox) == runtimeapi.NamespaceMode_NODE {
- // For sandboxes using host network, the shim is not responsible for
- // reporting the IP.
- return nil
- }
-
- // Don't bother getting IP if the pod is known and networking isn't ready
- ready, ok := ds.getNetworkReady(podSandboxID)
- if ok && !ready {
- return nil
- }
-
- ips, err := ds.getIPsFromPlugin(sandbox)
- if err == nil {
- return ips
- }
-
- ips = make([]string, 0)
- // TODO: trusting the docker ip is not a great idea. However docker uses
- // eth0 by default and so does CNI, so if we find a docker IP here, we
- // conclude that the plugin must have failed setup, or forgotten its ip.
- // This is not a sensible assumption for plugins across the board, but if
- // a plugin doesn't want this behavior, it can throw an error.
- if sandbox.NetworkSettings.IPAddress != "" {
- ips = append(ips, sandbox.NetworkSettings.IPAddress)
- }
- if sandbox.NetworkSettings.GlobalIPv6Address != "" {
- ips = append(ips, sandbox.NetworkSettings.GlobalIPv6Address)
- }
-
- // If all else fails, warn but don't return an error, as pod status
- // should generally not return anything except fatal errors
- // FIXME: handle network errors by restarting the pod somehow?
- klog.InfoS("Failed to read pod IP from plugin/docker", "err", err)
- return ips
-}
-
-// getPodSandboxDetails returns the inspect container response and the sandbox metadata.
-func (ds *dockerService) getPodSandboxDetails(podSandboxID string) (*dockertypes.ContainerJSON, *runtimeapi.PodSandboxMetadata, error) {
- resp, err := ds.client.InspectContainer(podSandboxID)
- if err != nil {
- return nil, nil, err
- }
-
- metadata, err := parseSandboxName(resp.Name)
- if err != nil {
- return nil, nil, err
- }
-
- return resp, metadata, nil
-}
-
-// PodSandboxStatus returns the status of the PodSandbox.
-func (ds *dockerService) PodSandboxStatus(ctx context.Context, req *runtimeapi.PodSandboxStatusRequest) (*runtimeapi.PodSandboxStatusResponse, error) {
- podSandboxID := req.PodSandboxId
-
- r, metadata, err := ds.getPodSandboxDetails(podSandboxID)
- if err != nil {
- return nil, err
- }
-
- // Parse the timestamps.
- createdAt, _, _, err := getContainerTimestamps(r)
- if err != nil {
- return nil, fmt.Errorf("failed to parse timestamp for container %q: %v", podSandboxID, err)
- }
- ct := createdAt.UnixNano()
-
- // Translate container to sandbox state.
- state := runtimeapi.PodSandboxState_SANDBOX_NOTREADY
- if r.State.Running {
- state = runtimeapi.PodSandboxState_SANDBOX_READY
- }
-
- var ips []string
- // TODO: Remove this when sandbox is available on windows
- // This is a workaround for windows, where sandbox is not in use, and pod IP is determined through containers belonging to the Pod.
- if ips = ds.determinePodIPBySandboxID(podSandboxID); len(ips) == 0 {
- ips = ds.getIPs(podSandboxID, r)
- }
-
-	// ip is the primary IP
-	// ips holds any additional IPs
- ip := ""
- if len(ips) != 0 {
- ip = ips[0]
- ips = ips[1:]
- }
-
- labels, annotations := extractLabels(r.Config.Labels)
- status := &runtimeapi.PodSandboxStatus{
- Id: r.ID,
- State: state,
- CreatedAt: ct,
- Metadata: metadata,
- Labels: labels,
- Annotations: annotations,
- Network: &runtimeapi.PodSandboxNetworkStatus{
- Ip: ip,
- },
- Linux: &runtimeapi.LinuxPodSandboxStatus{
- Namespaces: &runtimeapi.Namespace{
- Options: &runtimeapi.NamespaceOption{
- Network: networkNamespaceMode(r),
- Pid: pidNamespaceMode(r),
- Ipc: ipcNamespaceMode(r),
- },
- },
- },
- }
- // add additional IPs
- additionalPodIPs := make([]*runtimeapi.PodIP, 0, len(ips))
- for _, ip := range ips {
- additionalPodIPs = append(additionalPodIPs, &runtimeapi.PodIP{
- Ip: ip,
- })
- }
- status.Network.AdditionalIps = additionalPodIPs
- return &runtimeapi.PodSandboxStatusResponse{Status: status}, nil
-}
-
-// ListPodSandbox returns a list of Sandbox.
-func (ds *dockerService) ListPodSandbox(_ context.Context, r *runtimeapi.ListPodSandboxRequest) (*runtimeapi.ListPodSandboxResponse, error) {
- filter := r.GetFilter()
-
- // By default, list all containers whether they are running or not.
- opts := dockertypes.ContainerListOptions{All: true}
- filterOutReadySandboxes := false
-
- opts.Filters = dockerfilters.NewArgs()
- f := newDockerFilter(&opts.Filters)
- // Add filter to select only sandbox containers.
- f.AddLabel(containerTypeLabelKey, containerTypeLabelSandbox)
-
- if filter != nil {
- if filter.Id != "" {
- f.Add("id", filter.Id)
- }
- if filter.State != nil {
- if filter.GetState().State == runtimeapi.PodSandboxState_SANDBOX_READY {
- // Only list running containers.
- opts.All = false
- } else {
- // runtimeapi.PodSandboxState_SANDBOX_NOTREADY can mean the
- // container is in any of the non-running states (e.g., created,
- // exited). We can't tell docker to filter out running
- // containers directly, so we'll need to filter them out
- // ourselves after getting the results.
- filterOutReadySandboxes = true
- }
- }
-
- if filter.LabelSelector != nil {
- for k, v := range filter.LabelSelector {
- f.AddLabel(k, v)
- }
- }
- }
-
- // Make sure we get the list of checkpoints first so that we don't include
- // new PodSandboxes that are being created right now.
- var err error
- checkpoints := []string{}
- if filter == nil {
- checkpoints, err = ds.checkpointManager.ListCheckpoints()
- if err != nil {
- klog.ErrorS(err, "Failed to list checkpoints")
- }
- }
-
- containers, err := ds.client.ListContainers(opts)
- if err != nil {
- return nil, err
- }
-
- // Convert docker containers to runtime api sandboxes.
- result := []*runtimeapi.PodSandbox{}
- // using map as set
- sandboxIDs := make(map[string]bool)
- for i := range containers {
- c := containers[i]
- converted, err := containerToRuntimeAPISandbox(&c)
- if err != nil {
- klog.V(4).InfoS("Unable to convert docker to runtime API sandbox", "containerName", c.Names, "err", err)
- continue
- }
- if filterOutReadySandboxes && converted.State == runtimeapi.PodSandboxState_SANDBOX_READY {
- continue
- }
- sandboxIDs[converted.Id] = true
- result = append(result, converted)
- }
-
- // Include sandboxes that can only be found via their checkpoints if no filter is applied.
- // These PodSandboxes will only include PodSandboxID, Name, and Namespace.
- // These PodSandboxes will be in PodSandboxState_SANDBOX_NOTREADY state.
- for _, id := range checkpoints {
- if _, ok := sandboxIDs[id]; ok {
- continue
- }
- checkpoint := NewPodSandboxCheckpoint("", "", &CheckpointData{})
- err := ds.checkpointManager.GetCheckpoint(id, checkpoint)
- if err != nil {
- klog.ErrorS(err, "Failed to retrieve checkpoint for sandbox", "sandboxID", id)
- if err == errors.ErrCorruptCheckpoint {
- err = ds.checkpointManager.RemoveCheckpoint(id)
- if err != nil {
- klog.ErrorS(err, "Failed to delete corrupt checkpoint for sandbox", "sandboxID", id)
- }
- }
- continue
- }
- result = append(result, checkpointToRuntimeAPISandbox(id, checkpoint))
- }
-
- return &runtimeapi.ListPodSandboxResponse{Items: result}, nil
-}
-
-// applySandboxLinuxOptions applies LinuxPodSandboxConfig to dockercontainer.HostConfig and dockercontainer.ContainerCreateConfig.
-func (ds *dockerService) applySandboxLinuxOptions(hc *dockercontainer.HostConfig, lc *runtimeapi.LinuxPodSandboxConfig, createConfig *dockertypes.ContainerCreateConfig, image string, separator rune) error {
- if lc == nil {
- return nil
- }
- // Apply security context.
- if err := applySandboxSecurityContext(lc, createConfig.Config, hc, ds.network, separator); err != nil {
- return err
- }
-
- // Set sysctls.
- hc.Sysctls = lc.Sysctls
- return nil
-}
-
-func (ds *dockerService) applySandboxResources(hc *dockercontainer.HostConfig, lc *runtimeapi.LinuxPodSandboxConfig) error {
- hc.Resources = dockercontainer.Resources{
- MemorySwap: DefaultMemorySwap(),
- CPUShares: defaultSandboxCPUshares,
- // Use docker's default cpu quota/period.
- }
-
- if lc != nil {
- // Apply Cgroup options.
- cgroupParent, err := ds.GenerateExpectedCgroupParent(lc.CgroupParent)
- if err != nil {
- return err
- }
- hc.CgroupParent = cgroupParent
- }
- return nil
-}
-
-// makeSandboxDockerConfig returns dockertypes.ContainerCreateConfig based on runtimeapi.PodSandboxConfig.
-func (ds *dockerService) makeSandboxDockerConfig(c *runtimeapi.PodSandboxConfig, image string) (*dockertypes.ContainerCreateConfig, error) {
- // Merge annotations and labels because docker supports only labels.
- labels := makeLabels(c.GetLabels(), c.GetAnnotations())
- // Apply a label to distinguish sandboxes from regular containers.
- labels[containerTypeLabelKey] = containerTypeLabelSandbox
- // Apply a container name label for the infra container. This is used in summary v1.
- // TODO(random-liu): Deprecate this label once container metrics are obtained directly from CRI.
- labels[types.KubernetesContainerNameLabel] = sandboxContainerName
-
- hc := &dockercontainer.HostConfig{
- IpcMode: dockercontainer.IpcMode("shareable"),
- }
- createConfig := &dockertypes.ContainerCreateConfig{
- Name: makeSandboxName(c),
- Config: &dockercontainer.Config{
- Hostname: c.Hostname,
- // TODO: Handle environment variables.
- Image: image,
- Labels: labels,
- },
- HostConfig: hc,
- }
-
- // Apply linux-specific options.
- if err := ds.applySandboxLinuxOptions(hc, c.GetLinux(), createConfig, image, securityOptSeparator); err != nil {
- return nil, err
- }
-
- // Set port mappings.
- exposedPorts, portBindings := makePortsAndBindings(c.GetPortMappings())
- createConfig.Config.ExposedPorts = exposedPorts
- hc.PortBindings = portBindings
-
- hc.OomScoreAdj = defaultSandboxOOMAdj
-
- // Apply resource options.
- if err := ds.applySandboxResources(hc, c.GetLinux()); err != nil {
- return nil, err
- }
-
- // Set security options.
- securityOpts := ds.getSandBoxSecurityOpts(securityOptSeparator)
- hc.SecurityOpt = append(hc.SecurityOpt, securityOpts...)
-
- return createConfig, nil
-}
-
-// networkNamespaceMode returns the network runtimeapi.NamespaceMode for this container.
-// Supports: POD, NODE
-func networkNamespaceMode(container *dockertypes.ContainerJSON) runtimeapi.NamespaceMode {
- if container != nil && container.HostConfig != nil && string(container.HostConfig.NetworkMode) == namespaceModeHost {
- return runtimeapi.NamespaceMode_NODE
- }
- return runtimeapi.NamespaceMode_POD
-}
-
-// pidNamespaceMode returns the PID runtimeapi.NamespaceMode for this container.
-// Supports: CONTAINER, NODE
-// TODO(verb): add support for POD PID namespace sharing
-func pidNamespaceMode(container *dockertypes.ContainerJSON) runtimeapi.NamespaceMode {
- if container != nil && container.HostConfig != nil && string(container.HostConfig.PidMode) == namespaceModeHost {
- return runtimeapi.NamespaceMode_NODE
- }
- return runtimeapi.NamespaceMode_CONTAINER
-}
-
-// ipcNamespaceMode returns the IPC runtimeapi.NamespaceMode for this container.
-// Supports: POD, NODE
-func ipcNamespaceMode(container *dockertypes.ContainerJSON) runtimeapi.NamespaceMode {
- if container != nil && container.HostConfig != nil && string(container.HostConfig.IpcMode) == namespaceModeHost {
- return runtimeapi.NamespaceMode_NODE
- }
- return runtimeapi.NamespaceMode_POD
-}
-
-func constructPodSandboxCheckpoint(config *runtimeapi.PodSandboxConfig) checkpointmanager.Checkpoint {
- data := CheckpointData{}
- for _, pm := range config.GetPortMappings() {
- proto := toCheckpointProtocol(pm.Protocol)
- data.PortMappings = append(data.PortMappings, &PortMapping{
- HostPort: &pm.HostPort,
- ContainerPort: &pm.ContainerPort,
- Protocol: &proto,
- HostIP: pm.HostIp,
- })
- }
- if config.GetLinux().GetSecurityContext().GetNamespaceOptions().GetNetwork() == runtimeapi.NamespaceMode_NODE {
- data.HostNetwork = true
- }
- return NewPodSandboxCheckpoint(config.Metadata.Namespace, config.Metadata.Name, &data)
-}
-
-func toCheckpointProtocol(protocol runtimeapi.Protocol) Protocol {
- switch protocol {
- case runtimeapi.Protocol_TCP:
- return protocolTCP
- case runtimeapi.Protocol_UDP:
- return protocolUDP
- case runtimeapi.Protocol_SCTP:
- return protocolSCTP
- }
- klog.InfoS("Unknown protocol, defaulting to TCP", "protocol", protocol)
- return protocolTCP
-}
-
- // rewriteResolvFile rewrites the resolv.conf file generated by docker.
-func rewriteResolvFile(resolvFilePath string, dns []string, dnsSearch []string, dnsOptions []string) error {
- if len(resolvFilePath) == 0 {
- klog.ErrorS(nil, "ResolvConfPath is empty.")
- return nil
- }
-
- if _, err := os.Stat(resolvFilePath); os.IsNotExist(err) {
- return fmt.Errorf("ResolvConfPath %q does not exist", resolvFilePath)
- }
-
- var resolvFileContent []string
- for _, srv := range dns {
- resolvFileContent = append(resolvFileContent, "nameserver "+srv)
- }
-
- if len(dnsSearch) > 0 {
- resolvFileContent = append(resolvFileContent, "search "+strings.Join(dnsSearch, " "))
- }
-
- if len(dnsOptions) > 0 {
- resolvFileContent = append(resolvFileContent, "options "+strings.Join(dnsOptions, " "))
- }
-
- if len(resolvFileContent) > 0 {
- resolvFileContentStr := strings.Join(resolvFileContent, "\n")
- resolvFileContentStr += "\n"
-
- klog.V(4).InfoS("Will attempt to re-write config file", "path", resolvFilePath, "fileContent", resolvFileContent)
- if err := rewriteFile(resolvFilePath, resolvFileContentStr); err != nil {
- klog.ErrorS(err, "Resolv.conf could not be updated")
- return err
- }
- }
-
- return nil
-}
-
-func rewriteFile(filePath, stringToWrite string) error {
- f, err := os.OpenFile(filePath, os.O_TRUNC|os.O_WRONLY, 0644)
- if err != nil {
- return err
- }
- defer f.Close()
-
- _, err = f.WriteString(stringToWrite)
- return err
-}
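Illustrative note (not part of the deleted source above): rewriteResolvFile assembles one "nameserver" line per DNS server, followed by optional "search" and "options" lines, and writes the result with a trailing newline. A runnable sketch of that assembly logic with made-up DNS values:

package main

import (
	"fmt"
	"strings"
)

// buildResolvConf mirrors the content assembly in rewriteResolvFile above.
func buildResolvConf(dns, dnsSearch, dnsOptions []string) string {
	var lines []string
	for _, srv := range dns {
		lines = append(lines, "nameserver "+srv)
	}
	if len(dnsSearch) > 0 {
		lines = append(lines, "search "+strings.Join(dnsSearch, " "))
	}
	if len(dnsOptions) > 0 {
		lines = append(lines, "options "+strings.Join(dnsOptions, " "))
	}
	if len(lines) == 0 {
		return ""
	}
	return strings.Join(lines, "\n") + "\n"
}

func main() {
	// Example values only; real values come from the pod's DNS configuration.
	fmt.Print(buildResolvConf(
		[]string{"10.96.0.10"},
		[]string{"default.svc.cluster.local", "svc.cluster.local"},
		[]string{"ndots:5"},
	))
}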
diff --git a/pkg/kubelet/dockershim/docker_sandbox_linux_test.go b/pkg/kubelet/dockershim/docker_sandbox_linux_test.go
deleted file mode 100644
index 2c6acfbf440..00000000000
--- a/pkg/kubelet/dockershim/docker_sandbox_linux_test.go
+++ /dev/null
@@ -1,39 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2021 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "testing"
-
- "github.com/stretchr/testify/assert"
-)
-
-// TestSandboxHasLeastPrivilegesConfig tests that the sandbox is set with no-new-privileges
-// and it uses runtime/default seccomp profile.
-func TestSandboxHasLeastPrivilegesConfig(t *testing.T) {
- ds, _, _ := newTestDockerService()
- config := makeSandboxConfig("foo", "bar", "1", 0)
-
- // test the default
- createConfig, err := ds.makeSandboxDockerConfig(config, defaultSandboxImage)
- assert.NoError(t, err)
- assert.Equal(t, len(createConfig.HostConfig.SecurityOpt), 1, "sandbox should use runtime/default")
- assert.Equal(t, "no-new-privileges", createConfig.HostConfig.SecurityOpt[0], "no-new-privileges not set")
-}
diff --git a/pkg/kubelet/dockershim/docker_sandbox_test.go b/pkg/kubelet/dockershim/docker_sandbox_test.go
deleted file mode 100644
index c2c2e2c6100..00000000000
--- a/pkg/kubelet/dockershim/docker_sandbox_test.go
+++ /dev/null
@@ -1,314 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "errors"
- "fmt"
- "math/rand"
- "net"
- "testing"
- "time"
-
- "github.com/golang/mock/gomock"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- nettest "k8s.io/kubernetes/pkg/kubelet/dockershim/network/testing"
- "k8s.io/kubernetes/pkg/kubelet/types"
-)
-
-// A helper to create a basic config.
-func makeSandboxConfig(name, namespace, uid string, attempt uint32) *runtimeapi.PodSandboxConfig {
- return makeSandboxConfigWithLabelsAndAnnotations(name, namespace, uid, attempt, map[string]string{}, map[string]string{})
-}
-
-func makeSandboxConfigWithLabelsAndAnnotations(name, namespace, uid string, attempt uint32, labels, annotations map[string]string) *runtimeapi.PodSandboxConfig {
- return &runtimeapi.PodSandboxConfig{
- Metadata: &runtimeapi.PodSandboxMetadata{
- Name: name,
- Namespace: namespace,
- Uid: uid,
- Attempt: attempt,
- },
- Labels: labels,
- Annotations: annotations,
- }
-}
-
- // TestListSandboxes creates several sandboxes and then lists them to check
- // whether the correct metadata, states, and labels are returned.
-func TestListSandboxes(t *testing.T) {
- ds, _, fakeClock := newTestDockerService()
- name, namespace := "foo", "bar"
- configs := []*runtimeapi.PodSandboxConfig{}
- for i := 0; i < 3; i++ {
- c := makeSandboxConfigWithLabelsAndAnnotations(fmt.Sprintf("%s%d", name, i),
- fmt.Sprintf("%s%d", namespace, i), fmt.Sprintf("%d", i), 0,
- map[string]string{"label": fmt.Sprintf("foo%d", i)},
- map[string]string{"annotation": fmt.Sprintf("bar%d", i)},
- )
- configs = append(configs, c)
- }
-
- expected := []*runtimeapi.PodSandbox{}
- state := runtimeapi.PodSandboxState_SANDBOX_READY
- var createdAt int64 = fakeClock.Now().UnixNano()
- for i := range configs {
- runResp, err := ds.RunPodSandbox(getTestCTX(), &runtimeapi.RunPodSandboxRequest{Config: configs[i]})
- require.NoError(t, err)
- // Prepend to the expected list because ListPodSandbox returns
- // the most recent sandbox first.
- expected = append([]*runtimeapi.PodSandbox{{
- Metadata: configs[i].Metadata,
- Id: runResp.PodSandboxId,
- State: state,
- CreatedAt: createdAt,
- Labels: configs[i].Labels,
- Annotations: configs[i].Annotations,
- }}, expected...)
- }
- listResp, err := ds.ListPodSandbox(getTestCTX(), &runtimeapi.ListPodSandboxRequest{})
- require.NoError(t, err)
- assert.Len(t, listResp.Items, len(expected))
- assert.Equal(t, expected, listResp.Items)
-}
-
- // TestSandboxStatus tests the basic lifecycle operations and verifies that
- // the status returned reflects the operations performed.
-func TestSandboxStatus(t *testing.T) {
- ds, fDocker, fClock := newTestDockerService()
- labels := map[string]string{"label": "foobar1"}
- annotations := map[string]string{"annotation": "abc"}
- config := makeSandboxConfigWithLabelsAndAnnotations("foo", "bar", "1", 0, labels, annotations)
- r := rand.New(rand.NewSource(0)).Uint32()
- podIP := fmt.Sprintf("10.%d.%d.%d", byte(r>>16), byte(r>>8), byte(r))
-
- state := runtimeapi.PodSandboxState_SANDBOX_READY
- ct := int64(0)
- expected := &runtimeapi.PodSandboxStatus{
- State: state,
- CreatedAt: ct,
- Metadata: config.Metadata,
- Network: &runtimeapi.PodSandboxNetworkStatus{Ip: podIP, AdditionalIps: []*runtimeapi.PodIP{}},
- Linux: &runtimeapi.LinuxPodSandboxStatus{
- Namespaces: &runtimeapi.Namespace{
- Options: &runtimeapi.NamespaceOption{
- Pid: runtimeapi.NamespaceMode_CONTAINER,
- },
- },
- },
- Labels: labels,
- Annotations: annotations,
- }
-
- // Create the sandbox.
- fClock.SetTime(time.Now())
- expected.CreatedAt = fClock.Now().UnixNano()
- runResp, err := ds.RunPodSandbox(getTestCTX(), &runtimeapi.RunPodSandboxRequest{Config: config})
- require.NoError(t, err)
- id := runResp.PodSandboxId
-
- // Check internal labels
- c, err := fDocker.InspectContainer(id)
- assert.NoError(t, err)
- assert.Equal(t, c.Config.Labels[containerTypeLabelKey], containerTypeLabelSandbox)
- assert.Equal(t, c.Config.Labels[types.KubernetesContainerNameLabel], sandboxContainerName)
-
- expected.Id = id // ID is only known after the creation.
- statusResp, err := ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: id})
- require.NoError(t, err)
- assert.Equal(t, expected, statusResp.Status)
-
- // Stop the sandbox.
- expected.State = runtimeapi.PodSandboxState_SANDBOX_NOTREADY
- _, err = ds.StopPodSandbox(getTestCTX(), &runtimeapi.StopPodSandboxRequest{PodSandboxId: id})
- require.NoError(t, err)
- // IP not valid after sandbox stop
- expected.Network.Ip = ""
- expected.Network.AdditionalIps = []*runtimeapi.PodIP{}
- statusResp, err = ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: id})
- require.NoError(t, err)
- assert.Equal(t, expected, statusResp.Status)
-
- // Remove the container.
- _, err = ds.RemovePodSandbox(getTestCTX(), &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
- require.NoError(t, err)
- statusResp, err = ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: id})
- assert.Error(t, err, fmt.Sprintf("status of sandbox: %+v", statusResp))
-}
-
- // TestSandboxStatusAfterRestart tests that retrieving sandbox status returns
- // an IP address even if RunPodSandbox() has not yet been called for this pod,
- // as would happen on kubelet restart.
-func TestSandboxStatusAfterRestart(t *testing.T) {
- ds, _, fClock := newTestDockerService()
- config := makeSandboxConfig("foo", "bar", "1", 0)
- r := rand.New(rand.NewSource(0)).Uint32()
- podIP := fmt.Sprintf("10.%d.%d.%d", byte(r>>16), byte(r>>8), byte(r))
- state := runtimeapi.PodSandboxState_SANDBOX_READY
- ct := int64(0)
- expected := &runtimeapi.PodSandboxStatus{
- State: state,
- CreatedAt: ct,
- Metadata: config.Metadata,
- Network: &runtimeapi.PodSandboxNetworkStatus{Ip: podIP, AdditionalIps: []*runtimeapi.PodIP{}},
- Linux: &runtimeapi.LinuxPodSandboxStatus{
- Namespaces: &runtimeapi.Namespace{
- Options: &runtimeapi.NamespaceOption{
- Pid: runtimeapi.NamespaceMode_CONTAINER,
- },
- },
- },
- Labels: map[string]string{},
- Annotations: map[string]string{},
- }
-
- // Create the sandbox.
- fClock.SetTime(time.Now())
- expected.CreatedAt = fClock.Now().UnixNano()
-
- createConfig, err := ds.makeSandboxDockerConfig(config, defaultSandboxImage)
- assert.NoError(t, err)
-
- createResp, err := ds.client.CreateContainer(*createConfig)
- assert.NoError(t, err)
- err = ds.client.StartContainer(createResp.ID)
- assert.NoError(t, err)
-
- // Check status without RunPodSandbox() having set up networking
- expected.Id = createResp.ID // ID is only known after the creation.
-
- statusResp, err := ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: createResp.ID})
- require.NoError(t, err)
- assert.Equal(t, expected, statusResp.Status)
-}
-
-// TestNetworkPluginInvocation checks that the right SetUpPod and TearDownPod
-// calls are made when we run/stop a sandbox.
-func TestNetworkPluginInvocation(t *testing.T) {
- ds, _, _ := newTestDockerService()
- ctrl := gomock.NewController(t)
- defer ctrl.Finish()
- mockPlugin := nettest.NewMockNetworkPlugin(ctrl)
- ds.network = network.NewPluginManager(mockPlugin)
-
- name := "foo0"
- ns := "bar0"
- c := makeSandboxConfigWithLabelsAndAnnotations(
- name, ns, "0", 0,
- map[string]string{"label": name},
- map[string]string{"annotation": ns},
- )
- cID := kubecontainer.ContainerID{Type: runtimeName, ID: libdocker.GetFakeContainerID(fmt.Sprintf("/%v", makeSandboxName(c)))}
-
- mockPlugin.EXPECT().Name().Return("mockNetworkPlugin").AnyTimes()
- setup := mockPlugin.EXPECT().SetUpPod(ns, name, cID, map[string]string{"annotation": ns}, map[string]string{})
- mockPlugin.EXPECT().TearDownPod(ns, name, cID).After(setup)
-
- _, err := ds.RunPodSandbox(getTestCTX(), &runtimeapi.RunPodSandboxRequest{Config: c})
- require.NoError(t, err)
- _, err = ds.StopPodSandbox(getTestCTX(), &runtimeapi.StopPodSandboxRequest{PodSandboxId: cID.ID})
- require.NoError(t, err)
-}
-
-// TestHostNetworkPluginInvocation checks that *no* SetUp/TearDown calls happen
-// for host network sandboxes.
-func TestHostNetworkPluginInvocation(t *testing.T) {
- ds, _, _ := newTestDockerService()
- ctrl := gomock.NewController(t)
- defer ctrl.Finish()
- mockPlugin := nettest.NewMockNetworkPlugin(ctrl)
- ds.network = network.NewPluginManager(mockPlugin)
-
- name := "foo0"
- ns := "bar0"
- c := makeSandboxConfigWithLabelsAndAnnotations(
- name, ns, "0", 0,
- map[string]string{"label": name},
- map[string]string{"annotation": ns},
- )
- c.Linux = &runtimeapi.LinuxPodSandboxConfig{
- SecurityContext: &runtimeapi.LinuxSandboxSecurityContext{
- NamespaceOptions: &runtimeapi.NamespaceOption{
- Network: runtimeapi.NamespaceMode_NODE,
- },
- },
- }
- cID := kubecontainer.ContainerID{Type: runtimeName, ID: libdocker.GetFakeContainerID(fmt.Sprintf("/%v", makeSandboxName(c)))}
-
- // No calls to network plugin are expected
- _, err := ds.RunPodSandbox(getTestCTX(), &runtimeapi.RunPodSandboxRequest{Config: c})
- require.NoError(t, err)
-
- _, err = ds.StopPodSandbox(getTestCTX(), &runtimeapi.StopPodSandboxRequest{PodSandboxId: cID.ID})
- require.NoError(t, err)
-}
-
- // TestSetUpPodFailure checks that the sandbox is not ready when it
- // hits a SetUpPod failure.
-func TestSetUpPodFailure(t *testing.T) {
- ds, _, _ := newTestDockerService()
- ctrl := gomock.NewController(t)
- defer ctrl.Finish()
- mockPlugin := nettest.NewMockNetworkPlugin(ctrl)
- ds.network = network.NewPluginManager(mockPlugin)
-
- name := "foo0"
- ns := "bar0"
- c := makeSandboxConfigWithLabelsAndAnnotations(
- name, ns, "0", 0,
- map[string]string{"label": name},
- map[string]string{"annotation": ns},
- )
- cID := kubecontainer.ContainerID{Type: runtimeName, ID: libdocker.GetFakeContainerID(fmt.Sprintf("/%v", makeSandboxName(c)))}
- mockPlugin.EXPECT().Name().Return("mockNetworkPlugin").AnyTimes()
- mockPlugin.EXPECT().SetUpPod(ns, name, cID, map[string]string{"annotation": ns}, map[string]string{}).Return(errors.New("setup pod error")).AnyTimes()
- // If SetUpPod() fails, we expect TearDownPod() to immediately follow
- mockPlugin.EXPECT().TearDownPod(ns, name, cID)
- // Even if the network plugin returns a pod network status without error, dockershim should still report the sandbox as not ready.
- mockPlugin.EXPECT().GetPodNetworkStatus(ns, name, cID).Return(&network.PodNetworkStatus{IP: net.IP("127.0.0.01")}, nil).AnyTimes()
-
- t.Logf("RunPodSandbox should return error")
- _, err := ds.RunPodSandbox(getTestCTX(), &runtimeapi.RunPodSandboxRequest{Config: c})
- assert.Error(t, err)
-
- t.Logf("PodSandboxStatus should be not ready")
- statusResp, err := ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: cID.ID})
- require.NoError(t, err)
- assert.Equal(t, runtimeapi.PodSandboxState_SANDBOX_NOTREADY, statusResp.Status.State)
-
- t.Logf("ListPodSandbox should also show not ready")
- listResp, err := ds.ListPodSandbox(getTestCTX(), &runtimeapi.ListPodSandboxRequest{})
- require.NoError(t, err)
- var sandbox *runtimeapi.PodSandbox
- for _, s := range listResp.Items {
- if s.Id == cID.ID {
- sandbox = s
- break
- }
- }
- assert.NotNil(t, sandbox)
- assert.Equal(t, runtimeapi.PodSandboxState_SANDBOX_NOTREADY, sandbox.State)
-}
diff --git a/pkg/kubelet/dockershim/docker_service.go b/pkg/kubelet/dockershim/docker_service.go
deleted file mode 100644
index c6ad4aed011..00000000000
--- a/pkg/kubelet/dockershim/docker_service.go
+++ /dev/null
@@ -1,579 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "fmt"
- "net/http"
- "os"
- "path"
- "path/filepath"
- "runtime"
- "sync"
- "time"
-
- "github.com/blang/semver"
- dockertypes "github.com/docker/docker/api/types"
- "k8s.io/klog/v2"
-
- v1 "k8s.io/api/core/v1"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
- "k8s.io/kubernetes/pkg/kubelet/checkpointmanager"
- "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/errors"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/cri/streaming"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/cm"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/cni"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/hostport"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/kubenet"
- "k8s.io/kubernetes/pkg/kubelet/legacy"
- "k8s.io/kubernetes/pkg/kubelet/util/cache"
-
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/metrics"
-)
-
-const (
- dockerRuntimeName = "docker"
- kubeAPIVersion = "0.1.0"
-
- // String used to detect docker host mode for various namespaces (e.g.
- // networking). Must match the value returned by docker inspect -f
- // '{{.HostConfig.NetworkMode}}'.
- namespaceModeHost = "host"
-
- dockerNetNSFmt = "/proc/%v/ns/net"
-
- // Internal docker labels used to identify whether a container is a sandbox
- // or a regular container.
- // TODO: This is not backward compatible with older containers. We will
- // need to add filtering based on names.
- containerTypeLabelKey = "io.kubernetes.docker.type"
- containerTypeLabelSandbox = "podsandbox"
- containerTypeLabelContainer = "container"
- containerLogPathLabelKey = "io.kubernetes.container.logpath"
- sandboxIDLabelKey = "io.kubernetes.sandbox.id"
-
- // The expiration time of version cache.
- versionCacheTTL = 60 * time.Second
-
- defaultCgroupDriver = "cgroupfs"
-
- // TODO: https://github.com/kubernetes/kubernetes/pull/31169 provides experimental
- // defaulting of host user namespace that may be enabled when the docker daemon
- // is using remapped UIDs.
- // Dockershim should provide detection support for a remapping environment.
- // This should be included in the feature proposal. Defaulting may still occur according
- // to kubelet behavior and system settings in addition to any API flags that may be introduced.
-)
-
-// CRIService includes all methods necessary for a CRI server.
-type CRIService interface {
- runtimeapi.RuntimeServiceServer
- runtimeapi.ImageServiceServer
- Start() error
-}
-
-// DockerService is an interface that embeds the new RuntimeService and
-// ImageService interfaces.
-type DockerService interface {
- CRIService
-
- // For serving streaming calls.
- http.Handler
-
- // For supporting legacy features.
- legacy.DockerLegacyService
-}
-
-// NetworkPluginSettings is the subset of kubelet runtime args we pass
-// to the container runtime shim so it can probe for network plugins.
-// In the future we will feed these directly to a standalone container
-// runtime process.
-type NetworkPluginSettings struct {
- // HairpinMode is best described by comments surrounding the kubelet arg
- HairpinMode kubeletconfig.HairpinMode
- // NonMasqueradeCIDR is the range of IPs which should *not* be included
- // in any MASQUERADE rules applied by the plugin.
- NonMasqueradeCIDR string
- // PluginName is the name of the plugin the runtime shim probes for.
- PluginName string
- // PluginBinDirString is a list of directories delimited by commas, in
- // which the binaries for the plugin with PluginName may be found.
- PluginBinDirString string
- // PluginBinDirs is an array of directories in which the binaries for
- // the plugin with PluginName may be found. The admin is responsible for
- // provisioning these binaries beforehand.
- PluginBinDirs []string
- // PluginConfDir is the directory in which the admin places a CNI conf.
- // Depending on the plugin, this may be an optional field, e.g. kubenet
- // generates its own plugin conf.
- PluginConfDir string
- // PluginCacheDir is the directory in which CNI should store cache files.
- PluginCacheDir string
- // MTU is the desired MTU for network devices created by the plugin.
- MTU int
-}
-
-// namespaceGetter is a wrapper around the dockerService that implements
-// the network.NamespaceGetter interface.
-type namespaceGetter struct {
- ds *dockerService
-}
-
-func (n *namespaceGetter) GetNetNS(containerID string) (string, error) {
- return n.ds.GetNetNS(containerID)
-}
-
-// portMappingGetter is a wrapper around the dockerService that implements
-// the network.PortMappingGetter interface.
-type portMappingGetter struct {
- ds *dockerService
-}
-
-func (p *portMappingGetter) GetPodPortMappings(containerID string) ([]*hostport.PortMapping, error) {
- return p.ds.GetPodPortMappings(containerID)
-}
-
-// dockerNetworkHost implements network.Host by wrapping the legacy host passed in by the kubelet
-// and dockerServices which implements the rest of the network host interfaces.
-// The legacy host methods are slated for deletion.
-type dockerNetworkHost struct {
- *namespaceGetter
- *portMappingGetter
-}
-
-var internalLabelKeys = []string{containerTypeLabelKey, containerLogPathLabelKey, sandboxIDLabelKey}
-
- // ClientConfig holds the parameters used to initialize the docker client.
-type ClientConfig struct {
- DockerEndpoint string
- RuntimeRequestTimeout time.Duration
- ImagePullProgressDeadline time.Duration
-
- // Configuration for fake docker client
- EnableSleep bool
- WithTraceDisabled bool
-}
-
- // NewDockerClientFromConfig creates a docker client from the given config and
- // returns nil if a nil config is given.
-func NewDockerClientFromConfig(config *ClientConfig) libdocker.Interface {
- if config != nil {
- // Create docker client.
- client := libdocker.ConnectToDockerOrDie(
- config.DockerEndpoint,
- config.RuntimeRequestTimeout,
- config.ImagePullProgressDeadline,
- )
- return client
- }
-
- return nil
-}
-
-// NewDockerService creates a new `DockerService` struct.
-// NOTE: Anything passed to DockerService should be eventually handled in another way when we switch to running the shim as a different process.
-func NewDockerService(config *ClientConfig, podSandboxImage string, streamingConfig *streaming.Config, pluginSettings *NetworkPluginSettings,
- cgroupsName string, kubeCgroupDriver string, dockershimRootDir string) (DockerService, error) {
-
- client := NewDockerClientFromConfig(config)
-
- c := libdocker.NewInstrumentedInterface(client)
-
- checkpointManager, err := checkpointmanager.NewCheckpointManager(filepath.Join(dockershimRootDir, sandboxCheckpointDir))
- if err != nil {
- return nil, err
- }
-
- ds := &dockerService{
- client: c,
- os: kubecontainer.RealOS{},
- podSandboxImage: podSandboxImage,
- streamingRuntime: &streamingRuntime{
- client: client,
- execHandler: &NativeExecHandler{},
- },
- containerManager: cm.NewContainerManager(cgroupsName, client),
- checkpointManager: checkpointManager,
- networkReady: make(map[string]bool),
- containerCleanupInfos: make(map[string]*containerCleanupInfo),
- }
-
- // check docker version compatibility.
- if err = ds.checkVersionCompatibility(); err != nil {
- return nil, err
- }
-
- // create streaming server if configured.
- if streamingConfig != nil {
- var err error
- ds.streamingServer, err = streaming.NewServer(*streamingConfig, ds.streamingRuntime)
- if err != nil {
- return nil, err
- }
- }
-
- // Determine the hairpin mode.
- if err := effectiveHairpinMode(pluginSettings); err != nil {
- // This is a non-recoverable error. Returning it up the callstack will just
- // lead to retries of the same failure, so just fail hard.
- return nil, err
- }
- klog.InfoS("Hairpin mode is set", "hairpinMode", pluginSettings.HairpinMode)
-
- // dockershim currently only supports CNI plugins.
- pluginSettings.PluginBinDirs = cni.SplitDirs(pluginSettings.PluginBinDirString)
- cniPlugins := cni.ProbeNetworkPlugins(pluginSettings.PluginConfDir, pluginSettings.PluginCacheDir, pluginSettings.PluginBinDirs)
- cniPlugins = append(cniPlugins, kubenet.NewPlugin(pluginSettings.PluginBinDirs, pluginSettings.PluginCacheDir))
- netHost := &dockerNetworkHost{
- &namespaceGetter{ds},
- &portMappingGetter{ds},
- }
- plug, err := network.InitNetworkPlugin(cniPlugins, pluginSettings.PluginName, netHost, pluginSettings.HairpinMode, pluginSettings.NonMasqueradeCIDR, pluginSettings.MTU)
- if err != nil {
- return nil, fmt.Errorf("didn't find compatible CNI plugin with given settings %+v: %v", pluginSettings, err)
- }
- ds.network = network.NewPluginManager(plug)
- klog.InfoS("Docker cri networking managed by the network plugin", "networkPluginName", plug.Name())
-
- dockerInfo, err := ds.client.Info()
- if err != nil {
- return nil, fmt.Errorf("Failed to execute Info() call to the Docker client")
- }
- klog.InfoS("Docker Info", "dockerInfo", dockerInfo)
- ds.dockerRootDir = dockerInfo.DockerRootDir
-
- // skipping cgroup driver checks for Windows
- if runtime.GOOS == "linux" {
- cgroupDriver := defaultCgroupDriver
- if len(dockerInfo.CgroupDriver) == 0 {
- klog.InfoS("No cgroup driver is set in Docker")
- klog.InfoS("Falling back to use the default driver", "cgroupDriver", cgroupDriver)
- } else {
- cgroupDriver = dockerInfo.CgroupDriver
- }
- if len(kubeCgroupDriver) != 0 && kubeCgroupDriver != cgroupDriver {
- return nil, fmt.Errorf("misconfiguration: kubelet cgroup driver: %q is different from docker cgroup driver: %q", kubeCgroupDriver, cgroupDriver)
- }
- klog.InfoS("Setting cgroupDriver", "cgroupDriver", cgroupDriver)
- ds.cgroupDriver = cgroupDriver
- }
-
- ds.versionCache = cache.NewObjectCache(
- func() (interface{}, error) {
- return ds.getDockerVersion()
- },
- versionCacheTTL,
- )
-
- // Register prometheus metrics.
- metrics.Register()
-
- return ds, nil
-}
-
-type dockerService struct {
- client libdocker.Interface
- os kubecontainer.OSInterface
- podSandboxImage string
- streamingRuntime *streamingRuntime
- streamingServer streaming.Server
-
- network *network.PluginManager
- // Map of podSandboxID :: network-is-ready
- networkReady map[string]bool
- networkReadyLock sync.Mutex
-
- containerManager cm.ContainerManager
- // cgroup driver used by Docker runtime.
- cgroupDriver string
- checkpointManager checkpointmanager.CheckpointManager
- // caches the version of the runtime.
- // To be compatible with multiple docker versions, we need to perform
- // version checking for some operations. Use this cache to avoid querying
- // the docker daemon every time we need to do such checks.
- versionCache *cache.ObjectCache
-
- // docker root directory
- dockerRootDir string
-
- // containerCleanupInfos maps container IDs to the `containerCleanupInfo` structs
- // needed to clean up after containers have been removed.
- // (see `applyPlatformSpecificDockerConfig` and `performPlatformSpecificContainerCleanup`
- // methods for more info).
- containerCleanupInfos map[string]*containerCleanupInfo
- cleanupInfosLock sync.RWMutex
-}
-
-// TODO: handle context.
-
-// Version returns the runtime name, runtime version and runtime API version
-func (ds *dockerService) Version(_ context.Context, r *runtimeapi.VersionRequest) (*runtimeapi.VersionResponse, error) {
- v, err := ds.getDockerVersion()
- if err != nil {
- return nil, err
- }
- return &runtimeapi.VersionResponse{
- Version: kubeAPIVersion,
- RuntimeName: dockerRuntimeName,
- RuntimeVersion: v.Version,
- RuntimeApiVersion: v.APIVersion,
- }, nil
-}
-
-// getDockerVersion gets the version information from docker.
-func (ds *dockerService) getDockerVersion() (*dockertypes.Version, error) {
- v, err := ds.client.Version()
- if err != nil {
- return nil, fmt.Errorf("failed to get docker version: %v", err)
- }
- // Docker API version (e.g., 1.23) is not semver compatible. Add a ".0"
- // suffix to remedy this.
- v.APIVersion = fmt.Sprintf("%s.0", v.APIVersion)
- return v, nil
-}
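Illustrative note (not part of the deleted source above): the ".0" suffix matters because Docker reports its API version as MAJOR.MINOR (e.g. "1.23"), which the blang/semver package rejects, while "1.23.0" parses cleanly. A small sketch:

package main

import (
	"fmt"

	"github.com/blang/semver"
)

func main() {
	// The bare Docker API version is not a valid semver string.
	if _, err := semver.Parse("1.23"); err != nil {
		fmt.Println("bare API version rejected:", err)
	}
	// Appending ".0", as getDockerVersion does, makes it parseable.
	v, err := semver.Parse("1.23.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("parsed:", v) // parsed: 1.23.0
}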
-
-// UpdateRuntimeConfig updates the runtime config. Currently only handles podCIDR updates.
-func (ds *dockerService) UpdateRuntimeConfig(_ context.Context, r *runtimeapi.UpdateRuntimeConfigRequest) (*runtimeapi.UpdateRuntimeConfigResponse, error) {
- runtimeConfig := r.GetRuntimeConfig()
- if runtimeConfig == nil {
- return &runtimeapi.UpdateRuntimeConfigResponse{}, nil
- }
-
- klog.InfoS("Docker cri received runtime config", "runtimeConfig", runtimeConfig)
- if ds.network != nil && runtimeConfig.NetworkConfig.PodCidr != "" {
- event := make(map[string]interface{})
- event[network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE_DETAIL_CIDR] = runtimeConfig.NetworkConfig.PodCidr
- ds.network.Event(network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE, event)
- }
-
- return &runtimeapi.UpdateRuntimeConfigResponse{}, nil
-}
-
-// GetNetNS returns the network namespace of the given containerID. The ID
-// supplied is typically the ID of a pod sandbox. This getter doesn't try
-// to map non-sandbox IDs to their respective sandboxes.
-func (ds *dockerService) GetNetNS(podSandboxID string) (string, error) {
- r, err := ds.client.InspectContainer(podSandboxID)
- if err != nil {
- return "", err
- }
- return getNetworkNamespace(r)
-}
-
-// GetPodPortMappings returns the port mappings of the given podSandbox ID.
-func (ds *dockerService) GetPodPortMappings(podSandboxID string) ([]*hostport.PortMapping, error) {
- // TODO: get portmappings from docker labels for backward compatibility
- checkpoint := NewPodSandboxCheckpoint("", "", &CheckpointData{})
- err := ds.checkpointManager.GetCheckpoint(podSandboxID, checkpoint)
- // Return empty portMappings if checkpoint is not found
- if err != nil {
- if err == errors.ErrCheckpointNotFound {
- return nil, nil
- }
- errRem := ds.checkpointManager.RemoveCheckpoint(podSandboxID)
- if errRem != nil {
- klog.ErrorS(errRem, "Failed to delete corrupt checkpoint for sandbox", "podSandboxID", podSandboxID)
- }
- return nil, err
- }
- _, _, _, checkpointedPortMappings, _ := checkpoint.GetData()
- portMappings := make([]*hostport.PortMapping, 0, len(checkpointedPortMappings))
- for _, pm := range checkpointedPortMappings {
- proto := toAPIProtocol(*pm.Protocol)
- portMappings = append(portMappings, &hostport.PortMapping{
- HostPort: *pm.HostPort,
- ContainerPort: *pm.ContainerPort,
- Protocol: proto,
- HostIP: pm.HostIP,
- })
- }
- return portMappings, nil
-}
-
-// Start initializes and starts components in dockerService.
-func (ds *dockerService) Start() error {
- ds.initCleanup()
-
- go func() {
- if err := ds.streamingServer.Start(true); err != nil {
- klog.ErrorS(err, "Streaming server stopped unexpectedly")
- os.Exit(1)
- }
- }()
-
- return ds.containerManager.Start()
-}
-
- // initCleanup is responsible for cleaning up any cruft left by previous
-// runs. If there are any errors, it simply logs them.
-func (ds *dockerService) initCleanup() {
- errors := ds.platformSpecificContainerInitCleanup()
-
- for _, err := range errors {
- klog.InfoS("Initialization error", "err", err)
- }
-}
-
-// Status returns the status of the runtime.
-func (ds *dockerService) Status(_ context.Context, r *runtimeapi.StatusRequest) (*runtimeapi.StatusResponse, error) {
- runtimeReady := &runtimeapi.RuntimeCondition{
- Type: runtimeapi.RuntimeReady,
- Status: true,
- }
- networkReady := &runtimeapi.RuntimeCondition{
- Type: runtimeapi.NetworkReady,
- Status: true,
- }
- conditions := []*runtimeapi.RuntimeCondition{runtimeReady, networkReady}
- if _, err := ds.client.Version(); err != nil {
- runtimeReady.Status = false
- runtimeReady.Reason = "DockerDaemonNotReady"
- runtimeReady.Message = fmt.Sprintf("docker: failed to get docker version: %v", err)
- }
- if err := ds.network.Status(); err != nil {
- networkReady.Status = false
- networkReady.Reason = "NetworkPluginNotReady"
- networkReady.Message = fmt.Sprintf("docker: network plugin is not ready: %v", err)
- }
- status := &runtimeapi.RuntimeStatus{Conditions: conditions}
- return &runtimeapi.StatusResponse{Status: status}, nil
-}
-
-func (ds *dockerService) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- if ds.streamingServer != nil {
- ds.streamingServer.ServeHTTP(w, r)
- } else {
- http.NotFound(w, r)
- }
-}
-
- // GenerateExpectedCgroupParent returns the cgroup parent in the syntax expected by the cgroup driver.
-func (ds *dockerService) GenerateExpectedCgroupParent(cgroupParent string) (string, error) {
- if cgroupParent != "" {
- // if docker uses the systemd cgroup driver, it expects *.slice style names for cgroup parent.
- // if we configured kubelet to use --cgroup-driver=cgroupfs, and docker is configured to use systemd driver
- // docker will fail to launch the container because the name we provide will not be a valid slice.
- // this is a very good thing.
- if ds.cgroupDriver == "systemd" {
- // Pass only the last component of the cgroup path to systemd.
- cgroupParent = path.Base(cgroupParent)
- }
- }
- klog.V(3).InfoS("Setting cgroup parent", "cgroupParent", cgroupParent)
- return cgroupParent, nil
-}
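Illustrative note (not part of the deleted source above): with the systemd cgroup driver, only the last path component (a *.slice name) is handed to docker, while the cgroupfs driver keeps the full path. A sketch with a hypothetical cgroup parent:

package main

import (
	"fmt"
	"path"
)

// expectedCgroupParent mirrors the driver-specific behavior of GenerateExpectedCgroupParent above.
func expectedCgroupParent(cgroupDriver, cgroupParent string) string {
	if cgroupParent != "" && cgroupDriver == "systemd" {
		cgroupParent = path.Base(cgroupParent)
	}
	return cgroupParent
}

func main() {
	p := "/kubepods/burstable/pod1234.slice" // made-up example
	fmt.Println(expectedCgroupParent("cgroupfs", p)) // /kubepods/burstable/pod1234.slice
	fmt.Println(expectedCgroupParent("systemd", p))  // pod1234.slice
}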
-
- // checkVersionCompatibility verifies that docker is running a compatible version.
-func (ds *dockerService) checkVersionCompatibility() error {
- apiVersion, err := ds.getDockerAPIVersion()
- if err != nil {
- return err
- }
-
- minAPIVersion, err := semver.Parse(libdocker.MinimumDockerAPIVersion)
- if err != nil {
- return err
- }
-
- // Verify the docker version.
- result := apiVersion.Compare(minAPIVersion)
- if result < 0 {
- return fmt.Errorf("docker API version is older than %s", libdocker.MinimumDockerAPIVersion)
- }
-
- return nil
-}
-
-// getDockerAPIVersion gets the semver-compatible docker api version.
-func (ds *dockerService) getDockerAPIVersion() (*semver.Version, error) {
- var dv *dockertypes.Version
- var err error
- if ds.versionCache != nil {
- dv, err = ds.getDockerVersionFromCache()
- } else {
- dv, err = ds.getDockerVersion()
- }
- if err != nil {
- return nil, err
- }
-
- apiVersion, err := semver.Parse(dv.APIVersion)
- if err != nil {
- return nil, err
- }
- return &apiVersion, nil
-}
-
-func (ds *dockerService) getDockerVersionFromCache() (*dockertypes.Version, error) {
- // We only store one key in the cache.
- const dummyKey = "version"
- value, err := ds.versionCache.Get(dummyKey)
- if err != nil {
- return nil, err
- }
- dv, ok := value.(*dockertypes.Version)
- if !ok {
- return nil, fmt.Errorf("converted to *dockertype.Version error")
- }
- return dv, nil
-}
-
-func toAPIProtocol(protocol Protocol) v1.Protocol {
- switch protocol {
- case protocolTCP:
- return v1.ProtocolTCP
- case protocolUDP:
- return v1.ProtocolUDP
- case protocolSCTP:
- return v1.ProtocolSCTP
- }
- klog.InfoS("Unknown protocol, defaulting to TCP", "protocol", protocol)
- return v1.ProtocolTCP
-}
-
- // effectiveHairpinMode determines the effective hairpin mode given the
- // configured mode and the network plugin in use.
-func effectiveHairpinMode(s *NetworkPluginSettings) error {
- // The hairpin mode setting doesn't matter if:
- // - We're not using a bridge network. This is hard to check because we might
- // be using a plugin.
- // - It's set to hairpin-veth for a container runtime that doesn't know how
- // to set the hairpin flag on the veth's of containers. Currently the
- // docker runtime is the only one that understands this.
- // - It's set to "none".
- if s.HairpinMode == kubeletconfig.PromiscuousBridge || s.HairpinMode == kubeletconfig.HairpinVeth {
- if s.HairpinMode == kubeletconfig.PromiscuousBridge && s.PluginName != "kubenet" {
- // This is not a valid combination, since promiscuous-bridge only works on kubenet. Users might be using the
- // default values (from before the hairpin-mode flag existed) and we
- // should keep the old behavior.
- klog.InfoS("Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth", "hairpinMode", s.HairpinMode)
- s.HairpinMode = kubeletconfig.HairpinVeth
- return nil
- }
- } else if s.HairpinMode != kubeletconfig.HairpinNone {
- return fmt.Errorf("unknown value: %q", s.HairpinMode)
- }
- return nil
-}
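Illustrative note (not part of the deleted source above): the hairpin-mode resolution boils down to three outcomes: promiscuous-bridge with any plugin other than kubenet is downgraded to hairpin-veth, known modes are kept as-is, and anything else is an error. A sketch using plain strings instead of the kubeletconfig constants:

package main

import (
	"errors"
	"fmt"
)

// resolveHairpinMode mirrors the decision logic of effectiveHairpinMode above.
func resolveHairpinMode(mode, pluginName string) (string, error) {
	switch mode {
	case "promiscuous-bridge":
		if pluginName != "kubenet" {
			// promiscuous-bridge only works with kubenet; fall back to hairpin-veth.
			return "hairpin-veth", nil
		}
		return mode, nil
	case "hairpin-veth", "none":
		return mode, nil
	default:
		return "", errors.New("unknown value: " + mode)
	}
}

func main() {
	fmt.Println(resolveHairpinMode("promiscuous-bridge", "cni"))     // hairpin-veth <nil>
	fmt.Println(resolveHairpinMode("promiscuous-bridge", "kubenet")) // promiscuous-bridge <nil>
	fmt.Println(resolveHairpinMode("bogus", "cni"))                  // error
}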
diff --git a/pkg/kubelet/dockershim/docker_service_test.go b/pkg/kubelet/dockershim/docker_service_test.go
deleted file mode 100644
index 6f89594b69e..00000000000
--- a/pkg/kubelet/dockershim/docker_service_test.go
+++ /dev/null
@@ -1,171 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "errors"
- "math/rand"
- "testing"
- "time"
-
- "github.com/blang/semver"
- dockertypes "github.com/docker/docker/api/types"
- "github.com/golang/mock/gomock"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/kubelet/checkpointmanager"
- containertest "k8s.io/kubernetes/pkg/kubelet/container/testing"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- nettest "k8s.io/kubernetes/pkg/kubelet/dockershim/network/testing"
- "k8s.io/kubernetes/pkg/kubelet/util/cache"
- testingclock "k8s.io/utils/clock/testing"
-)
-
-type mockCheckpointManager struct {
- checkpoint map[string]*PodSandboxCheckpoint
-}
-
-func (ckm *mockCheckpointManager) CreateCheckpoint(checkpointKey string, checkpoint checkpointmanager.Checkpoint) error {
- ckm.checkpoint[checkpointKey] = checkpoint.(*PodSandboxCheckpoint)
- return nil
-}
-
-func (ckm *mockCheckpointManager) GetCheckpoint(checkpointKey string, checkpoint checkpointmanager.Checkpoint) error {
- *(checkpoint.(*PodSandboxCheckpoint)) = *(ckm.checkpoint[checkpointKey])
- return nil
-}
-
-func (ckm *mockCheckpointManager) RemoveCheckpoint(checkpointKey string) error {
- _, ok := ckm.checkpoint[checkpointKey]
- if ok {
- delete(ckm.checkpoint, "moo")
- }
- return nil
-}
-
-func (ckm *mockCheckpointManager) ListCheckpoints() ([]string, error) {
- var keys []string
- for key := range ckm.checkpoint {
- keys = append(keys, key)
- }
- return keys, nil
-}
-
-func newMockCheckpointManager() checkpointmanager.CheckpointManager {
- return &mockCheckpointManager{checkpoint: make(map[string]*PodSandboxCheckpoint)}
-}
-
-func newTestDockerService() (*dockerService, *libdocker.FakeDockerClient, *testingclock.FakeClock) {
- fakeClock := testingclock.NewFakeClock(time.Time{})
- c := libdocker.NewFakeDockerClient().WithClock(fakeClock).WithVersion("1.11.2", "1.23").WithRandSource(rand.NewSource(0))
- pm := network.NewPluginManager(&network.NoopNetworkPlugin{})
- ckm := newMockCheckpointManager()
- return &dockerService{
- client: c,
- os: &containertest.FakeOS{},
- network: pm,
- checkpointManager: ckm,
- networkReady: make(map[string]bool),
- dockerRootDir: "/docker/root/dir",
- }, c, fakeClock
-}
-
-func newTestDockerServiceWithVersionCache() (*dockerService, *libdocker.FakeDockerClient, *testingclock.FakeClock) {
- ds, c, fakeClock := newTestDockerService()
- ds.versionCache = cache.NewObjectCache(
- func() (interface{}, error) {
- return ds.getDockerVersion()
- },
- time.Hour*10,
- )
- return ds, c, fakeClock
-}
-
-// TestStatus tests the runtime status logic.
-func TestStatus(t *testing.T) {
- ds, fDocker, _ := newTestDockerService()
-
- assertStatus := func(expected map[string]bool, status *runtimeapi.RuntimeStatus) {
- conditions := status.GetConditions()
- assert.Equal(t, len(expected), len(conditions))
- for k, v := range expected {
- for _, c := range conditions {
- if k == c.Type {
- assert.Equal(t, v, c.Status)
- }
- }
- }
- }
-
- // Should report ready status if version returns no error.
- statusResp, err := ds.Status(getTestCTX(), &runtimeapi.StatusRequest{})
- require.NoError(t, err)
- assertStatus(map[string]bool{
- runtimeapi.RuntimeReady: true,
- runtimeapi.NetworkReady: true,
- }, statusResp.Status)
-
- // Should not report ready status if version returns error.
- fDocker.InjectError("version", errors.New("test error"))
- statusResp, err = ds.Status(getTestCTX(), &runtimeapi.StatusRequest{})
- assert.NoError(t, err)
- assertStatus(map[string]bool{
- runtimeapi.RuntimeReady: false,
- runtimeapi.NetworkReady: true,
- }, statusResp.Status)
-
- // Should not report ready status if the network plugin returns an error.
- ctrl := gomock.NewController(t)
- defer ctrl.Finish()
- mockPlugin := nettest.NewMockNetworkPlugin(ctrl)
- ds.network = network.NewPluginManager(mockPlugin)
- mockPlugin.EXPECT().Status().Return(errors.New("network error"))
- statusResp, err = ds.Status(getTestCTX(), &runtimeapi.StatusRequest{})
- assert.NoError(t, err)
- assertStatus(map[string]bool{
- runtimeapi.RuntimeReady: true,
- runtimeapi.NetworkReady: false,
- }, statusResp.Status)
-}
-
-func TestVersion(t *testing.T) {
- ds, _, _ := newTestDockerService()
-
- expectedVersion := &dockertypes.Version{Version: "1.11.2", APIVersion: "1.23.0"}
- v, err := ds.getDockerVersion()
- require.NoError(t, err)
- assert.Equal(t, expectedVersion, v)
-
- expectedAPIVersion := &semver.Version{Major: 1, Minor: 23, Patch: 0}
- apiVersion, err := ds.getDockerAPIVersion()
- require.NoError(t, err)
- assert.Equal(t, expectedAPIVersion, apiVersion)
-}
-
-func TestAPIVersionWithCache(t *testing.T) {
- ds, _, _ := newTestDockerServiceWithVersionCache()
-
- expected := &semver.Version{Major: 1, Minor: 23, Patch: 0}
- version, err := ds.getDockerAPIVersion()
- require.NoError(t, err)
- assert.Equal(t, expected, version)
-}
diff --git a/pkg/kubelet/dockershim/docker_stats.go b/pkg/kubelet/dockershim/docker_stats.go
deleted file mode 100644
index e673bcd1170..00000000000
--- a/pkg/kubelet/dockershim/docker_stats.go
+++ /dev/null
@@ -1,91 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2019 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "errors"
- "fmt"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-var ErrNotImplemented = errors.New("Not implemented")
-
-// ContainerStats returns stats for a container stats request based on container id.
-func (ds *dockerService) ContainerStats(ctx context.Context, r *runtimeapi.ContainerStatsRequest) (*runtimeapi.ContainerStatsResponse, error) {
- filter := &runtimeapi.ContainerFilter{
- Id: r.ContainerId,
- }
- listResp, err := ds.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: filter})
- if err != nil {
- return nil, err
- }
- if len(listResp.Containers) != 1 {
- return nil, fmt.Errorf("container with id %s not found", r.ContainerId)
- }
- stats, err := ds.getContainerStats(listResp.Containers[0])
- if err != nil {
- return nil, err
- }
- return &runtimeapi.ContainerStatsResponse{Stats: stats}, nil
-}
-
-// ListContainerStats returns stats for a list container stats request based on a filter.
-func (ds *dockerService) ListContainerStats(ctx context.Context, r *runtimeapi.ListContainerStatsRequest) (*runtimeapi.ListContainerStatsResponse, error) {
- containerStatsFilter := r.GetFilter()
- filter := &runtimeapi.ContainerFilter{}
-
- if containerStatsFilter != nil {
- filter.Id = containerStatsFilter.Id
- filter.PodSandboxId = containerStatsFilter.PodSandboxId
- filter.LabelSelector = containerStatsFilter.LabelSelector
- }
-
- listResp, err := ds.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: filter})
- if err != nil {
- return nil, err
- }
-
- var stats []*runtimeapi.ContainerStats
- for _, container := range listResp.Containers {
- containerStats, err := ds.getContainerStats(container)
- if err != nil {
- return nil, err
- }
- if containerStats != nil {
- stats = append(stats, containerStats)
- }
- }
-
- return &runtimeapi.ListContainerStatsResponse{Stats: stats}, nil
-}
-
-// PodSandboxStats returns stats for a pod sandbox based on pod sandbox id.
-// This function is not implemented for the dockershim.
-func (ds *dockerService) PodSandboxStats(_ context.Context, r *runtimeapi.PodSandboxStatsRequest) (*runtimeapi.PodSandboxStatsResponse, error) {
- return nil, ErrNotImplemented
-}
-
-// ListPodSandboxStats returns stats for a list of pod sandboxes based on a filter.
-// This function is not implemented for the dockershim.
-func (ds *dockerService) ListPodSandboxStats(ctx context.Context, r *runtimeapi.ListPodSandboxStatsRequest) (*runtimeapi.ListPodSandboxStatsResponse, error) {
- return nil, ErrNotImplemented
-}
diff --git a/pkg/kubelet/dockershim/docker_stats_linux.go b/pkg/kubelet/dockershim/docker_stats_linux.go
deleted file mode 100644
index b7be67a6fe4..00000000000
--- a/pkg/kubelet/dockershim/docker_stats_linux.go
+++ /dev/null
@@ -1,63 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "time"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-func (ds *dockerService) getContainerStats(c *runtimeapi.Container) (*runtimeapi.ContainerStats, error) {
- statsJSON, err := ds.client.GetContainerStats(c.Id)
- if err != nil {
- return nil, err
- }
-
- containerJSON, err := ds.client.InspectContainerWithSize(c.Id)
- if err != nil {
- return nil, err
- }
-
- dockerStats := statsJSON.Stats
- timestamp := time.Now().UnixNano()
- containerStats := &runtimeapi.ContainerStats{
- Attributes: &runtimeapi.ContainerAttributes{
- Id: c.Id,
- Metadata: c.Metadata,
- Labels: c.Labels,
- Annotations: c.Annotations,
- },
- Cpu: &runtimeapi.CpuUsage{
- Timestamp: timestamp,
- UsageCoreNanoSeconds: &runtimeapi.UInt64Value{Value: dockerStats.CPUStats.CPUUsage.TotalUsage},
- },
- Memory: &runtimeapi.MemoryUsage{
- Timestamp: timestamp,
- WorkingSetBytes: &runtimeapi.UInt64Value{Value: dockerStats.MemoryStats.PrivateWorkingSet},
- },
- WritableLayer: &runtimeapi.FilesystemUsage{
- Timestamp: timestamp,
- FsId: &runtimeapi.FilesystemIdentifier{Mountpoint: ds.dockerRootDir},
- UsedBytes: &runtimeapi.UInt64Value{Value: uint64(*containerJSON.SizeRw)},
- },
- }
- return containerStats, nil
-}
diff --git a/pkg/kubelet/dockershim/docker_stats_test.go b/pkg/kubelet/dockershim/docker_stats_test.go
deleted file mode 100644
index a26743aa7d3..00000000000
--- a/pkg/kubelet/dockershim/docker_stats_test.go
+++ /dev/null
@@ -1,82 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2019 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "testing"
-
- dockertypes "github.com/docker/docker/api/types"
- "github.com/docker/docker/api/types/container"
- "github.com/stretchr/testify/assert"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-func TestContainerStats(t *testing.T) {
- labels := map[string]string{containerTypeLabelKey: containerTypeLabelContainer}
- tests := map[string]struct {
- containerID string
- container *libdocker.FakeContainer
- containerStats *dockertypes.StatsJSON
- calledDetails []libdocker.CalledDetail
- }{
- "container exists": {
- "k8s_fake_container",
- &libdocker.FakeContainer{
- ID: "k8s_fake_container",
- Name: "k8s_fake_container_1_2_1",
- Config: &container.Config{
- Labels: labels,
- },
- },
- &dockertypes.StatsJSON{},
- []libdocker.CalledDetail{
- libdocker.NewCalledDetail("list", nil),
- libdocker.NewCalledDetail("get_container_stats", nil),
- libdocker.NewCalledDetail("inspect_container_withsize", nil),
- },
- },
-		"container doesn't exist": {
- "k8s_nonexistant_fake_container",
- &libdocker.FakeContainer{
- ID: "k8s_fake_container",
- Name: "k8s_fake_container_1_2_1",
- Config: &container.Config{
- Labels: labels,
- },
- },
- &dockertypes.StatsJSON{},
- []libdocker.CalledDetail{
- libdocker.NewCalledDetail("list", nil),
- },
- },
- }
-
- for name, test := range tests {
- t.Run(name, func(t *testing.T) {
- ds, fakeDocker, _ := newTestDockerService()
- fakeDocker.SetFakeContainers([]*libdocker.FakeContainer{test.container})
- fakeDocker.InjectContainerStats(map[string]*dockertypes.StatsJSON{test.container.ID: test.containerStats})
- ds.ContainerStats(getTestCTX(), &runtimeapi.ContainerStatsRequest{ContainerId: test.containerID})
- err := fakeDocker.AssertCallDetails(test.calledDetails...)
- assert.NoError(t, err)
- })
- }
-}
diff --git a/pkg/kubelet/dockershim/docker_stats_unsupported.go b/pkg/kubelet/dockershim/docker_stats_unsupported.go
deleted file mode 100644
index 1f3f5747a4b..00000000000
--- a/pkg/kubelet/dockershim/docker_stats_unsupported.go
+++ /dev/null
@@ -1,30 +0,0 @@
-//go:build !linux && !windows && !dockerless
-// +build !linux,!windows,!dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-func (ds *dockerService) getContainerStats(c *runtimeapi.Container) (*runtimeapi.ContainerStats, error) {
- return nil, fmt.Errorf("not implemented")
-}
diff --git a/pkg/kubelet/dockershim/docker_stats_windows.go b/pkg/kubelet/dockershim/docker_stats_windows.go
deleted file mode 100644
index df9f0b37597..00000000000
--- a/pkg/kubelet/dockershim/docker_stats_windows.go
+++ /dev/null
@@ -1,91 +0,0 @@
-//go:build windows && !dockerless
-// +build windows,!dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "strings"
- "time"
-
- "github.com/Microsoft/hcsshim"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/klog/v2"
-)
-
-func (ds *dockerService) getContainerStats(c *runtimeapi.Container) (*runtimeapi.ContainerStats, error) {
- hcsshimContainer, err := hcsshim.OpenContainer(c.Id)
- if err != nil {
- // As we moved from using Docker stats to hcsshim directly, we may query HCS with already exited container IDs.
- // That will typically happen with init-containers in Exited state. Docker still knows about them but the HCS does not.
- // As we don't want to block stats retrieval for other containers, we only log errors.
- if !hcsshim.IsNotExist(err) && !hcsshim.IsAlreadyStopped(err) {
- klog.V(4).InfoS("Error opening container (stats will be missing)", "containerID", c.Id, "err", err)
- }
- return nil, nil
- }
- defer func() {
- closeErr := hcsshimContainer.Close()
- if closeErr != nil {
- klog.ErrorS(closeErr, "Error closing container", "containerID", c.Id)
- }
- }()
-
- stats, err := hcsshimContainer.Statistics()
- if err != nil {
- if strings.Contains(err.Error(), "0x5") || strings.Contains(err.Error(), "0xc0370105") {
- // When the container is just created, querying for stats causes access errors because it hasn't started yet
- // This is transient; skip container for now
- //
- // These hcs errors do not have helpers exposed in public package so need to query for the known codes
- // https://github.com/microsoft/hcsshim/blob/master/internal/hcs/errors.go
- // PR to expose helpers in hcsshim: https://github.com/microsoft/hcsshim/pull/933
- klog.V(4).InfoS("Container is not in a state that stats can be accessed. This occurs when the container is created but not started.", "containerID", c.Id, "err", err)
- return nil, nil
- }
- return nil, err
- }
-
- timestamp := time.Now().UnixNano()
- containerStats := &runtimeapi.ContainerStats{
- Attributes: &runtimeapi.ContainerAttributes{
- Id: c.Id,
- Metadata: c.Metadata,
- Labels: c.Labels,
- Annotations: c.Annotations,
- },
- Cpu: &runtimeapi.CpuUsage{
- Timestamp: timestamp,
-			// have to multiply cpu usage by 100 since the stats unit is 100s of nanoseconds on Windows
- UsageCoreNanoSeconds: &runtimeapi.UInt64Value{Value: stats.Processor.TotalRuntime100ns * 100},
- },
- Memory: &runtimeapi.MemoryUsage{
- Timestamp: timestamp,
- WorkingSetBytes: &runtimeapi.UInt64Value{Value: stats.Memory.UsagePrivateWorkingSetBytes},
- },
- WritableLayer: &runtimeapi.FilesystemUsage{
- Timestamp: timestamp,
- FsId: &runtimeapi.FilesystemIdentifier{Mountpoint: ds.dockerRootDir},
- // used bytes from image are not implemented on Windows
- // don't query for it since it is expensive to call docker over named pipe
- // https://github.com/moby/moby/blob/1ba54a5fd0ba293db3bea46cd67604b593f2048b/daemon/images/image_windows.go#L11-L14
- UsedBytes: &runtimeapi.UInt64Value{Value: 0},
- },
- }
- return containerStats, nil
-}
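HCS reports processor time in 100-nanosecond ticks, which is why the value above is multiplied by 100 before being reported as UsageCoreNanoSeconds. A tiny self-contained illustration of the unit conversion:

package main

import "fmt"

// hcsTicksToNanoseconds converts HCS 100ns processor-time ticks into
// nanoseconds, the unit CRI expects for UsageCoreNanoSeconds.
func hcsTicksToNanoseconds(ticks100ns uint64) uint64 {
    return ticks100ns * 100
}

func main() {
    // One second of CPU time is 10,000,000 ticks of 100ns each.
    fmt.Println(hcsTicksToNanoseconds(10_000_000)) // 1000000000
}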
diff --git a/pkg/kubelet/dockershim/docker_streaming.go b/pkg/kubelet/dockershim/docker_streaming.go
deleted file mode 100644
index 106a5ff1f43..00000000000
--- a/pkg/kubelet/dockershim/docker_streaming.go
+++ /dev/null
@@ -1,209 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "bytes"
- "context"
- "errors"
- "fmt"
- "io"
- "math"
- "time"
- "unsafe"
-
- dockertypes "github.com/docker/docker/api/types"
- "google.golang.org/grpc/codes"
- "google.golang.org/grpc/status"
-
- "k8s.io/client-go/tools/remotecommand"
- runtimeapiv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/cri/streaming"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
- "k8s.io/kubernetes/pkg/kubelet/util/ioutils"
- utilexec "k8s.io/utils/exec"
-)
-
-type streamingRuntime struct {
- client libdocker.Interface
- execHandler ExecHandler
-}
-
-var _ streaming.Runtime = &streamingRuntime{}
-
-const maxMsgSize = 1024 * 1024 * 16
-
-func (r *streamingRuntime) Exec(containerID string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error {
- return r.exec(context.TODO(), containerID, cmd, in, out, err, tty, resize, 0)
-}
-
-// Internal version of Exec adds a timeout.
-func (r *streamingRuntime) exec(ctx context.Context, containerID string, cmd []string, in io.Reader, out, errw io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error {
- container, err := checkContainerStatus(r.client, containerID)
- if err != nil {
- return err
- }
-
- return r.execHandler.ExecInContainer(ctx, r.client, container, cmd, in, out, errw, tty, resize, timeout)
-}
-
-func (r *streamingRuntime) Attach(containerID string, in io.Reader, out, errw io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error {
- _, err := checkContainerStatus(r.client, containerID)
- if err != nil {
- return err
- }
-
- return attachContainer(r.client, containerID, in, out, errw, tty, resize)
-}
-
-func (r *streamingRuntime) PortForward(podSandboxID string, port int32, stream io.ReadWriteCloser) error {
- if port < 0 || port > math.MaxUint16 {
- return fmt.Errorf("invalid port %d", port)
- }
- return r.portForward(podSandboxID, port, stream)
-}
-
-// ExecSync executes a command in the container, and returns the stdout output.
-// If command exits with a non-zero exit code, an error is returned.
-func (ds *dockerService) ExecSync(ctx context.Context, req *runtimeapi.ExecSyncRequest) (*runtimeapi.ExecSyncResponse, error) {
- timeout := time.Duration(req.Timeout) * time.Second
- var stdoutBuffer, stderrBuffer bytes.Buffer
- err := ds.streamingRuntime.exec(ctx, req.ContainerId, req.Cmd,
- nil, // in
- ioutils.WriteCloserWrapper(ioutils.LimitWriter(&stdoutBuffer, maxMsgSize)),
- ioutils.WriteCloserWrapper(ioutils.LimitWriter(&stderrBuffer, maxMsgSize)),
- false, // tty
- nil, // resize
- timeout)
-
- // kubelet's remote runtime expects a grpc error with status code DeadlineExceeded on time out.
- if errors.Is(err, context.DeadlineExceeded) {
- return nil, status.Errorf(codes.DeadlineExceeded, err.Error())
- }
-
- var exitCode int32
- if err != nil {
- exitError, ok := err.(utilexec.ExitError)
- if !ok {
- return nil, err
- }
-
- exitCode = int32(exitError.ExitStatus())
- }
- return &runtimeapi.ExecSyncResponse{
- Stdout: stdoutBuffer.Bytes(),
- Stderr: stderrBuffer.Bytes(),
- ExitCode: exitCode,
- }, nil
-}
-
-// Exec prepares a streaming endpoint to execute a command in the container, and returns the address.
-func (ds *dockerService) Exec(_ context.Context, req *runtimeapi.ExecRequest) (*runtimeapi.ExecResponse, error) {
- if ds.streamingServer == nil {
- return nil, streaming.NewErrorStreamingDisabled("exec")
- }
- _, err := checkContainerStatus(ds.client, req.ContainerId)
- if err != nil {
- return nil, err
- }
- // This conversion has been copied from the functions in
- // pkg/kubelet/cri/remote/conversion.go
- r := (*runtimeapiv1.ExecRequest)(unsafe.Pointer(req))
- resp, err := ds.streamingServer.GetExec(r)
- if err != nil {
- return nil, err
- }
- return (*runtimeapi.ExecResponse)(unsafe.Pointer(resp)), nil
-}
-
-// Attach prepares a streaming endpoint to attach to a running container, and returns the address.
-func (ds *dockerService) Attach(_ context.Context, req *runtimeapi.AttachRequest) (*runtimeapi.AttachResponse, error) {
- if ds.streamingServer == nil {
- return nil, streaming.NewErrorStreamingDisabled("attach")
- }
- _, err := checkContainerStatus(ds.client, req.ContainerId)
- if err != nil {
- return nil, err
- }
- // This conversion has been copied from the functions in
- // pkg/kubelet/cri/remote/conversion.go
- r := (*runtimeapiv1.AttachRequest)(unsafe.Pointer(req))
- resp, err := ds.streamingServer.GetAttach(r)
- if err != nil {
- return nil, err
- }
- return (*runtimeapi.AttachResponse)(unsafe.Pointer(resp)), nil
-}
-
-// PortForward prepares a streaming endpoint to forward ports from a PodSandbox, and returns the address.
-func (ds *dockerService) PortForward(_ context.Context, req *runtimeapi.PortForwardRequest) (*runtimeapi.PortForwardResponse, error) {
- if ds.streamingServer == nil {
- return nil, streaming.NewErrorStreamingDisabled("port forward")
- }
- _, err := checkContainerStatus(ds.client, req.PodSandboxId)
- if err != nil {
- return nil, err
- }
- // TODO(tallclair): Verify that ports are exposed.
- // This conversion has been copied from the functions in
- // pkg/kubelet/cri/remote/conversion.go
- r := (*runtimeapiv1.PortForwardRequest)(unsafe.Pointer(req))
- resp, err := ds.streamingServer.GetPortForward(r)
- if err != nil {
- return nil, err
- }
- return (*runtimeapi.PortForwardResponse)(unsafe.Pointer(resp)), nil
-}
-
-func checkContainerStatus(client libdocker.Interface, containerID string) (*dockertypes.ContainerJSON, error) {
- container, err := client.InspectContainer(containerID)
- if err != nil {
- return nil, err
- }
- if !container.State.Running {
- return nil, fmt.Errorf("container not running (%s)", container.ID)
- }
- return container, nil
-}
-
-func attachContainer(client libdocker.Interface, containerID string, stdin io.Reader, stdout, stderr io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error {
- // Have to start this before the call to client.AttachToContainer because client.AttachToContainer is a blocking
- // call :-( Otherwise, resize events don't get processed and the terminal never resizes.
- kubecontainer.HandleResizing(resize, func(size remotecommand.TerminalSize) {
- client.ResizeContainerTTY(containerID, uint(size.Height), uint(size.Width))
- })
-
- // TODO(random-liu): Do we really use the *Logs* field here?
- opts := dockertypes.ContainerAttachOptions{
- Stream: true,
- Stdin: stdin != nil,
- Stdout: stdout != nil,
- Stderr: stderr != nil,
- }
- sopts := libdocker.StreamOptions{
- InputStream: stdin,
- OutputStream: stdout,
- ErrorStream: stderr,
- RawTerminal: tty,
- }
- return client.AttachToContainer(containerID, opts, sopts)
-}
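The Exec, Attach, and PortForward handlers above convert between the v1alpha2 and v1 CRI request and response types with a bare unsafe.Pointer cast, which is only sound because the generated structs share an identical memory layout. A stripped-down sketch of that pattern using hypothetical local types, not the real CRI messages:

package main

import (
    "fmt"
    "unsafe"
)

// Two structurally identical request types standing in for the two CRI API
// versions the shim converts between.
type execRequestAlpha struct {
    ContainerId string
    Cmd         []string
}

type execRequestV1 struct {
    ContainerId string
    Cmd         []string
}

func main() {
    alpha := &execRequestAlpha{ContainerId: "abc123", Cmd: []string{"/bin/sh"}}
    // Reinterpret the pointer without copying; this is safe only while both
    // structs keep exactly the same field layout, which is the premise of the
    // shim's zero-copy conversion.
    v1req := (*execRequestV1)(unsafe.Pointer(alpha))
    fmt.Println(v1req.ContainerId, v1req.Cmd)
}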
diff --git a/pkg/kubelet/dockershim/docker_streaming_others.go b/pkg/kubelet/dockershim/docker_streaming_others.go
deleted file mode 100644
index eb041c1073c..00000000000
--- a/pkg/kubelet/dockershim/docker_streaming_others.go
+++ /dev/null
@@ -1,87 +0,0 @@
-//go:build !windows && !dockerless
-// +build !windows,!dockerless
-
-/*
-Copyright 2019 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "bytes"
- "fmt"
- "io"
- "os/exec"
- "strings"
-
- "k8s.io/klog/v2"
-)
-
-func (r *streamingRuntime) portForward(podSandboxID string, port int32, stream io.ReadWriteCloser) error {
- container, err := r.client.InspectContainer(podSandboxID)
- if err != nil {
- return err
- }
-
- if !container.State.Running {
- return fmt.Errorf("container not running (%s)", container.ID)
- }
-
- containerPid := container.State.Pid
- socatPath, lookupErr := exec.LookPath("socat")
- if lookupErr != nil {
- return fmt.Errorf("unable to do port forwarding: socat not found")
- }
-
- args := []string{"-t", fmt.Sprintf("%d", containerPid), "-n", socatPath, "-", fmt.Sprintf("TCP4:localhost:%d", port)}
-
- nsenterPath, lookupErr := exec.LookPath("nsenter")
- if lookupErr != nil {
- return fmt.Errorf("unable to do port forwarding: nsenter not found")
- }
-
- commandString := fmt.Sprintf("%s %s", nsenterPath, strings.Join(args, " "))
- klog.V(4).InfoS("Executing port forwarding command", "command", commandString)
-
- command := exec.Command(nsenterPath, args...)
- command.Stdout = stream
-
- stderr := new(bytes.Buffer)
- command.Stderr = stderr
-
- // If we use Stdin, command.Run() won't return until the goroutine that's copying
- // from stream finishes. Unfortunately, if you have a client like telnet connected
- // via port forwarding, as long as the user's telnet client is connected to the user's
- // local listener that port forwarding sets up, the telnet session never exits. This
- // means that even if socat has finished running, command.Run() won't ever return
- // (because the client still has the connection and stream open).
- //
-	// The workaround is to use StdinPipe(), as Wait() (called by Run()) closes the pipe
- // when the command (socat) exits.
- inPipe, err := command.StdinPipe()
- if err != nil {
- return fmt.Errorf("unable to do port forwarding: error creating stdin pipe: %v", err)
- }
- go func() {
- io.Copy(inPipe, stream)
- inPipe.Close()
- }()
-
- if err := command.Run(); err != nil {
- return fmt.Errorf("%v: %s", err, stderr.String())
- }
-
- return nil
-}
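The comment above explains why stdin is wired through StdinPipe rather than cmd.Stdin: Wait (called by Run) closes the pipe when socat exits, so Run can return even while the client connection stays open. A generic sketch of that pattern; the command used here is only an illustration, not the shim's nsenter/socat invocation:

package main

import (
    "io"
    "os/exec"
    "strings"
)

// runWithStdinPipe feeds a stream into a child's stdin via StdinPipe so the
// call returns when the child exits, rather than waiting for the upstream
// reader to be closed.
func runWithStdinPipe(name string, args []string, in io.Reader, out io.Writer) error {
    cmd := exec.Command(name, args...)
    cmd.Stdout = out

    inPipe, err := cmd.StdinPipe()
    if err != nil {
        return err
    }
    go func() {
        io.Copy(inPipe, in)
        inPipe.Close()
    }()

    return cmd.Run()
}

func main() {
    // "cat" simply echoes its stdin; it stands in for the real forwarding command.
    _ = runWithStdinPipe("cat", nil, strings.NewReader("hello\n"), io.Discard)
}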
diff --git a/pkg/kubelet/dockershim/docker_streaming_windows.go b/pkg/kubelet/dockershim/docker_streaming_windows.go
deleted file mode 100644
index cad09a9980d..00000000000
--- a/pkg/kubelet/dockershim/docker_streaming_windows.go
+++ /dev/null
@@ -1,39 +0,0 @@
-//go:build windows && !dockerless
-// +build windows,!dockerless
-
-/*
-Copyright 2019 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "bytes"
- "context"
- "fmt"
- "io"
-
- "k8s.io/kubernetes/pkg/kubelet/util/ioutils"
-)
-
-func (r *streamingRuntime) portForward(podSandboxID string, port int32, stream io.ReadWriteCloser) error {
- stderr := new(bytes.Buffer)
- err := r.exec(context.TODO(), podSandboxID, []string{"wincat.exe", "127.0.0.1", fmt.Sprint(port)}, stream, stream, ioutils.WriteCloserWrapper(stderr), false, nil, 0)
- if err != nil {
- return fmt.Errorf("%v: %s", err, stderr.String())
- }
-
- return nil
-}
diff --git a/pkg/kubelet/dockershim/dockershim_nodocker.go b/pkg/kubelet/dockershim/dockershim_nodocker.go
deleted file mode 100644
index 9cb89fa7906..00000000000
--- a/pkg/kubelet/dockershim/dockershim_nodocker.go
+++ /dev/null
@@ -1,20 +0,0 @@
-//go:build dockerless
-// +build dockerless
-
-/*
-Copyright 2020 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
diff --git a/pkg/kubelet/dockershim/exec.go b/pkg/kubelet/dockershim/exec.go
deleted file mode 100644
index 6c8aac5989e..00000000000
--- a/pkg/kubelet/dockershim/exec.go
+++ /dev/null
@@ -1,162 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "fmt"
- "io"
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
-
- utilfeature "k8s.io/apiserver/pkg/util/feature"
- "k8s.io/client-go/tools/remotecommand"
- "k8s.io/klog/v2"
- "k8s.io/kubernetes/pkg/features"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-// ExecHandler knows how to execute a command in a running Docker container.
-type ExecHandler interface {
- ExecInContainer(ctx context.Context, client libdocker.Interface, container *dockertypes.ContainerJSON, cmd []string, stdin io.Reader, stdout, stderr io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error
-}
-
-type dockerExitError struct {
- Inspect *dockertypes.ContainerExecInspect
-}
-
-func (d *dockerExitError) String() string {
- return d.Error()
-}
-
-func (d *dockerExitError) Error() string {
- return fmt.Sprintf("Error executing in Docker Container: %d", d.Inspect.ExitCode)
-}
-
-func (d *dockerExitError) Exited() bool {
- return !d.Inspect.Running
-}
-
-func (d *dockerExitError) ExitStatus() int {
- return d.Inspect.ExitCode
-}
-
-// NativeExecHandler executes commands in Docker containers using Docker's exec API.
-type NativeExecHandler struct{}
-
-// ExecInContainer executes the cmd in container using the Docker's exec API
-func (*NativeExecHandler) ExecInContainer(ctx context.Context, client libdocker.Interface, container *dockertypes.ContainerJSON, cmd []string, stdin io.Reader, stdout, stderr io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error {
- done := make(chan struct{})
- defer close(done)
-
- createOpts := dockertypes.ExecConfig{
- Cmd: cmd,
- AttachStdin: stdin != nil,
- AttachStdout: stdout != nil,
- AttachStderr: stderr != nil,
- Tty: tty,
- }
- execObj, err := client.CreateExec(container.ID, createOpts)
- if err != nil {
- return fmt.Errorf("failed to exec in container - Exec setup failed - %v", err)
- }
-
- // Have to start this before the call to client.StartExec because client.StartExec is a blocking
- // call :-( Otherwise, resize events don't get processed and the terminal never resizes.
- //
- // We also have to delay attempting to send a terminal resize request to docker until after the
- // exec has started; otherwise, the initial resize request will fail.
- execStarted := make(chan struct{})
- go func() {
- select {
- case <-execStarted:
- // client.StartExec has started the exec, so we can start resizing
- case <-done:
- // ExecInContainer has returned, so short-circuit
- return
- }
-
- kubecontainer.HandleResizing(resize, func(size remotecommand.TerminalSize) {
- client.ResizeExecTTY(execObj.ID, uint(size.Height), uint(size.Width))
- })
- }()
-
- startOpts := dockertypes.ExecStartCheck{Detach: false, Tty: tty}
- streamOpts := libdocker.StreamOptions{
- InputStream: stdin,
- OutputStream: stdout,
- ErrorStream: stderr,
- RawTerminal: tty,
- ExecStarted: execStarted,
- }
-
- if timeout > 0 && utilfeature.DefaultFeatureGate.Enabled(features.ExecProbeTimeout) {
- var cancel context.CancelFunc
- ctx, cancel = context.WithTimeout(ctx, timeout)
- defer cancel()
- }
-
- // StartExec is a blocking call, so we need to run it concurrently and catch
- // its error in a channel
- execErr := make(chan error, 1)
- go func() {
- execErr <- client.StartExec(execObj.ID, startOpts, streamOpts)
- }()
-
- select {
- case <-ctx.Done():
- return ctx.Err()
- case err := <-execErr:
- if err != nil {
- return err
- }
- }
-
-	// InspectExec may not always return the latest state of the exec, so call it a few times until
- // it returns an exec inspect that shows that the process is no longer running.
- retries := 0
- maxRetries := 5
- ticker := time.NewTicker(2 * time.Second)
- defer ticker.Stop()
- for {
- inspect, err := client.InspectExec(execObj.ID)
- if err != nil {
- return err
- }
-
- if !inspect.Running {
- if inspect.ExitCode != 0 {
- return &dockerExitError{inspect}
- }
-
- return nil
- }
-
- retries++
- if retries == maxRetries {
- klog.ErrorS(nil, "Exec session in the container terminated but process still running!", "execSession", execObj.ID, "containerID", container.ID)
- return nil
- }
-
- <-ticker.C
- }
-}
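Because InspectExec can briefly report a process as still running after it has exited, the handler above polls up to five times on a two-second ticker before giving up. A generic polling helper in the same shape; this is a sketch, not the kubelet's code:

package main

import (
    "errors"
    "fmt"
    "time"
)

// pollUntil retries check at the given interval until it reports done, it
// returns an error, or the retry budget is exhausted.
func pollUntil(check func() (bool, error), interval time.Duration, maxRetries int) error {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for retries := 0; ; retries++ {
        done, err := check()
        if err != nil {
            return err
        }
        if done {
            return nil
        }
        if retries == maxRetries {
            return errors.New("still running after retries")
        }
        <-ticker.C
    }
}

func main() {
    calls := 0
    err := pollUntil(func() (bool, error) {
        calls++
        return calls >= 3, nil // succeeds on the third check
    }, 10*time.Millisecond, 5)
    fmt.Println(calls, err) // 3 <nil>
}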
diff --git a/pkg/kubelet/dockershim/exec_test.go b/pkg/kubelet/dockershim/exec_test.go
deleted file mode 100644
index 184ccf8f032..00000000000
--- a/pkg/kubelet/dockershim/exec_test.go
+++ /dev/null
@@ -1,191 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2020 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "context"
- "fmt"
- "io"
- "testing"
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
- "github.com/golang/mock/gomock"
- "github.com/stretchr/testify/assert"
-
- utilfeature "k8s.io/apiserver/pkg/util/feature"
- "k8s.io/client-go/tools/remotecommand"
- featuregatetesting "k8s.io/component-base/featuregate/testing"
- "k8s.io/kubernetes/pkg/features"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
- mockclient "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker/testing"
-)
-
-func TestExecInContainer(t *testing.T) {
-
- testcases := []struct {
- description string
- timeout time.Duration
- returnCreateExec1 *dockertypes.IDResponse
- returnCreateExec2 error
- returnStartExec error
- returnInspectExec1 *dockertypes.ContainerExecInspect
- returnInspectExec2 error
- execProbeTimeout bool
- startExecDelay time.Duration
- expectError error
- }{{
- description: "ExecInContainer succeeds",
- timeout: time.Minute,
- returnCreateExec1: &dockertypes.IDResponse{ID: "12345678"},
- returnCreateExec2: nil,
- returnStartExec: nil,
- returnInspectExec1: &dockertypes.ContainerExecInspect{
- ExecID: "200",
- ContainerID: "12345678",
- Running: false,
- ExitCode: 0,
- Pid: 100},
- returnInspectExec2: nil,
- execProbeTimeout: true,
- expectError: nil,
- }, {
- description: "CreateExec returns an error",
- timeout: time.Minute,
- returnCreateExec1: nil,
- returnCreateExec2: fmt.Errorf("error in CreateExec()"),
- returnStartExec: nil,
- returnInspectExec1: nil,
- returnInspectExec2: nil,
- execProbeTimeout: true,
- expectError: fmt.Errorf("failed to exec in container - Exec setup failed - error in CreateExec()"),
- }, {
- description: "StartExec returns an error",
- timeout: time.Minute,
- returnCreateExec1: &dockertypes.IDResponse{ID: "12345678"},
- returnCreateExec2: nil,
- returnStartExec: fmt.Errorf("error in StartExec()"),
- returnInspectExec1: nil,
- returnInspectExec2: nil,
- execProbeTimeout: true,
- expectError: fmt.Errorf("error in StartExec()"),
- }, {
- description: "InspectExec returns an error",
- timeout: time.Minute,
- returnCreateExec1: &dockertypes.IDResponse{ID: "12345678"},
- returnCreateExec2: nil,
- returnStartExec: nil,
- returnInspectExec1: nil,
- returnInspectExec2: fmt.Errorf("error in InspectExec()"),
- execProbeTimeout: true,
- expectError: fmt.Errorf("error in InspectExec()"),
- }, {
- description: "ExecInContainer returns context DeadlineExceeded",
- timeout: 1 * time.Second,
- returnCreateExec1: &dockertypes.IDResponse{ID: "12345678"},
- returnCreateExec2: nil,
- returnStartExec: context.DeadlineExceeded,
- returnInspectExec1: &dockertypes.ContainerExecInspect{
- ExecID: "200",
- ContainerID: "12345678",
- Running: true,
- ExitCode: 0,
- Pid: 100},
- returnInspectExec2: nil,
- execProbeTimeout: true,
- expectError: context.DeadlineExceeded,
- }, {
- description: "[ExecProbeTimeout=true] StartExec that takes longer than the probe timeout returns context.DeadlineExceeded",
- timeout: 1 * time.Second,
- returnCreateExec1: &dockertypes.IDResponse{ID: "12345678"},
- returnCreateExec2: nil,
- startExecDelay: 5 * time.Second,
- returnStartExec: fmt.Errorf("error in StartExec()"),
- returnInspectExec1: nil,
- returnInspectExec2: nil,
- execProbeTimeout: true,
- expectError: context.DeadlineExceeded,
- }, {
-		description:        "[ExecProbeTimeout=false] StartExec that takes longer than the probe timeout returns an error",
- timeout: 1 * time.Second,
- returnCreateExec1: &dockertypes.IDResponse{ID: "12345678"},
- returnCreateExec2: nil,
- startExecDelay: 5 * time.Second,
- returnStartExec: fmt.Errorf("error in StartExec()"),
- returnInspectExec1: nil,
- returnInspectExec2: nil,
- execProbeTimeout: false,
- expectError: fmt.Errorf("error in StartExec()"),
- }}
-
- eh := &NativeExecHandler{}
- // to avoid the default calling Finish(). More details in https://github.com/golang/mock/pull/422/
- ctrl := gomock.NewController(struct{ gomock.TestReporter }{t})
- container := getFakeContainerJSON()
- cmd := []string{"/bin/bash"}
- var stdin io.Reader
- var stdout, stderr io.WriteCloser
- var resize <-chan remotecommand.TerminalSize
-
- for _, tc := range testcases {
- // these tests cannot be run in parallel due to the fact that they are feature gate dependent
- tc := tc
- t.Run(tc.description, func(t *testing.T) {
- defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.ExecProbeTimeout, tc.execProbeTimeout)()
-
- mockClient := mockclient.NewMockInterface(ctrl)
- mockClient.EXPECT().CreateExec(gomock.Any(), gomock.Any()).Return(
- tc.returnCreateExec1,
- tc.returnCreateExec2)
- mockClient.EXPECT().StartExec(gomock.Any(), gomock.Any(), gomock.Any()).Do(func(_ string, _ dockertypes.ExecStartCheck, _ libdocker.StreamOptions) { time.Sleep(tc.startExecDelay) }).Return(tc.returnStartExec)
- mockClient.EXPECT().InspectExec(gomock.Any()).Return(
- tc.returnInspectExec1,
- tc.returnInspectExec2)
-
- // use parent context of 2 minutes since that's the default remote
- // runtime connection timeout used by dockershim
- ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
- defer cancel()
- err := eh.ExecInContainer(ctx, mockClient, container, cmd, stdin, stdout, stderr, false, resize, tc.timeout)
- assert.Equal(t, tc.expectError, err)
- })
- }
-}
-
-func getFakeContainerJSON() *dockertypes.ContainerJSON {
- return &dockertypes.ContainerJSON{
- ContainerJSONBase: &dockertypes.ContainerJSONBase{
- ID: "12345678",
- Name: "fake_name",
- Image: "fake_image",
- State: &dockertypes.ContainerState{
- Running: false,
- ExitCode: 0,
- Pid: 100,
- StartedAt: "2020-10-13T01:00:00-08:00",
- FinishedAt: "2020-10-13T02:00:00-08:00",
- },
- Created: "2020-10-13T01:00:00-08:00",
- HostConfig: nil,
- },
- Config: nil,
- NetworkSettings: &dockertypes.NetworkSettings{},
- }
-}
diff --git a/pkg/kubelet/dockershim/helpers.go b/pkg/kubelet/dockershim/helpers.go
deleted file mode 100644
index ca3db5e1818..00000000000
--- a/pkg/kubelet/dockershim/helpers.go
+++ /dev/null
@@ -1,442 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "errors"
- "fmt"
- "io"
- "regexp"
- "strconv"
- "strings"
- "sync/atomic"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockerfilters "github.com/docker/docker/api/types/filters"
- dockernat "github.com/docker/go-connections/nat"
- "k8s.io/klog/v2"
-
- v1 "k8s.io/api/core/v1"
- utilerrors "k8s.io/apimachinery/pkg/util/errors"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/credentialprovider"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
- "k8s.io/kubernetes/pkg/kubelet/types"
- "k8s.io/kubernetes/pkg/util/parsers"
-)
-
-const (
- annotationPrefix = "annotation."
- securityOptSeparator = '='
-)
-
-var (
- conflictRE = regexp.MustCompile(`Conflict. (?:.)+ is already in use by container \"?([0-9a-z]+)\"?`)
-
- // this is hacky, but extremely common.
- // if a container starts but the executable file is not found, runc gives a message that matches
- startRE = regexp.MustCompile(`\\\\\\\"(.*)\\\\\\\": executable file not found`)
-
- defaultSeccompOpt = []dockerOpt{{"seccomp", v1.SeccompProfileNameUnconfined, ""}}
-)
-
-// generateEnvList converts KeyValue list to a list of strings, in the form of
-// '<key>=<value>', which can be understood by docker.
-func generateEnvList(envs []*runtimeapi.KeyValue) (result []string) {
- for _, env := range envs {
- result = append(result, fmt.Sprintf("%s=%s", env.Key, env.Value))
- }
- return
-}
-
-// makeLabels converts annotations to labels and merge them with the given
-// labels. This is necessary because docker does not support annotations;
-// we *fake* annotations using labels. Note that docker labels are not
-// updatable.
-func makeLabels(labels, annotations map[string]string) map[string]string {
- merged := make(map[string]string)
- for k, v := range labels {
- merged[k] = v
- }
- for k, v := range annotations {
- // Assume there won't be conflict.
- merged[fmt.Sprintf("%s%s", annotationPrefix, k)] = v
- }
- return merged
-}
-
-// extractLabels converts raw docker labels to the CRI labels and annotations.
-// It also filters out internal labels used by this shim.
-func extractLabels(input map[string]string) (map[string]string, map[string]string) {
- labels := make(map[string]string)
- annotations := make(map[string]string)
- for k, v := range input {
- // Check if the key is used internally by the shim.
- internal := false
- for _, internalKey := range internalLabelKeys {
- if k == internalKey {
- internal = true
- break
- }
- }
- if internal {
- continue
- }
-
- // Delete the container name label for the sandbox. It is added in the shim,
- // should not be exposed via CRI.
- if k == types.KubernetesContainerNameLabel &&
- input[containerTypeLabelKey] == containerTypeLabelSandbox {
- continue
- }
-
- // Check if the label should be treated as an annotation.
- if strings.HasPrefix(k, annotationPrefix) {
- annotations[strings.TrimPrefix(k, annotationPrefix)] = v
- continue
- }
- labels[k] = v
- }
- return labels, annotations
-}
-
-// generateMountBindings converts the mount list to a list of strings that
-// can be understood by docker.
-// '<HostPath>:<ContainerPath>[:options]', where 'options'
-// is a comma-separated list of the following strings:
-// 'ro', if the path is read only
-// 'Z', if the volume requires SELinux relabeling
-// propagation mode such as 'rslave'
-func generateMountBindings(mounts []*runtimeapi.Mount) []string {
- result := make([]string, 0, len(mounts))
- for _, m := range mounts {
- bind := fmt.Sprintf("%s:%s", m.HostPath, m.ContainerPath)
- var attrs []string
- if m.Readonly {
- attrs = append(attrs, "ro")
- }
- // Only request relabeling if the pod provides an SELinux context. If the pod
- // does not provide an SELinux context relabeling will label the volume with
- // the container's randomly allocated MCS label. This would restrict access
- // to the volume to the container which mounts it first.
- if m.SelinuxRelabel {
- attrs = append(attrs, "Z")
- }
- switch m.Propagation {
- case runtimeapi.MountPropagation_PROPAGATION_PRIVATE:
- // noop, private is default
- case runtimeapi.MountPropagation_PROPAGATION_BIDIRECTIONAL:
- attrs = append(attrs, "rshared")
- case runtimeapi.MountPropagation_PROPAGATION_HOST_TO_CONTAINER:
- attrs = append(attrs, "rslave")
- default:
- klog.InfoS("Unknown propagation mode for hostPath", "path", m.HostPath)
- // Falls back to "private"
- }
-
- if len(attrs) > 0 {
- bind = fmt.Sprintf("%s:%s", bind, strings.Join(attrs, ","))
- }
- result = append(result, bind)
- }
- return result
-}
-
-func makePortsAndBindings(pm []*runtimeapi.PortMapping) (dockernat.PortSet, map[dockernat.Port][]dockernat.PortBinding) {
- exposedPorts := dockernat.PortSet{}
- portBindings := map[dockernat.Port][]dockernat.PortBinding{}
- for _, port := range pm {
- exteriorPort := port.HostPort
- if exteriorPort == 0 {
- // No need to do port binding when HostPort is not specified
- continue
- }
- interiorPort := port.ContainerPort
- // Some of this port stuff is under-documented voodoo.
- // See http://stackoverflow.com/questions/20428302/binding-a-port-to-a-host-interface-using-the-rest-api
- var protocol string
- switch port.Protocol {
- case runtimeapi.Protocol_UDP:
- protocol = "/udp"
- case runtimeapi.Protocol_TCP:
- protocol = "/tcp"
- case runtimeapi.Protocol_SCTP:
- protocol = "/sctp"
- default:
- klog.InfoS("Unknown protocol, defaulting to TCP", "protocol", port.Protocol)
- protocol = "/tcp"
- }
-
- dockerPort := dockernat.Port(strconv.Itoa(int(interiorPort)) + protocol)
- exposedPorts[dockerPort] = struct{}{}
-
- hostBinding := dockernat.PortBinding{
- HostPort: strconv.Itoa(int(exteriorPort)),
- HostIP: port.HostIp,
- }
-
- // Allow multiple host ports bind to same docker port
- if existedBindings, ok := portBindings[dockerPort]; ok {
-			// If a docker port already maps to a host port, just append the host ports
- portBindings[dockerPort] = append(existedBindings, hostBinding)
- } else {
-			// Otherwise, it's a fresh new port binding
- portBindings[dockerPort] = []dockernat.PortBinding{
- hostBinding,
- }
- }
- }
- return exposedPorts, portBindings
-}
-
-// getApparmorSecurityOpts gets apparmor options from container config.
-func getApparmorSecurityOpts(sc *runtimeapi.LinuxContainerSecurityContext, separator rune) ([]string, error) {
- if sc == nil || sc.ApparmorProfile == "" {
- return nil, nil
- }
-
- appArmorOpts, err := getAppArmorOpts(sc.ApparmorProfile)
- if err != nil {
- return nil, err
- }
-
- fmtOpts := fmtDockerOpts(appArmorOpts, separator)
- return fmtOpts, nil
-}
-
-// dockerFilter wraps around dockerfilters.Args and provides methods to modify
-// the filter easily.
-type dockerFilter struct {
- args *dockerfilters.Args
-}
-
-func newDockerFilter(args *dockerfilters.Args) *dockerFilter {
- return &dockerFilter{args: args}
-}
-
-func (f *dockerFilter) Add(key, value string) {
- f.args.Add(key, value)
-}
-
-func (f *dockerFilter) AddLabel(key, value string) {
- f.Add("label", fmt.Sprintf("%s=%s", key, value))
-}
-
-// parseUserFromImageUser splits the user out of an user:group string.
-func parseUserFromImageUser(id string) string {
- if id == "" {
- return id
- }
- // split instances where the id may contain user:group
- if strings.Contains(id, ":") {
- return strings.Split(id, ":")[0]
- }
- // no group, just return the id
- return id
-}
-
-// getUserFromImageUser gets uid or user name of the image user.
-// If user is numeric, it will be treated as uid; or else, it is treated as user name.
-func getUserFromImageUser(imageUser string) (*int64, string) {
- user := parseUserFromImageUser(imageUser)
- // return both nil if user is not specified in the image.
- if user == "" {
- return nil, ""
- }
- // user could be either uid or user name. Try to interpret as numeric uid.
- uid, err := strconv.ParseInt(user, 10, 64)
- if err != nil {
- // If user is non numeric, assume it's user name.
- return nil, user
- }
- // If user is a numeric uid.
- return &uid, ""
-}
-
-// See #33189. If the previous attempt to create a sandbox container name FOO
-// failed due to "device or resource busy", it is possible that docker did
-// not clean up properly and has inconsistent internal state. Docker would
-// not report the existence of FOO, but would complain if user wants to
-// create a new container named FOO. To work around this, we parse the error
-// message to identify failure caused by naming conflict, and try to remove
-// the old container FOO.
-// See #40443. Sometimes even removal may fail with "no such container" error.
-// In that case we have to create the container with a randomized name.
-// TODO(random-liu): Remove this work around after docker 1.11 is deprecated.
-// TODO(#33189): Monitor the tests to see if the fix is sufficient.
-func recoverFromCreationConflictIfNeeded(client libdocker.Interface, createConfig dockertypes.ContainerCreateConfig, err error) (*dockercontainer.ContainerCreateCreatedBody, error) {
- matches := conflictRE.FindStringSubmatch(err.Error())
- if len(matches) != 2 {
- return nil, err
- }
-
- id := matches[1]
- klog.InfoS("Unable to create pod sandbox due to conflict. Attempting to remove sandbox", "containerID", id)
- rmErr := client.RemoveContainer(id, dockertypes.ContainerRemoveOptions{RemoveVolumes: true})
- if rmErr == nil {
- klog.V(2).InfoS("Successfully removed conflicting container", "containerID", id)
- return nil, err
- }
- klog.ErrorS(rmErr, "Failed to remove the conflicting container", "containerID", id)
- // Return if the error is not container not found error.
- if !libdocker.IsContainerNotFoundError(rmErr) {
- return nil, err
- }
-
- // randomize the name to avoid conflict.
- createConfig.Name = randomizeName(createConfig.Name)
- klog.V(2).InfoS("Create the container with the randomized name", "containerName", createConfig.Name)
- return client.CreateContainer(createConfig)
-}
-
-// transformStartContainerError does regex parsing on returned error
-// for where container runtimes are giving less than ideal error messages.
-func transformStartContainerError(err error) error {
- if err == nil {
- return nil
- }
- matches := startRE.FindStringSubmatch(err.Error())
- if len(matches) > 0 {
- return fmt.Errorf("executable not found in $PATH")
- }
- return err
-}
-
-// ensureSandboxImageExists pulls the sandbox image when it's not present.
-func ensureSandboxImageExists(client libdocker.Interface, image string) error {
- _, err := client.InspectImageByRef(image)
- if err == nil {
- return nil
- }
- if !libdocker.IsImageNotFoundError(err) {
- return fmt.Errorf("failed to inspect sandbox image %q: %v", image, err)
- }
-
- repoToPull, _, _, err := parsers.ParseImageName(image)
- if err != nil {
- return err
- }
-
- keyring := credentialprovider.NewDockerKeyring()
- creds, withCredentials := keyring.Lookup(repoToPull)
- if !withCredentials {
- klog.V(3).InfoS("Pulling the image without credentials", "image", image)
-
- err := client.PullImage(image, dockertypes.AuthConfig{}, dockertypes.ImagePullOptions{})
- if err != nil {
- return fmt.Errorf("failed pulling image %q: %v", image, err)
- }
-
- return nil
- }
-
- var pullErrs []error
- for _, currentCreds := range creds {
- authConfig := dockertypes.AuthConfig(currentCreds)
- err := client.PullImage(image, authConfig, dockertypes.ImagePullOptions{})
- // If there was no error, return success
- if err == nil {
- return nil
- }
-
- pullErrs = append(pullErrs, err)
- }
-
- return utilerrors.NewAggregate(pullErrs)
-}
-
-func getAppArmorOpts(profile string) ([]dockerOpt, error) {
- if profile == "" || profile == v1.AppArmorBetaProfileRuntimeDefault {
-		// Docker applies the default profile by default.
- return nil, nil
- }
-
- // Return unconfined profile explicitly
- if profile == v1.AppArmorBetaProfileNameUnconfined {
- return []dockerOpt{{"apparmor", v1.AppArmorBetaProfileNameUnconfined, ""}}, nil
- }
-
- // Assume validation has already happened.
- profileName := strings.TrimPrefix(profile, v1.AppArmorBetaProfileNamePrefix)
- return []dockerOpt{{"apparmor", profileName, ""}}, nil
-}
-
-// fmtDockerOpts formats the docker security options using the given separator.
-func fmtDockerOpts(opts []dockerOpt, sep rune) []string {
- fmtOpts := make([]string, len(opts))
- for i, opt := range opts {
- fmtOpts[i] = fmt.Sprintf("%s%c%s", opt.key, sep, opt.value)
- }
- return fmtOpts
-}
-
-type dockerOpt struct {
- // The key-value pair passed to docker.
- key, value string
- // The alternative value to use in log/event messages.
- msg string
-}
-
-// Expose key/value from dockerOpt.
-func (d dockerOpt) GetKV() (string, string) {
- return d.key, d.value
-}
-
-// sharedWriteLimiter limits the total output written across one or more streams.
-type sharedWriteLimiter struct {
- delegate io.Writer
- limit *int64
-}
-
-func (w sharedWriteLimiter) Write(p []byte) (int, error) {
- if len(p) == 0 {
- return 0, nil
- }
- limit := atomic.LoadInt64(w.limit)
- if limit <= 0 {
- return 0, errMaximumWrite
- }
- var truncated bool
- if limit < int64(len(p)) {
- p = p[0:limit]
- truncated = true
- }
- n, err := w.delegate.Write(p)
- if n > 0 {
- atomic.AddInt64(w.limit, -1*int64(n))
- }
- if err == nil && truncated {
- err = errMaximumWrite
- }
- return n, err
-}
-
-func sharedLimitWriter(w io.Writer, limit *int64) io.Writer {
- if w == nil {
- return nil
- }
- return &sharedWriteLimiter{
- delegate: w,
- limit: limit,
- }
-}
-
-var errMaximumWrite = errors.New("maximum write")
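sharedWriteLimiter exists so that two streams, typically stdout and stderr of one exec, draw down a single byte budget through one atomic counter. A short usage sketch, assuming the sharedLimitWriter and errMaximumWrite definitions above are in scope in the same package:

package dockershim

import (
    "bytes"
    "fmt"
)

// exampleSharedLimit shows two buffers sharing one 8-byte budget: whatever
// the first writer consumes is no longer available to the second.
func exampleSharedLimit() {
    var stdout, stderr bytes.Buffer
    limit := int64(8)

    outW := sharedLimitWriter(&stdout, &limit)
    errW := sharedLimitWriter(&stderr, &limit)

    outW.Write([]byte("12345"))           // uses 5 of the 8 bytes
    _, err := errW.Write([]byte("67890")) // only 3 more bytes fit
    fmt.Println(stdout.String(), stderr.String(), err == errMaximumWrite) // 12345 678 true
}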
diff --git a/pkg/kubelet/dockershim/helpers_linux.go b/pkg/kubelet/dockershim/helpers_linux.go
deleted file mode 100644
index 154c503a1ca..00000000000
--- a/pkg/kubelet/dockershim/helpers_linux.go
+++ /dev/null
@@ -1,158 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "bytes"
- "crypto/md5"
- "encoding/json"
- "fmt"
- "io/ioutil"
- "path/filepath"
- "strings"
-
- "github.com/blang/semver"
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- v1 "k8s.io/api/core/v1"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-// DefaultMemorySwap always returns 0 for no memory swap in a sandbox
-func DefaultMemorySwap() int64 {
- return 0
-}
-
-func (ds *dockerService) getSecurityOpts(seccompProfile string, separator rune) ([]string, error) {
- // Apply seccomp options.
- seccompSecurityOpts, err := getSeccompSecurityOpts(seccompProfile, separator)
- if err != nil {
- return nil, fmt.Errorf("failed to generate seccomp security options for container: %v", err)
- }
-
- return seccompSecurityOpts, nil
-}
-
-func (ds *dockerService) getSandBoxSecurityOpts(separator rune) []string {
- // run sandbox with no-new-privileges and using runtime/default
- // sending no "seccomp=" means docker will use default profile
- return []string{"no-new-privileges"}
-}
-
-func getSeccompDockerOpts(seccompProfile string) ([]dockerOpt, error) {
- if seccompProfile == "" || seccompProfile == v1.SeccompProfileNameUnconfined {
- // return early the default
- return defaultSeccompOpt, nil
- }
-
- if seccompProfile == v1.SeccompProfileRuntimeDefault || seccompProfile == v1.DeprecatedSeccompProfileDockerDefault {
- // return nil so docker will load the default seccomp profile
- return nil, nil
- }
-
- if !strings.HasPrefix(seccompProfile, v1.SeccompLocalhostProfileNamePrefix) {
- return nil, fmt.Errorf("unknown seccomp profile option: %s", seccompProfile)
- }
-
- // get the full path of seccomp profile when prefixed with 'localhost/'.
- fname := strings.TrimPrefix(seccompProfile, v1.SeccompLocalhostProfileNamePrefix)
- if !filepath.IsAbs(fname) {
- return nil, fmt.Errorf("seccomp profile path must be absolute, but got relative path %q", fname)
- }
- file, err := ioutil.ReadFile(filepath.FromSlash(fname))
- if err != nil {
- return nil, fmt.Errorf("cannot load seccomp profile %q: %v", fname, err)
- }
-
- b := bytes.NewBuffer(nil)
- if err := json.Compact(b, file); err != nil {
- return nil, err
- }
- // Rather than the full profile, just put the filename & md5sum in the event log.
- msg := fmt.Sprintf("%s(md5:%x)", fname, md5.Sum(file))
-
- return []dockerOpt{{"seccomp", b.String(), msg}}, nil
-}
-
-// getSeccompSecurityOpts gets container seccomp options from container seccomp profile.
-// It is an experimental feature and may be promoted to official runtime api in the future.
-func getSeccompSecurityOpts(seccompProfile string, separator rune) ([]string, error) {
- seccompOpts, err := getSeccompDockerOpts(seccompProfile)
- if err != nil {
- return nil, err
- }
- return fmtDockerOpts(seccompOpts, separator), nil
-}
-
-func (ds *dockerService) updateCreateConfig(
- createConfig *dockertypes.ContainerCreateConfig,
- config *runtimeapi.ContainerConfig,
- sandboxConfig *runtimeapi.PodSandboxConfig,
- podSandboxID string, securityOptSep rune, apiVersion *semver.Version) error {
- // Apply Linux-specific options if applicable.
- if lc := config.GetLinux(); lc != nil {
- // TODO: Check if the units are correct.
- // TODO: Can we assume the defaults are sane?
- rOpts := lc.GetResources()
- if rOpts != nil {
- createConfig.HostConfig.Resources = dockercontainer.Resources{
- // Memory and MemorySwap are set to the same value, this prevents containers from using any swap.
- Memory: rOpts.MemoryLimitInBytes,
- MemorySwap: rOpts.MemoryLimitInBytes,
- CPUShares: rOpts.CpuShares,
- CPUQuota: rOpts.CpuQuota,
- CPUPeriod: rOpts.CpuPeriod,
- CpusetCpus: rOpts.CpusetCpus,
- CpusetMems: rOpts.CpusetMems,
- }
- createConfig.HostConfig.OomScoreAdj = int(rOpts.OomScoreAdj)
- }
- // Note: ShmSize is handled in kube_docker_client.go
-
- // Apply security context.
- if err := applyContainerSecurityContext(lc, podSandboxID, createConfig.Config, createConfig.HostConfig, securityOptSep); err != nil {
- return fmt.Errorf("failed to apply container security context for container %q: %v", config.Metadata.Name, err)
- }
- }
-
- // Apply cgroupsParent derived from the sandbox config.
- if lc := sandboxConfig.GetLinux(); lc != nil {
- // Apply Cgroup options.
- cgroupParent, err := ds.GenerateExpectedCgroupParent(lc.CgroupParent)
- if err != nil {
- return fmt.Errorf("failed to generate cgroup parent in expected syntax for container %q: %v", config.Metadata.Name, err)
- }
- createConfig.HostConfig.CgroupParent = cgroupParent
- }
-
- return nil
-}
-
-func (ds *dockerService) determinePodIPBySandboxID(uid string) []string {
- return nil
-}
-
-func getNetworkNamespace(c *dockertypes.ContainerJSON) (string, error) {
- if c.State.Pid == 0 {
- // Docker reports pid 0 for an exited container.
- return "", fmt.Errorf("cannot find network namespace for the terminated container %q", c.ID)
- }
- return fmt.Sprintf(dockerNetNSFmt, c.State.Pid), nil
-}
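For localhost seccomp profiles, getSeccompDockerOpts above reads the JSON file, minifies it with json.Compact, and hands the whole document to Docker as the seccomp option value, while only the filename and an md5 fingerprint are used in log and event messages. A small self-contained sketch of that compact-and-fingerprint step, with hypothetical profile content:

package main

import (
    "bytes"
    "crypto/md5"
    "encoding/json"
    "fmt"
)

func main() {
    // Stand-in for the contents of a localhost/ seccomp profile file.
    raw := []byte(`{
        "defaultAction": "SCMP_ACT_ERRNO"
    }`)

    var compacted bytes.Buffer
    if err := json.Compact(&compacted, raw); err != nil {
        panic(err)
    }
    // The compact JSON becomes the option value; the digest goes to logs.
    fmt.Printf("seccomp=%s (md5:%x)\n", compacted.String(), md5.Sum(raw))
}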
diff --git a/pkg/kubelet/dockershim/helpers_linux_test.go b/pkg/kubelet/dockershim/helpers_linux_test.go
deleted file mode 100644
index ada9f565ee6..00000000000
--- a/pkg/kubelet/dockershim/helpers_linux_test.go
+++ /dev/null
@@ -1,109 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
- "io/ioutil"
- "os"
- "path/filepath"
- "testing"
-
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
- "k8s.io/api/core/v1"
-)
-
-func TestGetSeccompSecurityOpts(t *testing.T) {
- tests := []struct {
- msg string
- seccompProfile string
- expectedOpts []string
- }{{
- msg: "No security annotations",
- seccompProfile: "",
- expectedOpts: []string{"seccomp=unconfined"},
- }, {
- msg: "Seccomp unconfined",
- seccompProfile: "unconfined",
- expectedOpts: []string{"seccomp=unconfined"},
- }, {
- msg: "Seccomp default",
- seccompProfile: v1.SeccompProfileRuntimeDefault,
- expectedOpts: nil,
- }, {
- msg: "Seccomp deprecated default",
- seccompProfile: v1.DeprecatedSeccompProfileDockerDefault,
- expectedOpts: nil,
- }}
-
- for i, test := range tests {
- opts, err := getSeccompSecurityOpts(test.seccompProfile, '=')
- assert.NoError(t, err, "TestCase[%d]: %s", i, test.msg)
- assert.Len(t, opts, len(test.expectedOpts), "TestCase[%d]: %s", i, test.msg)
- for _, opt := range test.expectedOpts {
- assert.Contains(t, opts, opt, "TestCase[%d]: %s", i, test.msg)
- }
- }
-}
-
-func TestLoadSeccompLocalhostProfiles(t *testing.T) {
- tmpdir, err := ioutil.TempDir("", "seccomp-local-profile-test")
- require.NoError(t, err)
- defer os.RemoveAll(tmpdir)
- testProfile := `{"foo": "bar"}`
- err = ioutil.WriteFile(filepath.Join(tmpdir, "test"), []byte(testProfile), 0644)
- require.NoError(t, err)
-
- tests := []struct {
- msg string
- seccompProfile string
- expectedOpts []string
- expectErr bool
- }{{
- msg: "Seccomp localhost/test profile should return correct seccomp profiles",
- seccompProfile: "localhost/" + filepath.Join(tmpdir, "test"),
- expectedOpts: []string{`seccomp={"foo":"bar"}`},
- expectErr: false,
- }, {
- msg: "Non-existent profile should return error",
- seccompProfile: "localhost/" + filepath.Join(tmpdir, "fixtures/non-existent"),
- expectedOpts: nil,
- expectErr: true,
- }, {
- msg: "Relative profile path should return error",
- seccompProfile: "localhost/fixtures/test",
- expectedOpts: nil,
- expectErr: true,
- }}
-
- for i, test := range tests {
- opts, err := getSeccompSecurityOpts(test.seccompProfile, '=')
- if test.expectErr {
- assert.Error(t, err, fmt.Sprintf("TestCase[%d]: %s", i, test.msg))
- continue
- }
- assert.NoError(t, err, "TestCase[%d]: %s", i, test.msg)
- assert.Len(t, opts, len(test.expectedOpts), "TestCase[%d]: %s", i, test.msg)
- for _, opt := range test.expectedOpts {
- assert.Contains(t, opts, opt, "TestCase[%d]: %s", i, test.msg)
- }
- }
-}
diff --git a/pkg/kubelet/dockershim/helpers_test.go b/pkg/kubelet/dockershim/helpers_test.go
deleted file mode 100644
index b70b447f705..00000000000
--- a/pkg/kubelet/dockershim/helpers_test.go
+++ /dev/null
@@ -1,441 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "bytes"
- "errors"
- "fmt"
- "sync"
- "testing"
-
- dockertypes "github.com/docker/docker/api/types"
- dockernat "github.com/docker/go-connections/nat"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
-
- v1 "k8s.io/api/core/v1"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
-)
-
-func TestLabelsAndAnnotationsRoundTrip(t *testing.T) {
- expectedLabels := map[string]string{"foo.123.abc": "baz", "bar.456.xyz": "qwe"}
- expectedAnnotations := map[string]string{"uio.ert": "dfs", "jkl": "asd"}
- // Merge labels and annotations into docker labels.
- dockerLabels := makeLabels(expectedLabels, expectedAnnotations)
- // Extract labels and annotations from docker labels.
- actualLabels, actualAnnotations := extractLabels(dockerLabels)
- assert.Equal(t, expectedLabels, actualLabels)
- assert.Equal(t, expectedAnnotations, actualAnnotations)
-}
-
-// TestGetApparmorSecurityOpts tests the logic of generating container apparmor options from sandbox annotations.
-func TestGetApparmorSecurityOpts(t *testing.T) {
- makeConfig := func(profile string) *runtimeapi.LinuxContainerSecurityContext {
- return &runtimeapi.LinuxContainerSecurityContext{
- ApparmorProfile: profile,
- }
- }
-
- tests := []struct {
- msg string
- config *runtimeapi.LinuxContainerSecurityContext
- expectedOpts []string
- }{{
- msg: "No AppArmor options",
- config: makeConfig(""),
- expectedOpts: nil,
- }, {
- msg: "AppArmor runtime/default",
- config: makeConfig("runtime/default"),
- expectedOpts: []string{},
- }, {
- msg: "AppArmor local profile",
- config: makeConfig(v1.AppArmorBetaProfileNamePrefix + "foo"),
- expectedOpts: []string{"apparmor=foo"},
- }}
-
- for i, test := range tests {
- opts, err := getApparmorSecurityOpts(test.config, '=')
- assert.NoError(t, err, "TestCase[%d]: %s", i, test.msg)
- assert.Len(t, opts, len(test.expectedOpts), "TestCase[%d]: %s", i, test.msg)
- for _, opt := range test.expectedOpts {
- assert.Contains(t, opts, opt, "TestCase[%d]: %s", i, test.msg)
- }
- }
-}
-
-// TestGetUserFromImageUser tests the logic of getting the uid or user name from an image's user string.
-func TestGetUserFromImageUser(t *testing.T) {
- newI64 := func(i int64) *int64 { return &i }
- for c, test := range map[string]struct {
- user string
- uid *int64
- name string
- }{
- "no gid": {
- user: "0",
- uid: newI64(0),
- },
- "uid/gid": {
- user: "0:1",
- uid: newI64(0),
- },
- "empty user": {
- user: "",
- },
-		"multiple separators": {
- user: "1:2:3",
- uid: newI64(1),
- },
- "root username": {
- user: "root:root",
- name: "root",
- },
- "username": {
- user: "test:test",
- name: "test",
- },
- } {
- t.Logf("TestCase - %q", c)
- actualUID, actualName := getUserFromImageUser(test.user)
- assert.Equal(t, test.uid, actualUID)
- assert.Equal(t, test.name, actualName)
- }
-}
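
A hedged reconstruction of the helper the table above drives: the image user string is split on ":", and the user part is returned either as a numeric uid or as a name. The behavior below is inferred from the test expectations only, not copied from the deleted code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// getUserFromImageUser: the image user may be "uid[:gid]" or "name[:group]"; only
// the part before the first separator is used.
func getUserFromImageUser(imageUser string) (*int64, string) {
	user := strings.Split(imageUser, ":")[0]
	if user == "" {
		return nil, ""
	}
	if uid, err := strconv.ParseInt(user, 10, 64); err == nil {
		return &uid, ""
	}
	return nil, user
}

func main() {
	for _, u := range []string{"0", "0:1", "", "1:2:3", "root:root", "test:test"} {
		uid, name := getUserFromImageUser(u)
		if uid != nil {
			fmt.Printf("%q -> uid=%d\n", u, *uid)
		} else {
			fmt.Printf("%q -> name=%q\n", u, name)
		}
	}
}
```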
-
-func TestParsingCreationConflictError(t *testing.T) {
- // Expected error message from docker.
- msgs := []string{
- "Conflict. The name \"/k8s_POD_pfpod_e2e-tests-port-forwarding-dlxt2_81a3469e-99e1-11e6-89f2-42010af00002_0\" is already in use by container 24666ab8c814d16f986449e504ea0159468ddf8da01897144a770f66dce0e14e. You have to remove (or rename) that container to be able to reuse that name.",
- "Conflict. The name \"/k8s_POD_pfpod_e2e-tests-port-forwarding-dlxt2_81a3469e-99e1-11e6-89f2-42010af00002_0\" is already in use by container \"24666ab8c814d16f986449e504ea0159468ddf8da01897144a770f66dce0e14e\". You have to remove (or rename) that container to be able to reuse that name.",
- }
-
- for _, msg := range msgs {
- matches := conflictRE.FindStringSubmatch(msg)
- require.Len(t, matches, 2)
- require.Equal(t, matches[1], "24666ab8c814d16f986449e504ea0159468ddf8da01897144a770f66dce0e14e")
- }
-}
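
The `conflictRE` pattern itself is not shown in this hunk; the snippet below is an illustrative regexp (an assumption, not the deleted definition) that extracts the conflicting container ID from both message formats checked above.

```go
package main

import (
	"fmt"
	"regexp"
)

// conflictRE here is a reconstruction for illustration: it captures the container ID
// whether or not docker quotes it in the conflict message.
var conflictRE = regexp.MustCompile(`is already in use by container "?([0-9a-f]+)"?`)

func main() {
	msg := `Conflict. The name "/k8s_POD_pfpod" is already in use by container "24666ab8c814d16f986449e504ea0159468ddf8da01897144a770f66dce0e14e". You have to remove (or rename) that container to be able to reuse that name.`
	if m := conflictRE.FindStringSubmatch(msg); len(m) == 2 {
		fmt.Println("conflicting container:", m[1])
	}
}
```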
-
-func TestEnsureSandboxImageExists(t *testing.T) {
- sandboxImage := "gcr.io/test/image"
- authConfig := dockertypes.AuthConfig{Username: "user", Password: "pass"}
- for desc, test := range map[string]struct {
- injectImage bool
- imgNeedsAuth bool
- injectErr error
- calls []string
- err bool
- configJSON string
- }{
- "should not pull image when it already exists": {
- injectImage: true,
- injectErr: nil,
- calls: []string{"inspect_image"},
- },
- "should pull image when it doesn't exist": {
- injectImage: false,
- injectErr: libdocker.ImageNotFoundError{ID: "image_id"},
- calls: []string{"inspect_image", "pull"},
- },
- "should return error when inspect image fails": {
- injectImage: false,
- injectErr: fmt.Errorf("arbitrary error"),
- calls: []string{"inspect_image"},
- err: true,
- },
- "should return error when image pull needs private auth, but none provided": {
- injectImage: true,
- imgNeedsAuth: true,
- injectErr: libdocker.ImageNotFoundError{ID: "image_id"},
- calls: []string{"inspect_image", "pull"},
- err: true,
- },
- } {
- t.Logf("TestCase: %q", desc)
- _, fakeDocker, _ := newTestDockerService()
- if test.injectImage {
- images := []dockertypes.ImageSummary{{ID: sandboxImage}}
- fakeDocker.InjectImages(images)
- if test.imgNeedsAuth {
- fakeDocker.MakeImagesPrivate(images, authConfig)
- }
- }
- fakeDocker.InjectError("inspect_image", test.injectErr)
-
- err := ensureSandboxImageExists(fakeDocker, sandboxImage)
- assert.NoError(t, fakeDocker.AssertCalls(test.calls))
- assert.Equal(t, test.err, err != nil)
- }
-}
-
-func TestMakePortsAndBindings(t *testing.T) {
- for desc, test := range map[string]struct {
- pm []*runtimeapi.PortMapping
- exposedPorts dockernat.PortSet
- portmappings map[dockernat.Port][]dockernat.PortBinding
- }{
- "no port mapping": {
- pm: nil,
- exposedPorts: map[dockernat.Port]struct{}{},
- portmappings: map[dockernat.Port][]dockernat.PortBinding{},
- },
- "tcp port mapping": {
- pm: []*runtimeapi.PortMapping{
- {
- Protocol: runtimeapi.Protocol_TCP,
- ContainerPort: 80,
- HostPort: 80,
- },
- },
- exposedPorts: map[dockernat.Port]struct{}{
- "80/tcp": {},
- },
- portmappings: map[dockernat.Port][]dockernat.PortBinding{
- "80/tcp": {
- {
- HostPort: "80",
- },
- },
- },
- },
- "udp port mapping": {
- pm: []*runtimeapi.PortMapping{
- {
- Protocol: runtimeapi.Protocol_UDP,
- ContainerPort: 80,
- HostPort: 80,
- },
- },
- exposedPorts: map[dockernat.Port]struct{}{
- "80/udp": {},
- },
- portmappings: map[dockernat.Port][]dockernat.PortBinding{
- "80/udp": {
- {
- HostPort: "80",
- },
- },
- },
- },
- "multiple port mappings": {
- pm: []*runtimeapi.PortMapping{
- {
- Protocol: runtimeapi.Protocol_TCP,
- ContainerPort: 80,
- HostPort: 80,
- },
- {
- Protocol: runtimeapi.Protocol_TCP,
- ContainerPort: 80,
- HostPort: 81,
- },
- },
- exposedPorts: map[dockernat.Port]struct{}{
- "80/tcp": {},
- },
- portmappings: map[dockernat.Port][]dockernat.PortBinding{
- "80/tcp": {
- {
- HostPort: "80",
- },
- {
- HostPort: "81",
- },
- },
- },
- },
- } {
- t.Logf("TestCase: %s", desc)
- actualExposedPorts, actualPortMappings := makePortsAndBindings(test.pm)
- assert.Equal(t, test.exposedPorts, actualExposedPorts)
- assert.Equal(t, test.portmappings, actualPortMappings)
- }
-}
-
-func TestGenerateMountBindings(t *testing.T) {
- mounts := []*runtimeapi.Mount{
- // everything default
- {
- HostPath: "/mnt/1",
- ContainerPath: "/var/lib/mysql/1",
- },
- // readOnly
- {
- HostPath: "/mnt/2",
- ContainerPath: "/var/lib/mysql/2",
- Readonly: true,
- },
- // SELinux
- {
- HostPath: "/mnt/3",
- ContainerPath: "/var/lib/mysql/3",
- SelinuxRelabel: true,
- },
- // Propagation private
- {
- HostPath: "/mnt/4",
- ContainerPath: "/var/lib/mysql/4",
- Propagation: runtimeapi.MountPropagation_PROPAGATION_PRIVATE,
- },
- // Propagation rslave
- {
- HostPath: "/mnt/5",
- ContainerPath: "/var/lib/mysql/5",
- Propagation: runtimeapi.MountPropagation_PROPAGATION_HOST_TO_CONTAINER,
- },
- // Propagation rshared
- {
- HostPath: "/mnt/6",
- ContainerPath: "/var/lib/mysql/6",
- Propagation: runtimeapi.MountPropagation_PROPAGATION_BIDIRECTIONAL,
- },
- // Propagation unknown (falls back to private)
- {
- HostPath: "/mnt/7",
- ContainerPath: "/var/lib/mysql/7",
- Propagation: runtimeapi.MountPropagation(42),
- },
- // Everything
- {
- HostPath: "/mnt/8",
- ContainerPath: "/var/lib/mysql/8",
- Readonly: true,
- SelinuxRelabel: true,
- Propagation: runtimeapi.MountPropagation_PROPAGATION_BIDIRECTIONAL,
- },
- }
- expectedResult := []string{
- "/mnt/1:/var/lib/mysql/1",
- "/mnt/2:/var/lib/mysql/2:ro",
- "/mnt/3:/var/lib/mysql/3:Z",
- "/mnt/4:/var/lib/mysql/4",
- "/mnt/5:/var/lib/mysql/5:rslave",
- "/mnt/6:/var/lib/mysql/6:rshared",
- "/mnt/7:/var/lib/mysql/7",
- "/mnt/8:/var/lib/mysql/8:ro,Z,rshared",
- }
- result := generateMountBindings(mounts)
-
- assert.Equal(t, expectedResult, result)
-}
-
-func TestLimitedWriter(t *testing.T) {
- max := func(x, y int64) int64 {
- if x > y {
- return x
- }
- return y
- }
- for name, tc := range map[string]struct {
- w bytes.Buffer
- toWrite string
- limit int64
- wants string
- wantsErr error
- }{
- "nil": {},
- "neg": {
- toWrite: "a",
- wantsErr: errMaximumWrite,
- limit: -1,
- },
- "1byte-over": {
- toWrite: "a",
- wantsErr: errMaximumWrite,
- },
- "1byte-maxed": {
- toWrite: "a",
- wants: "a",
- limit: 1,
- },
- "1byte-under": {
- toWrite: "a",
- wants: "a",
- limit: 2,
- },
- "6byte-over": {
- toWrite: "foobar",
- wants: "foo",
- limit: 3,
- wantsErr: errMaximumWrite,
- },
- "6byte-maxed": {
- toWrite: "foobar",
- wants: "foobar",
- limit: 6,
- },
- "6byte-under": {
- toWrite: "foobar",
- wants: "foobar",
- limit: 20,
- },
- } {
- t.Run(name, func(t *testing.T) {
- limit := tc.limit
- w := sharedLimitWriter(&tc.w, &limit)
- n, err := w.Write([]byte(tc.toWrite))
- if int64(n) > max(0, tc.limit) {
- t.Fatalf("bytes written (%d) exceeds limit (%d)", n, tc.limit)
- }
- if (err != nil) != (tc.wantsErr != nil) {
- if err != nil {
- t.Fatal("unexpected error:", err)
- }
- t.Fatal("expected error:", err)
- }
- if err != nil {
- if !errors.Is(err, tc.wantsErr) {
- t.Fatal("expected error: ", tc.wantsErr, " instead of: ", err)
- }
- if !errors.Is(err, errMaximumWrite) {
- return
- }
- // check contents for errMaximumWrite
- }
- if s := tc.w.String(); s != tc.wants {
- t.Fatalf("expected %q instead of %q", tc.wants, s)
- }
- })
- }
-
- // test concurrency. run this test a bunch of times to attempt to flush
- // out any data races or concurrency issues.
- for i := 0; i < 1000; i++ {
- var (
- b1, b2 bytes.Buffer
- limit = int64(10)
- w1 = sharedLimitWriter(&b1, &limit)
- w2 = sharedLimitWriter(&b2, &limit)
- ch = make(chan struct{})
- wg sync.WaitGroup
- )
- wg.Add(2)
- go func() { defer wg.Done(); <-ch; w1.Write([]byte("hello")) }()
- go func() { defer wg.Done(); <-ch; w2.Write([]byte("world")) }()
- close(ch)
- wg.Wait()
- if limit != 0 {
- t.Fatalf("expected max limit to be reached, instead of %d", limit)
- }
- }
-}
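
As a reading aid for TestLimitedWriter, here is a minimal sketch of a writer that shares a byte budget across several writers. It uses an atomic compare-and-swap on the shared limit, which is one way (not necessarily the deleted implementation's way) to get the truncation, errMaximumWrite, and concurrency behavior the test checks; the names are illustrative.

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"sync/atomic"
)

var errMaximumWrite = errors.New("maximum write")

type limitWriter struct {
	w     io.Writer
	limit *int64 // shared across all writers created with the same pointer
}

func sharedLimitWriter(w io.Writer, limit *int64) io.Writer {
	return &limitWriter{w: w, limit: limit}
}

func (lw *limitWriter) Write(p []byte) (int, error) {
	if len(p) == 0 {
		return 0, nil
	}
	for {
		remaining := atomic.LoadInt64(lw.limit)
		if remaining <= 0 {
			return 0, errMaximumWrite
		}
		take, truncated := int64(len(p)), false
		if take > remaining {
			take, truncated = remaining, true
		}
		// Reserve "take" bytes from the shared budget; retry if another writer raced us.
		if !atomic.CompareAndSwapInt64(lw.limit, remaining, remaining-take) {
			continue
		}
		n, err := lw.w.Write(p[:take])
		if err == nil && truncated {
			err = errMaximumWrite
		}
		return n, err
	}
}

func main() {
	var buf bytes.Buffer
	limit := int64(3)
	w := sharedLimitWriter(&buf, &limit)
	n, err := w.Write([]byte("foobar"))
	fmt.Println(n, err, buf.String()) // 3 maximum write foo
}
```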
diff --git a/pkg/kubelet/dockershim/helpers_unsupported.go b/pkg/kubelet/dockershim/helpers_unsupported.go
deleted file mode 100644
index 3c655c5551f..00000000000
--- a/pkg/kubelet/dockershim/helpers_unsupported.go
+++ /dev/null
@@ -1,62 +0,0 @@
-//go:build !linux && !windows && !dockerless
-// +build !linux,!windows,!dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
-
- "github.com/blang/semver"
- dockertypes "github.com/docker/docker/api/types"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/klog/v2"
-)
-
-// DefaultMemorySwap always returns -1 for no memory swap in a sandbox
-func DefaultMemorySwap() int64 {
- return -1
-}
-
-func (ds *dockerService) getSecurityOpts(seccompProfile string, separator rune) ([]string, error) {
- klog.InfoS("getSecurityOpts is unsupported in this build")
- return nil, nil
-}
-
-func (ds *dockerService) getSandBoxSecurityOpts(separator rune) []string {
- klog.InfoS("getSandBoxSecurityOpts is unsupported in this build")
- return nil
-}
-
-func (ds *dockerService) updateCreateConfig(
- createConfig *dockertypes.ContainerCreateConfig,
- config *runtimeapi.ContainerConfig,
- sandboxConfig *runtimeapi.PodSandboxConfig,
- podSandboxID string, securityOptSep rune, apiVersion *semver.Version) error {
- klog.InfoS("updateCreateConfig is unsupported in this build")
- return nil
-}
-
-func (ds *dockerService) determinePodIPBySandboxID(uid string) []string {
- klog.InfoS("determinePodIPBySandboxID is unsupported in this build")
- return nil
-}
-
-func getNetworkNamespace(c *dockertypes.ContainerJSON) (string, error) {
- return "", fmt.Errorf("unsupported platform")
-}
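
Every file in this tree is gated the same way: a `dockerless` build tag compiles the whole package out, and OS tags pick the platform implementation. A minimal single-file illustration, with example file and package names only:

```go
// dockershim_stub.go — example file name; contents mirror the stub above.

//go:build !dockerless

package main

import "fmt"

// DefaultMemorySwap mirrors the unsupported-platform stub: -1 means no memory swap
// for the sandbox.
func DefaultMemorySwap() int64 { return -1 }

func main() {
	fmt.Println(DefaultMemorySwap())
	// go build .                  -> this file is compiled in
	// go build -tags dockerless . -> this file is compiled out
}
```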
diff --git a/pkg/kubelet/dockershim/helpers_windows.go b/pkg/kubelet/dockershim/helpers_windows.go
deleted file mode 100644
index 1fd8155bec4..00000000000
--- a/pkg/kubelet/dockershim/helpers_windows.go
+++ /dev/null
@@ -1,161 +0,0 @@
-//go:build windows && !dockerless
-// +build windows,!dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "os"
- "runtime"
-
- "github.com/blang/semver"
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockerfilters "github.com/docker/docker/api/types/filters"
- "k8s.io/klog/v2"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-// DefaultMemorySwap always returns 0 for no memory swap in a sandbox
-func DefaultMemorySwap() int64 {
- return 0
-}
-
-func (ds *dockerService) getSecurityOpts(seccompProfile string, separator rune) ([]string, error) {
- if seccompProfile != "" {
- klog.InfoS("seccomp annotations are not supported on windows")
- }
- return nil, nil
-}
-
-func (ds *dockerService) getSandBoxSecurityOpts(separator rune) []string {
-	// Currently, Windows containers do not support privileged mode, so unlike Linux there is no no-new-privileges flag to return here.
-	// If Windows containers gain support for privileged mode in the future, this can be adjusted.
- return nil
-}
-
-func (ds *dockerService) updateCreateConfig(
- createConfig *dockertypes.ContainerCreateConfig,
- config *runtimeapi.ContainerConfig,
- sandboxConfig *runtimeapi.PodSandboxConfig,
- podSandboxID string, securityOptSep rune, apiVersion *semver.Version) error {
- if networkMode := os.Getenv("CONTAINER_NETWORK"); networkMode != "" {
- createConfig.HostConfig.NetworkMode = dockercontainer.NetworkMode(networkMode)
- } else {
-		// Todo: Refactor this call in the future to call methods in security_context.go directly
- modifyHostOptionsForContainer(nil, podSandboxID, createConfig.HostConfig)
- }
-
- // Apply Windows-specific options if applicable.
- if wc := config.GetWindows(); wc != nil {
- rOpts := wc.GetResources()
- if rOpts != nil {
- // Precedence and units for these are described at length in kuberuntime_container_windows.go - generateWindowsContainerConfig()
- createConfig.HostConfig.Resources = dockercontainer.Resources{
- Memory: rOpts.MemoryLimitInBytes,
- CPUShares: rOpts.CpuShares,
- CPUCount: rOpts.CpuCount,
- NanoCPUs: rOpts.CpuMaximum * int64(runtime.NumCPU()) * (1e9 / 10000),
- }
- }
-
- // Apply security context.
- applyWindowsContainerSecurityContext(wc.GetSecurityContext(), createConfig.Config, createConfig.HostConfig)
- }
-
- return nil
-}
-
-// applyWindowsContainerSecurityContext updates docker container options according to security context.
-func applyWindowsContainerSecurityContext(wsc *runtimeapi.WindowsContainerSecurityContext, config *dockercontainer.Config, hc *dockercontainer.HostConfig) {
- if wsc == nil {
- return
- }
-
- if wsc.GetRunAsUsername() != "" {
- config.User = wsc.GetRunAsUsername()
- }
-}
-
-func (ds *dockerService) determinePodIPBySandboxID(sandboxID string) []string {
- opts := dockertypes.ContainerListOptions{
- All: true,
- Filters: dockerfilters.NewArgs(),
- }
-
- f := newDockerFilter(&opts.Filters)
- f.AddLabel(containerTypeLabelKey, containerTypeLabelContainer)
- f.AddLabel(sandboxIDLabelKey, sandboxID)
- containers, err := ds.client.ListContainers(opts)
- if err != nil {
- return nil
- }
-
- for _, c := range containers {
- r, err := ds.client.InspectContainer(c.ID)
- if err != nil {
- continue
- }
-
-		// Versions and feature support
-		// ============================
-		// Windows version == Windows Server, Version 1709: supports both the sandbox and non-sandbox cases
-		// Windows version == Windows Server 2016: supports only the non-sandbox case
-		// Windows version < Windows Server 2016: not supported
-
-		// Sandbox support on Windows mandates a CNI plugin.
-		// The presence of the CONTAINER_NETWORK flag is treated as the non-sandbox case here.
-
- // Todo: Add a kernel version check for more validation
-
- if networkMode := os.Getenv("CONTAINER_NETWORK"); networkMode == "" {
-			// On Windows, every container created in a sandbox needs to invoke the CNI plugin again to add the network,
-			// passing the shared container name as the NetNS info.
-			// The platform uses this to replicate the necessary information to the new container.
-
-			//
-			// This call site is a hack for now, since ds.getIP would end up calling CNI's addToNetwork;
-			// that is why addToNetwork is required to be idempotent.
-
-			// Instead of relying on this call, an explicit call to addToNetwork should be
-			// made immediately after container creation, on Windows only. TBD Issue # to handle this.
-
-			// Do not return any IP, so that we continue and get the IP of the sandbox.
-			// Windows 1709 and 1803 don't have Namespace support, so getIP() is called
-			// to replicate the DNS registry key to the workload container (IP/Gateway/MAC is
-			// set separately from DNS).
-			// TODO(feiskyer): remove this workaround after Namespace is supported in Windows RS5.
- ds.getIPs(sandboxID, r)
- } else {
- // ds.getIP will call the CNI plugin to fetch the IP
- if containerIPs := ds.getIPs(c.ID, r); len(containerIPs) != 0 {
- return containerIPs
- }
- }
- }
-
- return nil
-}
-
-func getNetworkNamespace(c *dockertypes.ContainerJSON) (string, error) {
-	// Currently on Windows there is no identifier exposed for the network namespace.
-	// Like docker, the referenced container id is used by the platform internally to figure out the network namespace id,
-	// so return the docker networkMode (which holds the "container:<id>" reference) as the network namespace here.
- return string(c.HostConfig.NetworkMode), nil
-}
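
The NanoCPUs expression in updateCreateConfig above is easy to misread; the sketch below spells out the conversion, under the assumption that CRI's Windows CpuMaximum is expressed in 1/10000ths of the machine's total CPU and Docker's NanoCPUs in billionths of a single CPU.

```go
package main

import (
	"fmt"
	"runtime"
)

// nanoCPUsFromCPUMaximum reproduces the conversion above: cpuMaximum is a fraction
// of the whole machine in units of 1/10000, NanoCPUs is in units of 1e-9 of one CPU,
// so the scale factor is NumCPU * (1e9 / 10000).
func nanoCPUsFromCPUMaximum(cpuMaximum int64) int64 {
	return cpuMaximum * int64(runtime.NumCPU()) * (1e9 / 10000)
}

func main() {
	// CpuMaximum = 5000 means "half of the machine's CPU cycles".
	fmt.Println(nanoCPUsFromCPUMaximum(5000)) // e.g. 4e9 nano-CPUs on an 8-core machine
}
```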
diff --git a/pkg/kubelet/dockershim/libdocker/client.go b/pkg/kubelet/dockershim/libdocker/client.go
deleted file mode 100644
index a1f3feb44ce..00000000000
--- a/pkg/kubelet/dockershim/libdocker/client.go
+++ /dev/null
@@ -1,101 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-//go:generate mockgen -copyright_file=$BUILD_TAG_FILE -source=client.go -destination=testing/mock_client.go -package=testing Interface
-package libdocker
-
-import (
- "os"
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockerimagetypes "github.com/docker/docker/api/types/image"
- dockerapi "github.com/docker/docker/client"
- "k8s.io/klog/v2"
-)
-
-const (
- // https://docs.docker.com/engine/reference/api/docker_remote_api/
- // docker version should be at least 1.13.1
- MinimumDockerAPIVersion = "1.26.0"
-
- // Status of a container returned by ListContainers.
- StatusRunningPrefix = "Up"
- StatusCreatedPrefix = "Created"
- StatusExitedPrefix = "Exited"
-
- // Fake docker endpoint
- FakeDockerEndpoint = "fake://"
-)
-
-// Interface is an abstract interface for testability. It abstracts the interface of the docker client.
-type Interface interface {
- ListContainers(options dockertypes.ContainerListOptions) ([]dockertypes.Container, error)
- InspectContainer(id string) (*dockertypes.ContainerJSON, error)
- InspectContainerWithSize(id string) (*dockertypes.ContainerJSON, error)
- CreateContainer(dockertypes.ContainerCreateConfig) (*dockercontainer.ContainerCreateCreatedBody, error)
- StartContainer(id string) error
- StopContainer(id string, timeout time.Duration) error
- UpdateContainerResources(id string, updateConfig dockercontainer.UpdateConfig) error
- RemoveContainer(id string, opts dockertypes.ContainerRemoveOptions) error
- InspectImageByRef(imageRef string) (*dockertypes.ImageInspect, error)
- InspectImageByID(imageID string) (*dockertypes.ImageInspect, error)
- ListImages(opts dockertypes.ImageListOptions) ([]dockertypes.ImageSummary, error)
- PullImage(image string, auth dockertypes.AuthConfig, opts dockertypes.ImagePullOptions) error
- RemoveImage(image string, opts dockertypes.ImageRemoveOptions) ([]dockertypes.ImageDeleteResponseItem, error)
- ImageHistory(id string) ([]dockerimagetypes.HistoryResponseItem, error)
- Logs(string, dockertypes.ContainerLogsOptions, StreamOptions) error
- Version() (*dockertypes.Version, error)
- Info() (*dockertypes.Info, error)
- CreateExec(string, dockertypes.ExecConfig) (*dockertypes.IDResponse, error)
- StartExec(string, dockertypes.ExecStartCheck, StreamOptions) error
- InspectExec(id string) (*dockertypes.ContainerExecInspect, error)
- AttachToContainer(string, dockertypes.ContainerAttachOptions, StreamOptions) error
- ResizeContainerTTY(id string, height, width uint) error
- ResizeExecTTY(id string, height, width uint) error
- GetContainerStats(id string) (*dockertypes.StatsJSON, error)
-}
-
-// Get a *dockerapi.Client, either using the endpoint passed in, or using
-// DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH per their spec
-func getDockerClient(dockerEndpoint string) (*dockerapi.Client, error) {
- if len(dockerEndpoint) > 0 {
- klog.InfoS("Connecting to docker on the dockerEndpoint", "endpoint", dockerEndpoint)
- return dockerapi.NewClientWithOpts(dockerapi.WithHost(dockerEndpoint), dockerapi.WithVersion(""))
- }
- return dockerapi.NewClientWithOpts(dockerapi.FromEnv)
-}
-
-// ConnectToDockerOrDie creates a docker client connected to the docker daemon.
-// If the endpoint passed in is "fake://", a fake docker client
-// will be returned. The program exits if an error occurs. The requestTimeout
-// is the timeout for docker requests; if it is exceeded, the request
-// is cancelled and an error is returned. If requestTimeout is 0, a default
-// value is applied.
-func ConnectToDockerOrDie(dockerEndpoint string, requestTimeout, imagePullProgressDeadline time.Duration) Interface {
- client, err := getDockerClient(dockerEndpoint)
- if err != nil {
- klog.ErrorS(err, "Couldn't connect to docker")
- os.Exit(1)
-
- }
- klog.InfoS("Start docker client with request timeout", "timeout", requestTimeout)
- return newKubeDockerClient(client, requestTimeout, imagePullProgressDeadline)
-}
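
A hedged usage sketch of the client construction above, adding the kind of per-request timeout that ConnectToDockerOrDie's comment describes; `getTimeoutContext` is a hypothetical helper and the 2-minute default is an assumption, not the deleted package's value.

```go
package main

import (
	"context"
	"fmt"
	"time"

	dockerapi "github.com/docker/docker/client"
)

// getTimeoutContext: every docker request gets its own deadline, falling back to an
// assumed default when requestTimeout is zero.
func getTimeoutContext(requestTimeout time.Duration) (context.Context, context.CancelFunc) {
	if requestTimeout == 0 {
		requestTimeout = 2 * time.Minute // assumed default
	}
	return context.WithTimeout(context.Background(), requestTimeout)
}

func main() {
	// FromEnv honours DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH, as in getDockerClient above.
	cli, err := dockerapi.NewClientWithOpts(dockerapi.FromEnv)
	if err != nil {
		fmt.Println("couldn't create docker client:", err)
		return
	}
	ctx, cancel := getTimeoutContext(10 * time.Second)
	defer cancel()

	v, err := cli.ServerVersion(ctx)
	if err != nil {
		fmt.Println("version request failed (daemon may not be running):", err)
		return
	}
	fmt.Println("docker version:", v.Version, "api version:", v.APIVersion)
}
```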
diff --git a/pkg/kubelet/dockershim/libdocker/fake_client.go b/pkg/kubelet/dockershim/libdocker/fake_client.go
deleted file mode 100644
index 2b9937b98e5..00000000000
--- a/pkg/kubelet/dockershim/libdocker/fake_client.go
+++ /dev/null
@@ -1,854 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package libdocker
-
-import (
- "encoding/json"
- "fmt"
- "hash/fnv"
- "math/rand"
- "os"
- "reflect"
- "strconv"
- "strings"
- "sync"
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockerimagetypes "github.com/docker/docker/api/types/image"
-
- v1 "k8s.io/api/core/v1"
- "k8s.io/utils/clock"
-)
-
-type CalledDetail struct {
- name string
- arguments []interface{}
-}
-
-// NewCalledDetail create a new call detail item.
-func NewCalledDetail(name string, arguments []interface{}) CalledDetail {
- return CalledDetail{name: name, arguments: arguments}
-}
-
-// FakeDockerClient is a simple fake docker client, so that kubelet can be run for testing without requiring a real docker setup.
-type FakeDockerClient struct {
- sync.Mutex
- Clock clock.Clock
- RunningContainerList []dockertypes.Container
- ExitedContainerList []dockertypes.Container
- ContainerMap map[string]*dockertypes.ContainerJSON
- ImageInspects map[string]*dockertypes.ImageInspect
- Images []dockertypes.ImageSummary
- ImageIDsNeedingAuth map[string]dockertypes.AuthConfig
- Errors map[string]error
- called []CalledDetail
- pulled []string
- EnableTrace bool
- RandGenerator *rand.Rand
-
- // Created, Started, Stopped and Removed all contain container docker ID
- Created []string
- Started []string
- Stopped []string
- Removed []string
- // Images pulled by ref (name or ID).
- ImagesPulled []string
-
- VersionInfo dockertypes.Version
- Information dockertypes.Info
- ExecInspect *dockertypes.ContainerExecInspect
- execCmd []string
- EnableSleep bool
- ImageHistoryMap map[string][]dockerimagetypes.HistoryResponseItem
- ContainerStatsMap map[string]*dockertypes.StatsJSON
-}
-
-const (
-	// Notice that if someday we also have a minimum docker version requirement, this should also be updated.
- fakeDockerVersion = "1.13.1"
-
- fakeImageSize = 1024
-
- // Docker prepends '/' to the container name.
- dockerNamePrefix = "/"
-)
-
-func NewFakeDockerClient() *FakeDockerClient {
- return &FakeDockerClient{
- // Docker's API version does not include the patch number.
- VersionInfo: dockertypes.Version{Version: fakeDockerVersion, APIVersion: strings.TrimSuffix(MinimumDockerAPIVersion, ".0")},
- Errors: make(map[string]error),
- ContainerMap: make(map[string]*dockertypes.ContainerJSON),
- Clock: clock.RealClock{},
- // default this to true, so that we trace calls, image pulls and container lifecycle
- EnableTrace: true,
- ExecInspect: &dockertypes.ContainerExecInspect{},
- ImageInspects: make(map[string]*dockertypes.ImageInspect),
- ImageIDsNeedingAuth: make(map[string]dockertypes.AuthConfig),
- RandGenerator: rand.New(rand.NewSource(time.Now().UnixNano())),
- }
-}
-
-func (f *FakeDockerClient) WithClock(c clock.Clock) *FakeDockerClient {
- f.Lock()
- defer f.Unlock()
- f.Clock = c
- return f
-}
-
-func (f *FakeDockerClient) WithVersion(version, apiVersion string) *FakeDockerClient {
- f.Lock()
- defer f.Unlock()
- f.VersionInfo = dockertypes.Version{Version: version, APIVersion: apiVersion}
- return f
-}
-
-func (f *FakeDockerClient) WithTraceDisabled() *FakeDockerClient {
- f.Lock()
- defer f.Unlock()
- f.EnableTrace = false
- return f
-}
-
-func (f *FakeDockerClient) WithRandSource(source rand.Source) *FakeDockerClient {
- f.Lock()
- defer f.Unlock()
- f.RandGenerator = rand.New(source)
- return f
-}
-
-func (f *FakeDockerClient) appendCalled(callDetail CalledDetail) {
- if f.EnableTrace {
- f.called = append(f.called, callDetail)
- }
-}
-
-func (f *FakeDockerClient) appendPulled(pull string) {
- if f.EnableTrace {
- f.pulled = append(f.pulled, pull)
- }
-}
-
-func (f *FakeDockerClient) appendContainerTrace(traceCategory string, containerName string) {
- if !f.EnableTrace {
- return
- }
- switch traceCategory {
- case "Created":
- f.Created = append(f.Created, containerName)
- case "Started":
- f.Started = append(f.Started, containerName)
- case "Stopped":
- f.Stopped = append(f.Stopped, containerName)
- case "Removed":
- f.Removed = append(f.Removed, containerName)
- }
-}
-
-func (f *FakeDockerClient) InjectError(fn string, err error) {
- f.Lock()
- defer f.Unlock()
- f.Errors[fn] = err
-}
-
-func (f *FakeDockerClient) InjectErrors(errs map[string]error) {
- f.Lock()
- defer f.Unlock()
- for fn, err := range errs {
- f.Errors[fn] = err
- }
-}
-
-func (f *FakeDockerClient) ClearErrors() {
- f.Lock()
- defer f.Unlock()
- f.Errors = map[string]error{}
-}
-
-func (f *FakeDockerClient) ClearCalls() {
- f.Lock()
- defer f.Unlock()
- f.called = []CalledDetail{}
- f.pulled = []string{}
- f.Created = []string{}
- f.Started = []string{}
- f.Stopped = []string{}
- f.Removed = []string{}
-}
-
-func (f *FakeDockerClient) getCalledNames() []string {
- names := []string{}
- for _, detail := range f.called {
- names = append(names, detail.name)
- }
- return names
-}
-
-// Because the new data type returned by engine-api is too complex to manually initialize, we need a
-// fake container which is easier to initialize.
-type FakeContainer struct {
- ID string
- Name string
- Running bool
- ExitCode int
- Pid int
- CreatedAt time.Time
- StartedAt time.Time
- FinishedAt time.Time
- Config *dockercontainer.Config
- HostConfig *dockercontainer.HostConfig
-}
-
-// convertFakeContainer converts the fake container to real container
-func convertFakeContainer(f *FakeContainer) *dockertypes.ContainerJSON {
- if f.Config == nil {
- f.Config = &dockercontainer.Config{}
- }
- if f.HostConfig == nil {
- f.HostConfig = &dockercontainer.HostConfig{}
- }
- fakeRWSize := int64(40)
- return &dockertypes.ContainerJSON{
- ContainerJSONBase: &dockertypes.ContainerJSONBase{
- ID: f.ID,
- Name: f.Name,
- Image: f.Config.Image,
- State: &dockertypes.ContainerState{
- Running: f.Running,
- ExitCode: f.ExitCode,
- Pid: f.Pid,
- StartedAt: dockerTimestampToString(f.StartedAt),
- FinishedAt: dockerTimestampToString(f.FinishedAt),
- },
- Created: dockerTimestampToString(f.CreatedAt),
- HostConfig: f.HostConfig,
- SizeRw: &fakeRWSize,
- },
- Config: f.Config,
- NetworkSettings: &dockertypes.NetworkSettings{},
- }
-}
-
-func (f *FakeDockerClient) SetFakeContainers(containers []*FakeContainer) {
- f.Lock()
- defer f.Unlock()
- // Reset the lists and the map.
- f.ContainerMap = map[string]*dockertypes.ContainerJSON{}
- f.RunningContainerList = []dockertypes.Container{}
- f.ExitedContainerList = []dockertypes.Container{}
-
- for i := range containers {
- c := containers[i]
- f.ContainerMap[c.ID] = convertFakeContainer(c)
- container := dockertypes.Container{
- Names: []string{c.Name},
- ID: c.ID,
- }
- if c.Config != nil {
- container.Labels = c.Config.Labels
- }
- if c.Running {
- f.RunningContainerList = append(f.RunningContainerList, container)
- } else {
- f.ExitedContainerList = append(f.ExitedContainerList, container)
- }
- }
-}
-
-func (f *FakeDockerClient) AssertCalls(calls []string) (err error) {
- f.Lock()
- defer f.Unlock()
-
- if !reflect.DeepEqual(calls, f.getCalledNames()) {
- err = fmt.Errorf("expected %#v, got %#v", calls, f.getCalledNames())
- }
-
- return
-}
-
-func (f *FakeDockerClient) AssertCallDetails(calls ...CalledDetail) (err error) {
- f.Lock()
- defer f.Unlock()
-
- if !reflect.DeepEqual(calls, f.called) {
- err = fmt.Errorf("expected %#v, got %#v", calls, f.called)
- }
-
- return
-}
-
-func (f *FakeDockerClient) popError(op string) error {
- if f.Errors == nil {
- return nil
- }
- err, ok := f.Errors[op]
- if ok {
- delete(f.Errors, op)
- return err
- }
- return nil
-}
-
-// ListContainers is a test-spy implementation of Interface.ListContainers.
-// It adds an entry "list" to the internal method call record.
-func (f *FakeDockerClient) ListContainers(options dockertypes.ContainerListOptions) ([]dockertypes.Container, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "list"})
- err := f.popError("list")
- containerList := append([]dockertypes.Container{}, f.RunningContainerList...)
- if options.All {
-		// Although the containers are not sorted, containers with the same name are kept in order,
-		// which is enough for us for now.
- containerList = append(containerList, f.ExitedContainerList...)
- }
-	// Filter containers by id; only one id is supported.
- idFilters := options.Filters.Get("id")
- if len(idFilters) != 0 {
- var filtered []dockertypes.Container
- for _, container := range containerList {
- for _, idFilter := range idFilters {
- if container.ID == idFilter {
- filtered = append(filtered, container)
- break
- }
- }
- }
- containerList = filtered
- }
-	// Filter containers by status; only one status is supported.
- statusFilters := options.Filters.Get("status")
- if len(statusFilters) == 1 {
- var filtered []dockertypes.Container
- for _, container := range containerList {
- for _, statusFilter := range statusFilters {
- if toDockerContainerStatus(container.Status) == statusFilter {
- filtered = append(filtered, container)
- break
- }
- }
- }
- containerList = filtered
- }
- // Filters containers with label filter.
- labelFilters := options.Filters.Get("label")
- if len(labelFilters) != 0 {
- var filtered []dockertypes.Container
- for _, container := range containerList {
- match := true
- for _, labelFilter := range labelFilters {
- kv := strings.Split(labelFilter, "=")
- if len(kv) != 2 {
- return nil, fmt.Errorf("invalid label filter %q", labelFilter)
- }
- if container.Labels[kv[0]] != kv[1] {
- match = false
- break
- }
- }
- if match {
- filtered = append(filtered, container)
- }
- }
- containerList = filtered
- }
- return containerList, err
-}
-
-func toDockerContainerStatus(state string) string {
- switch {
- case strings.HasPrefix(state, StatusCreatedPrefix):
- return "created"
- case strings.HasPrefix(state, StatusRunningPrefix):
- return "running"
- case strings.HasPrefix(state, StatusExitedPrefix):
- return "exited"
- default:
- return "unknown"
- }
-}
-
-// InspectContainer is a test-spy implementation of Interface.InspectContainer.
-// It adds an entry "inspect" to the internal method call record.
-func (f *FakeDockerClient) InspectContainer(id string) (*dockertypes.ContainerJSON, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "inspect_container"})
- err := f.popError("inspect_container")
- if container, ok := f.ContainerMap[id]; ok {
- return container, err
- }
- if err != nil {
- // Use the custom error if it exists.
- return nil, err
- }
- return nil, fmt.Errorf("container %q not found", id)
-}
-
-// InspectContainerWithSize is a test-spy implementation of Interface.InspectContainerWithSize.
-// It adds an entry "inspect" to the internal method call record.
-func (f *FakeDockerClient) InspectContainerWithSize(id string) (*dockertypes.ContainerJSON, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "inspect_container_withsize"})
- err := f.popError("inspect_container_withsize")
- if container, ok := f.ContainerMap[id]; ok {
- return container, err
- }
- if err != nil {
- // Use the custom error if it exists.
- return nil, err
- }
- return nil, fmt.Errorf("container %q not found", id)
-}
-
-// InspectImageByRef is a test-spy implementation of Interface.InspectImageByRef.
-// It adds an entry "inspect" to the internal method call record.
-func (f *FakeDockerClient) InspectImageByRef(name string) (*dockertypes.ImageInspect, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "inspect_image"})
- if err := f.popError("inspect_image"); err != nil {
- return nil, err
- }
- if result, ok := f.ImageInspects[name]; ok {
- return result, nil
- }
- return nil, ImageNotFoundError{name}
-}
-
-// InspectImageByID is a test-spy implementation of Interface.InspectImageByID.
-// It adds an entry "inspect" to the internal method call record.
-func (f *FakeDockerClient) InspectImageByID(name string) (*dockertypes.ImageInspect, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "inspect_image"})
- if err := f.popError("inspect_image"); err != nil {
- return nil, err
- }
- if result, ok := f.ImageInspects[name]; ok {
- return result, nil
- }
- return nil, ImageNotFoundError{name}
-}
-
-// normalSleep sleeps for a random amount of time drawn from a normal distribution with the
-// given mean and stddev (in milliseconds); we never sleep less than cutOffMillis.
-func (f *FakeDockerClient) normalSleep(mean, stdDev, cutOffMillis int) {
- if !f.EnableSleep {
- return
- }
- cutoff := (time.Duration)(cutOffMillis) * time.Millisecond
- delay := (time.Duration)(f.RandGenerator.NormFloat64()*float64(stdDev)+float64(mean)) * time.Millisecond
- if delay < cutoff {
- delay = cutoff
- }
- time.Sleep(delay)
-}
-
-// GetFakeContainerID generates a fake container id from container name with a hash.
-func GetFakeContainerID(name string) string {
- hash := fnv.New64a()
- hash.Write([]byte(name))
- return strconv.FormatUint(hash.Sum64(), 16)
-}
-
-// CreateContainer is a test-spy implementation of Interface.CreateContainer.
-// It adds an entry "create" to the internal method call record.
-func (f *FakeDockerClient) CreateContainer(c dockertypes.ContainerCreateConfig) (*dockercontainer.ContainerCreateCreatedBody, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "create"})
- if err := f.popError("create"); err != nil {
- return nil, err
- }
- // This is not a very good fake. We'll just add this container's name to the list.
- name := dockerNamePrefix + c.Name
- id := GetFakeContainerID(name)
- f.appendContainerTrace("Created", id)
- timestamp := f.Clock.Now()
- // The newest container should be in front, because we assume so in GetPodStatus()
- f.RunningContainerList = append([]dockertypes.Container{
- {ID: id, Names: []string{name}, Image: c.Config.Image, Created: timestamp.Unix(), State: StatusCreatedPrefix, Labels: c.Config.Labels},
- }, f.RunningContainerList...)
- f.ContainerMap[id] = convertFakeContainer(&FakeContainer{
- ID: id, Name: name, Config: c.Config, HostConfig: c.HostConfig, CreatedAt: timestamp})
-
- f.normalSleep(100, 25, 25)
-
- return &dockercontainer.ContainerCreateCreatedBody{ID: id}, nil
-}
-
-// StartContainer is a test-spy implementation of Interface.StartContainer.
-// It adds an entry "start" to the internal method call record.
-func (f *FakeDockerClient) StartContainer(id string) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "start"})
- if err := f.popError("start"); err != nil {
- return err
- }
- f.appendContainerTrace("Started", id)
- container, ok := f.ContainerMap[id]
-	if container != nil && container.HostConfig.NetworkMode.IsContainer() {
- hostContainerID := container.HostConfig.NetworkMode.ConnectedContainer()
- found := false
- for _, container := range f.RunningContainerList {
- if container.ID == hostContainerID {
- found = true
- }
- }
- if !found {
- return fmt.Errorf("failed to start container \"%s\": Error response from daemon: cannot join network of a non running container: %s", id, hostContainerID)
- }
- }
- timestamp := f.Clock.Now()
- if !ok {
- container = convertFakeContainer(&FakeContainer{ID: id, Name: id, CreatedAt: timestamp})
- }
- container.State.Running = true
- container.State.Pid = os.Getpid()
- container.State.StartedAt = dockerTimestampToString(timestamp)
- r := f.RandGenerator.Uint32()
- container.NetworkSettings.IPAddress = fmt.Sprintf("10.%d.%d.%d", byte(r>>16), byte(r>>8), byte(r))
- f.ContainerMap[id] = container
- f.updateContainerStatus(id, StatusRunningPrefix)
- f.normalSleep(200, 50, 50)
- return nil
-}
-
-// StopContainer is a test-spy implementation of Interface.StopContainer.
-// It adds an entry "stop" to the internal method call record.
-func (f *FakeDockerClient) StopContainer(id string, timeout time.Duration) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "stop"})
- if err := f.popError("stop"); err != nil {
- return err
- }
- f.appendContainerTrace("Stopped", id)
-	// Container status should be updated before the container is moved to ExitedContainerList.
- f.updateContainerStatus(id, StatusExitedPrefix)
- var newList []dockertypes.Container
- for _, container := range f.RunningContainerList {
- if container.ID == id {
-			// The newest exited container should be in front, because we assume so in GetPodStatus().
- f.ExitedContainerList = append([]dockertypes.Container{container}, f.ExitedContainerList...)
- continue
- }
- newList = append(newList, container)
- }
- f.RunningContainerList = newList
- container, ok := f.ContainerMap[id]
- if !ok {
- container = convertFakeContainer(&FakeContainer{
- ID: id,
- Name: id,
- Running: false,
- StartedAt: time.Now().Add(-time.Second),
- FinishedAt: time.Now(),
- })
- } else {
- container.State.FinishedAt = dockerTimestampToString(f.Clock.Now())
- container.State.Running = false
- }
- f.ContainerMap[id] = container
- f.normalSleep(200, 50, 50)
- return nil
-}
-
-func (f *FakeDockerClient) RemoveContainer(id string, opts dockertypes.ContainerRemoveOptions) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "remove"})
- err := f.popError("remove")
- if err != nil {
- return err
- }
- for i := range f.ExitedContainerList {
- if f.ExitedContainerList[i].ID == id {
- delete(f.ContainerMap, id)
- f.ExitedContainerList = append(f.ExitedContainerList[:i], f.ExitedContainerList[i+1:]...)
- f.appendContainerTrace("Removed", id)
- return nil
- }
-
- }
- for i := range f.RunningContainerList {
-		// allow removal of containers that are still in RunningContainerList but are no longer running
- if f.RunningContainerList[i].ID == id && !f.ContainerMap[id].State.Running {
- delete(f.ContainerMap, id)
- f.RunningContainerList = append(f.RunningContainerList[:i], f.RunningContainerList[i+1:]...)
- f.appendContainerTrace("Removed", id)
- return nil
- }
- }
- // To be a good fake, report error if container is not stopped.
- return fmt.Errorf("container not stopped")
-}
-
-func (f *FakeDockerClient) UpdateContainerResources(id string, updateConfig dockercontainer.UpdateConfig) error {
- return nil
-}
-
-// Logs is a test-spy implementation of Interface.Logs.
-// It adds an entry "logs" to the internal method call record.
-func (f *FakeDockerClient) Logs(id string, opts dockertypes.ContainerLogsOptions, sopts StreamOptions) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "logs"})
- return f.popError("logs")
-}
-
-func (f *FakeDockerClient) isAuthorizedForImage(image string, auth dockertypes.AuthConfig) bool {
- if reqd, exists := f.ImageIDsNeedingAuth[image]; !exists {
- return true // no auth needed
- } else {
- return auth.Username == reqd.Username && auth.Password == reqd.Password
- }
-}
-
-// PullImage is a test-spy implementation of Interface.PullImage.
-// It adds an entry "pull" to the internal method call record.
-func (f *FakeDockerClient) PullImage(image string, auth dockertypes.AuthConfig, opts dockertypes.ImagePullOptions) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "pull"})
- err := f.popError("pull")
- if err == nil {
- if !f.isAuthorizedForImage(image, auth) {
- return ImageNotFoundError{ID: image}
- }
-
- authJson, _ := json.Marshal(auth)
- inspect := createImageInspectFromRef(image)
- f.ImageInspects[image] = inspect
- f.appendPulled(fmt.Sprintf("%s using %s", image, string(authJson)))
- f.Images = append(f.Images, *createImageFromImageInspect(*inspect))
- f.ImagesPulled = append(f.ImagesPulled, image)
- }
- return err
-}
-
-func (f *FakeDockerClient) Version() (*dockertypes.Version, error) {
- f.Lock()
- defer f.Unlock()
- v := f.VersionInfo
- return &v, f.popError("version")
-}
-
-func (f *FakeDockerClient) Info() (*dockertypes.Info, error) {
- return &f.Information, nil
-}
-
-func (f *FakeDockerClient) CreateExec(id string, opts dockertypes.ExecConfig) (*dockertypes.IDResponse, error) {
- f.Lock()
- defer f.Unlock()
- f.execCmd = opts.Cmd
- f.appendCalled(CalledDetail{name: "create_exec"})
- return &dockertypes.IDResponse{ID: "12345678"}, nil
-}
-
-func (f *FakeDockerClient) StartExec(startExec string, opts dockertypes.ExecStartCheck, sopts StreamOptions) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "start_exec"})
- return nil
-}
-
-func (f *FakeDockerClient) AttachToContainer(id string, opts dockertypes.ContainerAttachOptions, sopts StreamOptions) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "attach"})
- return nil
-}
-
-func (f *FakeDockerClient) InspectExec(id string) (*dockertypes.ContainerExecInspect, error) {
- return f.ExecInspect, f.popError("inspect_exec")
-}
-
-func (f *FakeDockerClient) ListImages(opts dockertypes.ImageListOptions) ([]dockertypes.ImageSummary, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "list_images"})
- err := f.popError("list_images")
- return f.Images, err
-}
-
-func (f *FakeDockerClient) RemoveImage(image string, opts dockertypes.ImageRemoveOptions) ([]dockertypes.ImageDeleteResponseItem, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "remove_image", arguments: []interface{}{image, opts}})
- err := f.popError("remove_image")
- if err == nil {
- for i := range f.Images {
- if f.Images[i].ID == image {
- f.Images = append(f.Images[:i], f.Images[i+1:]...)
- break
- }
- }
- }
- return []dockertypes.ImageDeleteResponseItem{{Deleted: image}}, err
-}
-
-func (f *FakeDockerClient) InjectImages(images []dockertypes.ImageSummary) {
- f.Lock()
- defer f.Unlock()
- f.Images = append(f.Images, images...)
- for _, i := range images {
- f.ImageInspects[i.ID] = createImageInspectFromImage(i)
- }
-}
-
-func (f *FakeDockerClient) MakeImagesPrivate(images []dockertypes.ImageSummary, auth dockertypes.AuthConfig) {
- f.Lock()
- defer f.Unlock()
- for _, i := range images {
- f.ImageIDsNeedingAuth[i.ID] = auth
- }
-}
-
-func (f *FakeDockerClient) ResetImages() {
- f.Lock()
- defer f.Unlock()
- f.Images = []dockertypes.ImageSummary{}
- f.ImageInspects = make(map[string]*dockertypes.ImageInspect)
- f.ImageIDsNeedingAuth = make(map[string]dockertypes.AuthConfig)
-}
-
-func (f *FakeDockerClient) InjectImageInspects(inspects []dockertypes.ImageInspect) {
- f.Lock()
- defer f.Unlock()
- for i := range inspects {
- inspect := inspects[i]
- f.Images = append(f.Images, *createImageFromImageInspect(inspect))
- f.ImageInspects[inspect.ID] = &inspect
- }
-}
-
-func (f *FakeDockerClient) updateContainerStatus(id, status string) {
- for i := range f.RunningContainerList {
- if f.RunningContainerList[i].ID == id {
- f.RunningContainerList[i].Status = status
- }
- }
-}
-
-func (f *FakeDockerClient) ResizeExecTTY(id string, height, width uint) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "resize_exec"})
- return nil
-}
-
-func (f *FakeDockerClient) ResizeContainerTTY(id string, height, width uint) error {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "resize_container"})
- return nil
-}
-
-func createImageInspectFromRef(ref string) *dockertypes.ImageInspect {
- return &dockertypes.ImageInspect{
- ID: ref,
- RepoTags: []string{ref},
- // Image size is required to be non-zero for CRI integration.
- VirtualSize: fakeImageSize,
- Size: fakeImageSize,
- Config: &dockercontainer.Config{},
- }
-}
-
-func createImageInspectFromImage(image dockertypes.ImageSummary) *dockertypes.ImageInspect {
- return &dockertypes.ImageInspect{
- ID: image.ID,
- RepoTags: image.RepoTags,
- // Image size is required to be non-zero for CRI integration.
- VirtualSize: fakeImageSize,
- Size: fakeImageSize,
- Config: &dockercontainer.Config{},
- }
-}
-
-func createImageFromImageInspect(inspect dockertypes.ImageInspect) *dockertypes.ImageSummary {
- return &dockertypes.ImageSummary{
- ID: inspect.ID,
- RepoTags: inspect.RepoTags,
- // Image size is required to be non-zero for CRI integration.
- VirtualSize: fakeImageSize,
- Size: fakeImageSize,
- }
-}
-
-// dockerTimestampToString converts the timestamp to string
-func dockerTimestampToString(t time.Time) string {
- return t.Format(time.RFC3339Nano)
-}
-
-func (f *FakeDockerClient) ImageHistory(id string) ([]dockerimagetypes.HistoryResponseItem, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "image_history"})
- history := f.ImageHistoryMap[id]
- return history, nil
-}
-
-func (f *FakeDockerClient) InjectImageHistory(data map[string][]dockerimagetypes.HistoryResponseItem) {
- f.Lock()
- defer f.Unlock()
- f.ImageHistoryMap = data
-}
-
-// FakeDockerPuller is meant to be a simple wrapper around FakeDockerClient.
-// Please do not add more functionality to it.
-type FakeDockerPuller struct {
- client Interface
-}
-
-func (f *FakeDockerPuller) Pull(image string, _ []v1.Secret) error {
- return f.client.PullImage(image, dockertypes.AuthConfig{}, dockertypes.ImagePullOptions{})
-}
-
-func (f *FakeDockerPuller) GetImageRef(image string) (string, error) {
- _, err := f.client.InspectImageByRef(image)
- if err != nil && IsImageNotFoundError(err) {
- return "", nil
- }
- return image, err
-}
-
-func (f *FakeDockerClient) InjectContainerStats(data map[string]*dockertypes.StatsJSON) {
- f.Lock()
- defer f.Unlock()
- f.ContainerStatsMap = data
-}
-
-func (f *FakeDockerClient) GetContainerStats(id string) (*dockertypes.StatsJSON, error) {
- f.Lock()
- defer f.Unlock()
- f.appendCalled(CalledDetail{name: "get_container_stats"})
- stats, ok := f.ContainerStatsMap[id]
- if !ok {
- return nil, fmt.Errorf("container %q not found", id)
- }
- return stats, nil
-}
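
For reference, this is how the fake client above was typically driven in unit tests: seed images, inject a per-call error, exercise the code under test, then assert the exact call sequence. The snippet is illustrative only, since the package is removed by this change.

```go
package dockershim_test

import (
	"testing"

	dockertypes "github.com/docker/docker/api/types"

	"k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
)

// TestFakeClientUsage sketches the usual testing pattern for the fake client.
func TestFakeClientUsage(t *testing.T) {
	fakeDocker := libdocker.NewFakeDockerClient()
	fakeDocker.InjectImages([]dockertypes.ImageSummary{{ID: "gcr.io/test/image"}})
	fakeDocker.InjectError("inspect_image", libdocker.ImageNotFoundError{ID: "image_id"})

	// The code under test would call InspectImageByRef and, on ImageNotFoundError, fall back to PullImage.
	if _, err := fakeDocker.InspectImageByRef("gcr.io/test/image"); err != nil {
		_ = fakeDocker.PullImage("gcr.io/test/image", dockertypes.AuthConfig{}, dockertypes.ImagePullOptions{})
	}

	// The fake records every call by name, so the exact docker interaction can be asserted.
	if err := fakeDocker.AssertCalls([]string{"inspect_image", "pull"}); err != nil {
		t.Fatal(err)
	}
}
```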
diff --git a/pkg/kubelet/dockershim/libdocker/helpers.go b/pkg/kubelet/dockershim/libdocker/helpers.go
deleted file mode 100644
index 57918629ad6..00000000000
--- a/pkg/kubelet/dockershim/libdocker/helpers.go
+++ /dev/null
@@ -1,166 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package libdocker
-
-import (
- "strings"
- "time"
-
- dockerref "github.com/docker/distribution/reference"
- dockertypes "github.com/docker/docker/api/types"
- godigest "github.com/opencontainers/go-digest"
- "k8s.io/klog/v2"
-)
-
-// ParseDockerTimestamp parses the timestamp returned by Interface from string to time.Time
-func ParseDockerTimestamp(s string) (time.Time, error) {
- // Timestamp returned by Docker is in time.RFC3339Nano format.
- return time.Parse(time.RFC3339Nano, s)
-}
-
-// matchImageTagOrSHA checks if the given image specifier is a valid image ref,
-// and that it matches the given image. It should fail on things like image IDs
-// (config digests) and other digest-only references, but succeed on image names
-// (`foo`), tag references (`foo:bar`), and manifest digest references
-// (`foo@sha256:xyz`).
-func matchImageTagOrSHA(inspected dockertypes.ImageInspect, image string) bool {
- // The image string follows the grammar specified here
- // https://github.com/docker/distribution/blob/master/reference/reference.go#L4
- named, err := dockerref.ParseNormalizedNamed(image)
- if err != nil {
- klog.V(4).InfoS("Couldn't parse image reference", "image", image, "err", err)
- return false
- }
- _, isTagged := named.(dockerref.Tagged)
- digest, isDigested := named.(dockerref.Digested)
- if !isTagged && !isDigested {
- // No Tag or SHA specified, so just return what we have
- return true
- }
-
- if isTagged {
- // Check the RepoTags for a match.
- for _, tag := range inspected.RepoTags {
- // An image name (without the tag/digest) can be [hostname '/'] component ['/' component]*
-			// Because the RepoTag or the name may or may not contain the
-			// hostname, we only check for a suffix match.
- if strings.HasSuffix(image, tag) || strings.HasSuffix(tag, image) {
- return true
- } else {
- // TODO: We need to remove this hack when project atomic based
- // docker distro(s) like centos/fedora/rhel image fix problems on
- // their end.
- // Say the tag is "docker.io/busybox:latest"
- // and the image is "docker.io/library/busybox:latest"
- t, err := dockerref.ParseNormalizedNamed(tag)
- if err != nil {
- continue
- }
- // the parsed/normalized tag will look like
- // reference.taggedReference {
- // namedRepository: reference.repository {
- // domain: "docker.io",
- // path: "library/busybox"
- // },
- // tag: "latest"
- // }
- // If it does not have tags then we bail out
- t2, ok := t.(dockerref.Tagged)
- if !ok {
- continue
- }
- // normalized tag would look like "docker.io/library/busybox:latest"
-				// note that "library" gets added to the string
- normalizedTag := t2.String()
- if normalizedTag == "" {
- continue
- }
- if strings.HasSuffix(image, normalizedTag) || strings.HasSuffix(normalizedTag, image) {
- return true
- }
- }
- }
- }
-
- if isDigested {
- for _, repoDigest := range inspected.RepoDigests {
- named, err := dockerref.ParseNormalizedNamed(repoDigest)
- if err != nil {
- klog.V(4).InfoS("Couldn't parse image RepoDigest reference", "digest", repoDigest, "err", err)
- continue
- }
- if d, isDigested := named.(dockerref.Digested); isDigested {
- if digest.Digest().Algorithm().String() == d.Digest().Algorithm().String() &&
- digest.Digest().Hex() == d.Digest().Hex() {
- return true
- }
- }
- }
-
- // process the ID as a digest
- id, err := godigest.Parse(inspected.ID)
- if err != nil {
- klog.V(4).InfoS("Couldn't parse image ID reference", "imageID", id, "err", err)
- return false
- }
- if digest.Digest().Algorithm().String() == id.Algorithm().String() && digest.Digest().Hex() == id.Hex() {
- return true
- }
- }
- klog.V(4).InfoS("Inspected image ID does not match image", "inspectedImageID", inspected.ID, "image", image)
- return false
-}
-
-// matchImageIDOnly checks that the given image specifier is a digest-only
-// reference, and that it matches the given image.
-func matchImageIDOnly(inspected dockertypes.ImageInspect, image string) bool {
- // If the image ref is literally equal to the inspected image's ID,
- // just return true here (this might be the case for Docker 1.9,
- // where we won't have a digest for the ID)
- if inspected.ID == image {
- return true
- }
-
- // Otherwise, we should try actual parsing to be more correct
- ref, err := dockerref.Parse(image)
- if err != nil {
- klog.V(4).InfoS("Couldn't parse image reference", "image", image, "err", err)
- return false
- }
-
- digest, isDigested := ref.(dockerref.Digested)
- if !isDigested {
- klog.V(4).InfoS("The image reference was not a digest reference", "image", image)
- return false
- }
-
- id, err := godigest.Parse(inspected.ID)
- if err != nil {
- klog.V(4).InfoS("Couldn't parse image ID reference", "imageID", id, "err", err)
- return false
- }
-
- if digest.Digest().Algorithm().String() == id.Algorithm().String() && digest.Digest().Hex() == id.Hex() {
- return true
- }
-
- klog.V(4).InfoS("The image reference does not directly refer to the given image's ID", "image", image, "inspectedImageID", inspected.ID)
- return false
-}
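
The digest comparison used by matchImageTagOrSHA and matchImageIDOnly can be boiled down to the following self-contained example; it is an assumed simplification that only handles the digested-reference case.

```go
package main

import (
	"fmt"

	dockerref "github.com/docker/distribution/reference"
	godigest "github.com/opencontainers/go-digest"
)

// digestMatchesID parses a (possibly digested) image reference and compares its
// digest's algorithm and hex against the inspected image ID.
func digestMatchesID(image, inspectedID string) bool {
	named, err := dockerref.ParseNormalizedNamed(image)
	if err != nil {
		return false
	}
	digested, ok := named.(dockerref.Digested)
	if !ok {
		return false // the reference has no digest component
	}
	id, err := godigest.Parse(inspectedID)
	if err != nil {
		return false
	}
	return digested.Digest().Algorithm().String() == id.Algorithm().String() &&
		digested.Digest().Hex() == id.Hex()
}

func main() {
	image := "busybox@sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d"
	id := "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d"
	fmt.Println(digestMatchesID(image, id)) // true
}
```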
diff --git a/pkg/kubelet/dockershim/libdocker/helpers_test.go b/pkg/kubelet/dockershim/libdocker/helpers_test.go
deleted file mode 100644
index e70f0055254..00000000000
--- a/pkg/kubelet/dockershim/libdocker/helpers_test.go
+++ /dev/null
@@ -1,273 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package libdocker
-
-import (
- "fmt"
- "testing"
-
- dockertypes "github.com/docker/docker/api/types"
- "github.com/stretchr/testify/assert"
-)
-
-func TestMatchImageTagOrSHA(t *testing.T) {
- for i, testCase := range []struct {
- Inspected dockertypes.ImageInspect
- Image string
- Output bool
- }{
- {
- Inspected: dockertypes.ImageInspect{RepoTags: []string{"ubuntu:latest"}},
- Image: "ubuntu",
- Output: true,
- },
- {
- Inspected: dockertypes.ImageInspect{RepoTags: []string{"ubuntu:14.04"}},
- Image: "ubuntu:latest",
- Output: false,
- },
- {
- Inspected: dockertypes.ImageInspect{RepoTags: []string{"colemickens/hyperkube-amd64:217.9beff63"}},
- Image: "colemickens/hyperkube-amd64:217.9beff63",
- Output: true,
- },
- {
- Inspected: dockertypes.ImageInspect{RepoTags: []string{"colemickens/hyperkube-amd64:217.9beff63"}},
- Image: "docker.io/colemickens/hyperkube-amd64:217.9beff63",
- Output: true,
- },
- {
- Inspected: dockertypes.ImageInspect{RepoTags: []string{"docker.io/kubernetes/pause:latest"}},
- Image: "kubernetes/pause:latest",
- Output: true,
- },
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- },
- Image: "myimage@sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- Output: true,
- },
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- },
- Image: "myimage@sha256:2208f7a29005",
- Output: false,
- },
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- },
- Image: "myimage@sha256:2208",
- Output: false,
- },
- {
- // mismatched ID is ignored
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- },
- Image: "myimage@sha256:0000f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- Output: false,
- },
- {
- // invalid digest is ignored
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:unparseable",
- },
- Image: "myimage@sha256:unparseable",
- Output: false,
- },
- {
- // v1 schema images can be pulled in one format and returned in another
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227",
- RepoDigests: []string{"centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf"},
- },
- Image: "centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- Output: true,
- },
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227",
- RepoTags: []string{"docker.io/busybox:latest"},
- },
- Image: "docker.io/library/busybox:latest",
- Output: true,
- },
- {
- // RepoDigest match is required
- Inspected: dockertypes.ImageInspect{
- ID: "",
- RepoDigests: []string{"docker.io/centos/ruby-23-centos7@sha256:000084acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf"},
- },
- Image: "centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- Output: false,
- },
- {
- // RepoDigest match is allowed
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227",
- RepoDigests: []string{"docker.io/centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf"},
- },
- Image: "centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- Output: true,
- },
- {
- // RepoDigest and ID are checked
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- RepoDigests: []string{"docker.io/centos/ruby-23-centos7@sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227"},
- },
- Image: "centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- Output: true,
- },
- {
- // unparseable RepoDigests are skipped
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227",
- RepoDigests: []string{
- "centos/ruby-23-centos7@sha256:unparseable",
- "docker.io/centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- },
- },
- Image: "centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- Output: true,
- },
- {
- // unparseable RepoDigest is ignored
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227",
- RepoDigests: []string{"docker.io/centos/ruby-23-centos7@sha256:unparseable"},
- },
- Image: "centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- Output: false,
- },
- {
- // unparseable image digest is ignored
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227",
- RepoDigests: []string{"docker.io/centos/ruby-23-centos7@sha256:unparseable"},
- },
- Image: "centos/ruby-23-centos7@sha256:unparseable",
- Output: false,
- },
- {
- // prefix match is rejected for ID and RepoDigest
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:unparseable",
- RepoDigests: []string{"docker.io/centos/ruby-23-centos7@sha256:unparseable"},
- },
- Image: "sha256:unparseable",
- Output: false,
- },
- {
- // possible SHA prefix match is rejected for ID and RepoDigest because it is not in the named format
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:0000f247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227",
- RepoDigests: []string{"docker.io/centos/ruby-23-centos7@sha256:0000f247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227"},
- },
- Image: "sha256:0000",
- Output: false,
- },
- } {
- match := matchImageTagOrSHA(testCase.Inspected, testCase.Image)
- assert.Equal(t, testCase.Output, match, testCase.Image+fmt.Sprintf(" is not a match (%d)", i))
- }
-}
-
-func TestMatchImageIDOnly(t *testing.T) {
- for i, testCase := range []struct {
- Inspected dockertypes.ImageInspect
- Image string
- Output bool
- }{
- // shouldn't match names or tagged names
- {
- Inspected: dockertypes.ImageInspect{RepoTags: []string{"ubuntu:latest"}},
- Image: "ubuntu",
- Output: false,
- },
- {
- Inspected: dockertypes.ImageInspect{RepoTags: []string{"colemickens/hyperkube-amd64:217.9beff63"}},
- Image: "colemickens/hyperkube-amd64:217.9beff63",
- Output: false,
- },
- // should match name@digest refs if they refer to the image ID (but only the full ID)
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- },
- Image: "myimage@sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- Output: true,
- },
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- },
- Image: "myimage@sha256:2208f7a29005",
- Output: false,
- },
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- },
- Image: "myimage@sha256:2208",
- Output: false,
- },
- // should match when the IDs are literally the same
- {
- Inspected: dockertypes.ImageInspect{
- ID: "foobar",
- },
- Image: "foobar",
- Output: true,
- },
- // shouldn't match mismatched IDs
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:2208f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- },
- Image: "myimage@sha256:0000f7a29005d226d1ee33a63e33af1f47af6156c740d7d23c7948e8d282d53d",
- Output: false,
- },
- // shouldn't match invalid IDs or refs
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:unparseable",
- },
- Image: "myimage@sha256:unparseable",
- Output: false,
- },
- // shouldn't match against repo digests
- {
- Inspected: dockertypes.ImageInspect{
- ID: "sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227",
- RepoDigests: []string{"centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf"},
- },
- Image: "centos/ruby-23-centos7@sha256:940584acbbfb0347272112d2eb95574625c0c60b4e2fdadb139de5859cf754bf",
- Output: false,
- },
- } {
- match := matchImageIDOnly(testCase.Inspected, testCase.Image)
- assert.Equal(t, testCase.Output, match, fmt.Sprintf("%s is not a match (%d)", testCase.Image, i))
- }
-
-}
diff --git a/pkg/kubelet/dockershim/libdocker/instrumented_client.go b/pkg/kubelet/dockershim/libdocker/instrumented_client.go
deleted file mode 100644
index 1bd9b77ee07..00000000000
--- a/pkg/kubelet/dockershim/libdocker/instrumented_client.go
+++ /dev/null
@@ -1,275 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package libdocker
-
-import (
- "time"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockerimagetypes "github.com/docker/docker/api/types/image"
-
- "k8s.io/kubernetes/pkg/kubelet/dockershim/metrics"
-)
-
-// instrumentedInterface wraps the Interface and records the operations
-// and errors metrics.
-type instrumentedInterface struct {
- client Interface
-}
-
-// NewInstrumentedInterface creates an instrumented Interface from an existing Interface.
-func NewInstrumentedInterface(dockerClient Interface) Interface {
- return instrumentedInterface{
- client: dockerClient,
- }
-}
-
-// recordOperation records the duration of the operation.
-func recordOperation(operation string, start time.Time) {
- metrics.DockerOperations.WithLabelValues(operation).Inc()
- metrics.DockerOperationsLatency.WithLabelValues(operation).Observe(metrics.SinceInSeconds(start))
-}
-
-// recordError records the error metric if an error occurred.
-func recordError(operation string, err error) {
- if err != nil {
- if _, ok := err.(operationTimeout); ok {
- metrics.DockerOperationsTimeout.WithLabelValues(operation).Inc()
- }
-		// A Docker operation timeout error is also a docker error, so we don't add an else branch here.
- metrics.DockerOperationsErrors.WithLabelValues(operation).Inc()
- }
-}
-
-func (in instrumentedInterface) ListContainers(options dockertypes.ContainerListOptions) ([]dockertypes.Container, error) {
- const operation = "list_containers"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.ListContainers(options)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) InspectContainer(id string) (*dockertypes.ContainerJSON, error) {
- const operation = "inspect_container"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.InspectContainer(id)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) InspectContainerWithSize(id string) (*dockertypes.ContainerJSON, error) {
- const operation = "inspect_container_withsize"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.InspectContainerWithSize(id)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) CreateContainer(opts dockertypes.ContainerCreateConfig) (*dockercontainer.ContainerCreateCreatedBody, error) {
- const operation = "create_container"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.CreateContainer(opts)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) StartContainer(id string) error {
- const operation = "start_container"
- defer recordOperation(operation, time.Now())
-
- err := in.client.StartContainer(id)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) StopContainer(id string, timeout time.Duration) error {
- const operation = "stop_container"
- defer recordOperation(operation, time.Now())
-
- err := in.client.StopContainer(id, timeout)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) RemoveContainer(id string, opts dockertypes.ContainerRemoveOptions) error {
- const operation = "remove_container"
- defer recordOperation(operation, time.Now())
-
- err := in.client.RemoveContainer(id, opts)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) UpdateContainerResources(id string, updateConfig dockercontainer.UpdateConfig) error {
- const operation = "update_container"
- defer recordOperation(operation, time.Now())
-
- err := in.client.UpdateContainerResources(id, updateConfig)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) InspectImageByRef(image string) (*dockertypes.ImageInspect, error) {
- const operation = "inspect_image"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.InspectImageByRef(image)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) InspectImageByID(image string) (*dockertypes.ImageInspect, error) {
- const operation = "inspect_image"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.InspectImageByID(image)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) ListImages(opts dockertypes.ImageListOptions) ([]dockertypes.ImageSummary, error) {
- const operation = "list_images"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.ListImages(opts)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) PullImage(imageID string, auth dockertypes.AuthConfig, opts dockertypes.ImagePullOptions) error {
- const operation = "pull_image"
- defer recordOperation(operation, time.Now())
- err := in.client.PullImage(imageID, auth, opts)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) RemoveImage(image string, opts dockertypes.ImageRemoveOptions) ([]dockertypes.ImageDeleteResponseItem, error) {
- const operation = "remove_image"
- defer recordOperation(operation, time.Now())
-
- imageDelete, err := in.client.RemoveImage(image, opts)
- recordError(operation, err)
- return imageDelete, err
-}
-
-func (in instrumentedInterface) Logs(id string, opts dockertypes.ContainerLogsOptions, sopts StreamOptions) error {
- const operation = "logs"
- defer recordOperation(operation, time.Now())
-
- err := in.client.Logs(id, opts, sopts)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) Version() (*dockertypes.Version, error) {
- const operation = "version"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.Version()
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) Info() (*dockertypes.Info, error) {
- const operation = "info"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.Info()
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) CreateExec(id string, opts dockertypes.ExecConfig) (*dockertypes.IDResponse, error) {
- const operation = "create_exec"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.CreateExec(id, opts)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) StartExec(startExec string, opts dockertypes.ExecStartCheck, sopts StreamOptions) error {
- const operation = "start_exec"
- defer recordOperation(operation, time.Now())
-
- err := in.client.StartExec(startExec, opts, sopts)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) InspectExec(id string) (*dockertypes.ContainerExecInspect, error) {
- const operation = "inspect_exec"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.InspectExec(id)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) AttachToContainer(id string, opts dockertypes.ContainerAttachOptions, sopts StreamOptions) error {
- const operation = "attach"
- defer recordOperation(operation, time.Now())
-
- err := in.client.AttachToContainer(id, opts, sopts)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) ImageHistory(id string) ([]dockerimagetypes.HistoryResponseItem, error) {
- const operation = "image_history"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.ImageHistory(id)
- recordError(operation, err)
- return out, err
-}
-
-func (in instrumentedInterface) ResizeExecTTY(id string, height, width uint) error {
- const operation = "resize_exec"
- defer recordOperation(operation, time.Now())
-
- err := in.client.ResizeExecTTY(id, height, width)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) ResizeContainerTTY(id string, height, width uint) error {
- const operation = "resize_container"
- defer recordOperation(operation, time.Now())
-
- err := in.client.ResizeContainerTTY(id, height, width)
- recordError(operation, err)
- return err
-}
-
-func (in instrumentedInterface) GetContainerStats(id string) (*dockertypes.StatsJSON, error) {
- const operation = "stats"
- defer recordOperation(operation, time.Now())
-
- out, err := in.client.GetContainerStats(id)
- recordError(operation, err)
- return out, err
-}
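The file above is a mechanical decorator: every method wraps the inner client call, bumps an operation counter, observes the latency, and counts errors. Below is a self-contained sketch of that pattern with the dockershim metrics replaced by plain maps; Doer, realClient, and the map-based counters are assumptions for illustration, not the removed types.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Doer is a stand-in for the libdocker Interface.
type Doer interface {
	StartContainer(id string) error
}

type realClient struct{}

func (realClient) StartContainer(id string) error { return errors.New("boom") }

// instrumented wraps a Doer and records per-operation counts and errors.
type instrumented struct {
	inner      Doer
	calls      map[string]int
	errorCount map[string]int
}

func (in *instrumented) record(op string, start time.Time, err error) {
	in.calls[op]++
	_ = time.Since(start) // a real implementation observes this in a latency histogram
	if err != nil {
		in.errorCount[op]++
	}
}

func (in *instrumented) StartContainer(id string) error {
	const op = "start_container"
	start := time.Now()
	err := in.inner.StartContainer(id)
	in.record(op, start, err)
	return err
}

func main() {
	in := &instrumented{inner: realClient{}, calls: map[string]int{}, errorCount: map[string]int{}}
	_ = in.StartContainer("c1")
	fmt.Println(in.calls["start_container"], in.errorCount["start_container"]) // prints: 1 1
}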
diff --git a/pkg/kubelet/dockershim/libdocker/kube_docker_client.go b/pkg/kubelet/dockershim/libdocker/kube_docker_client.go
deleted file mode 100644
index 7df051f8174..00000000000
--- a/pkg/kubelet/dockershim/libdocker/kube_docker_client.go
+++ /dev/null
@@ -1,678 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package libdocker
-
-import (
- "bytes"
- "context"
- "encoding/base64"
- "encoding/json"
- "fmt"
- "io"
- "io/ioutil"
- "regexp"
- "sync"
- "time"
-
- "k8s.io/klog/v2"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockerimagetypes "github.com/docker/docker/api/types/image"
- dockerapi "github.com/docker/docker/client"
- dockermessage "github.com/docker/docker/pkg/jsonmessage"
- dockerstdcopy "github.com/docker/docker/pkg/stdcopy"
-)
-
-// kubeDockerClient is a wrapper around the docker client for kubelet internal use. This layer is added to:
-// 1) Redirect stream for exec and attach operations.
-// 2) Wrap the context in this layer to make the Interface cleaner.
-type kubeDockerClient struct {
- // timeout is the timeout of short running docker operations.
- timeout time.Duration
- // If no pulling progress is made before imagePullProgressDeadline, the image pulling will be cancelled.
-	// Docker reports image progress for every 512kB block, so normally there shouldn't be too long an interval
-	// between progress updates.
- imagePullProgressDeadline time.Duration
- client *dockerapi.Client
-}
-
-// Make sure that kubeDockerClient implemented the Interface.
-var _ Interface = &kubeDockerClient{}
-
-// There are 2 kinds of docker operations categorized by running time:
-// * Long running operation: The long running operation could run for an arbitrarily long time, and the running time
-// usually depends on some uncontrollable factors. These operations include: PullImage, Logs, StartExec, AttachToContainer.
-// * Non-long running operation: Given the maximum load of the system, the non-long running operation should finish
-// in an expected and usually short time. These include all other operations.
-// kubeDockerClient only applies timeout on non-long running operations.
-const (
- // defaultTimeout is the default timeout of short running docker operations.
- // Value is slightly offset from 2 minutes to make timeouts due to this
- // constant recognizable.
- defaultTimeout = 2*time.Minute - 1*time.Second
-
- // defaultShmSize is the default ShmSize to use (in bytes) if not specified.
- defaultShmSize = int64(1024 * 1024 * 64)
-
- // defaultImagePullingProgressReportInterval is the default interval of image pulling progress reporting.
- defaultImagePullingProgressReportInterval = 10 * time.Second
-)
-
-// newKubeDockerClient creates a kubeDockerClient from an existing docker client. If requestTimeout is 0,
-// defaultTimeout will be applied.
-func newKubeDockerClient(dockerClient *dockerapi.Client, requestTimeout, imagePullProgressDeadline time.Duration) Interface {
- if requestTimeout == 0 {
- requestTimeout = defaultTimeout
- }
-
- k := &kubeDockerClient{
- client: dockerClient,
- timeout: requestTimeout,
- imagePullProgressDeadline: imagePullProgressDeadline,
- }
-
- // Notice that this assumes that docker is running before kubelet is started.
- ctx, cancel := k.getTimeoutContext()
- defer cancel()
- dockerClient.NegotiateAPIVersion(ctx)
-
- return k
-}
-
-func (d *kubeDockerClient) ListContainers(options dockertypes.ContainerListOptions) ([]dockertypes.Container, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- containers, err := d.client.ContainerList(ctx, options)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return containers, nil
-}
-
-func (d *kubeDockerClient) InspectContainer(id string) (*dockertypes.ContainerJSON, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- containerJSON, err := d.client.ContainerInspect(ctx, id)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return &containerJSON, nil
-}
-
-// InspectContainerWithSize is currently only used for Windows container stats
-func (d *kubeDockerClient) InspectContainerWithSize(id string) (*dockertypes.ContainerJSON, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- // Inspects the container including the fields SizeRw and SizeRootFs.
- containerJSON, _, err := d.client.ContainerInspectWithRaw(ctx, id, true)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return &containerJSON, nil
-}
-
-func (d *kubeDockerClient) CreateContainer(opts dockertypes.ContainerCreateConfig) (*dockercontainer.ContainerCreateCreatedBody, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
-	// we provide an explicit default shm size so as not to depend on the docker daemon.
- // TODO: evaluate exposing this as a knob in the API
- if opts.HostConfig != nil && opts.HostConfig.ShmSize <= 0 {
- opts.HostConfig.ShmSize = defaultShmSize
- }
- createResp, err := d.client.ContainerCreate(ctx, opts.Config, opts.HostConfig, opts.NetworkingConfig, nil, opts.Name)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return &createResp, nil
-}
-
-func (d *kubeDockerClient) StartContainer(id string) error {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- err := d.client.ContainerStart(ctx, id, dockertypes.ContainerStartOptions{})
- if ctxErr := contextError(ctx); ctxErr != nil {
- return ctxErr
- }
- return err
-}
-
-// Stopping an already stopped container will not cause an error in dockerapi.
-func (d *kubeDockerClient) StopContainer(id string, timeout time.Duration) error {
- ctx, cancel := d.getCustomTimeoutContext(timeout)
- defer cancel()
- err := d.client.ContainerStop(ctx, id, &timeout)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return ctxErr
- }
- return err
-}
-
-func (d *kubeDockerClient) RemoveContainer(id string, opts dockertypes.ContainerRemoveOptions) error {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- err := d.client.ContainerRemove(ctx, id, opts)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return ctxErr
- }
- return err
-}
-
-func (d *kubeDockerClient) UpdateContainerResources(id string, updateConfig dockercontainer.UpdateConfig) error {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- _, err := d.client.ContainerUpdate(ctx, id, updateConfig)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return ctxErr
- }
- return err
-}
-
-func (d *kubeDockerClient) inspectImageRaw(ref string) (*dockertypes.ImageInspect, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- resp, _, err := d.client.ImageInspectWithRaw(ctx, ref)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- if dockerapi.IsErrNotFound(err) {
- err = ImageNotFoundError{ID: ref}
- }
- return nil, err
- }
-
- return &resp, nil
-}
-
-func (d *kubeDockerClient) InspectImageByID(imageID string) (*dockertypes.ImageInspect, error) {
- resp, err := d.inspectImageRaw(imageID)
- if err != nil {
- return nil, err
- }
-
- if !matchImageIDOnly(*resp, imageID) {
- return nil, ImageNotFoundError{ID: imageID}
- }
- return resp, nil
-}
-
-func (d *kubeDockerClient) InspectImageByRef(imageRef string) (*dockertypes.ImageInspect, error) {
- resp, err := d.inspectImageRaw(imageRef)
- if err != nil {
- return nil, err
- }
-
- if !matchImageTagOrSHA(*resp, imageRef) {
- return nil, ImageNotFoundError{ID: imageRef}
- }
- return resp, nil
-}
-
-func (d *kubeDockerClient) ImageHistory(id string) ([]dockerimagetypes.HistoryResponseItem, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- resp, err := d.client.ImageHistory(ctx, id)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- return resp, err
-}
-
-func (d *kubeDockerClient) ListImages(opts dockertypes.ImageListOptions) ([]dockertypes.ImageSummary, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- images, err := d.client.ImageList(ctx, opts)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return images, nil
-}
-
-func base64EncodeAuth(auth dockertypes.AuthConfig) (string, error) {
- var buf bytes.Buffer
- if err := json.NewEncoder(&buf).Encode(auth); err != nil {
- return "", err
- }
- return base64.URLEncoding.EncodeToString(buf.Bytes()), nil
-}
-
-// progress is a wrapper of dockermessage.JSONMessage with a lock protecting it.
-type progress struct {
- sync.RWMutex
- // message stores the latest docker json message.
- message *dockermessage.JSONMessage
- // timestamp of the latest update.
- timestamp time.Time
-}
-
-func newProgress() *progress {
- return &progress{timestamp: time.Now()}
-}
-
-func (p *progress) set(msg *dockermessage.JSONMessage) {
- p.Lock()
- defer p.Unlock()
- p.message = msg
- p.timestamp = time.Now()
-}
-
-func (p *progress) get() (string, time.Time) {
- p.RLock()
- defer p.RUnlock()
- if p.message == nil {
- return "No progress", p.timestamp
- }
- // The following code is based on JSONMessage.Display
- var prefix string
- if p.message.ID != "" {
- prefix = fmt.Sprintf("%s: ", p.message.ID)
- }
- if p.message.Progress == nil {
- return fmt.Sprintf("%s%s", prefix, p.message.Status), p.timestamp
- }
- return fmt.Sprintf("%s%s %s", prefix, p.message.Status, p.message.Progress.String()), p.timestamp
-}
-
-// progressReporter keeps the newest image pulling progress and periodically reports it.
-type progressReporter struct {
- *progress
- image string
- cancel context.CancelFunc
- stopCh chan struct{}
- imagePullProgressDeadline time.Duration
-}
-
-// newProgressReporter creates a new progressReporter for a specific image with the specified reporting interval
-func newProgressReporter(image string, cancel context.CancelFunc, imagePullProgressDeadline time.Duration) *progressReporter {
- return &progressReporter{
- progress: newProgress(),
- image: image,
- cancel: cancel,
- stopCh: make(chan struct{}),
- imagePullProgressDeadline: imagePullProgressDeadline,
- }
-}
-
-// start starts the progressReporter
-func (p *progressReporter) start() {
- go func() {
- ticker := time.NewTicker(defaultImagePullingProgressReportInterval)
- defer ticker.Stop()
- for {
- // TODO(random-liu): Report as events.
- select {
- case <-ticker.C:
- progress, timestamp := p.progress.get()
- // If there is no progress for p.imagePullProgressDeadline, cancel the operation.
- if time.Since(timestamp) > p.imagePullProgressDeadline {
- klog.ErrorS(nil, "Cancel pulling image because of exceed image pull deadline, record latest progress", "image", p.image, "deadline", p.imagePullProgressDeadline, "progress", progress)
- p.cancel()
- return
- }
- klog.V(2).InfoS("Pulling image", "image", p.image, "progress", progress)
- case <-p.stopCh:
- progress, _ := p.progress.get()
- klog.V(2).InfoS("Stop pulling image", "image", p.image, "progress", progress)
- return
- }
- }
- }()
-}
-
-// stop stops the progressReporter
-func (p *progressReporter) stop() {
- close(p.stopCh)
-}
-
-func (d *kubeDockerClient) PullImage(image string, auth dockertypes.AuthConfig, opts dockertypes.ImagePullOptions) error {
- // RegistryAuth is the base64 encoded credentials for the registry
- base64Auth, err := base64EncodeAuth(auth)
- if err != nil {
- return err
- }
- opts.RegistryAuth = base64Auth
- ctx, cancel := d.getCancelableContext()
- defer cancel()
- resp, err := d.client.ImagePull(ctx, image, opts)
- if err != nil {
- return err
- }
- defer resp.Close()
- reporter := newProgressReporter(image, cancel, d.imagePullProgressDeadline)
- reporter.start()
- defer reporter.stop()
- decoder := json.NewDecoder(resp)
- for {
- var msg dockermessage.JSONMessage
- err := decoder.Decode(&msg)
- if err == io.EOF {
- break
- }
- if err != nil {
- return err
- }
- if msg.Error != nil {
- return msg.Error
- }
- reporter.set(&msg)
- }
- return nil
-}
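PullImage above pairs a cancelable context with a progress watchdog: each decoded JSON pull message bumps a timestamp, and a ticker goroutine cancels the pull when that timestamp stays stale for longer than imagePullProgressDeadline. Below is a compressed sketch of that watchdog with the docker client and klog calls removed and the intervals shortened so it runs in milliseconds; watchdog and its field names are illustrative, not the removed progressReporter.

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// watchdog cancels the pull context when no progress is recorded within deadline.
type watchdog struct {
	mu       sync.Mutex
	last     time.Time
	cancel   context.CancelFunc
	deadline time.Duration
	stopCh   chan struct{}
}

func (w *watchdog) bump() { w.mu.Lock(); w.last = time.Now(); w.mu.Unlock() }

func (w *watchdog) start() {
	go func() {
		ticker := time.NewTicker(20 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				w.mu.Lock()
				stale := time.Since(w.last) > w.deadline
				w.mu.Unlock()
				if stale {
					fmt.Println("no progress, cancelling pull")
					w.cancel()
					return
				}
			case <-w.stopCh: // pull finished normally
				return
			}
		}
	}()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	w := &watchdog{last: time.Now(), cancel: cancel, deadline: 50 * time.Millisecond, stopCh: make(chan struct{})}
	w.start()
	w.bump()     // a real pull bumps this for every decoded JSON progress message
	<-ctx.Done() // unblocks once the watchdog cancels the pull context
}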
-
-func (d *kubeDockerClient) RemoveImage(image string, opts dockertypes.ImageRemoveOptions) ([]dockertypes.ImageDeleteResponseItem, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- resp, err := d.client.ImageRemove(ctx, image, opts)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if dockerapi.IsErrNotFound(err) {
- return nil, ImageNotFoundError{ID: image}
- }
- return resp, err
-}
-
-func (d *kubeDockerClient) Logs(id string, opts dockertypes.ContainerLogsOptions, sopts StreamOptions) error {
- ctx, cancel := d.getCancelableContext()
- defer cancel()
- resp, err := d.client.ContainerLogs(ctx, id, opts)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return ctxErr
- }
- if err != nil {
- return err
- }
- defer resp.Close()
- return d.redirectResponseToOutputStream(sopts.RawTerminal, sopts.OutputStream, sopts.ErrorStream, resp)
-}
-
-func (d *kubeDockerClient) Version() (*dockertypes.Version, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- resp, err := d.client.ServerVersion(ctx)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return &resp, nil
-}
-
-func (d *kubeDockerClient) Info() (*dockertypes.Info, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- resp, err := d.client.Info(ctx)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return &resp, nil
-}
-
-// TODO(random-liu): Add unit test for exec and attach functions, just like what go-dockerclient did.
-func (d *kubeDockerClient) CreateExec(id string, opts dockertypes.ExecConfig) (*dockertypes.IDResponse, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- resp, err := d.client.ContainerExecCreate(ctx, id, opts)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return &resp, nil
-}
-
-func (d *kubeDockerClient) StartExec(startExec string, opts dockertypes.ExecStartCheck, sopts StreamOptions) error {
- ctx, cancel := d.getCancelableContext()
- defer cancel()
- if opts.Detach {
- err := d.client.ContainerExecStart(ctx, startExec, opts)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return ctxErr
- }
- return err
- }
- resp, err := d.client.ContainerExecAttach(ctx, startExec, dockertypes.ExecStartCheck{
- Detach: opts.Detach,
- Tty: opts.Tty,
- })
- if ctxErr := contextError(ctx); ctxErr != nil {
- return ctxErr
- }
- if err != nil {
- return err
- }
- defer resp.Close()
-
- if sopts.ExecStarted != nil {
- // Send a message to the channel indicating that the exec has started. This is needed so
- // interactive execs can handle resizing correctly - the request to resize the TTY has to happen
- // after the call to d.client.ContainerExecAttach, and because d.holdHijackedConnection below
- // blocks, we use sopts.ExecStarted to signal the caller that it's ok to resize.
- sopts.ExecStarted <- struct{}{}
- }
-
- return d.holdHijackedConnection(sopts.RawTerminal || opts.Tty, sopts.InputStream, sopts.OutputStream, sopts.ErrorStream, resp)
-}
-
-func (d *kubeDockerClient) InspectExec(id string) (*dockertypes.ContainerExecInspect, error) {
- ctx, cancel := d.getTimeoutContext()
- defer cancel()
- resp, err := d.client.ContainerExecInspect(ctx, id)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return nil, ctxErr
- }
- if err != nil {
- return nil, err
- }
- return &resp, nil
-}
-
-func (d *kubeDockerClient) AttachToContainer(id string, opts dockertypes.ContainerAttachOptions, sopts StreamOptions) error {
- ctx, cancel := d.getCancelableContext()
- defer cancel()
- resp, err := d.client.ContainerAttach(ctx, id, opts)
- if ctxErr := contextError(ctx); ctxErr != nil {
- return ctxErr
- }
- if err != nil {
- return err
- }
- defer resp.Close()
- return d.holdHijackedConnection(sopts.RawTerminal, sopts.InputStream, sopts.OutputStream, sopts.ErrorStream, resp)
-}
-
-func (d *kubeDockerClient) ResizeExecTTY(id string, height, width uint) error {
- ctx, cancel := d.getCancelableContext()
- defer cancel()
- return d.client.ContainerExecResize(ctx, id, dockertypes.ResizeOptions{
- Height: height,
- Width: width,
- })
-}
-
-func (d *kubeDockerClient) ResizeContainerTTY(id string, height, width uint) error {
- ctx, cancel := d.getCancelableContext()
- defer cancel()
- return d.client.ContainerResize(ctx, id, dockertypes.ResizeOptions{
- Height: height,
- Width: width,
- })
-}
-
-// GetContainerStats is currently only used for Windows container stats
-func (d *kubeDockerClient) GetContainerStats(id string) (*dockertypes.StatsJSON, error) {
- ctx, cancel := d.getCancelableContext()
- defer cancel()
-
- response, err := d.client.ContainerStats(ctx, id, false)
- if err != nil {
- return nil, err
- }
-
- dec := json.NewDecoder(response.Body)
- var stats dockertypes.StatsJSON
- err = dec.Decode(&stats)
- if err != nil {
- return nil, err
- }
-
- defer response.Body.Close()
- return &stats, nil
-}
-
-// redirectResponseToOutputStream redirects the response stream to stdout and stderr. When tty is true, all streams are
-// redirected to stdout only.
-func (d *kubeDockerClient) redirectResponseToOutputStream(tty bool, outputStream, errorStream io.Writer, resp io.Reader) error {
- if outputStream == nil {
- outputStream = ioutil.Discard
- }
- if errorStream == nil {
- errorStream = ioutil.Discard
- }
- var err error
- if tty {
- _, err = io.Copy(outputStream, resp)
- } else {
- _, err = dockerstdcopy.StdCopy(outputStream, errorStream, resp)
- }
- return err
-}
-
-// holdHijackedConnection holds the HijackedResponse, redirects the inputStream to the connection, and redirects the response
-// stream to stdout and stderr. NOTE: If needed, we could also add a context to this function.
-func (d *kubeDockerClient) holdHijackedConnection(tty bool, inputStream io.Reader, outputStream, errorStream io.Writer, resp dockertypes.HijackedResponse) error {
- receiveStdout := make(chan error)
- if outputStream != nil || errorStream != nil {
- go func() {
- receiveStdout <- d.redirectResponseToOutputStream(tty, outputStream, errorStream, resp.Reader)
- }()
- }
-
- stdinDone := make(chan struct{})
- go func() {
- if inputStream != nil {
- io.Copy(resp.Conn, inputStream)
- }
- resp.CloseWrite()
- close(stdinDone)
- }()
-
- select {
- case err := <-receiveStdout:
- return err
- case <-stdinDone:
- if outputStream != nil || errorStream != nil {
- return <-receiveStdout
- }
- }
- return nil
-}
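holdHijackedConnection above is the stream pump at the heart of exec and attach: one goroutine copies the hijacked connection's output to stdout/stderr, another copies stdin into the connection and then closes the write side, and the function returns once the output copy finishes. Below is a stdlib-only sketch of the same select pattern, with the hijacked response reduced to an io.ReadWriter; holdSketch and the loopback "connection" are assumptions for illustration.

package main

import (
	"fmt"
	"io"
	"strings"
)

// holdSketch pumps stdin into conn and conn's output into out concurrently,
// returning when the output copy finishes (mirroring the removed helper when
// an output stream is present). CloseWrite is omitted because conn here is a
// plain ReadWriter rather than a hijacked connection.
func holdSketch(in io.Reader, out io.Writer, conn io.ReadWriter) error {
	outDone := make(chan error, 1)
	go func() { _, err := io.Copy(out, conn); outDone <- err }()

	stdinDone := make(chan struct{})
	go func() {
		if in != nil {
			io.Copy(conn, in)
		}
		close(stdinDone)
	}()

	select {
	case err := <-outDone:
		return err
	case <-stdinDone:
		return <-outDone
	}
}

func main() {
	// A loopback "connection" that replays a fixed output and discards input
	// is enough to exercise both copy loops.
	var sb strings.Builder
	conn := struct {
		io.Reader
		io.Writer
	}{strings.NewReader("hello from the container\n"), io.Discard}
	err := holdSketch(strings.NewReader(""), &sb, conn)
	fmt.Printf("%q %v\n", sb.String(), err)
}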
-
-// getCancelableContext returns a new cancelable context. For long running requests without a timeout, we use a cancelable
-// context to avoid potential resource leaks, although the current implementation shouldn't leak resources.
-func (d *kubeDockerClient) getCancelableContext() (context.Context, context.CancelFunc) {
- return context.WithCancel(context.Background())
-}
-
-// getTimeoutContext returns a new context with default request timeout
-func (d *kubeDockerClient) getTimeoutContext() (context.Context, context.CancelFunc) {
- return context.WithTimeout(context.Background(), d.timeout)
-}
-
-// getCustomTimeoutContext returns a new context with a specific request timeout
-func (d *kubeDockerClient) getCustomTimeoutContext(timeout time.Duration) (context.Context, context.CancelFunc) {
- // Pick the larger of the two
- if d.timeout > timeout {
- timeout = d.timeout
- }
- return context.WithTimeout(context.Background(), timeout)
-}
-
-// contextError checks the context, and returns an error if the context has timed out.
-func contextError(ctx context.Context) error {
- if ctx.Err() == context.DeadlineExceeded {
- return operationTimeout{err: ctx.Err()}
- }
- return ctx.Err()
-}
-
-// StreamOptions are the options used to configure the stream redirection
-type StreamOptions struct {
- RawTerminal bool
- InputStream io.Reader
- OutputStream io.Writer
- ErrorStream io.Writer
- ExecStarted chan struct{}
-}
-
-// operationTimeout is the error returned when a docker operation times out.
-type operationTimeout struct {
- err error
-}
-
-func (e operationTimeout) Error() string {
- return fmt.Sprintf("operation timeout: %v", e.err)
-}
-
-// containerNotFoundErrorRegx is the regexp for the container-not-found error message.
-var containerNotFoundErrorRegx = regexp.MustCompile(`No such container: [0-9a-z]+`)
-
-// IsContainerNotFoundError checks whether the error is a container-not-found error.
-func IsContainerNotFoundError(err error) bool {
- return containerNotFoundErrorRegx.MatchString(err.Error())
-}
-
-// ImageNotFoundError is the error returned by InspectImage when the image is not found.
-// It is exposed so that dockershim tests can inject this error.
-type ImageNotFoundError struct {
- ID string
-}
-
-func (e ImageNotFoundError) Error() string {
- return fmt.Sprintf("no such image: %q", e.ID)
-}
-
-// IsImageNotFoundError checks whether the error is an image-not-found error. This is exposed
-// so that it can be shared with dockershim.
-func IsImageNotFoundError(err error) bool {
- _, ok := err.(ImageNotFoundError)
- return ok
-}
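The timeout plumbing at the end of this file is the part most callers interacted with: short-running calls get a context with defaultTimeout, and contextError translates a deadline-exceeded context into operationTimeout so the instrumented layer can count it as a timeout rather than a generic error. Below is a runnable sketch of that flow; slowCall stands in for a docker API call and is an assumption.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// operationTimeout mirrors the removed error type: it marks that a call hit
// its context deadline so callers can count it separately.
type operationTimeout struct{ err error }

func (e operationTimeout) Error() string { return fmt.Sprintf("operation timeout: %v", e.err) }

// contextError converts a deadline-exceeded context error into operationTimeout.
func contextError(ctx context.Context) error {
	if errors.Is(ctx.Err(), context.DeadlineExceeded) {
		return operationTimeout{err: ctx.Err()}
	}
	return ctx.Err()
}

// slowCall stands in for a docker API call that outlives the request timeout.
func slowCall(ctx context.Context) error {
	select {
	case <-time.After(200 * time.Millisecond):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	_ = slowCall(ctx)
	if err := contextError(ctx); err != nil {
		fmt.Println(err) // operation timeout: context deadline exceeded
	}
}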
diff --git a/pkg/kubelet/dockershim/libdocker/kube_docker_client_test.go b/pkg/kubelet/dockershim/libdocker/kube_docker_client_test.go
deleted file mode 100644
index 42dab7a7d84..00000000000
--- a/pkg/kubelet/dockershim/libdocker/kube_docker_client_test.go
+++ /dev/null
@@ -1,36 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package libdocker
-
-import (
- "fmt"
- "testing"
-
- "github.com/stretchr/testify/assert"
-)
-
-func TestIsContainerNotFoundError(t *testing.T) {
- // Expected error message from docker.
- containerNotFoundError := fmt.Errorf("Error response from daemon: No such container: 96e914f31579e44fe49b239266385330a9b2125abeb9254badd9fca74580c95a")
- otherError := fmt.Errorf("Error response from daemon: Other errors")
-
- assert.True(t, IsContainerNotFoundError(containerNotFoundError))
- assert.False(t, IsContainerNotFoundError(otherError))
-}
diff --git a/pkg/kubelet/dockershim/libdocker/testing/mock_client.go b/pkg/kubelet/dockershim/libdocker/testing/mock_client.go
deleted file mode 100644
index 72e7aba7fc6..00000000000
--- a/pkg/kubelet/dockershim/libdocker/testing/mock_client.go
+++ /dev/null
@@ -1,407 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-// Code generated by MockGen. DO NOT EDIT.
-// Source: client.go
-
-// Package testing is a generated GoMock package.
-package testing
-
-import (
- types "github.com/docker/docker/api/types"
- container "github.com/docker/docker/api/types/container"
- image "github.com/docker/docker/api/types/image"
- gomock "github.com/golang/mock/gomock"
- libdocker "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker"
- reflect "reflect"
- time "time"
-)
-
-// MockInterface is a mock of Interface interface
-type MockInterface struct {
- ctrl *gomock.Controller
- recorder *MockInterfaceMockRecorder
-}
-
-// MockInterfaceMockRecorder is the mock recorder for MockInterface
-type MockInterfaceMockRecorder struct {
- mock *MockInterface
-}
-
-// NewMockInterface creates a new mock instance
-func NewMockInterface(ctrl *gomock.Controller) *MockInterface {
- mock := &MockInterface{ctrl: ctrl}
- mock.recorder = &MockInterfaceMockRecorder{mock}
- return mock
-}
-
-// EXPECT returns an object that allows the caller to indicate expected use
-func (m *MockInterface) EXPECT() *MockInterfaceMockRecorder {
- return m.recorder
-}
-
-// ListContainers mocks base method
-func (m *MockInterface) ListContainers(options types.ContainerListOptions) ([]types.Container, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "ListContainers", options)
- ret0, _ := ret[0].([]types.Container)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// ListContainers indicates an expected call of ListContainers
-func (mr *MockInterfaceMockRecorder) ListContainers(options interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListContainers", reflect.TypeOf((*MockInterface)(nil).ListContainers), options)
-}
-
-// InspectContainer mocks base method
-func (m *MockInterface) InspectContainer(id string) (*types.ContainerJSON, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "InspectContainer", id)
- ret0, _ := ret[0].(*types.ContainerJSON)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// InspectContainer indicates an expected call of InspectContainer
-func (mr *MockInterfaceMockRecorder) InspectContainer(id interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InspectContainer", reflect.TypeOf((*MockInterface)(nil).InspectContainer), id)
-}
-
-// InspectContainerWithSize mocks base method
-func (m *MockInterface) InspectContainerWithSize(id string) (*types.ContainerJSON, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "InspectContainerWithSize", id)
- ret0, _ := ret[0].(*types.ContainerJSON)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// InspectContainerWithSize indicates an expected call of InspectContainerWithSize
-func (mr *MockInterfaceMockRecorder) InspectContainerWithSize(id interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InspectContainerWithSize", reflect.TypeOf((*MockInterface)(nil).InspectContainerWithSize), id)
-}
-
-// CreateContainer mocks base method
-func (m *MockInterface) CreateContainer(arg0 types.ContainerCreateConfig) (*container.ContainerCreateCreatedBody, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "CreateContainer", arg0)
- ret0, _ := ret[0].(*container.ContainerCreateCreatedBody)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// CreateContainer indicates an expected call of CreateContainer
-func (mr *MockInterfaceMockRecorder) CreateContainer(arg0 interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateContainer", reflect.TypeOf((*MockInterface)(nil).CreateContainer), arg0)
-}
-
-// StartContainer mocks base method
-func (m *MockInterface) StartContainer(id string) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "StartContainer", id)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// StartContainer indicates an expected call of StartContainer
-func (mr *MockInterfaceMockRecorder) StartContainer(id interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StartContainer", reflect.TypeOf((*MockInterface)(nil).StartContainer), id)
-}
-
-// StopContainer mocks base method
-func (m *MockInterface) StopContainer(id string, timeout time.Duration) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "StopContainer", id, timeout)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// StopContainer indicates an expected call of StopContainer
-func (mr *MockInterfaceMockRecorder) StopContainer(id, timeout interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StopContainer", reflect.TypeOf((*MockInterface)(nil).StopContainer), id, timeout)
-}
-
-// UpdateContainerResources mocks base method
-func (m *MockInterface) UpdateContainerResources(id string, updateConfig container.UpdateConfig) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "UpdateContainerResources", id, updateConfig)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// UpdateContainerResources indicates an expected call of UpdateContainerResources
-func (mr *MockInterfaceMockRecorder) UpdateContainerResources(id, updateConfig interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateContainerResources", reflect.TypeOf((*MockInterface)(nil).UpdateContainerResources), id, updateConfig)
-}
-
-// RemoveContainer mocks base method
-func (m *MockInterface) RemoveContainer(id string, opts types.ContainerRemoveOptions) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "RemoveContainer", id, opts)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// RemoveContainer indicates an expected call of RemoveContainer
-func (mr *MockInterfaceMockRecorder) RemoveContainer(id, opts interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RemoveContainer", reflect.TypeOf((*MockInterface)(nil).RemoveContainer), id, opts)
-}
-
-// InspectImageByRef mocks base method
-func (m *MockInterface) InspectImageByRef(imageRef string) (*types.ImageInspect, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "InspectImageByRef", imageRef)
- ret0, _ := ret[0].(*types.ImageInspect)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// InspectImageByRef indicates an expected call of InspectImageByRef
-func (mr *MockInterfaceMockRecorder) InspectImageByRef(imageRef interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InspectImageByRef", reflect.TypeOf((*MockInterface)(nil).InspectImageByRef), imageRef)
-}
-
-// InspectImageByID mocks base method
-func (m *MockInterface) InspectImageByID(imageID string) (*types.ImageInspect, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "InspectImageByID", imageID)
- ret0, _ := ret[0].(*types.ImageInspect)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// InspectImageByID indicates an expected call of InspectImageByID
-func (mr *MockInterfaceMockRecorder) InspectImageByID(imageID interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InspectImageByID", reflect.TypeOf((*MockInterface)(nil).InspectImageByID), imageID)
-}
-
-// ListImages mocks base method
-func (m *MockInterface) ListImages(opts types.ImageListOptions) ([]types.ImageSummary, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "ListImages", opts)
- ret0, _ := ret[0].([]types.ImageSummary)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// ListImages indicates an expected call of ListImages
-func (mr *MockInterfaceMockRecorder) ListImages(opts interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListImages", reflect.TypeOf((*MockInterface)(nil).ListImages), opts)
-}
-
-// PullImage mocks base method
-func (m *MockInterface) PullImage(image string, auth types.AuthConfig, opts types.ImagePullOptions) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "PullImage", image, auth, opts)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// PullImage indicates an expected call of PullImage
-func (mr *MockInterfaceMockRecorder) PullImage(image, auth, opts interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PullImage", reflect.TypeOf((*MockInterface)(nil).PullImage), image, auth, opts)
-}
-
-// RemoveImage mocks base method
-func (m *MockInterface) RemoveImage(image string, opts types.ImageRemoveOptions) ([]types.ImageDeleteResponseItem, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "RemoveImage", image, opts)
- ret0, _ := ret[0].([]types.ImageDeleteResponseItem)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// RemoveImage indicates an expected call of RemoveImage
-func (mr *MockInterfaceMockRecorder) RemoveImage(image, opts interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RemoveImage", reflect.TypeOf((*MockInterface)(nil).RemoveImage), image, opts)
-}
-
-// ImageHistory mocks base method
-func (m *MockInterface) ImageHistory(id string) ([]image.HistoryResponseItem, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "ImageHistory", id)
- ret0, _ := ret[0].([]image.HistoryResponseItem)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// ImageHistory indicates an expected call of ImageHistory
-func (mr *MockInterfaceMockRecorder) ImageHistory(id interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ImageHistory", reflect.TypeOf((*MockInterface)(nil).ImageHistory), id)
-}
-
-// Logs mocks base method
-func (m *MockInterface) Logs(arg0 string, arg1 types.ContainerLogsOptions, arg2 libdocker.StreamOptions) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "Logs", arg0, arg1, arg2)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// Logs indicates an expected call of Logs
-func (mr *MockInterfaceMockRecorder) Logs(arg0, arg1, arg2 interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Logs", reflect.TypeOf((*MockInterface)(nil).Logs), arg0, arg1, arg2)
-}
-
-// Version mocks base method
-func (m *MockInterface) Version() (*types.Version, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "Version")
- ret0, _ := ret[0].(*types.Version)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// Version indicates an expected call of Version
-func (mr *MockInterfaceMockRecorder) Version() *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Version", reflect.TypeOf((*MockInterface)(nil).Version))
-}
-
-// Info mocks base method
-func (m *MockInterface) Info() (*types.Info, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "Info")
- ret0, _ := ret[0].(*types.Info)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// Info indicates an expected call of Info
-func (mr *MockInterfaceMockRecorder) Info() *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Info", reflect.TypeOf((*MockInterface)(nil).Info))
-}
-
-// CreateExec mocks base method
-func (m *MockInterface) CreateExec(arg0 string, arg1 types.ExecConfig) (*types.IDResponse, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "CreateExec", arg0, arg1)
- ret0, _ := ret[0].(*types.IDResponse)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// CreateExec indicates an expected call of CreateExec
-func (mr *MockInterfaceMockRecorder) CreateExec(arg0, arg1 interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateExec", reflect.TypeOf((*MockInterface)(nil).CreateExec), arg0, arg1)
-}
-
-// StartExec mocks base method
-func (m *MockInterface) StartExec(arg0 string, arg1 types.ExecStartCheck, arg2 libdocker.StreamOptions) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "StartExec", arg0, arg1, arg2)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// StartExec indicates an expected call of StartExec
-func (mr *MockInterfaceMockRecorder) StartExec(arg0, arg1, arg2 interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StartExec", reflect.TypeOf((*MockInterface)(nil).StartExec), arg0, arg1, arg2)
-}
-
-// InspectExec mocks base method
-func (m *MockInterface) InspectExec(id string) (*types.ContainerExecInspect, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "InspectExec", id)
- ret0, _ := ret[0].(*types.ContainerExecInspect)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// InspectExec indicates an expected call of InspectExec
-func (mr *MockInterfaceMockRecorder) InspectExec(id interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InspectExec", reflect.TypeOf((*MockInterface)(nil).InspectExec), id)
-}
-
-// AttachToContainer mocks base method
-func (m *MockInterface) AttachToContainer(arg0 string, arg1 types.ContainerAttachOptions, arg2 libdocker.StreamOptions) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "AttachToContainer", arg0, arg1, arg2)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// AttachToContainer indicates an expected call of AttachToContainer
-func (mr *MockInterfaceMockRecorder) AttachToContainer(arg0, arg1, arg2 interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AttachToContainer", reflect.TypeOf((*MockInterface)(nil).AttachToContainer), arg0, arg1, arg2)
-}
-
-// ResizeContainerTTY mocks base method
-func (m *MockInterface) ResizeContainerTTY(id string, height, width uint) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "ResizeContainerTTY", id, height, width)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// ResizeContainerTTY indicates an expected call of ResizeContainerTTY
-func (mr *MockInterfaceMockRecorder) ResizeContainerTTY(id, height, width interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ResizeContainerTTY", reflect.TypeOf((*MockInterface)(nil).ResizeContainerTTY), id, height, width)
-}
-
-// ResizeExecTTY mocks base method
-func (m *MockInterface) ResizeExecTTY(id string, height, width uint) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "ResizeExecTTY", id, height, width)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// ResizeExecTTY indicates an expected call of ResizeExecTTY
-func (mr *MockInterfaceMockRecorder) ResizeExecTTY(id, height, width interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ResizeExecTTY", reflect.TypeOf((*MockInterface)(nil).ResizeExecTTY), id, height, width)
-}
-
-// GetContainerStats mocks base method
-func (m *MockInterface) GetContainerStats(id string) (*types.StatsJSON, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "GetContainerStats", id)
- ret0, _ := ret[0].(*types.StatsJSON)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// GetContainerStats indicates an expected call of GetContainerStats
-func (mr *MockInterfaceMockRecorder) GetContainerStats(id interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetContainerStats", reflect.TypeOf((*MockInterface)(nil).GetContainerStats), id)
-}
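Since this mock was generated by MockGen from client.go, tests used it through the standard gomock flow: create a controller, set expectations on the recorder, then call through the Interface. Below is a sketch of that usage; it imports the removed packages, so it only builds against a tree that still vendors them, and the test name and expected version value are illustrative.

package libdocker_test

import (
	"testing"

	dockertypes "github.com/docker/docker/api/types"
	"github.com/golang/mock/gomock"

	mockclient "k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker/testing"
)

// TestVersionSketch shows the usual gomock flow against the generated mock:
// set an expectation, call the method, and let the controller verify it.
func TestVersionSketch(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	client := mockclient.NewMockInterface(ctrl)
	client.EXPECT().Version().Return(&dockertypes.Version{Version: "20.10.0"}, nil)

	v, err := client.Version()
	if err != nil || v.Version != "20.10.0" {
		t.Fatalf("unexpected result: %v, %v", v, err)
	}
}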
diff --git a/pkg/kubelet/dockershim/metrics/metrics.go b/pkg/kubelet/dockershim/metrics/metrics.go
deleted file mode 100644
index be8d65b83ea..00000000000
--- a/pkg/kubelet/dockershim/metrics/metrics.go
+++ /dev/null
@@ -1,105 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package metrics
-
-import (
- "sync"
- "time"
-
- "k8s.io/component-base/metrics"
- "k8s.io/component-base/metrics/legacyregistry"
-)
-
-const (
- // DockerOperationsKey is the key for docker operation metrics.
- DockerOperationsKey = "docker_operations_total"
- // DockerOperationsLatencyKey is the key for the operation latency metrics.
- DockerOperationsLatencyKey = "docker_operations_duration_seconds"
- // DockerOperationsErrorsKey is the key for the operation error metrics.
- DockerOperationsErrorsKey = "docker_operations_errors_total"
- // DockerOperationsTimeoutKey is the key for the operation timeout metrics.
- DockerOperationsTimeoutKey = "docker_operations_timeout_total"
-
- // Keep the "kubelet" subsystem for backward compatibility.
- kubeletSubsystem = "kubelet"
-)
-
-var (
- // DockerOperationsLatency collects operation latency numbers by operation
- // type.
- DockerOperationsLatency = metrics.NewHistogramVec(
- &metrics.HistogramOpts{
- Subsystem: kubeletSubsystem,
- Name: DockerOperationsLatencyKey,
- Help: "Latency in seconds of Docker operations. Broken down by operation type.",
- Buckets: metrics.DefBuckets,
- StabilityLevel: metrics.ALPHA,
- },
- []string{"operation_type"},
- )
- // DockerOperations collects operation counts by operation type.
- DockerOperations = metrics.NewCounterVec(
- &metrics.CounterOpts{
- Subsystem: kubeletSubsystem,
- Name: DockerOperationsKey,
- Help: "Cumulative number of Docker operations by operation type.",
- StabilityLevel: metrics.ALPHA,
- },
- []string{"operation_type"},
- )
- // DockerOperationsErrors collects operation errors by operation
- // type.
- DockerOperationsErrors = metrics.NewCounterVec(
- &metrics.CounterOpts{
- Subsystem: kubeletSubsystem,
- Name: DockerOperationsErrorsKey,
- Help: "Cumulative number of Docker operation errors by operation type.",
- StabilityLevel: metrics.ALPHA,
- },
- []string{"operation_type"},
- )
- // DockerOperationsTimeout collects operation timeouts by operation type.
- DockerOperationsTimeout = metrics.NewCounterVec(
- &metrics.CounterOpts{
- Subsystem: kubeletSubsystem,
- Name: DockerOperationsTimeoutKey,
- Help: "Cumulative number of Docker operation timeout by operation type.",
- StabilityLevel: metrics.ALPHA,
- },
- []string{"operation_type"},
- )
-)
-
-var registerMetrics sync.Once
-
-// Register all metrics.
-func Register() {
- registerMetrics.Do(func() {
- legacyregistry.MustRegister(DockerOperationsLatency)
- legacyregistry.MustRegister(DockerOperations)
- legacyregistry.MustRegister(DockerOperationsErrors)
- legacyregistry.MustRegister(DockerOperationsTimeout)
- })
-}
-
-// SinceInSeconds gets the time since the specified start in seconds.
-func SinceInSeconds(start time.Time) float64 {
- return time.Since(start).Seconds()
-}
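Note: the metrics above were consumed by dockershim's instrumented Docker client. A minimal usage sketch follows, assuming the package path of the deleted file; the recordOperation helper and the "list_images" label are illustrative, not taken from the removed code.

// Illustrative sketch only: how an instrumented Docker call would record
// the dockershim operation metrics defined above.
package main

import (
	"time"

	"k8s.io/kubernetes/pkg/kubelet/dockershim/metrics"
)

// recordOperation bumps the operation counter and observes its latency.
func recordOperation(operation string, start time.Time) {
	metrics.DockerOperations.WithLabelValues(operation).Inc()
	metrics.DockerOperationsLatency.WithLabelValues(operation).Observe(metrics.SinceInSeconds(start))
}

func main() {
	metrics.Register() // register once with the legacy registry
	start := time.Now()
	// ... issue the actual Docker API call here ...
	recordOperation("list_images", start)
}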
diff --git a/pkg/kubelet/dockershim/naming.go b/pkg/kubelet/dockershim/naming.go
deleted file mode 100644
index faf35b6bf68..00000000000
--- a/pkg/kubelet/dockershim/naming.go
+++ /dev/null
@@ -1,152 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
- "math/rand"
- "strconv"
- "strings"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/kubernetes/pkg/kubelet/leaky"
-)
-
-// Container "names" are implementation details that do not concern
-// kubelet/CRI. This CRI shim uses names to fulfill the CRI requirement to
-// make sandbox/container creation idempotent. CRI states that there can
-// only exist one sandbox/container with the given metadata. To enforce this,
-// this shim constructs a name using the fields in the metadata so that
-// docker will reject the creation request if the name already exists.
-//
-// Note that changes to naming will likely break backward compatibility.
-// Code must be added to ensure the shim knows how to recognize and extract
-// information from the older containers.
-//
-// TODO: Add code to handle backward compatibility, i.e., making sure we can
-// recognize older containers and extract information from their names if
-// necessary.
-
-const (
- // kubePrefix is used to identify the containers/sandboxes on the node managed by kubelet
- kubePrefix = "k8s"
- // sandboxContainerName is a string included in the docker container name so
- // that users can easily identify the sandboxes.
- sandboxContainerName = leaky.PodInfraContainerName
- // Delimiter used to construct docker container names.
- nameDelimiter = "_"
- // DockerImageIDPrefix is the prefix of image id in container status.
- DockerImageIDPrefix = "docker://"
- // DockerPullableImageIDPrefix is the prefix of pullable image id in container status.
- DockerPullableImageIDPrefix = "docker-pullable://"
-)
-
-func makeSandboxName(s *runtimeapi.PodSandboxConfig) string {
- return strings.Join([]string{
- kubePrefix, // 0
- sandboxContainerName, // 1
- s.Metadata.Name, // 2
- s.Metadata.Namespace, // 3
- s.Metadata.Uid, // 4
- fmt.Sprintf("%d", s.Metadata.Attempt), // 5
- }, nameDelimiter)
-}
-
-func makeContainerName(s *runtimeapi.PodSandboxConfig, c *runtimeapi.ContainerConfig) string {
- return strings.Join([]string{
- kubePrefix, // 0
- c.Metadata.Name, // 1: container name
- s.Metadata.Name, // 2: sandbox name
- s.Metadata.Namespace, // 3: sandbox namespace
- s.Metadata.Uid, // 4: sandbox uid
- fmt.Sprintf("%d", c.Metadata.Attempt), // 5
- }, nameDelimiter)
-}
-
-// randomizeName randomizes the container name. This should only be used when we hit the
-// docker container name conflict bug.
-func randomizeName(name string) string {
- return strings.Join([]string{
- name,
- fmt.Sprintf("%08x", rand.Uint32()),
- }, nameDelimiter)
-}
-
-func parseUint32(s string) (uint32, error) {
- n, err := strconv.ParseUint(s, 10, 32)
- if err != nil {
- return 0, err
- }
- return uint32(n), nil
-}
-
-// TODO: Evaluate whether we should rely on labels completely.
-func parseSandboxName(name string) (*runtimeapi.PodSandboxMetadata, error) {
- // Docker adds a "/" prefix to names. so trim it.
- name = strings.TrimPrefix(name, "/")
-
- parts := strings.Split(name, nameDelimiter)
- // Tolerate the random suffix.
- // TODO(random-liu): Remove 7 field case when docker 1.11 is deprecated.
- if len(parts) != 6 && len(parts) != 7 {
- return nil, fmt.Errorf("failed to parse the sandbox name: %q", name)
- }
- if parts[0] != kubePrefix {
- return nil, fmt.Errorf("container is not managed by kubernetes: %q", name)
- }
-
- attempt, err := parseUint32(parts[5])
- if err != nil {
- return nil, fmt.Errorf("failed to parse the sandbox name %q: %v", name, err)
- }
-
- return &runtimeapi.PodSandboxMetadata{
- Name: parts[2],
- Namespace: parts[3],
- Uid: parts[4],
- Attempt: attempt,
- }, nil
-}
-
-// TODO: Evaluate whether we should rely on labels completely.
-func parseContainerName(name string) (*runtimeapi.ContainerMetadata, error) {
- // Docker adds a "/" prefix to names. so trim it.
- name = strings.TrimPrefix(name, "/")
-
- parts := strings.Split(name, nameDelimiter)
- // Tolerate the random suffix.
- // TODO(random-liu): Remove 7 field case when docker 1.11 is deprecated.
- if len(parts) != 6 && len(parts) != 7 {
- return nil, fmt.Errorf("failed to parse the container name: %q", name)
- }
- if parts[0] != kubePrefix {
- return nil, fmt.Errorf("container is not managed by kubernetes: %q", name)
- }
-
- attempt, err := parseUint32(parts[5])
- if err != nil {
- return nil, fmt.Errorf("failed to parse the container name %q: %v", name, err)
- }
-
- return &runtimeapi.ContainerMetadata{
- Name: parts[1],
- Attempt: attempt,
- }, nil
-}
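For illustration, the scheme above turns a sandbox with metadata name "foo", namespace "bar", UID "iamuid" and attempt 3 into the docker name k8s_POD_foo_bar_iamuid_3; parseSandboxName then splits on "_" and reads the name, namespace, UID and attempt back out of fields 2 through 5. This round trip is exactly what naming_test.go below exercises.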
diff --git a/pkg/kubelet/dockershim/naming_test.go b/pkg/kubelet/dockershim/naming_test.go
deleted file mode 100644
index 16c3f6b7ad0..00000000000
--- a/pkg/kubelet/dockershim/naming_test.go
+++ /dev/null
@@ -1,109 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "testing"
-
- "github.com/stretchr/testify/assert"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-func TestSandboxNameRoundTrip(t *testing.T) {
- config := makeSandboxConfig("foo", "bar", "iamuid", 3)
- actualName := makeSandboxName(config)
- assert.Equal(t, "k8s_POD_foo_bar_iamuid_3", actualName)
-
- actualMetadata, err := parseSandboxName(actualName)
- assert.NoError(t, err)
- assert.Equal(t, config.Metadata, actualMetadata)
-}
-
-func TestNonParsableSandboxNames(t *testing.T) {
- // All names must start with the kubernetes prefix "k8s".
- _, err := parseSandboxName("owner_POD_foo_bar_iamuid_4")
- assert.Error(t, err)
-
- // All names must contain exactly 6 parts.
- _, err = parseSandboxName("k8s_POD_dummy_foo_bar_iamuid_4")
- assert.Error(t, err)
- _, err = parseSandboxName("k8s_foo_bar_iamuid_4")
- assert.Error(t, err)
-
- // Should be able to parse attempt number.
- _, err = parseSandboxName("k8s_POD_foo_bar_iamuid_notanumber")
- assert.Error(t, err)
-}
-
-func TestContainerNameRoundTrip(t *testing.T) {
- sConfig := makeSandboxConfig("foo", "bar", "iamuid", 3)
- name, attempt := "pause", uint32(5)
- config := &runtimeapi.ContainerConfig{
- Metadata: &runtimeapi.ContainerMetadata{
- Name: name,
- Attempt: attempt,
- },
- }
- actualName := makeContainerName(sConfig, config)
- assert.Equal(t, "k8s_pause_foo_bar_iamuid_5", actualName)
-
- actualMetadata, err := parseContainerName(actualName)
- assert.NoError(t, err)
- assert.Equal(t, config.Metadata, actualMetadata)
-}
-
-func TestNonParsableContainerNames(t *testing.T) {
- // All names must start with the kubernetes prefix "k8s".
- _, err := parseContainerName("owner_frontend_foo_bar_iamuid_4")
- assert.Error(t, err)
-
- // All names must contain exactly 6 parts.
- _, err = parseContainerName("k8s_frontend_dummy_foo_bar_iamuid_4")
- assert.Error(t, err)
- _, err = parseContainerName("k8s_foo_bar_iamuid_4")
- assert.Error(t, err)
-
- // Should be able to parse attempt number.
- _, err = parseContainerName("k8s_frontend_foo_bar_iamuid_notanumber")
- assert.Error(t, err)
-}
-
-func TestParseRandomizedNames(t *testing.T) {
- // Test randomized sandbox name.
- sConfig := makeSandboxConfig("foo", "bar", "iamuid", 3)
- sActualName := randomizeName(makeSandboxName(sConfig))
- sActualMetadata, err := parseSandboxName(sActualName)
- assert.NoError(t, err)
- assert.Equal(t, sConfig.Metadata, sActualMetadata)
-
- // Test randomized container name.
- name, attempt := "pause", uint32(5)
- config := &runtimeapi.ContainerConfig{
- Metadata: &runtimeapi.ContainerMetadata{
- Name: name,
- Attempt: attempt,
- },
- }
- actualName := randomizeName(makeContainerName(sConfig, config))
- actualMetadata, err := parseContainerName(actualName)
- assert.NoError(t, err)
- assert.Equal(t, config.Metadata, actualMetadata)
-}
diff --git a/pkg/kubelet/dockershim/network/OWNERS b/pkg/kubelet/dockershim/network/OWNERS
deleted file mode 100644
index e9845a74729..00000000000
--- a/pkg/kubelet/dockershim/network/OWNERS
+++ /dev/null
@@ -1,10 +0,0 @@
-# See the OWNERS docs at https://go.k8s.io/owners
-
-approvers:
-- sig-network-driver-approvers
-emeritus_approvers:
-- matchstick
-reviewers:
-- sig-network-reviewers
-labels:
-- sig/network
diff --git a/pkg/kubelet/dockershim/network/cni/cni.go b/pkg/kubelet/dockershim/network/cni/cni.go
deleted file mode 100644
index fd9d1dbb19f..00000000000
--- a/pkg/kubelet/dockershim/network/cni/cni.go
+++ /dev/null
@@ -1,472 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cni
-
-import (
- "context"
- "encoding/json"
- "fmt"
- "math"
- "sort"
- "strings"
- "sync"
- "time"
-
- "github.com/containernetworking/cni/libcni"
- cnitypes "github.com/containernetworking/cni/pkg/types"
- "k8s.io/apimachinery/pkg/util/wait"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/klog/v2"
- kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- "k8s.io/kubernetes/pkg/util/bandwidth"
- utilslice "k8s.io/kubernetes/pkg/util/slice"
- utilexec "k8s.io/utils/exec"
-)
-
-const (
- // CNIPluginName is the name of CNI plugin
- CNIPluginName = "cni"
-
- // defaultSyncConfigPeriod is the default period to sync CNI config
- // TODO: consider making this value configurable or to be a more appropriate value.
- defaultSyncConfigPeriod = time.Second * 5
-
- // supported capabilities
- // https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md
- portMappingsCapability = "portMappings"
- ipRangesCapability = "ipRanges"
- bandwidthCapability = "bandwidth"
- dnsCapability = "dns"
-)
-
-type cniNetworkPlugin struct {
- network.NoopNetworkPlugin
-
- loNetwork *cniNetwork
-
- sync.RWMutex
- defaultNetwork *cniNetwork
-
- host network.Host
- execer utilexec.Interface
- nsenterPath string
- confDir string
- binDirs []string
- cacheDir string
- podCidr string
-}
-
-type cniNetwork struct {
- name string
- NetworkConfig *libcni.NetworkConfigList
- CNIConfig libcni.CNI
- Capabilities []string
-}
-
-// cniPortMapping maps to the standard CNI portmapping Capability
-// see: https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md
-type cniPortMapping struct {
- HostPort int32 `json:"hostPort"`
- ContainerPort int32 `json:"containerPort"`
- Protocol string `json:"protocol"`
- HostIP string `json:"hostIP"`
-}
-
-// cniBandwidthEntry maps to the standard CNI bandwidth Capability
-// see: https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md and
-// https://github.com/containernetworking/plugins/blob/master/plugins/meta/bandwidth/README.md
-type cniBandwidthEntry struct {
- // IngressRate is the bandwidth rate in bits per second for traffic through container. 0 for no limit. If IngressRate is set, IngressBurst must also be set
- IngressRate int `json:"ingressRate,omitempty"`
- // IngressBurst is the bandwidth burst in bits for traffic through container. 0 for no limit. If IngressBurst is set, IngressRate must also be set
- // NOTE: it's not used for now and defaults to 0. If IngressRate is set IngressBurst will be math.MaxInt32 ~ 2Gbit
- IngressBurst int `json:"ingressBurst,omitempty"`
- // EgressRate is the bandwidth rate in bits per second for traffic through container. 0 for no limit. If EgressRate is set, EgressBurst must also be set
- EgressRate int `json:"egressRate,omitempty"`
- // EgressBurst is the bandwidth burst in bits for traffic through container. 0 for no limit. If EgressBurst is set, EgressRate must also be set
- // NOTE: it's not used for now and defaults to 0. If EgressRate is set EgressBurst will be math.MaxInt32 ~ 2Gbit
- EgressBurst int `json:"egressBurst,omitempty"`
-}
-
-// cniIPRange maps to the standard CNI ip range Capability
-type cniIPRange struct {
- Subnet string `json:"subnet"`
-}
-
-// cniDNSConfig maps to the windows CNI dns Capability.
-// see: https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md
-// Note that dns capability is only used for Windows containers.
-type cniDNSConfig struct {
- // List of DNS servers of the cluster.
- Servers []string `json:"servers,omitempty"`
- // List of DNS search domains of the cluster.
- Searches []string `json:"searches,omitempty"`
- // List of DNS options.
- Options []string `json:"options,omitempty"`
-}
-
-// SplitDirs : split dirs by ","
-func SplitDirs(dirs string) []string {
- // Use comma rather than colon to work better with Windows too
- return strings.Split(dirs, ",")
-}
-
-// ProbeNetworkPlugins : get the network plugin based on cni conf file and bin file
-func ProbeNetworkPlugins(confDir, cacheDir string, binDirs []string) []network.NetworkPlugin {
- old := binDirs
- binDirs = make([]string, 0, len(binDirs))
- for _, dir := range old {
- if dir != "" {
- binDirs = append(binDirs, dir)
- }
- }
-
- plugin := &cniNetworkPlugin{
- defaultNetwork: nil,
- loNetwork: getLoNetwork(binDirs),
- execer: utilexec.New(),
- confDir: confDir,
- binDirs: binDirs,
- cacheDir: cacheDir,
- }
-
- // sync NetworkConfig in best effort during probing.
- plugin.syncNetworkConfig()
- return []network.NetworkPlugin{plugin}
-}
-
-func getDefaultCNINetwork(confDir string, binDirs []string) (*cniNetwork, error) {
- files, err := libcni.ConfFiles(confDir, []string{".conf", ".conflist", ".json"})
- switch {
- case err != nil:
- return nil, err
- case len(files) == 0:
- return nil, fmt.Errorf("no networks found in %s", confDir)
- }
-
- cniConfig := &libcni.CNIConfig{Path: binDirs}
-
- sort.Strings(files)
- for _, confFile := range files {
- var confList *libcni.NetworkConfigList
- if strings.HasSuffix(confFile, ".conflist") {
- confList, err = libcni.ConfListFromFile(confFile)
- if err != nil {
- klog.InfoS("Error loading CNI config list file", "path", confFile, "err", err)
- continue
- }
- } else {
- conf, err := libcni.ConfFromFile(confFile)
- if err != nil {
- klog.InfoS("Error loading CNI config file", "path", confFile, "err", err)
- continue
- }
- // Ensure the config has a "type" so we know what plugin to run.
- // Also catches the case where somebody put a conflist into a conf file.
- if conf.Network.Type == "" {
- klog.InfoS("Error loading CNI config file: no 'type'; perhaps this is a .conflist?", "path", confFile)
- continue
- }
-
- confList, err = libcni.ConfListFromConf(conf)
- if err != nil {
- klog.InfoS("Error converting CNI config file to list", "path", confFile, "err", err)
- continue
- }
- }
- if len(confList.Plugins) == 0 {
- klog.InfoS("CNI config list has no networks, skipping", "configList", string(confList.Bytes[:maxStringLengthInLog(len(confList.Bytes))]))
- continue
- }
-
- // Before using this CNI config, we have to validate it to make sure that
- // all plugins of this config exist on disk
- caps, err := cniConfig.ValidateNetworkList(context.TODO(), confList)
- if err != nil {
- klog.InfoS("Error validating CNI config list", "configList", string(confList.Bytes[:maxStringLengthInLog(len(confList.Bytes))]), "err", err)
- continue
- }
-
- klog.V(4).InfoS("Using CNI configuration file", "path", confFile)
-
- return &cniNetwork{
- name: confList.Name,
- NetworkConfig: confList,
- CNIConfig: cniConfig,
- Capabilities: caps,
- }, nil
- }
- return nil, fmt.Errorf("no valid networks found in %s", confDir)
-}
-
-func (plugin *cniNetworkPlugin) Init(host network.Host, hairpinMode kubeletconfig.HairpinMode, nonMasqueradeCIDR string, mtu int) error {
- err := plugin.platformInit()
- if err != nil {
- return err
- }
-
- plugin.host = host
-
- plugin.syncNetworkConfig()
-
- // start a goroutine to sync network config from confDir periodically, to detect network config updates every 5 seconds
- go wait.Forever(plugin.syncNetworkConfig, defaultSyncConfigPeriod)
-
- return nil
-}
-
-func (plugin *cniNetworkPlugin) syncNetworkConfig() {
- network, err := getDefaultCNINetwork(plugin.confDir, plugin.binDirs)
- if err != nil {
- klog.InfoS("Unable to update cni config", "err", err)
- return
- }
- plugin.setDefaultNetwork(network)
-}
-
-func (plugin *cniNetworkPlugin) getDefaultNetwork() *cniNetwork {
- plugin.RLock()
- defer plugin.RUnlock()
- return plugin.defaultNetwork
-}
-
-func (plugin *cniNetworkPlugin) setDefaultNetwork(n *cniNetwork) {
- plugin.Lock()
- defer plugin.Unlock()
- plugin.defaultNetwork = n
-}
-
-func (plugin *cniNetworkPlugin) checkInitialized() error {
- if plugin.getDefaultNetwork() == nil {
- return fmt.Errorf("cni config uninitialized")
- }
-
- if utilslice.ContainsString(plugin.getDefaultNetwork().Capabilities, ipRangesCapability, nil) && plugin.podCidr == "" {
- return fmt.Errorf("cni config needs ipRanges but no PodCIDR set")
- }
-
- return nil
-}
-
-// Event handles any change events. The only event ever sent is the PodCIDR change.
-// No network plugins support changing an already-set PodCIDR
-func (plugin *cniNetworkPlugin) Event(name string, details map[string]interface{}) {
- if name != network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE {
- return
- }
-
- plugin.Lock()
- defer plugin.Unlock()
-
- podCIDR, ok := details[network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE_DETAIL_CIDR].(string)
- if !ok {
- klog.InfoS("The event didn't contain pod CIDR", "event", network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE)
- return
- }
-
- if plugin.podCidr != "" {
- klog.InfoS("Ignoring subsequent pod CIDR update to new cidr", "podCIDR", podCIDR)
- return
- }
-
- plugin.podCidr = podCIDR
-}
-
-func (plugin *cniNetworkPlugin) Name() string {
- return CNIPluginName
-}
-
-func (plugin *cniNetworkPlugin) Status() error {
- // Can't set up pods if we don't have any CNI network configs yet
- return plugin.checkInitialized()
-}
-
-func (plugin *cniNetworkPlugin) SetUpPod(namespace string, name string, id kubecontainer.ContainerID, annotations, options map[string]string) error {
- if err := plugin.checkInitialized(); err != nil {
- return err
- }
- netnsPath, err := plugin.host.GetNetNS(id.ID)
- if err != nil {
- return fmt.Errorf("CNI failed to retrieve network namespace path: %v", err)
- }
-
- // Todo get the timeout from parent ctx
- cniTimeoutCtx, cancelFunc := context.WithTimeout(context.Background(), network.CNITimeoutSec*time.Second)
- defer cancelFunc()
- // Windows doesn't have loNetwork. It comes only with Linux
- if plugin.loNetwork != nil {
- if _, err = plugin.addToNetwork(cniTimeoutCtx, plugin.loNetwork, name, namespace, id, netnsPath, annotations, options); err != nil {
- return err
- }
- }
-
- _, err = plugin.addToNetwork(cniTimeoutCtx, plugin.getDefaultNetwork(), name, namespace, id, netnsPath, annotations, options)
- return err
-}
-
-func (plugin *cniNetworkPlugin) TearDownPod(namespace string, name string, id kubecontainer.ContainerID) error {
- if err := plugin.checkInitialized(); err != nil {
- return err
- }
-
- // Lack of namespace should not be fatal on teardown
- netnsPath, err := plugin.host.GetNetNS(id.ID)
- if err != nil {
- klog.InfoS("CNI failed to retrieve network namespace path", "err", err)
- }
-
- // Todo get the timeout from parent ctx
- cniTimeoutCtx, cancelFunc := context.WithTimeout(context.Background(), network.CNITimeoutSec*time.Second)
- defer cancelFunc()
- // Windows doesn't have loNetwork. It comes only with Linux
- if plugin.loNetwork != nil {
- // Loopback network deletion failure should not be fatal on teardown
- if err := plugin.deleteFromNetwork(cniTimeoutCtx, plugin.loNetwork, name, namespace, id, netnsPath, nil); err != nil {
- klog.InfoS("CNI failed to delete loopback network", "err", err)
- }
- }
-
- return plugin.deleteFromNetwork(cniTimeoutCtx, plugin.getDefaultNetwork(), name, namespace, id, netnsPath, nil)
-}
-
-func (plugin *cniNetworkPlugin) addToNetwork(ctx context.Context, network *cniNetwork, podName string, podNamespace string, podSandboxID kubecontainer.ContainerID, podNetnsPath string, annotations, options map[string]string) (cnitypes.Result, error) {
- rt, err := plugin.buildCNIRuntimeConf(podName, podNamespace, podSandboxID, podNetnsPath, annotations, options)
- if err != nil {
- klog.ErrorS(err, "Error adding network when building cni runtime conf")
- return nil, err
- }
-
- netConf, cniNet := network.NetworkConfig, network.CNIConfig
- klog.V(4).InfoS("Adding pod to network", "pod", klog.KRef(podNamespace, podName), "podSandboxID", podSandboxID, "podNetnsPath", podNetnsPath, "networkType", netConf.Plugins[0].Network.Type, "networkName", netConf.Name)
- res, err := cniNet.AddNetworkList(ctx, netConf, rt)
- if err != nil {
- klog.ErrorS(err, "Error adding pod to network", "pod", klog.KRef(podNamespace, podName), "podSandboxID", podSandboxID, "podNetnsPath", podNetnsPath, "networkType", netConf.Plugins[0].Network.Type, "networkName", netConf.Name)
- return nil, err
- }
- klog.V(4).InfoS("Added pod to network", "pod", klog.KRef(podNamespace, podName), "podSandboxID", podSandboxID, "networkName", netConf.Name, "response", res)
- return res, nil
-}
-
-func (plugin *cniNetworkPlugin) deleteFromNetwork(ctx context.Context, network *cniNetwork, podName string, podNamespace string, podSandboxID kubecontainer.ContainerID, podNetnsPath string, annotations map[string]string) error {
- rt, err := plugin.buildCNIRuntimeConf(podName, podNamespace, podSandboxID, podNetnsPath, annotations, nil)
- if err != nil {
- klog.ErrorS(err, "Error deleting network when building cni runtime conf")
- return err
- }
- netConf, cniNet := network.NetworkConfig, network.CNIConfig
- klog.V(4).InfoS("Deleting pod from network", "pod", klog.KRef(podNamespace, podName), "podSandboxID", podSandboxID, "podNetnsPath", podNetnsPath, "networkType", netConf.Plugins[0].Network.Type, "networkName", netConf.Name)
- err = cniNet.DelNetworkList(ctx, netConf, rt)
- // The pod may not be deleted successfully on the first attempt.
- // Ignore "no such file or directory" errors in case the network has already been deleted in previous attempts.
- if err != nil && !strings.Contains(err.Error(), "no such file or directory") {
- klog.ErrorS(err, "Error deleting pod from network", "pod", klog.KRef(podNamespace, podName), "podSandboxID", podSandboxID, "podNetnsPath", podNetnsPath, "networkType", netConf.Plugins[0].Network.Type, "networkName", netConf.Name)
- return err
- }
- klog.V(4).InfoS("Deleted pod from network", "pod", klog.KRef(podNamespace, podName), "podSandboxID", podSandboxID, "networkType", netConf.Plugins[0].Network.Type, "networkName", netConf.Name)
- return nil
-}
-
-func (plugin *cniNetworkPlugin) buildCNIRuntimeConf(podName string, podNs string, podSandboxID kubecontainer.ContainerID, podNetnsPath string, annotations, options map[string]string) (*libcni.RuntimeConf, error) {
- rt := &libcni.RuntimeConf{
- ContainerID: podSandboxID.ID,
- NetNS: podNetnsPath,
- IfName: network.DefaultInterfaceName,
- CacheDir: plugin.cacheDir,
- Args: [][2]string{
- {"IgnoreUnknown", "1"},
- {"K8S_POD_NAMESPACE", podNs},
- {"K8S_POD_NAME", podName},
- {"K8S_POD_INFRA_CONTAINER_ID", podSandboxID.ID},
- },
- }
-
- // port mappings are CNI capability-based args, rather than parameters
- // to a specific plugin
- portMappings, err := plugin.host.GetPodPortMappings(podSandboxID.ID)
- if err != nil {
- return nil, fmt.Errorf("could not retrieve port mappings: %v", err)
- }
- portMappingsParam := make([]cniPortMapping, 0, len(portMappings))
- for _, p := range portMappings {
- if p.HostPort <= 0 {
- continue
- }
- portMappingsParam = append(portMappingsParam, cniPortMapping{
- HostPort: p.HostPort,
- ContainerPort: p.ContainerPort,
- Protocol: strings.ToLower(string(p.Protocol)),
- HostIP: p.HostIP,
- })
- }
- rt.CapabilityArgs = map[string]interface{}{
- portMappingsCapability: portMappingsParam,
- }
-
- ingress, egress, err := bandwidth.ExtractPodBandwidthResources(annotations)
- if err != nil {
- return nil, fmt.Errorf("failed to get pod bandwidth from annotations: %v", err)
- }
- if ingress != nil || egress != nil {
- bandwidthParam := cniBandwidthEntry{}
- if ingress != nil {
- // see: https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md and
- // https://github.com/containernetworking/plugins/blob/master/plugins/meta/bandwidth/README.md
- // Rates are in bits per second, burst values are in bits.
- bandwidthParam.IngressRate = int(ingress.Value())
- // Limit IngressBurst to math.MaxInt32, in practice limiting to 2Gbit is the equivalent of setting no limit
- bandwidthParam.IngressBurst = math.MaxInt32
- }
- if egress != nil {
- bandwidthParam.EgressRate = int(egress.Value())
- // Limit EgressBurst to math.MaxInt32, in practice limiting to 2Gbit is the equivalent of setting no limit
- bandwidthParam.EgressBurst = math.MaxInt32
- }
- rt.CapabilityArgs[bandwidthCapability] = bandwidthParam
- }
-
- // Set the PodCIDR
- rt.CapabilityArgs[ipRangesCapability] = [][]cniIPRange{{{Subnet: plugin.podCidr}}}
-
- // Set dns capability args.
- if dnsOptions, ok := options["dns"]; ok {
- dnsConfig := runtimeapi.DNSConfig{}
- err := json.Unmarshal([]byte(dnsOptions), &dnsConfig)
- if err != nil {
- return nil, fmt.Errorf("failed to unmarshal dns config %q: %v", dnsOptions, err)
- }
- if dnsParam := buildDNSCapabilities(&dnsConfig); dnsParam != nil {
- rt.CapabilityArgs[dnsCapability] = *dnsParam
- }
- }
-
- return rt, nil
-}
-
-func maxStringLengthInLog(length int) int {
- // we allow no more than 4096-length strings to be logged
- const maxStringLength = 4096
-
- if length < maxStringLength {
- return length
- }
- return maxStringLength
-}
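For context, here is a sketch of the libcni.RuntimeConf that buildCNIRuntimeConf assembles for a pod with one UDP port mapping and 1M ingress/egress bandwidth annotations. The concrete values mirror the expectations in cni_test.go below; the standalone main and the literal IDs are illustrative only.

// Illustrative sketch: the capability args handed to the CNI plugin for a
// pod with one UDP port mapping and 1M bandwidth annotations.
package main

import (
	"encoding/json"
	"fmt"
	"math"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	rt := &libcni.RuntimeConf{
		ContainerID: "test_infra_container",
		NetNS:       "/proc/12345/ns/net",
		IfName:      "eth0",
		Args: [][2]string{
			{"IgnoreUnknown", "1"},
			{"K8S_POD_NAMESPACE", "podNamespace"},
			{"K8S_POD_NAME", "podName"},
			{"K8S_POD_INFRA_CONTAINER_ID", "test_infra_container"},
		},
		CapabilityArgs: map[string]interface{}{
			// port mappings: only entries with a host port are forwarded
			"portMappings": []map[string]interface{}{
				{"hostPort": 8008, "containerPort": 80, "protocol": "udp", "hostIP": "0.0.0.0"},
			},
			// rates come from the pod bandwidth annotations; bursts are capped at MaxInt32
			"bandwidth": map[string]int{
				"ingressRate": 1000000, "ingressBurst": math.MaxInt32,
				"egressRate": 1000000, "egressBurst": math.MaxInt32,
			},
			// the node's PodCIDR, as delivered via the pod-CIDR-change event
			"ipRanges": [][]map[string]string{{{"subnet": "10.0.2.0/24"}}},
		},
	}
	out, _ := json.MarshalIndent(rt.CapabilityArgs, "", "  ")
	fmt.Println(string(out))
}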
diff --git a/pkg/kubelet/dockershim/network/cni/cni_others.go b/pkg/kubelet/dockershim/network/cni/cni_others.go
deleted file mode 100644
index 66446d30210..00000000000
--- a/pkg/kubelet/dockershim/network/cni/cni_others.go
+++ /dev/null
@@ -1,91 +0,0 @@
-//go:build !windows && !dockerless
-// +build !windows,!dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cni
-
-import (
- "fmt"
-
- "github.com/containernetworking/cni/libcni"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
-)
-
-func getLoNetwork(binDirs []string) *cniNetwork {
- loConfig, err := libcni.ConfListFromBytes([]byte(`{
- "cniVersion": "0.2.0",
- "name": "cni-loopback",
- "plugins":[{
- "type": "loopback"
- }]
-}`))
- if err != nil {
- // The hardcoded config above should always be valid and unit tests will
- // catch this
- panic(err)
- }
- loNetwork := &cniNetwork{
- name: "lo",
- NetworkConfig: loConfig,
- CNIConfig: &libcni.CNIConfig{Path: binDirs},
- }
-
- return loNetwork
-}
-
-func (plugin *cniNetworkPlugin) platformInit() error {
- var err error
- plugin.nsenterPath, err = plugin.execer.LookPath("nsenter")
- if err != nil {
- return err
- }
- return nil
-}
-
-// TODO: Use the addToNetwork function to obtain the IP of the Pod. That will assume idempotent ADD call to the plugin.
-// Also fix the runtime's call to Status function to be done only in the case that the IP is lost, no need to do periodic calls
-func (plugin *cniNetworkPlugin) GetPodNetworkStatus(namespace string, name string, id kubecontainer.ContainerID) (*network.PodNetworkStatus, error) {
- netnsPath, err := plugin.host.GetNetNS(id.ID)
- if err != nil {
- return nil, fmt.Errorf("CNI failed to retrieve network namespace path: %v", err)
- }
- if netnsPath == "" {
- return nil, fmt.Errorf("cannot find the network namespace, skipping pod network status for container %q", id)
- }
-
- ips, err := network.GetPodIPs(plugin.execer, plugin.nsenterPath, netnsPath, network.DefaultInterfaceName)
- if err != nil {
- return nil, err
- }
-
- if len(ips) == 0 {
- return nil, fmt.Errorf("cannot find pod IPs in the network namespace, skipping pod network status for container %q", id)
- }
-
- return &network.PodNetworkStatus{
- IP: ips[0],
- IPs: ips,
- }, nil
-}
-
-// buildDNSCapabilities builds cniDNSConfig from runtimeapi.DNSConfig.
-func buildDNSCapabilities(dnsConfig *runtimeapi.DNSConfig) *cniDNSConfig {
- return nil
-}
diff --git a/pkg/kubelet/dockershim/network/cni/cni_test.go b/pkg/kubelet/dockershim/network/cni/cni_test.go
deleted file mode 100644
index 71704cc52ce..00000000000
--- a/pkg/kubelet/dockershim/network/cni/cni_test.go
+++ /dev/null
@@ -1,360 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cni
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "io/ioutil"
- "math/rand"
- "net"
- "os"
- "path"
- "reflect"
- "testing"
- "text/template"
-
- types020 "github.com/containernetworking/cni/pkg/types/020"
- "github.com/stretchr/testify/mock"
- "github.com/stretchr/testify/require"
- "k8s.io/api/core/v1"
- clientset "k8s.io/client-go/kubernetes"
- utiltesting "k8s.io/client-go/util/testing"
- kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- containertest "k8s.io/kubernetes/pkg/kubelet/container/testing"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/cni/testing"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/hostport"
- networktest "k8s.io/kubernetes/pkg/kubelet/dockershim/network/testing"
- "k8s.io/utils/exec"
- fakeexec "k8s.io/utils/exec/testing"
-)
-
-// Returns .in file path, .out file path, and .env file path
-func installPluginUnderTest(t *testing.T, testBinDir, testConfDir, testDataDir, binName string, confName, podIP string) (string, string, string) {
- for _, dir := range []string{testBinDir, testConfDir, testDataDir} {
- err := os.MkdirAll(dir, 0777)
- if err != nil {
- t.Fatalf("Failed to create test plugin dir %s: %v", dir, err)
- }
- }
-
- const cniVersion = "0.2.0"
-
- confFile := path.Join(testConfDir, confName+".conf")
- f, err := os.Create(confFile)
- if err != nil {
- t.Fatalf("Failed to install plugin %s: %v", confFile, err)
- }
- networkConfig := fmt.Sprintf(`{ "cniVersion": "%s", "name": "%s", "type": "%s", "capabilities": {"portMappings": true, "bandwidth": true, "ipRanges": true} }`, cniVersion, confName, binName)
- _, err = f.WriteString(networkConfig)
- if err != nil {
- t.Fatalf("Failed to write network config file (%v)", err)
- }
- f.Close()
-
- pluginExec := path.Join(testBinDir, binName)
- f, err = os.Create(pluginExec)
- require.NoError(t, err)
-
- // TODO: use mock instead of fake shell script plugin
- const execScriptTempl = `#!/usr/bin/env bash
-echo -n "{ \"cniVersion\": \"{{.CNIVersion}}\", \"ip4\": { \"ip\": \"{{.PodIP}}/24\" } }"
-if [ "$CNI_COMMAND" = "VERSION" ]; then
- exit
-fi
-cat > {{.InputFile}}
-env > {{.OutputEnv}}
-echo "%@" >> {{.OutputEnv}}
-export $(echo ${CNI_ARGS} | sed 's/;/ /g') &> /dev/null
-mkdir -p {{.OutputDir}} &> /dev/null
-echo -n "$CNI_COMMAND $CNI_NETNS $K8S_POD_NAMESPACE $K8S_POD_NAME $K8S_POD_INFRA_CONTAINER_ID" >& {{.OutputFile}}`
-
- inputFile := path.Join(testDataDir, binName+".in")
- outputFile := path.Join(testDataDir, binName+".out")
- envFile := path.Join(testDataDir, binName+".env")
- execTemplateData := &map[string]interface{}{
- "InputFile": inputFile,
- "OutputFile": outputFile,
- "OutputEnv": envFile,
- "OutputDir": testDataDir,
- "CNIVersion": cniVersion,
- "PodIP": podIP,
- }
-
- tObj := template.Must(template.New("test").Parse(execScriptTempl))
- buf := &bytes.Buffer{}
- if err := tObj.Execute(buf, *execTemplateData); err != nil {
- t.Fatalf("Error in executing script template - %v", err)
- }
- execScript := buf.String()
- _, err = f.WriteString(execScript)
- if err != nil {
- t.Fatalf("Failed to write plugin exec - %v", err)
- }
-
- err = f.Chmod(0777)
- if err != nil {
- t.Fatalf("Failed to set exec perms on plugin")
- }
-
- f.Close()
-
- return inputFile, outputFile, envFile
-}
-
-func tearDownPlugin(tmpDir string) {
- err := os.RemoveAll(tmpDir)
- if err != nil {
- fmt.Printf("Error in cleaning up test: %v", err)
- }
-}
-
-type FakeNetworkHost struct {
- networktest.FakePortMappingGetter
- kubeClient clientset.Interface
- pods []*containertest.FakePod
-}
-
-func NewFakeHost(kubeClient clientset.Interface, pods []*containertest.FakePod, ports map[string][]*hostport.PortMapping) *FakeNetworkHost {
- host := &FakeNetworkHost{
- networktest.FakePortMappingGetter{PortMaps: ports},
- kubeClient,
- pods,
- }
- return host
-}
-
-func (fnh *FakeNetworkHost) GetPodByName(name, namespace string) (*v1.Pod, bool) {
- return nil, false
-}
-
-func (fnh *FakeNetworkHost) GetKubeClient() clientset.Interface {
- return fnh.kubeClient
-}
-
-func (fnh *FakeNetworkHost) GetNetNS(containerID string) (string, error) {
- for _, fp := range fnh.pods {
- for _, c := range fp.Pod.Containers {
- if c.ID.ID == containerID {
- return fp.NetnsPath, nil
- }
- }
- }
- return "", fmt.Errorf("container %q not found", containerID)
-}
-
-func (fnh *FakeNetworkHost) SupportsLegacyFeatures() bool {
- return true
-}
-
-func TestCNIPlugin(t *testing.T) {
- // install some random plugin
- netName := fmt.Sprintf("test%d", rand.Intn(1000))
- binName := fmt.Sprintf("test_vendor%d", rand.Intn(1000))
-
- podIP := "10.0.0.2"
- podIPOutput := fmt.Sprintf("4: eth0 inet %s/24 scope global dynamic eth0\\ valid_lft forever preferred_lft forever", podIP)
- fakeCmds := []fakeexec.FakeCommandAction{
- func(cmd string, args ...string) exec.Cmd {
- return fakeexec.InitFakeCmd(&fakeexec.FakeCmd{
- CombinedOutputScript: []fakeexec.FakeAction{
- func() ([]byte, []byte, error) {
- return []byte(podIPOutput), nil, nil
- },
- },
- }, cmd, args...)
- },
- func(cmd string, args ...string) exec.Cmd {
- return fakeexec.InitFakeCmd(&fakeexec.FakeCmd{
- CombinedOutputScript: []fakeexec.FakeAction{
- func() ([]byte, []byte, error) {
- return []byte(podIPOutput), nil, nil
- },
- },
- }, cmd, args...)
- },
- }
-
- fexec := &fakeexec.FakeExec{
- CommandScript: fakeCmds,
- LookPathFunc: func(file string) (string, error) {
- return fmt.Sprintf("/fake-bin/%s", file), nil
- },
- }
-
- mockLoCNI := &mock_cni.MockCNI{}
- // TODO mock for the test plugin too
-
- tmpDir := utiltesting.MkTmpdirOrDie("cni-test")
- testConfDir := path.Join(tmpDir, "etc", "cni", "net.d")
- testBinDir := path.Join(tmpDir, "opt", "cni", "bin")
- testDataDir := path.Join(tmpDir, "output")
- testCacheDir := path.Join(tmpDir, "var", "lib", "cni", "cache")
- defer tearDownPlugin(tmpDir)
- inputFile, outputFile, outputEnv := installPluginUnderTest(t, testBinDir, testConfDir, testDataDir, binName, netName, podIP)
-
- containerID := kubecontainer.ContainerID{Type: "test", ID: "test_infra_container"}
- pods := []*containertest.FakePod{{
- Pod: &kubecontainer.Pod{
- Containers: []*kubecontainer.Container{
- {ID: containerID},
- },
- },
- NetnsPath: "/proc/12345/ns/net",
- }}
-
- plugins := ProbeNetworkPlugins(testConfDir, testCacheDir, []string{testBinDir})
- if len(plugins) != 1 {
- t.Fatalf("Expected only one network plugin, got %d", len(plugins))
- }
- if plugins[0].Name() != "cni" {
- t.Fatalf("Expected CNI network plugin, got %q", plugins[0].Name())
- }
-
- cniPlugin, ok := plugins[0].(*cniNetworkPlugin)
- if !ok {
- t.Fatalf("Not a CNI network plugin!")
- }
- cniPlugin.execer = fexec
- cniPlugin.loNetwork.CNIConfig = mockLoCNI
-
- mockLoCNI.On("AddNetworkList", mock.AnythingOfType("*context.timerCtx"), cniPlugin.loNetwork.NetworkConfig, mock.AnythingOfType("*libcni.RuntimeConf")).Return(&types020.Result{IP4: &types020.IPConfig{IP: net.IPNet{IP: []byte{127, 0, 0, 1}}}}, nil)
- mockLoCNI.On("DelNetworkList", mock.AnythingOfType("*context.timerCtx"), cniPlugin.loNetwork.NetworkConfig, mock.AnythingOfType("*libcni.RuntimeConf")).Return(nil)
-
- // Check that status returns an error
- if err := cniPlugin.Status(); err == nil {
- t.Fatalf("cniPlugin returned non-err with no podCidr")
- }
-
- cniPlugin.Event(network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE, map[string]interface{}{
- network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE_DETAIL_CIDR: "10.0.2.0/24",
- })
-
- if err := cniPlugin.Status(); err != nil {
- t.Fatalf("unexpected status err: %v", err)
- }
-
- ports := map[string][]*hostport.PortMapping{
- containerID.ID: {
- {
- HostPort: 8008,
- ContainerPort: 80,
- Protocol: "UDP",
- HostIP: "0.0.0.0",
- },
- },
- }
- fakeHost := NewFakeHost(nil, pods, ports)
-
- plug, err := network.InitNetworkPlugin(plugins, "cni", fakeHost, kubeletconfig.HairpinNone, "10.0.0.0/8", network.UseDefaultMTU)
- if err != nil {
- t.Fatalf("Failed to select the desired plugin: %v", err)
- }
-
- bandwidthAnnotation := make(map[string]string)
- bandwidthAnnotation["kubernetes.io/ingress-bandwidth"] = "1M"
- bandwidthAnnotation["kubernetes.io/egress-bandwidth"] = "1M"
-
- // Set up the pod
- err = plug.SetUpPod("podNamespace", "podName", containerID, bandwidthAnnotation, nil)
- if err != nil {
- t.Errorf("Expected nil: %v", err)
- }
- eo, eerr := ioutil.ReadFile(outputEnv)
- output, err := ioutil.ReadFile(outputFile)
- if err != nil || eerr != nil {
- t.Errorf("Failed to read output file %s: %v (env %s err %v)", outputFile, err, eo, eerr)
- }
-
- expectedOutput := "ADD /proc/12345/ns/net podNamespace podName test_infra_container"
- if string(output) != expectedOutput {
- t.Errorf("Mismatch in expected output for setup hook. Expected '%s', got '%s'", expectedOutput, string(output))
- }
-
- // Verify the correct network configuration was passed
- inputConfig := struct {
- RuntimeConfig struct {
- PortMappings []map[string]interface{} `json:"portMappings"`
- Bandwidth map[string]interface{} `json:"bandwidth"`
- IPRanges [][]map[string]interface{} `json:"IPRanges"`
- } `json:"runtimeConfig"`
- }{}
- inputBytes, inerr := ioutil.ReadFile(inputFile)
- parseerr := json.Unmarshal(inputBytes, &inputConfig)
- if inerr != nil || parseerr != nil {
- t.Errorf("failed to parse reported cni input config %s: (%v %v)", inputFile, inerr, parseerr)
- }
- expectedMappings := []map[string]interface{}{
- // hah, golang always unmarshals unstructured json numbers as float64
- {"hostPort": 8008.0, "containerPort": 80.0, "protocol": "udp", "hostIP": "0.0.0.0"},
- }
- if !reflect.DeepEqual(inputConfig.RuntimeConfig.PortMappings, expectedMappings) {
- t.Errorf("mismatch in expected port mappings. expected %v got %v", expectedMappings, inputConfig.RuntimeConfig.PortMappings)
- }
- expectedBandwidth := map[string]interface{}{
- "ingressRate": 1000000.0, "egressRate": 1000000.0,
- "ingressBurst": 2147483647.0, "egressBurst": 2147483647.0,
- }
- if !reflect.DeepEqual(inputConfig.RuntimeConfig.Bandwidth, expectedBandwidth) {
- t.Errorf("mismatch in expected bandwidth. expected %v got %v", expectedBandwidth, inputConfig.RuntimeConfig.Bandwidth)
- }
-
- expectedIPRange := [][]map[string]interface{}{
- {
- {"subnet": "10.0.2.0/24"},
- },
- }
-
- if !reflect.DeepEqual(inputConfig.RuntimeConfig.IPRanges, expectedIPRange) {
- t.Errorf("mismatch in expected ipRange. expected %v got %v", expectedIPRange, inputConfig.RuntimeConfig.IPRanges)
- }
-
- // Get its IP address
- status, err := plug.GetPodNetworkStatus("podNamespace", "podName", containerID)
- if err != nil {
- t.Errorf("Failed to read pod network status: %v", err)
- }
- if status.IP.String() != podIP {
- t.Errorf("Expected pod IP %q but got %q", podIP, status.IP.String())
- }
-
- // Tear it down
- err = plug.TearDownPod("podNamespace", "podName", containerID)
- if err != nil {
- t.Errorf("Expected nil: %v", err)
- }
- output, err = ioutil.ReadFile(outputFile)
- require.NoError(t, err)
- expectedOutput = "DEL /proc/12345/ns/net podNamespace podName test_infra_container"
- if string(output) != expectedOutput {
- t.Errorf("Mismatch in expected output for setup hook. Expected '%s', got '%s'", expectedOutput, string(output))
- }
-
- mockLoCNI.AssertExpectations(t)
-}
-
-func TestLoNetNonNil(t *testing.T) {
- if conf := getLoNetwork(nil); conf == nil {
- t.Error("Expected non-nil lo network")
- }
-}
diff --git a/pkg/kubelet/dockershim/network/cni/cni_windows.go b/pkg/kubelet/dockershim/network/cni/cni_windows.go
deleted file mode 100644
index f1b4aca6fe5..00000000000
--- a/pkg/kubelet/dockershim/network/cni/cni_windows.go
+++ /dev/null
@@ -1,92 +0,0 @@
-//go:build windows && !dockerless
-// +build windows,!dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cni
-
-import (
- "context"
- "fmt"
- cniTypes020 "github.com/containernetworking/cni/pkg/types/020"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/klog/v2"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- "net"
- "time"
-)
-
-func getLoNetwork(binDirs []string) *cniNetwork {
- return nil
-}
-
-func (plugin *cniNetworkPlugin) platformInit() error {
- return nil
-}
-
-// GetPodNetworkStatus : Assuming addToNetwork is idempotent, we can call this API as many times as required to get the IPAddress
-func (plugin *cniNetworkPlugin) GetPodNetworkStatus(namespace string, name string, id kubecontainer.ContainerID) (*network.PodNetworkStatus, error) {
- netnsPath, err := plugin.host.GetNetNS(id.ID)
- if err != nil {
- return nil, fmt.Errorf("CNI failed to retrieve network namespace path: %v", err)
- }
-
- if plugin.getDefaultNetwork() == nil {
- return nil, fmt.Errorf("CNI network not yet initialized, skipping pod network status for container %q", id)
- }
-
- // Because the default remote runtime request timeout is 4 min, set this slightly below 240 seconds
- // Todo get the timeout from parent ctx
- cniTimeoutCtx, cancelFunc := context.WithTimeout(context.Background(), network.CNITimeoutSec*time.Second)
- defer cancelFunc()
- result, err := plugin.addToNetwork(cniTimeoutCtx, plugin.getDefaultNetwork(), name, namespace, id, netnsPath, nil, nil)
- klog.V(5).InfoS("GetPodNetworkStatus", "result", result)
- if err != nil {
- klog.ErrorS(err, "Got error while adding to cni network")
- return nil, err
- }
-
- // Parse the result and get the IPAddress
- var result020 *cniTypes020.Result
- result020, err = cniTypes020.GetResult(result)
- if err != nil {
- klog.ErrorS(err, "Got error while cni parsing result")
- return nil, err
- }
-
- var list = []net.IP{result020.IP4.IP.IP}
-
- if result020.IP6 != nil {
- list = append(list, result020.IP6.IP.IP)
- }
-
- return &network.PodNetworkStatus{IP: result020.IP4.IP.IP, IPs: list}, nil
-}
-
-// buildDNSCapabilities builds cniDNSConfig from runtimeapi.DNSConfig.
-func buildDNSCapabilities(dnsConfig *runtimeapi.DNSConfig) *cniDNSConfig {
- if dnsConfig != nil {
- return &cniDNSConfig{
- Servers: dnsConfig.Servers,
- Searches: dnsConfig.Searches,
- Options: dnsConfig.Options,
- }
- }
-
- return nil
-}
diff --git a/pkg/kubelet/dockershim/network/cni/testing/mock_cni.go b/pkg/kubelet/dockershim/network/cni/testing/mock_cni.go
deleted file mode 100644
index 550137ef0e9..00000000000
--- a/pkg/kubelet/dockershim/network/cni/testing/mock_cni.go
+++ /dev/null
@@ -1,94 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-// mock_cni is a mock of the `libcni.CNI` interface. It's a handwritten mock
-// because there are only two functions to deal with.
-package mock_cni
-
-import (
- "context"
-
- "github.com/containernetworking/cni/libcni"
- "github.com/containernetworking/cni/pkg/types"
- "github.com/stretchr/testify/mock"
-)
-
-type MockCNI struct {
- mock.Mock
-}
-
-func (m *MockCNI) AddNetwork(ctx context.Context, net *libcni.NetworkConfig, rt *libcni.RuntimeConf) (types.Result, error) {
- args := m.Called(ctx, net, rt)
- return args.Get(0).(types.Result), args.Error(1)
-}
-
-func (m *MockCNI) DelNetwork(ctx context.Context, net *libcni.NetworkConfig, rt *libcni.RuntimeConf) error {
- args := m.Called(ctx, net, rt)
- return args.Error(0)
-}
-
-func (m *MockCNI) DelNetworkList(ctx context.Context, net *libcni.NetworkConfigList, rt *libcni.RuntimeConf) error {
- args := m.Called(ctx, net, rt)
- return args.Error(0)
-}
-
-func (m *MockCNI) GetNetworkListCachedConfig(net *libcni.NetworkConfigList, rt *libcni.RuntimeConf) ([]byte, *libcni.RuntimeConf, error) {
- args := m.Called(net, rt)
- return args.Get(0).([]byte), args.Get(1).(*libcni.RuntimeConf), args.Error(2)
-}
-
-func (m *MockCNI) GetNetworkListCachedResult(net *libcni.NetworkConfigList, rt *libcni.RuntimeConf) (types.Result, error) {
- args := m.Called(net, rt)
- return args.Get(0).(types.Result), args.Error(1)
-}
-
-func (m *MockCNI) AddNetworkList(ctx context.Context, net *libcni.NetworkConfigList, rt *libcni.RuntimeConf) (types.Result, error) {
- args := m.Called(ctx, net, rt)
- return args.Get(0).(types.Result), args.Error(1)
-}
-
-func (m *MockCNI) CheckNetworkList(ctx context.Context, net *libcni.NetworkConfigList, rt *libcni.RuntimeConf) error {
- args := m.Called(ctx, net, rt)
- return args.Error(0)
-}
-
-func (m *MockCNI) CheckNetwork(ctx context.Context, net *libcni.NetworkConfig, rt *libcni.RuntimeConf) error {
- args := m.Called(ctx, net, rt)
- return args.Error(0)
-}
-
-func (m *MockCNI) GetNetworkCachedConfig(net *libcni.NetworkConfig, rt *libcni.RuntimeConf) ([]byte, *libcni.RuntimeConf, error) {
- args := m.Called(net, rt)
- return args.Get(0).([]byte), args.Get(1).(*libcni.RuntimeConf), args.Error(2)
-}
-
-func (m *MockCNI) GetNetworkCachedResult(net *libcni.NetworkConfig, rt *libcni.RuntimeConf) (types.Result, error) {
- args := m.Called(net, rt)
- return args.Get(0).(types.Result), args.Error(1)
-}
-
-func (m *MockCNI) ValidateNetworkList(ctx context.Context, net *libcni.NetworkConfigList) ([]string, error) {
- args := m.Called(ctx, net)
- return args.Get(0).([]string), args.Error(1)
-}
-
-func (m *MockCNI) ValidateNetwork(ctx context.Context, net *libcni.NetworkConfig) ([]string, error) {
- args := m.Called(ctx, net)
- return args.Get(0).([]string), args.Error(1)
-}
diff --git a/pkg/kubelet/dockershim/network/hairpin/hairpin.go b/pkg/kubelet/dockershim/network/hairpin/hairpin.go
deleted file mode 100644
index ca6fc3bb7fa..00000000000
--- a/pkg/kubelet/dockershim/network/hairpin/hairpin.go
+++ /dev/null
@@ -1,90 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package hairpin
-
-import (
- "fmt"
- "io/ioutil"
- "net"
- "os"
- "path"
- "regexp"
- "strconv"
-
- "k8s.io/klog/v2"
- "k8s.io/utils/exec"
-)
-
-const (
- sysfsNetPath = "/sys/devices/virtual/net"
- brportRelativePath = "brport"
- hairpinModeRelativePath = "hairpin_mode"
- hairpinEnable = "1"
-)
-
-var (
- ethtoolOutputRegex = regexp.MustCompile(`peer_ifindex: (\d+)`)
-)
-
-func findPairInterfaceOfContainerInterface(e exec.Interface, containerInterfaceName, containerDesc string, nsenterArgs []string) (string, error) {
- nsenterPath, err := e.LookPath("nsenter")
- if err != nil {
- return "", err
- }
- ethtoolPath, err := e.LookPath("ethtool")
- if err != nil {
- return "", err
- }
-
- nsenterArgs = append(nsenterArgs, "-F", "--", ethtoolPath, "--statistics", containerInterfaceName)
- output, err := e.Command(nsenterPath, nsenterArgs...).CombinedOutput()
- if err != nil {
- return "", fmt.Errorf("unable to query interface %s of container %s: %v: %s", containerInterfaceName, containerDesc, err, string(output))
- }
- // look for peer_ifindex
- match := ethtoolOutputRegex.FindSubmatch(output)
- if match == nil {
- return "", fmt.Errorf("no peer_ifindex in interface statistics for %s of container %s", containerInterfaceName, containerDesc)
- }
- peerIfIndex, err := strconv.Atoi(string(match[1]))
- if err != nil { // seems impossible (\d+ not numeric)
- return "", fmt.Errorf("peer_ifindex wasn't numeric: %s: %v", match[1], err)
- }
- iface, err := net.InterfaceByIndex(peerIfIndex)
- if err != nil {
- return "", err
- }
- return iface.Name, nil
-}
-
-func setUpInterface(ifName string) error {
- klog.V(3).InfoS("Enabling hairpin on interface", "interfaceName", ifName)
- ifPath := path.Join(sysfsNetPath, ifName)
- if _, err := os.Stat(ifPath); err != nil {
- return err
- }
- brportPath := path.Join(ifPath, brportRelativePath)
- if _, err := os.Stat(brportPath); err != nil && os.IsNotExist(err) {
- // Device is not on a bridge, so doesn't need hairpin mode
- return nil
- }
- hairpinModeFile := path.Join(brportPath, hairpinModeRelativePath)
- return ioutil.WriteFile(hairpinModeFile, []byte(hairpinEnable), 0644)
-}
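As a concrete example of the sysfs path built above: for a (hypothetical) veth interface named veth1234 that is attached to a bridge, setUpInterface writes "1" to /sys/devices/virtual/net/veth1234/brport/hairpin_mode; if the brport directory is absent, the device is not on a bridge and the call is a no-op.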
diff --git a/pkg/kubelet/dockershim/network/hairpin/hairpin_test.go b/pkg/kubelet/dockershim/network/hairpin/hairpin_test.go
deleted file mode 100644
index 9930d569287..00000000000
--- a/pkg/kubelet/dockershim/network/hairpin/hairpin_test.go
+++ /dev/null
@@ -1,112 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package hairpin
-
-import (
- "errors"
- "fmt"
- "net"
- "os"
- "strings"
- "testing"
-
- "k8s.io/utils/exec"
- fakeexec "k8s.io/utils/exec/testing"
-)
-
-func TestFindPairInterfaceOfContainerInterface(t *testing.T) {
- // there should be at least "lo" on any system
- interfaces, _ := net.Interfaces()
- validOutput := fmt.Sprintf("garbage\n peer_ifindex: %d", interfaces[0].Index)
- invalidOutput := fmt.Sprintf("garbage\n unknown: %d", interfaces[0].Index)
-
- tests := []struct {
- output string
- err error
- expectedName string
- expectErr bool
- }{
- {
- output: validOutput,
- expectedName: interfaces[0].Name,
- },
- {
- output: invalidOutput,
- expectErr: true,
- },
- {
- output: validOutput,
- err: errors.New("error"),
- expectErr: true,
- },
- }
- for _, test := range tests {
- fcmd := fakeexec.FakeCmd{
- CombinedOutputScript: []fakeexec.FakeAction{
- func() ([]byte, []byte, error) { return []byte(test.output), nil, test.err },
- },
- }
- fexec := fakeexec.FakeExec{
- CommandScript: []fakeexec.FakeCommandAction{
- func(cmd string, args ...string) exec.Cmd {
- return fakeexec.InitFakeCmd(&fcmd, cmd, args...)
- },
- },
- LookPathFunc: func(file string) (string, error) {
- return fmt.Sprintf("/fake-bin/%s", file), nil
- },
- }
- nsenterArgs := []string{"-t", "123", "-n"}
- name, err := findPairInterfaceOfContainerInterface(&fexec, "eth0", "123", nsenterArgs)
- if test.expectErr {
- if err == nil {
- t.Errorf("unexpected non-error")
- }
- } else {
- if err != nil {
- t.Errorf("unexpected error: %v", err)
- }
- }
- if name != test.expectedName {
- t.Errorf("unexpected name: %s (expected: %s)", name, test.expectedName)
- }
- }
-}
-
-func TestSetUpInterfaceNonExistent(t *testing.T) {
- err := setUpInterface("non-existent")
- if err == nil {
- t.Errorf("unexpected non-error")
- }
- deviceDir := fmt.Sprintf("%s/%s", sysfsNetPath, "non-existent")
- if !strings.Contains(fmt.Sprintf("%v", err), deviceDir) {
- t.Errorf("should have tried to open %s", deviceDir)
- }
-}
-
-func TestSetUpInterfaceNotBridged(t *testing.T) {
- err := setUpInterface("lo")
- if err != nil {
- if os.IsNotExist(err) {
- t.Skipf("'lo' device does not exist??? (%v)", err)
- }
- t.Errorf("unexpected error: %v", err)
- }
-}
diff --git a/pkg/kubelet/dockershim/network/hostport/fake_iptables.go b/pkg/kubelet/dockershim/network/hostport/fake_iptables.go
deleted file mode 100644
index 6d7a51fce93..00000000000
--- a/pkg/kubelet/dockershim/network/hostport/fake_iptables.go
+++ /dev/null
@@ -1,375 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package hostport
-
-import (
- "bytes"
- "fmt"
- "strings"
- "time"
-
- "k8s.io/apimachinery/pkg/util/sets"
- utiliptables "k8s.io/kubernetes/pkg/util/iptables"
- netutils "k8s.io/utils/net"
-)
-
-type fakeChain struct {
- name utiliptables.Chain
- rules []string
-}
-
-type fakeTable struct {
- name utiliptables.Table
- chains map[string]*fakeChain
-}
-
-type fakeIPTables struct {
- tables map[string]*fakeTable
- builtinChains map[string]sets.String
- protocol utiliptables.Protocol
-}
-
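-// NewFakeIPTables returns an in-memory fake of the iptables interface,
-// defaulting to IPv4 and tracking the builtin chains of the filter, nat
-// and mangle tables.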
-func NewFakeIPTables() *fakeIPTables {
- return &fakeIPTables{
- tables: make(map[string]*fakeTable),
- builtinChains: map[string]sets.String{
- string(utiliptables.TableFilter): sets.NewString("INPUT", "FORWARD", "OUTPUT"),
- string(utiliptables.TableNAT): sets.NewString("PREROUTING", "INPUT", "OUTPUT", "POSTROUTING"),
- string(utiliptables.TableMangle): sets.NewString("PREROUTING", "INPUT", "FORWARD", "OUTPUT", "POSTROUTING"),
- },
- protocol: utiliptables.ProtocolIPv4,
- }
-}
-
-func (f *fakeIPTables) getTable(tableName utiliptables.Table) (*fakeTable, error) {
- table, ok := f.tables[string(tableName)]
- if !ok {
- return nil, fmt.Errorf("table %s does not exist", tableName)
- }
- return table, nil
-}
-
-func (f *fakeIPTables) getChain(tableName utiliptables.Table, chainName utiliptables.Chain) (*fakeTable, *fakeChain, error) {
- table, err := f.getTable(tableName)
- if err != nil {
- return nil, nil, err
- }
-
- chain, ok := table.chains[string(chainName)]
- if !ok {
- return table, nil, fmt.Errorf("chain %s/%s does not exist", tableName, chainName)
- }
-
- return table, chain, nil
-}
-
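-// ensureChain returns the named chain, creating the table and chain on demand;
-// the boolean reports whether the chain already existed.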
-func (f *fakeIPTables) ensureChain(tableName utiliptables.Table, chainName utiliptables.Chain) (bool, *fakeChain) {
- table, chain, err := f.getChain(tableName, chainName)
- if err != nil {
- // either table or table+chain don't exist yet
- if table == nil {
- table = &fakeTable{
- name: tableName,
- chains: make(map[string]*fakeChain),
- }
- f.tables[string(tableName)] = table
- }
- chain := &fakeChain{
- name: chainName,
- rules: make([]string, 0),
- }
- table.chains[string(chainName)] = chain
- return false, chain
- }
- return true, chain
-}
-
-func (f *fakeIPTables) EnsureChain(tableName utiliptables.Table, chainName utiliptables.Chain) (bool, error) {
- existed, _ := f.ensureChain(tableName, chainName)
- return existed, nil
-}
-
-func (f *fakeIPTables) FlushChain(tableName utiliptables.Table, chainName utiliptables.Chain) error {
- _, chain, err := f.getChain(tableName, chainName)
- if err != nil {
- return err
- }
- chain.rules = make([]string, 0)
- return nil
-}
-
-func (f *fakeIPTables) DeleteChain(tableName utiliptables.Table, chainName utiliptables.Chain) error {
- table, _, err := f.getChain(tableName, chainName)
- if err != nil {
- return err
- }
- delete(table.chains, string(chainName))
- return nil
-}
-
-func (f *fakeIPTables) ChainExists(tableName utiliptables.Table, chainName utiliptables.Chain) (bool, error) {
- _, _, err := f.getChain(tableName, chainName)
- if err != nil {
- return false, err
- }
- return true, nil
-}
-
-// Returns index of rule in array; < 0 if rule is not found
-func findRule(chain *fakeChain, rule string) int {
- for i, candidate := range chain.rules {
- if rule == candidate {
- return i
- }
- }
- return -1
-}
-
-func (f *fakeIPTables) ensureRule(position utiliptables.RulePosition, tableName utiliptables.Table, chainName utiliptables.Chain, rule string) (bool, error) {
- _, chain, err := f.getChain(tableName, chainName)
- if err != nil {
- _, chain = f.ensureChain(tableName, chainName)
- }
-
- rule, err = normalizeRule(rule)
- if err != nil {
- return false, err
- }
- ruleIdx := findRule(chain, rule)
- if ruleIdx >= 0 {
- return true, nil
- }
-
- switch position {
- case utiliptables.Prepend:
- chain.rules = append([]string{rule}, chain.rules...)
- case utiliptables.Append:
- chain.rules = append(chain.rules, rule)
- default:
- return false, fmt.Errorf("unknown position argument %q", position)
- }
- return false, nil
-}
-
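-// normalizeRule canonicalizes a rule string the way iptables-save would: it
-// splits the rule into arguments (honoring quoted segments), rewrites
-// "--to-destination=" into its space-separated form, and appends "/32" to
-// un-prefixed IP addresses.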
-func normalizeRule(rule string) (string, error) {
- normalized := ""
- remaining := strings.TrimSpace(rule)
- for {
- var end int
-
- if strings.HasPrefix(remaining, "--to-destination=") {
- remaining = strings.Replace(remaining, "=", " ", 1)
- }
-
- if remaining[0] == '"' {
- end = strings.Index(remaining[1:], "\"")
- if end < 0 {
- return "", fmt.Errorf("invalid rule syntax: mismatched quotes")
- }
- end += 2
- } else {
- end = strings.Index(remaining, " ")
- if end < 0 {
- end = len(remaining)
- }
- }
- arg := remaining[:end]
-
- // Normalize un-prefixed IP addresses like iptables does
- if netutils.ParseIPSloppy(arg) != nil {
- arg += "/32"
- }
-
- if len(normalized) > 0 {
- normalized += " "
- }
- normalized += strings.TrimSpace(arg)
- if len(remaining) == end {
- break
- }
- remaining = remaining[end+1:]
- }
- return normalized, nil
-}
-
-func (f *fakeIPTables) EnsureRule(position utiliptables.RulePosition, tableName utiliptables.Table, chainName utiliptables.Chain, args ...string) (bool, error) {
- ruleArgs := make([]string, 0)
- for _, arg := range args {
- // quote args with internal spaces (like comments)
- if strings.Contains(arg, " ") {
- arg = fmt.Sprintf("\"%s\"", arg)
- }
- ruleArgs = append(ruleArgs, arg)
- }
- return f.ensureRule(position, tableName, chainName, strings.Join(ruleArgs, " "))
-}
-
-func (f *fakeIPTables) DeleteRule(tableName utiliptables.Table, chainName utiliptables.Chain, args ...string) error {
- _, chain, err := f.getChain(tableName, chainName)
- if err == nil {
- rule := strings.Join(args, " ")
- ruleIdx := findRule(chain, rule)
- if ruleIdx < 0 {
- return nil
- }
- chain.rules = append(chain.rules[:ruleIdx], chain.rules[ruleIdx+1:]...)
- }
- return nil
-}
-
-func (f *fakeIPTables) IsIPv6() bool {
- return f.protocol == utiliptables.ProtocolIPv6
-}
-
-func (f *fakeIPTables) Protocol() utiliptables.Protocol {
- return f.protocol
-}
-
-func saveChain(chain *fakeChain, data *bytes.Buffer) {
- for _, rule := range chain.rules {
- data.WriteString(fmt.Sprintf("-A %s %s\n", chain.name, rule))
- }
-}
-
-func (f *fakeIPTables) SaveInto(tableName utiliptables.Table, buffer *bytes.Buffer) error {
- table, err := f.getTable(tableName)
- if err != nil {
- return err
- }
-
- buffer.WriteString(fmt.Sprintf("*%s\n", table.name))
-
- rules := bytes.NewBuffer(nil)
- for _, chain := range table.chains {
- buffer.WriteString(fmt.Sprintf(":%s - [0:0]\n", string(chain.name)))
- saveChain(chain, rules)
- }
- buffer.Write(rules.Bytes())
- buffer.WriteString("COMMIT\n")
- return nil
-}
-
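-// restore replays iptables-restore input against the fake tables, handling
-// table (*), chain (:), append (-A), insert (-I), delete-chain (-X) and COMMIT
-// lines. If restoreTableName is non-empty, only that table is applied.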
-func (f *fakeIPTables) restore(restoreTableName utiliptables.Table, data []byte, flush utiliptables.FlushFlag) error {
- allLines := string(data)
- buf := bytes.NewBuffer(data)
- var tableName utiliptables.Table
- for {
- line, err := buf.ReadString('\n')
- if err != nil {
- break
- }
- if line[0] == '#' {
- continue
- }
-
- line = strings.TrimSuffix(line, "\n")
- if strings.HasPrefix(line, "*") {
- tableName = utiliptables.Table(line[1:])
- }
- if tableName != "" {
- if restoreTableName != "" && restoreTableName != tableName {
- continue
- }
- if strings.HasPrefix(line, ":") {
- chainName := utiliptables.Chain(strings.Split(line[1:], " ")[0])
- if flush == utiliptables.FlushTables {
- table, chain, err := f.getChain(tableName, chainName)
- if err != nil {
- return err
- }
- if chain != nil {
- delete(table.chains, string(chainName))
- }
- }
- _, _ = f.ensureChain(tableName, chainName)
- // The --noflush option for iptables-restore doesn't work for user-defined chains, only builtin chains.
- // We should flush user-defined chains if the chain is not to be deleted
- if !f.isBuiltinChain(tableName, chainName) && !strings.Contains(allLines, "-X "+string(chainName)) {
- if err := f.FlushChain(tableName, chainName); err != nil {
- return err
- }
- }
- } else if strings.HasPrefix(line, "-A") {
- parts := strings.Split(line, " ")
- if len(parts) < 3 {
- return fmt.Errorf("invalid iptables rule '%s'", line)
- }
- chainName := utiliptables.Chain(parts[1])
- rule := strings.TrimPrefix(line, fmt.Sprintf("-A %s ", chainName))
- _, err := f.ensureRule(utiliptables.Append, tableName, chainName, rule)
- if err != nil {
- return err
- }
- } else if strings.HasPrefix(line, "-I") {
- parts := strings.Split(line, " ")
- if len(parts) < 3 {
- return fmt.Errorf("invalid iptables rule '%s'", line)
- }
- chainName := utiliptables.Chain(parts[1])
- rule := strings.TrimPrefix(line, fmt.Sprintf("-I %s ", chainName))
- _, err := f.ensureRule(utiliptables.Prepend, tableName, chainName, rule)
- if err != nil {
- return err
- }
- } else if strings.HasPrefix(line, "-X") {
- parts := strings.Split(line, " ")
- if len(parts) < 2 {
- return fmt.Errorf("invalid iptables rule '%s'", line)
- }
- if err := f.DeleteChain(tableName, utiliptables.Chain(parts[1])); err != nil {
- return err
- }
- } else if line == "COMMIT" {
- if restoreTableName == tableName {
- return nil
- }
- tableName = ""
- }
- }
- }
-
- return nil
-}
-
-func (f *fakeIPTables) Restore(tableName utiliptables.Table, data []byte, flush utiliptables.FlushFlag, counters utiliptables.RestoreCountersFlag) error {
- return f.restore(tableName, data, flush)
-}
-
-func (f *fakeIPTables) RestoreAll(data []byte, flush utiliptables.FlushFlag, counters utiliptables.RestoreCountersFlag) error {
- return f.restore("", data, flush)
-}
-
-func (f *fakeIPTables) Monitor(canary utiliptables.Chain, tables []utiliptables.Table, reloadFunc func(), interval time.Duration, stopCh <-chan struct{}) {
-}
-
-func (f *fakeIPTables) isBuiltinChain(tableName utiliptables.Table, chainName utiliptables.Chain) bool {
- if builtinChains, ok := f.builtinChains[string(tableName)]; ok && builtinChains.Has(string(chainName)) {
- return true
- }
- return false
-}
-
-func (f *fakeIPTables) HasRandomFully() bool {
- return false
-}
-
-func (f *fakeIPTables) Present() bool {
- return true
-}
diff --git a/pkg/kubelet/dockershim/network/hostport/fake_iptables_test.go b/pkg/kubelet/dockershim/network/hostport/fake_iptables_test.go
deleted file mode 100644
index 41dd7ba7390..00000000000
--- a/pkg/kubelet/dockershim/network/hostport/fake_iptables_test.go
+++ /dev/null
@@ -1,59 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package hostport
-
-import (
- "bytes"
- "testing"
-
- "github.com/stretchr/testify/assert"
- utiliptables "k8s.io/kubernetes/pkg/util/iptables"
-)
-
-func TestRestoreFlushRules(t *testing.T) {
- iptables := NewFakeIPTables()
- rules := [][]string{
- {"-A", "KUBE-HOSTPORTS", "-m comment --comment \"pod3_ns1 hostport 8443\" -m tcp -p tcp --dport 8443 -j KUBE-HP-5N7UH5JAXCVP5UJR"},
- {"-A", "POSTROUTING", "-m comment --comment \"SNAT for localhost access to hostports\" -o cbr0 -s 127.0.0.0/8 -j MASQUERADE"},
- }
- natRules := bytes.NewBuffer(nil)
- writeLine(natRules, "*nat")
- for _, rule := range rules {
- _, err := iptables.EnsureChain(utiliptables.TableNAT, utiliptables.Chain(rule[1]))
- assert.NoError(t, err)
- _, err = iptables.ensureRule(utiliptables.RulePosition(rule[0]), utiliptables.TableNAT, utiliptables.Chain(rule[1]), rule[2])
- assert.NoError(t, err)
-
- writeLine(natRules, utiliptables.MakeChainLine(utiliptables.Chain(rule[1])))
- }
- writeLine(natRules, "COMMIT")
- assert.NoError(t, iptables.Restore(utiliptables.TableNAT, natRules.Bytes(), utiliptables.NoFlushTables, utiliptables.RestoreCounters))
- natTable, ok := iptables.tables[string(utiliptables.TableNAT)]
- assert.True(t, ok)
- // check KUBE-HOSTPORTS chain, should have been cleaned up
- hostportChain, ok := natTable.chains["KUBE-HOSTPORTS"]
- assert.True(t, ok)
- assert.Equal(t, 0, len(hostportChain.rules))
-
- // check builtin chains, they should not have been cleaned up
- postroutingChain, ok := natTable.chains["POSTROUTING"]
- assert.True(t, ok, string(postroutingChain.name))
- assert.Equal(t, 1, len(postroutingChain.rules))
-}
diff --git a/pkg/kubelet/dockershim/network/hostport/hostport.go b/pkg/kubelet/dockershim/network/hostport/hostport.go
deleted file mode 100644
index cb68abdc017..00000000000
--- a/pkg/kubelet/dockershim/network/hostport/hostport.go
+++ /dev/null
@@ -1,171 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package hostport
-
-import (
- "fmt"
- "net"
- "strconv"
- "strings"
-
- "k8s.io/klog/v2"
-
- v1 "k8s.io/api/core/v1"
- utiliptables "k8s.io/kubernetes/pkg/util/iptables"
-)
-
-const (
- // the hostport chain
- kubeHostportsChain utiliptables.Chain = "KUBE-HOSTPORTS"
- // prefix for hostport chains
- kubeHostportChainPrefix string = "KUBE-HP-"
-)
-
-// PortMapping represents a network port in a container
-type PortMapping struct {
- HostPort int32
- ContainerPort int32
- Protocol v1.Protocol
- HostIP string
-}
-
-// PodPortMapping represents a pod's network state and associated container port mappings
-type PodPortMapping struct {
- Namespace string
- Name string
- PortMappings []*PortMapping
- HostNetwork bool
- IP net.IP
-}
-
-// ipFamily refers to a specific family if not empty, i.e. "4" or "6".
-type ipFamily string
-
-// Constants for valid IPFamily:
-const (
- IPv4 ipFamily = "4"
- IPv6 ipFamily = "6"
-)
-
-type hostport struct {
- ipFamily ipFamily
- ip string
- port int32
- protocol string
-}
-
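-// hostportOpener opens the socket backing a hostport and returns a handle
-// that can be closed when the mapping is removed.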
-type hostportOpener func(*hostport) (closeable, error)
-
-type closeable interface {
- Close() error
-}
-
-func openLocalPort(hp *hostport) (closeable, error) {
- // For ports on node IPs, open the actual port and hold it, even though we
- // use iptables to redirect traffic.
- // This ensures a) that it's safe to use that port and b) that (a) stays
- // true. The risk is that some process on the node (e.g. sshd or kubelet)
- // is using a port and we give that same port out to a Service. That would
- // be bad because iptables would silently claim the traffic but the process
- // would never know.
- // NOTE: We should not need to have a real listen()ing socket - bind()
- // should be enough, but I can't figure out a way to e2e test without
- // it. Tools like 'ss' and 'netstat' do not show sockets that are
- // bind()ed but not listen()ed, and at least the default debian netcat
- // has no way to avoid about 10 seconds of retries.
- var socket closeable
- // open the socket on the HostIP and HostPort specified
- address := net.JoinHostPort(hp.ip, strconv.Itoa(int(hp.port)))
- switch hp.protocol {
- case "tcp":
- network := "tcp" + string(hp.ipFamily)
- listener, err := net.Listen(network, address)
- if err != nil {
- return nil, err
- }
- socket = listener
- case "udp":
- network := "udp" + string(hp.ipFamily)
- addr, err := net.ResolveUDPAddr(network, address)
- if err != nil {
- return nil, err
- }
- conn, err := net.ListenUDP(network, addr)
- if err != nil {
- return nil, err
- }
- socket = conn
- default:
- return nil, fmt.Errorf("unknown protocol %q", hp.protocol)
- }
- klog.V(3).InfoS("Opened local port", "port", hp.String())
- return socket, nil
-}
-
-// portMappingToHostport creates hostport structure based on input portmapping
-func portMappingToHostport(portMapping *PortMapping, family ipFamily) hostport {
- return hostport{
- ipFamily: family,
- ip: portMapping.HostIP,
- port: portMapping.HostPort,
- protocol: strings.ToLower(string(portMapping.Protocol)),
- }
-}
-
-// ensureKubeHostportChains ensures the KUBE-HOSTPORTS chain is setup correctly
-func ensureKubeHostportChains(iptables utiliptables.Interface, natInterfaceName string) error {
- klog.V(4).InfoS("Ensuring kubelet hostport chains")
- // Ensure kubeHostportChain
- if _, err := iptables.EnsureChain(utiliptables.TableNAT, kubeHostportsChain); err != nil {
- return fmt.Errorf("failed to ensure that %s chain %s exists: %v", utiliptables.TableNAT, kubeHostportsChain, err)
- }
- tableChainsNeedJumpServices := []struct {
- table utiliptables.Table
- chain utiliptables.Chain
- }{
- {utiliptables.TableNAT, utiliptables.ChainOutput},
- {utiliptables.TableNAT, utiliptables.ChainPrerouting},
- }
- args := []string{
- "-m", "comment", "--comment", "kube hostport portals",
- "-m", "addrtype", "--dst-type", "LOCAL",
- "-j", string(kubeHostportsChain),
- }
- for _, tc := range tableChainsNeedJumpServices {
- // KUBE-HOSTPORTS chain needs to be appended to the system chains.
- // This ensures KUBE-SERVICES chain gets processed first.
-  // Since rules in the KUBE-HOSTPORTS chain match broader cases, this allows the more specific rules to be processed first.
- if _, err := iptables.EnsureRule(utiliptables.Append, tc.table, tc.chain, args...); err != nil {
- return fmt.Errorf("failed to ensure that %s chain %s jumps to %s: %v", tc.table, tc.chain, kubeHostportsChain, err)
- }
- }
- if natInterfaceName != "" && natInterfaceName != "lo" {
- // Need to SNAT traffic from localhost
- localhost := "127.0.0.0/8"
- if iptables.IsIPv6() {
- localhost = "::1/128"
- }
- args = []string{"-m", "comment", "--comment", "SNAT for localhost access to hostports", "-o", natInterfaceName, "-s", localhost, "-j", "MASQUERADE"}
- if _, err := iptables.EnsureRule(utiliptables.Append, utiliptables.TableNAT, utiliptables.ChainPostrouting, args...); err != nil {
- return fmt.Errorf("failed to ensure that %s chain %s jumps to MASQUERADE: %v", utiliptables.TableNAT, utiliptables.ChainPostrouting, err)
- }
- }
- return nil
-}
diff --git a/pkg/kubelet/dockershim/network/hostport/hostport_manager.go b/pkg/kubelet/dockershim/network/hostport/hostport_manager.go
deleted file mode 100644
index 3caa2007b62..00000000000
--- a/pkg/kubelet/dockershim/network/hostport/hostport_manager.go
+++ /dev/null
@@ -1,445 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package hostport
-
-import (
- "bytes"
- "crypto/sha256"
- "encoding/base32"
- "fmt"
- "net"
- "strconv"
- "strings"
- "sync"
-
- v1 "k8s.io/api/core/v1"
- utilerrors "k8s.io/apimachinery/pkg/util/errors"
- "k8s.io/klog/v2"
- iptablesproxy "k8s.io/kubernetes/pkg/proxy/iptables"
- "k8s.io/kubernetes/pkg/util/conntrack"
- utiliptables "k8s.io/kubernetes/pkg/util/iptables"
- "k8s.io/utils/exec"
- utilnet "k8s.io/utils/net"
-)
-
-// HostPortManager is an interface for adding and removing hostport for a given pod sandbox.
-type HostPortManager interface {
- // Add implements port mappings.
- // id should be a unique identifier for a pod, e.g. podSandboxID.
- // podPortMapping is the associated port mapping information for the pod.
- // natInterfaceName is the interface that localhost uses to talk to the given pod, if known.
- Add(id string, podPortMapping *PodPortMapping, natInterfaceName string) error
- // Remove cleans up matching port mappings
- // Remove must be able to clean up port mappings without pod IP
- Remove(id string, podPortMapping *PodPortMapping) error
-}
-
-type hostportManager struct {
- hostPortMap map[hostport]closeable
- execer exec.Interface
- conntrackFound bool
- iptables utiliptables.Interface
- portOpener hostportOpener
- mu sync.Mutex
-}
-
-// NewHostportManager creates a new HostPortManager
-func NewHostportManager(iptables utiliptables.Interface) HostPortManager {
- h := &hostportManager{
- hostPortMap: make(map[hostport]closeable),
- execer: exec.New(),
- iptables: iptables,
- portOpener: openLocalPort,
- }
- h.conntrackFound = conntrack.Exists(h.execer)
- if !h.conntrackFound {
- klog.InfoS("The binary conntrack is not installed, this can cause failures in network connection cleanup.")
- }
- return h
-}
-
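-// Add opens the requested host ports and rebuilds the hostport NAT rules via
-// iptables-restore: each mapping gets its own KUBE-HP-* chain, referenced from
-// KUBE-HOSTPORTS, that marks pod-originated traffic for masquerade and DNATs
-// to podIP:containerPort. UDP conntrack entries for the mapped ports are
-// cleared afterwards.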
-func (hm *hostportManager) Add(id string, podPortMapping *PodPortMapping, natInterfaceName string) (err error) {
- if podPortMapping == nil || podPortMapping.HostNetwork {
- return nil
- }
- podFullName := getPodFullName(podPortMapping)
- // IP.To16() returns nil if IP is not a valid IPv4 or IPv6 address
- if podPortMapping.IP.To16() == nil {
- return fmt.Errorf("invalid or missing IP of pod %s", podFullName)
- }
- podIP := podPortMapping.IP.String()
- isIPv6 := utilnet.IsIPv6(podPortMapping.IP)
-
- // skip if there is no hostport needed
- hostportMappings := gatherHostportMappings(podPortMapping, isIPv6)
- if len(hostportMappings) == 0 {
- return nil
- }
-
- if isIPv6 != hm.iptables.IsIPv6() {
- return fmt.Errorf("HostPortManager IP family mismatch: %v, isIPv6 - %v", podIP, isIPv6)
- }
-
- if err := ensureKubeHostportChains(hm.iptables, natInterfaceName); err != nil {
- return err
- }
-
- // Ensure atomicity for port opening and iptables operations
- hm.mu.Lock()
- defer hm.mu.Unlock()
-
- // try to open hostports
- ports, err := hm.openHostports(podPortMapping)
- if err != nil {
- return err
- }
- for hostport, socket := range ports {
- hm.hostPortMap[hostport] = socket
- }
-
- natChains := bytes.NewBuffer(nil)
- natRules := bytes.NewBuffer(nil)
- writeLine(natChains, "*nat")
-
- existingChains, existingRules, err := getExistingHostportIPTablesRules(hm.iptables)
- if err != nil {
-  // clean up opened host ports if we encounter any error
- return utilerrors.NewAggregate([]error{err, hm.closeHostports(hostportMappings)})
- }
-
- newChains := []utiliptables.Chain{}
- conntrackPortsToRemove := []int{}
- for _, pm := range hostportMappings {
- protocol := strings.ToLower(string(pm.Protocol))
- chain := getHostportChain(id, pm)
- newChains = append(newChains, chain)
- if pm.Protocol == v1.ProtocolUDP {
- conntrackPortsToRemove = append(conntrackPortsToRemove, int(pm.HostPort))
- }
-
- // Add new hostport chain
- writeLine(natChains, utiliptables.MakeChainLine(chain))
-
- // Prepend the new chain to KUBE-HOSTPORTS
- // This avoids any leaking iptables rule that takes up the same port
- writeLine(natRules, "-I", string(kubeHostportsChain),
- "-m", "comment", "--comment", fmt.Sprintf(`"%s hostport %d"`, podFullName, pm.HostPort),
- "-m", protocol, "-p", protocol, "--dport", fmt.Sprintf("%d", pm.HostPort),
- "-j", string(chain),
- )
-
- // SNAT if the traffic comes from the pod itself
- writeLine(natRules, "-A", string(chain),
- "-m", "comment", "--comment", fmt.Sprintf(`"%s hostport %d"`, podFullName, pm.HostPort),
- "-s", podIP,
- "-j", string(iptablesproxy.KubeMarkMasqChain))
-
- // DNAT to the podIP:containerPort
- hostPortBinding := net.JoinHostPort(podIP, strconv.Itoa(int(pm.ContainerPort)))
- if pm.HostIP == "" || pm.HostIP == "0.0.0.0" || pm.HostIP == "::" {
- writeLine(natRules, "-A", string(chain),
- "-m", "comment", "--comment", fmt.Sprintf(`"%s hostport %d"`, podFullName, pm.HostPort),
- "-m", protocol, "-p", protocol,
- "-j", "DNAT", fmt.Sprintf("--to-destination=%s", hostPortBinding))
- } else {
- writeLine(natRules, "-A", string(chain),
- "-m", "comment", "--comment", fmt.Sprintf(`"%s hostport %d"`, podFullName, pm.HostPort),
- "-m", protocol, "-p", protocol, "-d", pm.HostIP,
- "-j", "DNAT", fmt.Sprintf("--to-destination=%s", hostPortBinding))
- }
- }
-
- // getHostportChain should be able to provide a unique hostport chain name using a hash.
- // If there is a chain conflict or multiple Adds have been triggered for a single pod,
- // filtering should be able to avoid further problems.
- filterChains(existingChains, newChains)
- existingRules = filterRules(existingRules, newChains)
-
- for _, chain := range existingChains {
- writeLine(natChains, chain)
- }
- for _, rule := range existingRules {
- writeLine(natRules, rule)
- }
- writeLine(natRules, "COMMIT")
-
- if err = hm.syncIPTables(append(natChains.Bytes(), natRules.Bytes()...)); err != nil {
-  // clean up opened host ports if we encounter any error
- return utilerrors.NewAggregate([]error{err, hm.closeHostports(hostportMappings)})
- }
-
- // Remove conntrack entries just after adding the new iptables rules. If the conntrack entry is removed along with
- // the IP tables rule, it can be the case that the packets received by the node after iptables rule removal will
- // create a new conntrack entry without any DNAT. That will result in blackhole of the traffic even after correct
- // iptables rules have been added back.
- if hm.execer != nil && hm.conntrackFound {
- klog.InfoS("Starting to delete udp conntrack entries", "conntrackEntries", conntrackPortsToRemove, "isIPv6", isIPv6)
- for _, port := range conntrackPortsToRemove {
- err = conntrack.ClearEntriesForPort(hm.execer, port, isIPv6, v1.ProtocolUDP)
- if err != nil {
- klog.ErrorS(err, "Failed to clear udp conntrack for port", "port", port)
- }
- }
- }
- return nil
-}
-
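-// Remove rebuilds the NAT rules without the pod's KUBE-HP-* chains, deletes
-// those chains (-X), and closes the host ports that were opened for the pod.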
-func (hm *hostportManager) Remove(id string, podPortMapping *PodPortMapping) (err error) {
- if podPortMapping == nil || podPortMapping.HostNetwork {
- return nil
- }
-
- hostportMappings := gatherHostportMappings(podPortMapping, hm.iptables.IsIPv6())
- if len(hostportMappings) == 0 {
- return nil
- }
-
- // Ensure atomicity for port closing and iptables operations
- hm.mu.Lock()
- defer hm.mu.Unlock()
-
- var existingChains map[utiliptables.Chain]string
- var existingRules []string
- existingChains, existingRules, err = getExistingHostportIPTablesRules(hm.iptables)
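-// fakeSocket records whether a hostport socket has been closed so tests can
-// verify open/close bookkeeping without binding real ports.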
- if err != nil {
- return err
- }
-
- // Gather target hostport chains for removal
- chainsToRemove := []utiliptables.Chain{}
- for _, pm := range hostportMappings {
- chainsToRemove = append(chainsToRemove, getHostportChain(id, pm))
- }
-
- // remove rules that reference the target chains
- remainingRules := filterRules(existingRules, chainsToRemove)
-
- // gather target hostport chains that exist in the iptables-save result
- existingChainsToRemove := []utiliptables.Chain{}
- for _, chain := range chainsToRemove {
- if _, ok := existingChains[chain]; ok {
- existingChainsToRemove = append(existingChainsToRemove, chain)
- }
- }
-
- // exit if there is nothing to remove
- // don't forget to clean up opened pod host ports
- if len(existingChainsToRemove) == 0 {
- return hm.closeHostports(hostportMappings)
- }
-
- natChains := bytes.NewBuffer(nil)
- natRules := bytes.NewBuffer(nil)
- writeLine(natChains, "*nat")
- for _, chain := range existingChains {
- writeLine(natChains, chain)
- }
- for _, rule := range remainingRules {
- writeLine(natRules, rule)
- }
- for _, chain := range existingChainsToRemove {
- writeLine(natRules, "-X", string(chain))
- }
- writeLine(natRules, "COMMIT")
-
- if err := hm.syncIPTables(append(natChains.Bytes(), natRules.Bytes()...)); err != nil {
- return err
- }
-
- // clean up opened pod host ports
- return hm.closeHostports(hostportMappings)
-}
-
-// syncIPTables executes iptables-restore with given lines
-func (hm *hostportManager) syncIPTables(lines []byte) error {
- klog.V(3).InfoS("Restoring iptables rules", "iptableRules", lines)
- err := hm.iptables.RestoreAll(lines, utiliptables.NoFlushTables, utiliptables.RestoreCounters)
- if err != nil {
- return fmt.Errorf("failed to execute iptables-restore: %v", err)
- }
- return nil
-}
-
-// openHostports opens all given hostports using the given hostportOpener.
-// If any error is encountered, it cleans up and returns the error.
-// If all ports are opened successfully, it returns the hostport-to-socket mapping.
-func (hm *hostportManager) openHostports(podPortMapping *PodPortMapping) (map[hostport]closeable, error) {
- var retErr error
- ports := make(map[hostport]closeable)
- for _, pm := range podPortMapping.PortMappings {
- if pm.HostPort <= 0 {
- continue
- }
-
- // We do not open host ports for SCTP ports, as we agreed in the Support of SCTP KEP
- if pm.Protocol == v1.ProtocolSCTP {
- continue
- }
-
- // HostIP IP family is not handled by this port opener
- if pm.HostIP != "" && utilnet.IsIPv6String(pm.HostIP) != hm.iptables.IsIPv6() {
- continue
- }
-
- hp := portMappingToHostport(pm, hm.getIPFamily())
- socket, err := hm.portOpener(&hp)
- if err != nil {
- retErr = fmt.Errorf("cannot open hostport %d for pod %s: %v", pm.HostPort, getPodFullName(podPortMapping), err)
- break
- }
- ports[hp] = socket
- }
-
- // If any error was encountered, close all the hostports that were just opened.
- if retErr != nil {
- for hp, socket := range ports {
- if err := socket.Close(); err != nil {
- klog.ErrorS(err, "Cannot clean up hostport for the pod", "podFullName", getPodFullName(podPortMapping), "port", hp.port)
- }
- }
- return nil, retErr
- }
- return ports, nil
-}
-
-// closeHostports tries to close all the listed host ports
-func (hm *hostportManager) closeHostports(hostportMappings []*PortMapping) error {
- errList := []error{}
- for _, pm := range hostportMappings {
- hp := portMappingToHostport(pm, hm.getIPFamily())
- if socket, ok := hm.hostPortMap[hp]; ok {
- klog.V(2).InfoS("Closing host port", "port", hp.String())
- if err := socket.Close(); err != nil {
- errList = append(errList, fmt.Errorf("failed to close host port %s: %v", hp.String(), err))
- continue
- }
- delete(hm.hostPortMap, hp)
- } else {
- klog.V(5).InfoS("Host port does not have an open socket", "port", hp.String())
- }
- }
- return utilerrors.NewAggregate(errList)
-}
-
-// getIPFamily returns the hostPortManager IP family
-func (hm *hostportManager) getIPFamily() ipFamily {
- family := IPv4
- if hm.iptables.IsIPv6() {
- family = IPv6
- }
- return family
-}
-
-// getHostportChain takes the id, hostport and protocol for a pod and returns the associated iptables chain.
-// This is computed by hashing (sha256) the input, encoding it to base32, truncating the result, and
-// prepending the prefix "KUBE-HP-". We do this because iptables chain names must be <= 28 chars long,
-// and the longer they are the harder they are to read.
-// WARNING: Please do not change this function. Otherwise, HostportManager may not be able to
-// identify existing iptables chains.
-func getHostportChain(id string, pm *PortMapping) utiliptables.Chain {
- hash := sha256.Sum256([]byte(id + strconv.Itoa(int(pm.HostPort)) + string(pm.Protocol) + pm.HostIP))
- encoded := base32.StdEncoding.EncodeToString(hash[:])
- return utiliptables.Chain(kubeHostportChainPrefix + encoded[:16])
-}
-
-// gatherHostportMappings returns all the PortMappings that have a hostport for a pod.
-// It filters out PortMappings whose HostIP does not match the specified IP family.
-func gatherHostportMappings(podPortMapping *PodPortMapping, isIPv6 bool) []*PortMapping {
- mappings := []*PortMapping{}
- for _, pm := range podPortMapping.PortMappings {
- if pm.HostPort <= 0 {
- continue
- }
- if pm.HostIP != "" && utilnet.IsIPv6String(pm.HostIP) != isIPv6 {
- continue
- }
- mappings = append(mappings, pm)
- }
- return mappings
-}
-
-// getExistingHostportIPTablesRules retrieves raw data from iptables-save, parses it,
-// and returns all the hostport-related chains and rules.
-func getExistingHostportIPTablesRules(iptables utiliptables.Interface) (map[utiliptables.Chain]string, []string, error) {
- iptablesData := bytes.NewBuffer(nil)
- err := iptables.SaveInto(utiliptables.TableNAT, iptablesData)
- if err != nil { // if we failed to get any rules
- return nil, nil, fmt.Errorf("failed to execute iptables-save: %v", err)
- }
- existingNATChains := utiliptables.GetChainLines(utiliptables.TableNAT, iptablesData.Bytes())
-
- existingHostportChains := make(map[utiliptables.Chain]string)
- existingHostportRules := []string{}
-
- for chain := range existingNATChains {
- if strings.HasPrefix(string(chain), string(kubeHostportsChain)) || strings.HasPrefix(string(chain), kubeHostportChainPrefix) {
- existingHostportChains[chain] = string(existingNATChains[chain])
- }
- }
-
- for _, line := range strings.Split(iptablesData.String(), "\n") {
- if strings.HasPrefix(line, fmt.Sprintf("-A %s", kubeHostportChainPrefix)) ||
- strings.HasPrefix(line, fmt.Sprintf("-A %s", string(kubeHostportsChain))) {
- existingHostportRules = append(existingHostportRules, line)
- }
- }
- return existingHostportChains, existingHostportRules, nil
-}
-
-// filterRules filters the input rules against the given chains. Rules that do not reference any filter chain are returned.
-// The order of the input rules is important and is preserved.
-func filterRules(rules []string, filters []utiliptables.Chain) []string {
- filtered := []string{}
- for _, rule := range rules {
- skip := false
- for _, filter := range filters {
- if strings.Contains(rule, string(filter)) {
- skip = true
- break
- }
- }
- if !skip {
- filtered = append(filtered, rule)
- }
- }
- return filtered
-}
-
-// filterChains deletes all entries of filter chains from chain map
-func filterChains(chains map[utiliptables.Chain]string, filterChains []utiliptables.Chain) {
- for _, chain := range filterChains {
- delete(chains, chain)
- }
-}
-
-func getPodFullName(pod *PodPortMapping) string {
- // Use underscore as the delimiter because it is not allowed in pod name
- // (DNS subdomain format), while allowed in the container name format.
- return pod.Name + "_" + pod.Namespace
-}
-
-// Join all words with spaces, terminate with newline and write to buf.
-func writeLine(buf *bytes.Buffer, words ...string) {
- buf.WriteString(strings.Join(words, " ") + "\n")
-}
-
-func (hp *hostport) String() string {
- return fmt.Sprintf("%s:%d", hp.protocol, hp.port)
-}
diff --git a/pkg/kubelet/dockershim/network/hostport/hostport_manager_test.go b/pkg/kubelet/dockershim/network/hostport/hostport_manager_test.go
deleted file mode 100644
index 1f0046a17a9..00000000000
--- a/pkg/kubelet/dockershim/network/hostport/hostport_manager_test.go
+++ /dev/null
@@ -1,734 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package hostport
-
-import (
- "bytes"
- "strings"
- "testing"
-
- "github.com/stretchr/testify/assert"
- v1 "k8s.io/api/core/v1"
- utiliptables "k8s.io/kubernetes/pkg/util/iptables"
- "k8s.io/utils/exec"
- netutils "k8s.io/utils/net"
-)
-
-func TestOpenCloseHostports(t *testing.T) {
- openPortCases := []struct {
- podPortMapping *PodPortMapping
- expectError bool
- }{
- // no portmaps
- {
- &PodPortMapping{
- Namespace: "ns1",
- Name: "n0",
- },
- false,
- },
- // allocate ports 80/TCP, 8080/TCP and 443/TCP
- {
- &PodPortMapping{
- Namespace: "ns1",
- Name: "n1",
- PortMappings: []*PortMapping{
- {HostPort: 80, Protocol: v1.ProtocolTCP},
- {HostPort: 8080, Protocol: v1.ProtocolTCP},
- {HostPort: 443, Protocol: v1.ProtocolTCP},
- },
- },
- false,
- },
- // fail to allocate port 80/TCP, which was previously allocated
- {
- &PodPortMapping{
- Namespace: "ns1",
- Name: "n2",
- PortMappings: []*PortMapping{
- {HostPort: 80, Protocol: v1.ProtocolTCP},
- },
- },
- true,
- },
- // fail to allocate port 8080/TCP, which was previously allocated
- {
- &PodPortMapping{
- Namespace: "ns1",
- Name: "n3",
- PortMappings: []*PortMapping{
- {HostPort: 8081, Protocol: v1.ProtocolTCP},
- {HostPort: 8080, Protocol: v1.ProtocolTCP},
- },
- },
- true,
- },
- // allocate port 8081/TCP
- {
- &PodPortMapping{
- Namespace: "ns1",
- Name: "n3",
- PortMappings: []*PortMapping{
- {HostPort: 8081, Protocol: v1.ProtocolTCP},
- },
- },
- false,
- },
- // allocate port 7777/SCTP
- {
- &PodPortMapping{
- Namespace: "ns1",
- Name: "n4",
- PortMappings: []*PortMapping{
- {HostPort: 7777, Protocol: v1.ProtocolSCTP},
- },
- },
- false,
- },
- // same HostPort different HostIP
- {
- &PodPortMapping{
- Namespace: "ns1",
- Name: "n5",
- PortMappings: []*PortMapping{
- {HostPort: 8888, Protocol: v1.ProtocolUDP, HostIP: "127.0.0.1"},
- {HostPort: 8888, Protocol: v1.ProtocolUDP, HostIP: "127.0.0.2"},
- },
- },
- false,
- },
- // same HostPort different protocol
- {
- &PodPortMapping{
- Namespace: "ns1",
- Name: "n6",
- PortMappings: []*PortMapping{
- {HostPort: 9999, Protocol: v1.ProtocolTCP},
- {HostPort: 9999, Protocol: v1.ProtocolUDP},
- },
- },
- false,
- },
- }
-
- iptables := NewFakeIPTables()
- iptables.protocol = utiliptables.ProtocolIPv4
- portOpener := NewFakeSocketManager()
- manager := &hostportManager{
- hostPortMap: make(map[hostport]closeable),
- iptables: iptables,
- portOpener: portOpener.openFakeSocket,
- execer: exec.New(),
- }
-
- // open all hostports defined in the test cases
- for _, tc := range openPortCases {
- mapping, err := manager.openHostports(tc.podPortMapping)
- for hostport, socket := range mapping {
- manager.hostPortMap[hostport] = socket
- }
- if tc.expectError {
- assert.Error(t, err)
- continue
- }
- assert.NoError(t, err)
- // SCTP ports are not allocated
- countSctp := 0
- for _, pm := range tc.podPortMapping.PortMappings {
- if pm.Protocol == v1.ProtocolSCTP {
- countSctp++
- }
- }
- assert.EqualValues(t, len(mapping), len(tc.podPortMapping.PortMappings)-countSctp)
- }
-
- // We now have the following ports open: 80/TCP, 443/TCP, 8080/TCP, 8081/TCP,
- // 127.0.0.1:8888/UDP, 127.0.0.2:8888/UDP, 9999/TCP and 9999/UDP.
- assert.EqualValues(t, len(manager.hostPortMap), 8)
- closePortCases := []struct {
- portMappings []*PortMapping
- expectError bool
- }{
- {
- portMappings: nil,
- },
- {
-
- portMappings: []*PortMapping{
- {HostPort: 80, Protocol: v1.ProtocolTCP},
- {HostPort: 8080, Protocol: v1.ProtocolTCP},
- {HostPort: 443, Protocol: v1.ProtocolTCP},
- },
- },
- {
-
- portMappings: []*PortMapping{
- {HostPort: 80, Protocol: v1.ProtocolTCP},
- },
- },
- {
- portMappings: []*PortMapping{
- {HostPort: 8081, Protocol: v1.ProtocolTCP},
- {HostPort: 8080, Protocol: v1.ProtocolTCP},
- },
- },
- {
- portMappings: []*PortMapping{
- {HostPort: 8081, Protocol: v1.ProtocolTCP},
- },
- },
- {
- portMappings: []*PortMapping{
- {HostPort: 7070, Protocol: v1.ProtocolTCP},
- },
- },
- {
- portMappings: []*PortMapping{
- {HostPort: 7777, Protocol: v1.ProtocolSCTP},
- },
- },
- {
- portMappings: []*PortMapping{
- {HostPort: 8888, Protocol: v1.ProtocolUDP, HostIP: "127.0.0.1"},
- {HostPort: 8888, Protocol: v1.ProtocolUDP, HostIP: "127.0.0.2"},
- },
- },
- {
- portMappings: []*PortMapping{
- {HostPort: 9999, Protocol: v1.ProtocolTCP},
- {HostPort: 9999, Protocol: v1.ProtocolUDP},
- },
- },
- }
-
- // close all the hostports opened in previous step
- for _, tc := range closePortCases {
- err := manager.closeHostports(tc.portMappings)
- if tc.expectError {
- assert.Error(t, err)
- continue
- }
- assert.NoError(t, err)
- }
- // assert all elements in hostPortMap were cleared
- assert.Zero(t, len(manager.hostPortMap))
-}
-
-func TestHostportManager(t *testing.T) {
- iptables := NewFakeIPTables()
- iptables.protocol = utiliptables.ProtocolIPv4
- portOpener := NewFakeSocketManager()
- manager := &hostportManager{
- hostPortMap: make(map[hostport]closeable),
- iptables: iptables,
- portOpener: portOpener.openFakeSocket,
- execer: exec.New(),
- }
- testCases := []struct {
- mapping *PodPortMapping
- expectError bool
- }{
- // open HostPorts 8080/TCP, 8081/UDP and 8083/SCTP
- {
- mapping: &PodPortMapping{
- Name: "pod1",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("10.1.1.2"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8080,
- ContainerPort: 80,
- Protocol: v1.ProtocolTCP,
- },
- {
- HostPort: 8081,
- ContainerPort: 81,
- Protocol: v1.ProtocolUDP,
- },
- {
- HostPort: 8083,
- ContainerPort: 83,
- Protocol: v1.ProtocolSCTP,
- },
- },
- },
- expectError: false,
- },
- // fail to open HostPort due to conflict on 8081/UDP (SCTP host ports are never opened)
- {
- mapping: &PodPortMapping{
- Name: "pod2",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("10.1.1.3"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8082,
- ContainerPort: 80,
- Protocol: v1.ProtocolTCP,
- },
- {
- HostPort: 8081,
- ContainerPort: 81,
- Protocol: v1.ProtocolUDP,
- },
- {
- HostPort: 8083,
- ContainerPort: 83,
- Protocol: v1.ProtocolSCTP,
- },
- },
- },
- expectError: true,
- },
- // open HostPort 8443 (container port 443)
- {
- mapping: &PodPortMapping{
- Name: "pod3",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("10.1.1.4"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8443,
- ContainerPort: 443,
- Protocol: v1.ProtocolTCP,
- },
- },
- },
- expectError: false,
- },
- // fail to open HostPort 8443, which is already allocated
- {
- mapping: &PodPortMapping{
- Name: "pod3",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("192.168.12.12"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8443,
- ContainerPort: 443,
- Protocol: v1.ProtocolTCP,
- },
- },
- },
- expectError: true,
- },
- // skip HostPort with PodIP and HostIP using different families
- {
- mapping: &PodPortMapping{
- Name: "pod4",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("2001:beef::2"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8444,
- ContainerPort: 444,
- Protocol: v1.ProtocolTCP,
- HostIP: "192.168.1.1",
- },
- },
- },
- expectError: false,
- },
-
- // open same HostPort on different IPs
- {
- mapping: &PodPortMapping{
- Name: "pod5",
- Namespace: "ns5",
- IP: netutils.ParseIPSloppy("10.1.1.5"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8888,
- ContainerPort: 443,
- Protocol: v1.ProtocolTCP,
- HostIP: "127.0.0.2",
- },
- {
- HostPort: 8888,
- ContainerPort: 443,
- Protocol: v1.ProtocolTCP,
- HostIP: "127.0.0.1",
- },
- },
- },
- expectError: false,
- },
- // open same HostPort with different protocols
- {
- mapping: &PodPortMapping{
- Name: "pod6",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("10.1.1.2"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 9999,
- ContainerPort: 443,
- Protocol: v1.ProtocolTCP,
- },
- {
- HostPort: 9999,
- ContainerPort: 443,
- Protocol: v1.ProtocolUDP,
- },
- },
- },
- expectError: false,
- },
- }
-
- // Add Hostports
- for _, tc := range testCases {
- err := manager.Add("id", tc.mapping, "cbr0")
- if tc.expectError {
- assert.Error(t, err)
- continue
- }
- assert.NoError(t, err)
- }
-
- // Check port opened
- expectedPorts := []hostport{
- {IPv4, "", 8080, "tcp"},
- {IPv4, "", 8081, "udp"},
- {IPv4, "", 8443, "tcp"},
- {IPv4, "127.0.0.1", 8888, "tcp"},
- {IPv4, "127.0.0.2", 8888, "tcp"},
- {IPv4, "", 9999, "tcp"},
- {IPv4, "", 9999, "udp"},
- }
- openedPorts := make(map[hostport]bool)
- for hp, port := range portOpener.mem {
- if !port.closed {
- openedPorts[hp] = true
- }
- }
- assert.EqualValues(t, len(openedPorts), len(expectedPorts))
- for _, hp := range expectedPorts {
- _, ok := openedPorts[hp]
- assert.EqualValues(t, true, ok)
- }
-
- // Check Iptables-save result after adding hostports
- raw := bytes.NewBuffer(nil)
- err := iptables.SaveInto(utiliptables.TableNAT, raw)
- assert.NoError(t, err)
-
- lines := strings.Split(raw.String(), "\n")
- expectedLines := map[string]bool{
- `*nat`: true,
- `:KUBE-HOSTPORTS - [0:0]`: true,
- `:OUTPUT - [0:0]`: true,
- `:PREROUTING - [0:0]`: true,
- `:POSTROUTING - [0:0]`: true,
- `:KUBE-HP-IJHALPHTORMHHPPK - [0:0]`: true,
- `:KUBE-HP-63UPIDJXVRSZGSUZ - [0:0]`: true,
- `:KUBE-HP-WFBOALXEP42XEMJK - [0:0]`: true,
- `:KUBE-HP-XU6AWMMJYOZOFTFZ - [0:0]`: true,
- `:KUBE-HP-TUKTZ736U5JD5UTK - [0:0]`: true,
- `:KUBE-HP-CAAJ45HDITK7ARGM - [0:0]`: true,
- `:KUBE-HP-WFUNFVXVDLD5ZVXN - [0:0]`: true,
- `:KUBE-HP-4MFWH2F2NAOMYD6A - [0:0]`: true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod3_ns1 hostport 8443\" -m tcp -p tcp --dport 8443 -j KUBE-HP-WFBOALXEP42XEMJK": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod1_ns1 hostport 8081\" -m udp -p udp --dport 8081 -j KUBE-HP-63UPIDJXVRSZGSUZ": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod1_ns1 hostport 8080\" -m tcp -p tcp --dport 8080 -j KUBE-HP-IJHALPHTORMHHPPK": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod1_ns1 hostport 8083\" -m sctp -p sctp --dport 8083 -j KUBE-HP-XU6AWMMJYOZOFTFZ": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod5_ns5 hostport 8888\" -m tcp -p tcp --dport 8888 -j KUBE-HP-TUKTZ736U5JD5UTK": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod5_ns5 hostport 8888\" -m tcp -p tcp --dport 8888 -j KUBE-HP-CAAJ45HDITK7ARGM": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod6_ns1 hostport 9999\" -m udp -p udp --dport 9999 -j KUBE-HP-4MFWH2F2NAOMYD6A": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod6_ns1 hostport 9999\" -m tcp -p tcp --dport 9999 -j KUBE-HP-WFUNFVXVDLD5ZVXN": true,
- "-A OUTPUT -m comment --comment \"kube hostport portals\" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS": true,
- "-A PREROUTING -m comment --comment \"kube hostport portals\" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS": true,
- "-A POSTROUTING -m comment --comment \"SNAT for localhost access to hostports\" -o cbr0 -s 127.0.0.0/8 -j MASQUERADE": true,
- "-A KUBE-HP-IJHALPHTORMHHPPK -m comment --comment \"pod1_ns1 hostport 8080\" -s 10.1.1.2/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-IJHALPHTORMHHPPK -m comment --comment \"pod1_ns1 hostport 8080\" -m tcp -p tcp -j DNAT --to-destination 10.1.1.2:80": true,
- "-A KUBE-HP-63UPIDJXVRSZGSUZ -m comment --comment \"pod1_ns1 hostport 8081\" -s 10.1.1.2/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-63UPIDJXVRSZGSUZ -m comment --comment \"pod1_ns1 hostport 8081\" -m udp -p udp -j DNAT --to-destination 10.1.1.2:81": true,
- "-A KUBE-HP-XU6AWMMJYOZOFTFZ -m comment --comment \"pod1_ns1 hostport 8083\" -s 10.1.1.2/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-XU6AWMMJYOZOFTFZ -m comment --comment \"pod1_ns1 hostport 8083\" -m sctp -p sctp -j DNAT --to-destination 10.1.1.2:83": true,
- "-A KUBE-HP-WFBOALXEP42XEMJK -m comment --comment \"pod3_ns1 hostport 8443\" -s 10.1.1.4/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-WFBOALXEP42XEMJK -m comment --comment \"pod3_ns1 hostport 8443\" -m tcp -p tcp -j DNAT --to-destination 10.1.1.4:443": true,
- "-A KUBE-HP-TUKTZ736U5JD5UTK -m comment --comment \"pod5_ns5 hostport 8888\" -s 10.1.1.5/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-TUKTZ736U5JD5UTK -m comment --comment \"pod5_ns5 hostport 8888\" -m tcp -p tcp -d 127.0.0.1/32 -j DNAT --to-destination 10.1.1.5:443": true,
- "-A KUBE-HP-CAAJ45HDITK7ARGM -m comment --comment \"pod5_ns5 hostport 8888\" -s 10.1.1.5/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-CAAJ45HDITK7ARGM -m comment --comment \"pod5_ns5 hostport 8888\" -m tcp -p tcp -d 127.0.0.2/32 -j DNAT --to-destination 10.1.1.5:443": true,
- "-A KUBE-HP-WFUNFVXVDLD5ZVXN -m comment --comment \"pod6_ns1 hostport 9999\" -s 10.1.1.2/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-WFUNFVXVDLD5ZVXN -m comment --comment \"pod6_ns1 hostport 9999\" -m tcp -p tcp -j DNAT --to-destination 10.1.1.2:443": true,
- "-A KUBE-HP-4MFWH2F2NAOMYD6A -m comment --comment \"pod6_ns1 hostport 9999\" -s 10.1.1.2/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-4MFWH2F2NAOMYD6A -m comment --comment \"pod6_ns1 hostport 9999\" -m udp -p udp -j DNAT --to-destination 10.1.1.2:443": true,
- `COMMIT`: true,
- }
- for _, line := range lines {
- t.Logf("Line: %s", line)
- if len(strings.TrimSpace(line)) > 0 {
- _, ok := expectedLines[strings.TrimSpace(line)]
- assert.EqualValues(t, true, ok)
- }
- }
-
- // Remove all added hostports
- for _, tc := range testCases {
- if !tc.expectError {
- err := manager.Remove("id", tc.mapping)
- assert.NoError(t, err)
- }
- }
-
- // Check Iptables-save result after deleting hostports
- raw.Reset()
- err = iptables.SaveInto(utiliptables.TableNAT, raw)
- assert.NoError(t, err)
- lines = strings.Split(raw.String(), "\n")
- remainingChains := make(map[string]bool)
- for _, line := range lines {
- if strings.HasPrefix(line, ":") {
- remainingChains[strings.TrimSpace(line)] = true
- }
- }
- expectDeletedChains := []string{
- "KUBE-HP-4YVONL46AKYWSKS3", "KUBE-HP-7THKRFSEH4GIIXK7", "KUBE-HP-5N7UH5JAXCVP5UJR",
- "KUBE-HP-TUKTZ736U5JD5UTK", "KUBE-HP-CAAJ45HDITK7ARGM", "KUBE-HP-WFUNFVXVDLD5ZVXN", "KUBE-HP-4MFWH2F2NAOMYD6A",
- }
- for _, chain := range expectDeletedChains {
- _, ok := remainingChains[chain]
- assert.EqualValues(t, false, ok)
- }
-
- // check if all ports are closed
- for _, port := range portOpener.mem {
- assert.EqualValues(t, true, port.closed)
- }
- // all elements in hostPortMap should have been cleared
- assert.Zero(t, len(manager.hostPortMap))
-}
-
-func TestGetHostportChain(t *testing.T) {
- m := make(map[string]int)
- chain := getHostportChain("testrdma-2", &PortMapping{HostPort: 57119, Protocol: "TCP", ContainerPort: 57119})
- m[string(chain)] = 1
- chain = getHostportChain("testrdma-2", &PortMapping{HostPort: 55429, Protocol: "TCP", ContainerPort: 55429})
- m[string(chain)] = 1
- chain = getHostportChain("testrdma-2", &PortMapping{HostPort: 56833, Protocol: "TCP", ContainerPort: 56833})
- m[string(chain)] = 1
- if len(m) != 3 {
- t.Fatal(m)
- }
-}
-
-func TestHostportManagerIPv6(t *testing.T) {
- iptables := NewFakeIPTables()
- iptables.protocol = utiliptables.ProtocolIPv6
- portOpener := NewFakeSocketManager()
- manager := &hostportManager{
- hostPortMap: make(map[hostport]closeable),
- iptables: iptables,
- portOpener: portOpener.openFakeSocket,
- execer: exec.New(),
- }
- testCases := []struct {
- mapping *PodPortMapping
- expectError bool
- }{
- {
- mapping: &PodPortMapping{
- Name: "pod1",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("2001:beef::2"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8080,
- ContainerPort: 80,
- Protocol: v1.ProtocolTCP,
- },
- {
- HostPort: 8081,
- ContainerPort: 81,
- Protocol: v1.ProtocolUDP,
- },
- {
- HostPort: 8083,
- ContainerPort: 83,
- Protocol: v1.ProtocolSCTP,
- },
- },
- },
- expectError: false,
- },
- {
- mapping: &PodPortMapping{
- Name: "pod2",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("2001:beef::3"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8082,
- ContainerPort: 80,
- Protocol: v1.ProtocolTCP,
- },
- {
- HostPort: 8081,
- ContainerPort: 81,
- Protocol: v1.ProtocolUDP,
- },
- {
- HostPort: 8083,
- ContainerPort: 83,
- Protocol: v1.ProtocolSCTP,
- },
- },
- },
- expectError: true,
- },
- {
- mapping: &PodPortMapping{
- Name: "pod3",
- Namespace: "ns1",
- IP: netutils.ParseIPSloppy("2001:beef::4"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8443,
- ContainerPort: 443,
- Protocol: v1.ProtocolTCP,
- },
- },
- },
- expectError: false,
- },
- {
- mapping: &PodPortMapping{
- Name: "pod4",
- Namespace: "ns2",
- IP: netutils.ParseIPSloppy("192.168.2.2"),
- HostNetwork: false,
- PortMappings: []*PortMapping{
- {
- HostPort: 8443,
- ContainerPort: 443,
- Protocol: v1.ProtocolTCP,
- },
- },
- },
- expectError: true,
- },
- }
-
- // Add Hostports
- for _, tc := range testCases {
- err := manager.Add("id", tc.mapping, "cbr0")
- if tc.expectError {
- assert.Error(t, err)
- continue
- }
- assert.NoError(t, err)
- }
-
- // Check port opened
- expectedPorts := []hostport{{IPv6, "", 8080, "tcp"}, {IPv6, "", 8081, "udp"}, {IPv6, "", 8443, "tcp"}}
- openedPorts := make(map[hostport]bool)
- for hp, port := range portOpener.mem {
- if !port.closed {
- openedPorts[hp] = true
- }
- }
- assert.EqualValues(t, len(openedPorts), len(expectedPorts))
- for _, hp := range expectedPorts {
- _, ok := openedPorts[hp]
- assert.EqualValues(t, true, ok)
- }
-
- // Check Iptables-save result after adding hostports
- raw := bytes.NewBuffer(nil)
- err := iptables.SaveInto(utiliptables.TableNAT, raw)
- assert.NoError(t, err)
-
- lines := strings.Split(raw.String(), "\n")
- expectedLines := map[string]bool{
- `*nat`: true,
- `:KUBE-HOSTPORTS - [0:0]`: true,
- `:OUTPUT - [0:0]`: true,
- `:PREROUTING - [0:0]`: true,
- `:POSTROUTING - [0:0]`: true,
- `:KUBE-HP-IJHALPHTORMHHPPK - [0:0]`: true,
- `:KUBE-HP-63UPIDJXVRSZGSUZ - [0:0]`: true,
- `:KUBE-HP-WFBOALXEP42XEMJK - [0:0]`: true,
- `:KUBE-HP-XU6AWMMJYOZOFTFZ - [0:0]`: true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod3_ns1 hostport 8443\" -m tcp -p tcp --dport 8443 -j KUBE-HP-WFBOALXEP42XEMJK": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod1_ns1 hostport 8081\" -m udp -p udp --dport 8081 -j KUBE-HP-63UPIDJXVRSZGSUZ": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod1_ns1 hostport 8080\" -m tcp -p tcp --dport 8080 -j KUBE-HP-IJHALPHTORMHHPPK": true,
- "-A KUBE-HOSTPORTS -m comment --comment \"pod1_ns1 hostport 8083\" -m sctp -p sctp --dport 8083 -j KUBE-HP-XU6AWMMJYOZOFTFZ": true,
- "-A OUTPUT -m comment --comment \"kube hostport portals\" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS": true,
- "-A PREROUTING -m comment --comment \"kube hostport portals\" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS": true,
- "-A POSTROUTING -m comment --comment \"SNAT for localhost access to hostports\" -o cbr0 -s ::1/128 -j MASQUERADE": true,
- "-A KUBE-HP-IJHALPHTORMHHPPK -m comment --comment \"pod1_ns1 hostport 8080\" -s 2001:beef::2/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-IJHALPHTORMHHPPK -m comment --comment \"pod1_ns1 hostport 8080\" -m tcp -p tcp -j DNAT --to-destination [2001:beef::2]:80": true,
- "-A KUBE-HP-63UPIDJXVRSZGSUZ -m comment --comment \"pod1_ns1 hostport 8081\" -s 2001:beef::2/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-63UPIDJXVRSZGSUZ -m comment --comment \"pod1_ns1 hostport 8081\" -m udp -p udp -j DNAT --to-destination [2001:beef::2]:81": true,
- "-A KUBE-HP-XU6AWMMJYOZOFTFZ -m comment --comment \"pod1_ns1 hostport 8083\" -s 2001:beef::2/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-XU6AWMMJYOZOFTFZ -m comment --comment \"pod1_ns1 hostport 8083\" -m sctp -p sctp -j DNAT --to-destination [2001:beef::2]:83": true,
- "-A KUBE-HP-WFBOALXEP42XEMJK -m comment --comment \"pod3_ns1 hostport 8443\" -s 2001:beef::4/32 -j KUBE-MARK-MASQ": true,
- "-A KUBE-HP-WFBOALXEP42XEMJK -m comment --comment \"pod3_ns1 hostport 8443\" -m tcp -p tcp -j DNAT --to-destination [2001:beef::4]:443": true,
- `COMMIT`: true,
- }
- for _, line := range lines {
- if len(strings.TrimSpace(line)) > 0 {
- _, ok := expectedLines[strings.TrimSpace(line)]
- assert.EqualValues(t, true, ok)
- }
- }
-
- // Remove all added hostports
- for _, tc := range testCases {
- if !tc.expectError {
- err := manager.Remove("id", tc.mapping)
- assert.NoError(t, err)
- }
- }
-
- // Check Iptables-save result after deleting hostports
- raw.Reset()
- err = iptables.SaveInto(utiliptables.TableNAT, raw)
- assert.NoError(t, err)
- lines = strings.Split(raw.String(), "\n")
- remainingChains := make(map[string]bool)
- for _, line := range lines {
- if strings.HasPrefix(line, ":") {
- remainingChains[strings.TrimSpace(line)] = true
- }
- }
- expectDeletedChains := []string{"KUBE-HP-4YVONL46AKYWSKS3", "KUBE-HP-7THKRFSEH4GIIXK7", "KUBE-HP-5N7UH5JAXCVP5UJR"}
- for _, chain := range expectDeletedChains {
- _, ok := remainingChains[chain]
- assert.EqualValues(t, false, ok)
- }
-
- // check if all ports are closed
- for _, port := range portOpener.mem {
- assert.EqualValues(t, true, port.closed)
- }
-}
diff --git a/pkg/kubelet/dockershim/network/hostport/hostport_test.go b/pkg/kubelet/dockershim/network/hostport/hostport_test.go
deleted file mode 100644
index 575a0b2bcd9..00000000000
--- a/pkg/kubelet/dockershim/network/hostport/hostport_test.go
+++ /dev/null
@@ -1,90 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package hostport
-
-import (
- "fmt"
- "testing"
-
- "github.com/stretchr/testify/assert"
- utiliptables "k8s.io/kubernetes/pkg/util/iptables"
-)
-
-type fakeSocket struct {
- closed bool
- port int32
- protocol string
- ip string
-}
-
-func (f *fakeSocket) Close() error {
- if f.closed {
-		return fmt.Errorf("socket %d/%s already closed", f.port, f.protocol)
- }
- f.closed = true
- return nil
-}
-
-func NewFakeSocketManager() *fakeSocketManager {
- return &fakeSocketManager{mem: make(map[hostport]*fakeSocket)}
-}
-
-type fakeSocketManager struct {
- mem map[hostport]*fakeSocket
-}
-
-func (f *fakeSocketManager) openFakeSocket(hp *hostport) (closeable, error) {
- if socket, ok := f.mem[*hp]; ok && !socket.closed {
- return nil, fmt.Errorf("hostport is occupied")
- }
- fs := &fakeSocket{
- port: hp.port,
- protocol: hp.protocol,
- closed: false,
- ip: hp.ip,
- }
- f.mem[*hp] = fs
- return fs, nil
-}
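
The fake socket manager above is what the hostport manager tests rely on to simulate port conflicts. Below is a minimal sketch of that behavior, assuming the in-package `hostport` struct takes the same positional fields as the `expectedPorts` literals earlier in this diff and that the `closeable` return value exposes `Close() error`; the test name is hypothetical and does not exist in the original file:

```go
func TestFakeSocketManagerReuse(t *testing.T) {
	fm := NewFakeSocketManager()
	hp := hostport{IPv6, "", 8080, "tcp"} // same positional form as expectedPorts above

	// First open succeeds and records the socket.
	sock, err := fm.openFakeSocket(&hp)
	assert.NoError(t, err)

	// A second open on the same hostport fails while the first socket is open.
	_, err = fm.openFakeSocket(&hp)
	assert.Error(t, err)

	// Once the socket is closed, the hostport can be opened again.
	assert.NoError(t, sock.Close())
	_, err = fm.openFakeSocket(&hp)
	assert.NoError(t, err)
}
```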
-
-func TestEnsureKubeHostportChains(t *testing.T) {
- interfaceName := "cbr0"
- builtinChains := []string{"PREROUTING", "OUTPUT"}
- jumpRule := "-m comment --comment \"kube hostport portals\" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS"
- masqRule := "-m comment --comment \"SNAT for localhost access to hostports\" -o cbr0 -s 127.0.0.0/8 -j MASQUERADE"
-
- fakeIPTables := NewFakeIPTables()
- assert.NoError(t, ensureKubeHostportChains(fakeIPTables, interfaceName))
-
- _, _, err := fakeIPTables.getChain(utiliptables.TableNAT, utiliptables.Chain("KUBE-HOSTPORTS"))
- assert.NoError(t, err)
-
- _, chain, err := fakeIPTables.getChain(utiliptables.TableNAT, utiliptables.ChainPostrouting)
- assert.NoError(t, err)
- assert.EqualValues(t, len(chain.rules), 1)
- assert.Contains(t, chain.rules[0], masqRule)
-
- for _, chainName := range builtinChains {
- _, chain, err := fakeIPTables.getChain(utiliptables.TableNAT, utiliptables.Chain(chainName))
- assert.NoError(t, err)
- assert.EqualValues(t, len(chain.rules), 1)
- assert.Contains(t, chain.rules[0], jumpRule)
- }
-}
diff --git a/pkg/kubelet/dockershim/network/kubenet/kubenet.go b/pkg/kubelet/dockershim/network/kubenet/kubenet.go
deleted file mode 100644
index 159e7f75aee..00000000000
--- a/pkg/kubelet/dockershim/network/kubenet/kubenet.go
+++ /dev/null
@@ -1,24 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package kubenet
-
-const (
- KubenetPluginName = "kubenet"
-)
diff --git a/pkg/kubelet/dockershim/network/kubenet/kubenet_linux.go b/pkg/kubelet/dockershim/network/kubenet/kubenet_linux.go
deleted file mode 100644
index d069336011a..00000000000
--- a/pkg/kubelet/dockershim/network/kubenet/kubenet_linux.go
+++ /dev/null
@@ -1,930 +0,0 @@
-//go:build linux && !dockerless
-// +build linux,!dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package kubenet
-
-import (
- "context"
- "fmt"
- "io/ioutil"
- "net"
- "strings"
- "sync"
- "time"
-
- "github.com/containernetworking/cni/libcni"
- cnitypes "github.com/containernetworking/cni/pkg/types"
- cnitypes020 "github.com/containernetworking/cni/pkg/types/020"
- "github.com/vishvananda/netlink"
- "golang.org/x/sys/unix"
- utilerrors "k8s.io/apimachinery/pkg/util/errors"
- utilnet "k8s.io/apimachinery/pkg/util/net"
- utilsets "k8s.io/apimachinery/pkg/util/sets"
- utilsysctl "k8s.io/component-helpers/node/util/sysctl"
- "k8s.io/klog/v2"
- kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/hostport"
- "k8s.io/kubernetes/pkg/util/bandwidth"
- utiliptables "k8s.io/kubernetes/pkg/util/iptables"
- utilexec "k8s.io/utils/exec"
- utilebtables "k8s.io/utils/net/ebtables"
-
- netutils "k8s.io/utils/net"
-)
-
-const (
- BridgeName = "cbr0"
- DefaultCNIDir = "/opt/cni/bin"
-
- sysctlBridgeCallIPTables = "net/bridge/bridge-nf-call-iptables"
-
- // fallbackMTU is used if an MTU is not specified, and we cannot determine the MTU
- fallbackMTU = 1460
-
- // ebtables Chain to store dedup rules
- dedupChain = utilebtables.Chain("KUBE-DEDUP")
-
- zeroCIDRv6 = "::/0"
- zeroCIDRv4 = "0.0.0.0/0"
-
- NET_CONFIG_TEMPLATE = `{
- "cniVersion": "0.1.0",
- "name": "kubenet",
- "type": "bridge",
- "bridge": "%s",
- "mtu": %d,
- "addIf": "%s",
- "isGateway": true,
- "ipMasq": false,
- "hairpinMode": %t,
- "ipam": {
- "type": "host-local",
- "ranges": [%s],
- "routes": [%s]
- }
-}`
-)
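
For illustration only (not part of the deleted file), the snippet below fills `NET_CONFIG_TEMPLATE` the same way `Event()` does further down, relying on the file's existing imports; the helper name, MTU, hairpin flag, and the dual-stack range/route strings are example values shaped like the outputs of `getRangesConfig` and `getRoutesConfig`:

```go
// printExampleNetConfig renders the kubenet CNI config with example values.
// Purely illustrative; not called anywhere in the original code.
func printExampleNetConfig() {
	ranges := `
[{
"subnet": "10.0.0.0/24"
}],
[{
"subnet": "2001:4860::/32"
}]`
	routes := `{"dst": "0.0.0.0/0"},{"dst": "::/0"}`
	conf := fmt.Sprintf(NET_CONFIG_TEMPLATE, BridgeName, 1460,
		network.DefaultInterfaceName, false, ranges, routes)
	fmt.Println(conf)
}
```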
-
-// CNI plugins required by kubenet in /opt/cni/bin or user-specified directory
-var requiredCNIPlugins = [...]string{"bridge", "host-local", "loopback"}
-
-type kubenetNetworkPlugin struct {
- network.NoopNetworkPlugin
-
- host network.Host
- netConfig *libcni.NetworkConfig
- loConfig *libcni.NetworkConfig
- cniConfig libcni.CNI
- bandwidthShaper bandwidth.Shaper
- mu sync.Mutex //Mutex for protecting podIPs map, netConfig, and shaper initialization
- podIPs map[kubecontainer.ContainerID]utilsets.String
- mtu int
- execer utilexec.Interface
- nsenterPath string
- hairpinMode kubeletconfig.HairpinMode
- hostportManager hostport.HostPortManager
- hostportManagerv6 hostport.HostPortManager
- iptables utiliptables.Interface
- iptablesv6 utiliptables.Interface
- sysctl utilsysctl.Interface
- ebtables utilebtables.Interface
- // binDirs is passed by kubelet cni-bin-dir parameter.
- // kubenet will search for CNI binaries in DefaultCNIDir first, then continue to binDirs.
- binDirs []string
- nonMasqueradeCIDR string
- cacheDir string
- podCIDRs []*net.IPNet
-}
-
-func NewPlugin(networkPluginDirs []string, cacheDir string) network.NetworkPlugin {
- execer := utilexec.New()
- iptInterface := utiliptables.New(execer, utiliptables.ProtocolIPv4)
- iptInterfacev6 := utiliptables.New(execer, utiliptables.ProtocolIPv6)
- return &kubenetNetworkPlugin{
- podIPs: make(map[kubecontainer.ContainerID]utilsets.String),
- execer: utilexec.New(),
- iptables: iptInterface,
- iptablesv6: iptInterfacev6,
- sysctl: utilsysctl.New(),
- binDirs: append([]string{DefaultCNIDir}, networkPluginDirs...),
- hostportManager: hostport.NewHostportManager(iptInterface),
- hostportManagerv6: hostport.NewHostportManager(iptInterfacev6),
- nonMasqueradeCIDR: "10.0.0.0/8",
- cacheDir: cacheDir,
- podCIDRs: make([]*net.IPNet, 0),
- }
-}
-
-func (plugin *kubenetNetworkPlugin) Init(host network.Host, hairpinMode kubeletconfig.HairpinMode, nonMasqueradeCIDR string, mtu int) error {
- plugin.host = host
- plugin.hairpinMode = hairpinMode
- plugin.nonMasqueradeCIDR = nonMasqueradeCIDR
- plugin.cniConfig = &libcni.CNIConfig{Path: plugin.binDirs}
-
- if mtu == network.UseDefaultMTU {
- if link, err := findMinMTU(); err == nil {
- plugin.mtu = link.MTU
- klog.V(5).InfoS("Using the interface MTU value as bridge MTU", "interfaceName", link.Name, "mtuValue", link.MTU)
- } else {
- plugin.mtu = fallbackMTU
- klog.InfoS("Failed to find default bridge MTU, using default value", "mtuValue", fallbackMTU, "err", err)
- }
- } else {
- plugin.mtu = mtu
- }
-
-	// Since this plugin uses a Linux bridge, setting bridge-nf-call-iptables=1
-	// is necessary to ensure kube-proxy functions correctly.
-	//
-	// This will return an error on older kernel versions (< 3.18) where the
-	// module is built-in; we simply ignore the error here. A better approach
-	// would be to check the kernel version in the future.
- plugin.execer.Command("modprobe", "br-netfilter").CombinedOutput()
- err := plugin.sysctl.SetSysctl(sysctlBridgeCallIPTables, 1)
- if err != nil {
- klog.InfoS("can't set sysctl bridge-nf-call-iptables", "err", err)
- }
-
- plugin.loConfig, err = libcni.ConfFromBytes([]byte(`{
- "cniVersion": "0.1.0",
- "name": "kubenet-loopback",
- "type": "loopback"
-}`))
- if err != nil {
- return fmt.Errorf("failed to generate loopback config: %v", err)
- }
-
- plugin.nsenterPath, err = plugin.execer.LookPath("nsenter")
- if err != nil {
- return fmt.Errorf("failed to find nsenter binary: %v", err)
- }
-
- // Need to SNAT outbound traffic from cluster
- if err = plugin.ensureMasqRule(); err != nil {
- return err
- }
- return nil
-}
-
-// TODO: move this logic into the cni bridge plugin and remove it from kubenet
-func (plugin *kubenetNetworkPlugin) ensureMasqRule() error {
- if plugin.nonMasqueradeCIDR != zeroCIDRv4 && plugin.nonMasqueradeCIDR != zeroCIDRv6 {
- // switch according to target nonMasqueradeCidr ip family
- ipt := plugin.iptables
- if netutils.IsIPv6CIDRString(plugin.nonMasqueradeCIDR) {
- ipt = plugin.iptablesv6
- }
-
- if _, err := ipt.EnsureRule(utiliptables.Append, utiliptables.TableNAT, utiliptables.ChainPostrouting,
- "-m", "comment", "--comment", "kubenet: SNAT for outbound traffic from cluster",
- "-m", "addrtype", "!", "--dst-type", "LOCAL",
- "!", "-d", plugin.nonMasqueradeCIDR,
- "-j", "MASQUERADE"); err != nil {
- return fmt.Errorf("failed to ensure that %s chain %s jumps to MASQUERADE: %v", utiliptables.TableNAT, utiliptables.ChainPostrouting, err)
- }
- }
- return nil
-}
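
For reference, the POSTROUTING rule that `ensureMasqRule` appends can be written out as a single iptables invocation. The helper below is illustrative and does not exist in the original file; the 10.0.0.0/8 value is the plugin's default `nonMasqueradeCIDR`, used here only as an example:

```go
// exampleMasqRule renders the MASQUERADE rule ensured above as one command
// line, for the default nonMasqueradeCIDR. Illustrative only.
func exampleMasqRule() string {
	args := []string{
		"iptables", "-t", "nat", "-A", "POSTROUTING",
		"-m", "comment", "--comment", "kubenet: SNAT for outbound traffic from cluster",
		"-m", "addrtype", "!", "--dst-type", "LOCAL",
		"!", "-d", "10.0.0.0/8",
		"-j", "MASQUERADE",
	}
	return strings.Join(args, " ")
}
```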
-
-func findMinMTU() (*net.Interface, error) {
- intfs, err := net.Interfaces()
- if err != nil {
- return nil, err
- }
-
- mtu := 999999
- defIntfIndex := -1
- for i, intf := range intfs {
- if ((intf.Flags & net.FlagUp) != 0) && (intf.Flags&(net.FlagLoopback|net.FlagPointToPoint) == 0) {
- if intf.MTU < mtu {
- mtu = intf.MTU
- defIntfIndex = i
- }
- }
- }
-
- if mtu >= 999999 || mtu < 576 || defIntfIndex < 0 {
- return nil, fmt.Errorf("no suitable interface: %v", BridgeName)
- }
-
- return &intfs[defIntfIndex], nil
-}
-
-func (plugin *kubenetNetworkPlugin) Event(name string, details map[string]interface{}) {
- var err error
- if name != network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE {
- return
- }
-
- plugin.mu.Lock()
- defer plugin.mu.Unlock()
-
- podCIDR, ok := details[network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE_DETAIL_CIDR].(string)
- if !ok {
- klog.InfoS("The event didn't contain pod CIDR", "event", network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE)
- return
- }
-
- if plugin.netConfig != nil {
- klog.InfoS("Ignoring subsequent pod CIDR update to new cidr", "podCIDR", podCIDR)
- return
- }
-
- klog.V(4).InfoS("Kubenet: PodCIDR is set to new value", "podCIDR", podCIDR)
- podCIDRs := strings.Split(podCIDR, ",")
-
- for idx, currentPodCIDR := range podCIDRs {
- _, cidr, err := netutils.ParseCIDRSloppy(currentPodCIDR)
-		if err != nil {
- klog.InfoS("Failed to generate CNI network config with cidr at the index", "podCIDR", currentPodCIDR, "index", idx, "err", err)
- return
- }
- // create list of ips
- plugin.podCIDRs = append(plugin.podCIDRs, cidr)
- }
-
- //setup hairpinMode
- setHairpin := plugin.hairpinMode == kubeletconfig.HairpinVeth
-
- json := fmt.Sprintf(NET_CONFIG_TEMPLATE, BridgeName, plugin.mtu, network.DefaultInterfaceName, setHairpin, plugin.getRangesConfig(), plugin.getRoutesConfig())
- klog.V(4).InfoS("CNI network config set to json format", "cniNetworkConfig", json)
- plugin.netConfig, err = libcni.ConfFromBytes([]byte(json))
- if err != nil {
-		klog.InfoS("Failed to set up CNI with json format", "cniNetworkConfig", json, "err", err)
-		// just in case it was set by mistake
- plugin.netConfig = nil
- // we bail out by clearing the *entire* list
- // of addresses assigned to cbr0
- plugin.clearUnusedBridgeAddresses()
- }
-}
-
-// clear all addresses on the bridge except those managed by kubenet
-func (plugin *kubenetNetworkPlugin) clearUnusedBridgeAddresses() {
- cidrIncluded := func(list []*net.IPNet, check *net.IPNet) bool {
- for _, thisNet := range list {
- if utilnet.IPNetEqual(thisNet, check) {
- return true
- }
- }
- return false
- }
-
- bridge, err := netlink.LinkByName(BridgeName)
- if err != nil {
- return
- }
-
- addrs, err := netlink.AddrList(bridge, unix.AF_INET)
- if err != nil {
- klog.V(2).InfoS("Attempting to get address for the interface failed", "interfaceName", BridgeName, "err", err)
- return
- }
-
- for _, addr := range addrs {
- if !cidrIncluded(plugin.podCIDRs, addr.IPNet) {
- klog.V(2).InfoS("Removing old address from the interface", "interfaceName", BridgeName, "address", addr.IPNet.String())
- netlink.AddrDel(bridge, &addr)
- }
- }
-}
-
-func (plugin *kubenetNetworkPlugin) Name() string {
- return KubenetPluginName
-}
-
-func (plugin *kubenetNetworkPlugin) Capabilities() utilsets.Int {
- return utilsets.NewInt()
-}
-
-// setup sets up networking through CNI using the given ns/name and sandbox ID.
-func (plugin *kubenetNetworkPlugin) setup(namespace string, name string, id kubecontainer.ContainerID, annotations map[string]string) error {
- var ipv4, ipv6 net.IP
- var podGateways []net.IP
- var podCIDRs []net.IPNet
-
- // Disable DAD so we skip the kernel delay on bringing up new interfaces.
- if err := plugin.disableContainerDAD(id); err != nil {
- klog.V(3).InfoS("Failed to disable DAD in container", "err", err)
- }
-
- // Bring up container loopback interface
- if _, err := plugin.addContainerToNetwork(plugin.loConfig, "lo", namespace, name, id); err != nil {
- return err
- }
-
- // Hook container up with our bridge
- resT, err := plugin.addContainerToNetwork(plugin.netConfig, network.DefaultInterfaceName, namespace, name, id)
- if err != nil {
- return err
- }
- // Coerce the CNI result version
- res, err := cnitypes020.GetResult(resT)
- if err != nil {
- return fmt.Errorf("unable to understand network config: %v", err)
- }
- //TODO: v1.16 (khenidak) update NET_CONFIG_TEMPLATE to CNI version 0.3.0 or later so
- // that we get multiple IP addresses in the returned Result structure
- if res.IP4 != nil {
- ipv4 = res.IP4.IP.IP.To4()
- podGateways = append(podGateways, res.IP4.Gateway)
- podCIDRs = append(podCIDRs, net.IPNet{IP: ipv4.Mask(res.IP4.IP.Mask), Mask: res.IP4.IP.Mask})
- }
-
- if res.IP6 != nil {
- ipv6 = res.IP6.IP.IP
- podGateways = append(podGateways, res.IP6.Gateway)
- podCIDRs = append(podCIDRs, net.IPNet{IP: ipv6.Mask(res.IP6.IP.Mask), Mask: res.IP6.IP.Mask})
- }
-
- if ipv4 == nil && ipv6 == nil {
-		return fmt.Errorf("cni did not report an ipv4 or ipv6 address")
- }
- // Put the container bridge into promiscuous mode to force it to accept hairpin packets.
- // TODO: Remove this once the kernel bug (#20096) is fixed.
- if plugin.hairpinMode == kubeletconfig.PromiscuousBridge {
- link, err := netlink.LinkByName(BridgeName)
- if err != nil {
- return fmt.Errorf("failed to lookup %q: %v", BridgeName, err)
- }
- if link.Attrs().Promisc != 1 {
- // promiscuous mode is not on, then turn it on.
- err := netlink.SetPromiscOn(link)
- if err != nil {
- return fmt.Errorf("error setting promiscuous mode on %s: %v", BridgeName, err)
- }
- }
-
- // configure the ebtables rules to eliminate duplicate packets by best effort
- plugin.syncEbtablesDedupRules(link.Attrs().HardwareAddr, podCIDRs, podGateways)
- }
-
- // add the ip to tracked ips
- if ipv4 != nil {
- plugin.addPodIP(id, ipv4.String())
- }
- if ipv6 != nil {
- plugin.addPodIP(id, ipv6.String())
- }
-
- if err := plugin.addTrafficShaping(id, annotations); err != nil {
- return err
- }
-
- return plugin.addPortMapping(id, name, namespace)
-}
-
-// The first SetUpPod call creates the bridge; get a shaper for the sake of initialization
-// TODO: replace with CNI traffic shaper plugin
-func (plugin *kubenetNetworkPlugin) addTrafficShaping(id kubecontainer.ContainerID, annotations map[string]string) error {
- shaper := plugin.shaper()
- ingress, egress, err := bandwidth.ExtractPodBandwidthResources(annotations)
- if err != nil {
- return fmt.Errorf("error reading pod bandwidth annotations: %v", err)
- }
- iplist, exists := plugin.getCachedPodIPs(id)
- if !exists {
- return fmt.Errorf("pod %s does not have recorded ips", id)
- }
-
- if egress != nil || ingress != nil {
- for _, ip := range iplist {
- mask := 32
- if netutils.IsIPv6String(ip) {
- mask = 128
- }
- if err != nil {
-				return fmt.Errorf("failed to set up traffic shaping for pod ip %s", ip)
- }
-
- if err := shaper.ReconcileCIDR(fmt.Sprintf("%v/%v", ip, mask), egress, ingress); err != nil {
- return fmt.Errorf("failed to add pod to shaper: %v", err)
- }
- }
- }
- return nil
-}
-
-// TODO: replace with CNI port-forwarding plugin
-func (plugin *kubenetNetworkPlugin) addPortMapping(id kubecontainer.ContainerID, name, namespace string) error {
- portMappings, err := plugin.host.GetPodPortMappings(id.ID)
- if err != nil {
- return err
- }
-
- if len(portMappings) == 0 {
- return nil
- }
-
- iplist, exists := plugin.getCachedPodIPs(id)
- if !exists {
- return fmt.Errorf("pod %s does not have recorded ips", id)
- }
-
- for _, ip := range iplist {
- pm := &hostport.PodPortMapping{
- Namespace: namespace,
- Name: name,
- PortMappings: portMappings,
- IP: netutils.ParseIPSloppy(ip),
- HostNetwork: false,
- }
- if netutils.IsIPv6(pm.IP) {
- if err := plugin.hostportManagerv6.Add(id.ID, pm, BridgeName); err != nil {
- return err
- }
- } else {
- if err := plugin.hostportManager.Add(id.ID, pm, BridgeName); err != nil {
- return err
- }
- }
- }
-
- return nil
-}
-
-func (plugin *kubenetNetworkPlugin) SetUpPod(namespace string, name string, id kubecontainer.ContainerID, annotations, options map[string]string) error {
- start := time.Now()
-
- if err := plugin.Status(); err != nil {
- return fmt.Errorf("kubenet cannot SetUpPod: %v", err)
- }
-
- defer func() {
- klog.V(4).InfoS("SetUpPod took time", "pod", klog.KRef(namespace, name), "duration", time.Since(start))
- }()
-
- if err := plugin.setup(namespace, name, id, annotations); err != nil {
- if err := plugin.teardown(namespace, name, id); err != nil {
- // Not a hard error or warning
- klog.V(4).InfoS("Failed to clean up pod after SetUpPod failure", "pod", klog.KRef(namespace, name), "err", err)
- }
- return err
- }
-
- // Need to SNAT outbound traffic from cluster
- if err := plugin.ensureMasqRule(); err != nil {
- klog.ErrorS(err, "Failed to ensure MASQ rule")
- }
-
- return nil
-}
-
-// Tears down as much of a pod's network as it can even if errors occur. Returns
-// an aggregate error composed of all errors encountered during the teardown.
-func (plugin *kubenetNetworkPlugin) teardown(namespace string, name string, id kubecontainer.ContainerID) error {
- errList := []error{}
-
- // Loopback network deletion failure should not be fatal on teardown
- if err := plugin.delContainerFromNetwork(plugin.loConfig, "lo", namespace, name, id); err != nil {
- klog.InfoS("Failed to delete loopback network", "err", err)
- errList = append(errList, err)
-
- }
-
- // no ip dependent actions
- if err := plugin.delContainerFromNetwork(plugin.netConfig, network.DefaultInterfaceName, namespace, name, id); err != nil {
- klog.InfoS("Failed to delete the interface network", "interfaceName", network.DefaultInterfaceName, "err", err)
- errList = append(errList, err)
- }
-
- // If there are no IPs registered we can't teardown pod's IP dependencies
- iplist, exists := plugin.getCachedPodIPs(id)
- if !exists || len(iplist) == 0 {
- klog.V(5).InfoS("Container does not have IP registered. Ignoring teardown call", "containerID", id, "pod", klog.KRef(namespace, name))
- return nil
- }
-
- // get the list of port mappings
- portMappings, err := plugin.host.GetPodPortMappings(id.ID)
- if err != nil {
- errList = append(errList, err)
- }
-
- // process each pod IP
- for _, ip := range iplist {
- isV6 := netutils.IsIPv6String(ip)
- klog.V(5).InfoS("Removing pod port mappings from the IP", "IP", ip)
-		if len(portMappings) > 0 {
- if isV6 {
- if err = plugin.hostportManagerv6.Remove(id.ID, &hostport.PodPortMapping{
- Namespace: namespace,
- Name: name,
- PortMappings: portMappings,
- HostNetwork: false,
- }); err != nil {
- errList = append(errList, err)
- }
- } else {
- if err = plugin.hostportManager.Remove(id.ID, &hostport.PodPortMapping{
- Namespace: namespace,
- Name: name,
- PortMappings: portMappings,
- HostNetwork: false,
- }); err != nil {
- errList = append(errList, err)
- }
- }
- }
-
- klog.V(5).InfoS("Removing pod IP from shaper for the pod", "pod", klog.KRef(namespace, name), "IP", ip)
- // shaper uses a cidr, but we are using a single IP.
- mask := "32"
- if isV6 {
- mask = "128"
- }
-
- if err := plugin.shaper().Reset(fmt.Sprintf("%s/%s", ip, mask)); err != nil {
- // Possible bandwidth shaping wasn't enabled for this pod anyways
- klog.V(4).InfoS("Failed to remove pod IP from shaper", "IP", ip, "err", err)
- }
-
- plugin.removePodIP(id, ip)
- }
- return utilerrors.NewAggregate(errList)
-}
-
-func (plugin *kubenetNetworkPlugin) TearDownPod(namespace string, name string, id kubecontainer.ContainerID) error {
- start := time.Now()
- defer func() {
- klog.V(4).InfoS("TearDownPod took time", "pod", klog.KRef(namespace, name), "duration", time.Since(start))
- }()
-
- if plugin.netConfig == nil {
- return fmt.Errorf("kubenet needs a PodCIDR to tear down pods")
- }
-
- if err := plugin.teardown(namespace, name, id); err != nil {
- return err
- }
-
- // Need to SNAT outbound traffic from cluster
- if err := plugin.ensureMasqRule(); err != nil {
- klog.ErrorS(err, "Failed to ensure MASQ rule")
- }
- return nil
-}
-
-// TODO: Use the addToNetwork function to obtain the IP of the Pod. That will assume idempotent ADD call to the plugin.
-// Also fix the runtime's call to Status function to be done only in the case that the IP is lost, no need to do periodic calls
-func (plugin *kubenetNetworkPlugin) GetPodNetworkStatus(namespace string, name string, id kubecontainer.ContainerID) (*network.PodNetworkStatus, error) {
- // try cached version
- networkStatus := plugin.getNetworkStatus(id)
- if networkStatus != nil {
- return networkStatus, nil
- }
-
- // not a cached version, get via network ns
- netnsPath, err := plugin.host.GetNetNS(id.ID)
- if err != nil {
- return nil, fmt.Errorf("kubenet failed to retrieve network namespace path: %v", err)
- }
- if netnsPath == "" {
- return nil, fmt.Errorf("cannot find the network namespace, skipping pod network status for container %q", id)
- }
- ips, err := network.GetPodIPs(plugin.execer, plugin.nsenterPath, netnsPath, network.DefaultInterfaceName)
- if err != nil {
- return nil, err
- }
-
- // cache the ips
- for _, ip := range ips {
- plugin.addPodIP(id, ip.String())
- }
-
- // return from cached
- return plugin.getNetworkStatus(id), nil
-}
-
-// getNetworkStatus returns the cached network status for the container, or nil if none is cached
-func (plugin *kubenetNetworkPlugin) getNetworkStatus(id kubecontainer.ContainerID) *network.PodNetworkStatus {
- // Assuming the ip of pod does not change. Try to retrieve ip from kubenet map first.
- iplist, ok := plugin.getCachedPodIPs(id)
- if !ok {
- return nil
- }
-
- if len(iplist) == 0 {
- return nil
- }
-
- ips := make([]net.IP, 0, len(iplist))
- for _, ip := range iplist {
- ips = append(ips, netutils.ParseIPSloppy(ip))
- }
-
- return &network.PodNetworkStatus{
- IP: ips[0],
- IPs: ips,
- }
-}
-
-func (plugin *kubenetNetworkPlugin) Status() error {
- // Can't set up pods if we don't have a PodCIDR yet
- if plugin.netConfig == nil {
- return fmt.Errorf("kubenet does not have netConfig. This is most likely due to lack of PodCIDR")
- }
-
- if !plugin.checkRequiredCNIPlugins() {
- return fmt.Errorf("could not locate kubenet required CNI plugins %v at %q", requiredCNIPlugins, plugin.binDirs)
- }
- return nil
-}
-
-// checkRequiredCNIPlugins returns true if all CNI plugins required by kubenet can be found at /opt/cni/bin or a user-specified NetworkPluginDir.
-func (plugin *kubenetNetworkPlugin) checkRequiredCNIPlugins() bool {
- for _, dir := range plugin.binDirs {
- if plugin.checkRequiredCNIPluginsInOneDir(dir) {
- return true
- }
- }
- return false
-}
-
-// checkRequiredCNIPluginsInOneDir returns true if all required cni plugins are placed in dir
-func (plugin *kubenetNetworkPlugin) checkRequiredCNIPluginsInOneDir(dir string) bool {
- files, err := ioutil.ReadDir(dir)
- if err != nil {
- return false
- }
- for _, cniPlugin := range requiredCNIPlugins {
- found := false
- for _, file := range files {
- if strings.TrimSpace(file.Name()) == cniPlugin {
- found = true
- break
- }
- }
- if !found {
- return false
- }
- }
- return true
-}
-
-func (plugin *kubenetNetworkPlugin) buildCNIRuntimeConf(ifName string, id kubecontainer.ContainerID, needNetNs bool) (*libcni.RuntimeConf, error) {
- netnsPath, err := plugin.host.GetNetNS(id.ID)
- if needNetNs && err != nil {
- klog.ErrorS(err, "Kubenet failed to retrieve network namespace path")
- }
-
- return &libcni.RuntimeConf{
- ContainerID: id.ID,
- NetNS: netnsPath,
- IfName: ifName,
- CacheDir: plugin.cacheDir,
- }, nil
-}
-
-func (plugin *kubenetNetworkPlugin) addContainerToNetwork(config *libcni.NetworkConfig, ifName, namespace, name string, id kubecontainer.ContainerID) (cnitypes.Result, error) {
- rt, err := plugin.buildCNIRuntimeConf(ifName, id, true)
- if err != nil {
- return nil, fmt.Errorf("error building CNI config: %v", err)
- }
-
- klog.V(3).InfoS("Adding pod to network with CNI plugin and runtime", "pod", klog.KRef(namespace, name), "networkName", config.Network.Name, "networkType", config.Network.Type, "rt", rt)
-	// The default remote runtime request timeout is 4 minutes, so set the timeout slightly below 240 seconds
-	// TODO: get the timeout from the parent ctx
- cniTimeoutCtx, cancelFunc := context.WithTimeout(context.Background(), network.CNITimeoutSec*time.Second)
- defer cancelFunc()
- res, err := plugin.cniConfig.AddNetwork(cniTimeoutCtx, config, rt)
- if err != nil {
- return nil, fmt.Errorf("error adding container to network: %v", err)
- }
- return res, nil
-}
-
-func (plugin *kubenetNetworkPlugin) delContainerFromNetwork(config *libcni.NetworkConfig, ifName, namespace, name string, id kubecontainer.ContainerID) error {
- rt, err := plugin.buildCNIRuntimeConf(ifName, id, false)
- if err != nil {
- return fmt.Errorf("error building CNI config: %v", err)
- }
-
- klog.V(3).InfoS("Removing pod from network with CNI plugin and runtime", "pod", klog.KRef(namespace, name), "networkName", config.Network.Name, "networkType", config.Network.Type, "rt", rt)
-	// The default remote runtime request timeout is 4 minutes, so set the timeout slightly below 240 seconds
-	// TODO: get the timeout from the parent ctx
- cniTimeoutCtx, cancelFunc := context.WithTimeout(context.Background(), network.CNITimeoutSec*time.Second)
- defer cancelFunc()
- err = plugin.cniConfig.DelNetwork(cniTimeoutCtx, config, rt)
- // The pod may not get deleted successfully at the first time.
- // Ignore "no such file or directory" error in case the network has already been deleted in previous attempts.
- if err != nil && !strings.Contains(err.Error(), "no such file or directory") {
- return fmt.Errorf("error removing container from network: %v", err)
- }
- return nil
-}
-
-// shaper retrieves the bandwidth shaper and, if it hasn't been fetched before,
-// initializes it and ensures the bridge is appropriately configured
-func (plugin *kubenetNetworkPlugin) shaper() bandwidth.Shaper {
- plugin.mu.Lock()
- defer plugin.mu.Unlock()
- if plugin.bandwidthShaper == nil {
- plugin.bandwidthShaper = bandwidth.NewTCShaper(BridgeName)
- plugin.bandwidthShaper.ReconcileInterface()
- }
- return plugin.bandwidthShaper
-}
-
-// TODO: make this into a goroutine and reconcile the dedup rules periodically
-func (plugin *kubenetNetworkPlugin) syncEbtablesDedupRules(macAddr net.HardwareAddr, podCIDRs []net.IPNet, podGateways []net.IP) {
- if plugin.ebtables == nil {
- plugin.ebtables = utilebtables.New(plugin.execer)
- klog.V(3).InfoS("Flushing dedup chain")
- if err := plugin.ebtables.FlushChain(utilebtables.TableFilter, dedupChain); err != nil {
- klog.ErrorS(err, "Failed to flush dedup chain")
- }
- }
- _, err := plugin.ebtables.GetVersion()
- if err != nil {
- klog.InfoS("Failed to get ebtables version. Skip syncing ebtables dedup rules", "err", err)
- return
- }
-
- // ensure custom chain exists
- _, err = plugin.ebtables.EnsureChain(utilebtables.TableFilter, dedupChain)
- if err != nil {
-		klog.ErrorS(err, "Failed to ensure filter table KUBE-DEDUP chain")
- return
- }
-
-	// ensure a jump from the core table's OUTPUT chain to the custom chain
- _, err = plugin.ebtables.EnsureRule(utilebtables.Append, utilebtables.TableFilter, utilebtables.ChainOutput, "-j", string(dedupChain))
- if err != nil {
- klog.ErrorS(err, "Failed to ensure filter table OUTPUT chain jump to KUBE-DEDUP chain")
- return
- }
-
- // per gateway rule
- for idx, gw := range podGateways {
- klog.V(3).InfoS("Filtering packets with ebtables", "mac", macAddr.String(), "gateway", gw.String(), "podCIDR", podCIDRs[idx].String())
-
- bIsV6 := netutils.IsIPv6(gw)
- IPFamily := "IPv4"
- ipSrc := "--ip-src"
- if bIsV6 {
- IPFamily = "IPv6"
- ipSrc = "--ip6-src"
- }
- commonArgs := []string{"-p", IPFamily, "-s", macAddr.String(), "-o", "veth+"}
- _, err = plugin.ebtables.EnsureRule(utilebtables.Prepend, utilebtables.TableFilter, dedupChain, append(commonArgs, ipSrc, gw.String(), "-j", "ACCEPT")...)
- if err != nil {
-			klog.ErrorS(err, "Failed to ensure that packets from the cbr0 gateway are accepted", "gateway", gw.String())
- return
-
- }
- _, err = plugin.ebtables.EnsureRule(utilebtables.Append, utilebtables.TableFilter, dedupChain, append(commonArgs, ipSrc, podCIDRs[idx].String(), "-j", "DROP")...)
- if err != nil {
-			klog.ErrorS(err, "Failed to ensure that packets from the pod CIDR carrying the cbr0 MAC address are dropped", "podCIDR", podCIDRs[idx].String())
- return
- }
- }
-}
-
-// disableContainerDAD disables duplicate address detection in the container.
-// DAD has a negative effect on pod creation latency, since we have to wait
-// a second or more for the addresses to leave the "tentative" state. Since
-// we're sure there won't be an address conflict (since we manage them manually),
-// this is safe. See issue 54651.
-//
-// This sets net.ipv6.conf.default.dad_transmits to 0. It must be run *before*
-// the CNI plugins are run.
-func (plugin *kubenetNetworkPlugin) disableContainerDAD(id kubecontainer.ContainerID) error {
- key := "net/ipv6/conf/default/dad_transmits"
-
- sysctlBin, err := plugin.execer.LookPath("sysctl")
- if err != nil {
- return fmt.Errorf("could not find sysctl binary: %s", err)
- }
-
- netnsPath, err := plugin.host.GetNetNS(id.ID)
- if err != nil {
- return fmt.Errorf("failed to get netns: %v", err)
- }
- if netnsPath == "" {
- return fmt.Errorf("pod has no network namespace")
- }
-
- // If the sysctl doesn't exist, it means ipv6 is disabled; log and move on
- if _, err := plugin.sysctl.GetSysctl(key); err != nil {
- return fmt.Errorf("ipv6 not enabled: %v", err)
- }
-
- output, err := plugin.execer.Command(plugin.nsenterPath,
- fmt.Sprintf("--net=%s", netnsPath), "-F", "--",
- sysctlBin, "-w", fmt.Sprintf("%s=%s", key, "0"),
- ).CombinedOutput()
- if err != nil {
- return fmt.Errorf("failed to write sysctl: output: %s error: %s",
- output, err)
- }
- return nil
-}
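
Spelled out, `disableContainerDAD` amounts to running sysctl inside the pod's network namespace. A hedged sketch of the argument list it builds follows; the helper is hypothetical, the netns path is a made-up example, and the real code uses the resolved nsenter/sysctl binary paths:

```go
// exampleDisableDADArgs shows the shape of the command executed above.
// Purely illustrative.
func exampleDisableDADArgs(netnsPath string) []string {
	return []string{
		"nsenter", fmt.Sprintf("--net=%s", netnsPath), "-F", "--",
		"sysctl", "-w", "net/ipv6/conf/default/dad_transmits=0",
	}
}
```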
-
-// given the CIDRs assigned to the node,
-// create a bridge IPAM ranges configuration that conforms to them
-func (plugin *kubenetNetworkPlugin) getRangesConfig() string {
- createRange := func(thisNet *net.IPNet) string {
- template := `
-[{
-"subnet": "%s"
-}]`
- return fmt.Sprintf(template, thisNet.String())
- }
-
- ranges := make([]string, len(plugin.podCIDRs))
- for idx, thisCIDR := range plugin.podCIDRs {
- ranges[idx] = createRange(thisCIDR)
- }
- //[{range}], [{range}]
- // each range contains a subnet. gateway will be fetched from cni result
- return strings.Join(ranges[:], ",")
-}
-
-// given the CIDRs assigned to the node,
-// create a bridge routes configuration that conforms to them
-func (plugin *kubenetNetworkPlugin) getRoutesConfig() string {
- var (
- routes []string
- hasV4, hasV6 bool
- )
- for _, thisCIDR := range plugin.podCIDRs {
- if thisCIDR.IP.To4() != nil {
- hasV4 = true
- } else {
- hasV6 = true
- }
- }
- if hasV4 {
- routes = append(routes, fmt.Sprintf(`{"dst": "%s"}`, zeroCIDRv4))
- }
- if hasV6 {
- routes = append(routes, fmt.Sprintf(`{"dst": "%s"}`, zeroCIDRv6))
- }
- return strings.Join(routes, ",")
-}
-
-func (plugin *kubenetNetworkPlugin) addPodIP(id kubecontainer.ContainerID, ip string) {
- plugin.mu.Lock()
- defer plugin.mu.Unlock()
-
- _, exist := plugin.podIPs[id]
- if !exist {
- plugin.podIPs[id] = utilsets.NewString()
- }
-
- if !plugin.podIPs[id].Has(ip) {
- plugin.podIPs[id].Insert(ip)
- }
-}
-
-func (plugin *kubenetNetworkPlugin) removePodIP(id kubecontainer.ContainerID, ip string) {
- plugin.mu.Lock()
- defer plugin.mu.Unlock()
-
- _, exist := plugin.podIPs[id]
- if !exist {
- return // did we restart kubelet?
- }
-
- if plugin.podIPs[id].Has(ip) {
- plugin.podIPs[id].Delete(ip)
- }
-
-	// if there are no more IPs here, delete the entry
- if plugin.podIPs[id].Len() == 0 {
- delete(plugin.podIPs, id)
- }
-}
-
-// getCachedPodIPs returns a copy of the pod's IPs;
-// false is returned if the id does not exist
-func (plugin *kubenetNetworkPlugin) getCachedPodIPs(id kubecontainer.ContainerID) ([]string, bool) {
- plugin.mu.Lock()
- defer plugin.mu.Unlock()
-
- iplist, exists := plugin.podIPs[id]
- if !exists {
- return nil, false
- }
-
- return iplist.UnsortedList(), true
-}
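
A minimal sketch of the pod IP cache round-trip that setup and teardown rely on, using only the in-package helpers above; the helper name and container ID are made-up examples:

```go
// examplePodIPCache demonstrates addPodIP, getCachedPodIPs, and removePodIP.
// Illustrative only.
func examplePodIPCache() {
	plugin := &kubenetNetworkPlugin{podIPs: make(map[kubecontainer.ContainerID]utilsets.String)}
	id := kubecontainer.ContainerID{Type: "docker", ID: "example"}

	plugin.addPodIP(id, "10.0.0.2")
	plugin.addPodIP(id, "2001:db8::2")

	if ips, ok := plugin.getCachedPodIPs(id); ok {
		fmt.Println(ips) // copy of both IPs, order not guaranteed
	}

	plugin.removePodIP(id, "10.0.0.2")
	plugin.removePodIP(id, "2001:db8::2") // last IP gone, so the map entry is deleted
}
```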
diff --git a/pkg/kubelet/dockershim/network/kubenet/kubenet_linux_test.go b/pkg/kubelet/dockershim/network/kubenet/kubenet_linux_test.go
deleted file mode 100644
index e94f93726f5..00000000000
--- a/pkg/kubelet/dockershim/network/kubenet/kubenet_linux_test.go
+++ /dev/null
@@ -1,392 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2015 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package kubenet
-
-import (
- "fmt"
- "net"
- "strings"
- "testing"
-
- "github.com/containernetworking/cni/libcni"
- "github.com/containernetworking/cni/pkg/types"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/mock"
-
- utilsets "k8s.io/apimachinery/pkg/util/sets"
- sysctltest "k8s.io/component-helpers/node/util/sysctl/testing"
- kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- mockcni "k8s.io/kubernetes/pkg/kubelet/dockershim/network/cni/testing"
- nettest "k8s.io/kubernetes/pkg/kubelet/dockershim/network/testing"
- "k8s.io/kubernetes/pkg/util/bandwidth"
- ipttest "k8s.io/kubernetes/pkg/util/iptables/testing"
- "k8s.io/utils/exec"
- fakeexec "k8s.io/utils/exec/testing"
- netutils "k8s.io/utils/net"
-)
-
-// test it fulfills the NetworkPlugin interface
-var _ network.NetworkPlugin = &kubenetNetworkPlugin{}
-
-func newFakeKubenetPlugin(initMap map[kubecontainer.ContainerID]utilsets.String, execer exec.Interface, host network.Host) *kubenetNetworkPlugin {
- return &kubenetNetworkPlugin{
- podIPs: initMap,
- execer: execer,
- mtu: 1460,
- host: host,
- }
-}
-
-func TestGetPodNetworkStatus(t *testing.T) {
- podIPMap := make(map[kubecontainer.ContainerID]utilsets.String)
- podIPMap[kubecontainer.ContainerID{ID: "1"}] = utilsets.NewString("10.245.0.2")
- podIPMap[kubecontainer.ContainerID{ID: "2"}] = utilsets.NewString("10.245.0.3")
- podIPMap[kubecontainer.ContainerID{ID: "3"}] = utilsets.NewString("10.245.0.4", "2000::")
- podIPMap[kubecontainer.ContainerID{ID: "4"}] = utilsets.NewString("2000::2")
-
- testCases := []struct {
- id string
- expectError bool
- expectIP utilsets.String
- }{
- //in podCIDR map
- {
- id: "1",
- expectError: false,
- expectIP: utilsets.NewString("10.245.0.2"),
- },
- {
- id: "2",
- expectError: false,
- expectIP: utilsets.NewString("10.245.0.3"),
- },
- {
- id: "3",
- expectError: false,
- expectIP: utilsets.NewString("10.245.0.4", "2000::"),
- },
- {
- id: "4",
- expectError: false,
- expectIP: utilsets.NewString("2000::2"),
- },
-
- //not in podIP map
- {
- id: "does-not-exist-map",
- expectError: true,
- expectIP: nil,
- },
- //TODO: add test cases for retrieving ip inside container network namespace
- }
-
- fakeCmds := make([]fakeexec.FakeCommandAction, 0)
- for _, t := range testCases {
- // the fake commands return the IP from the given index, or an error
- fCmd := fakeexec.FakeCmd{
- CombinedOutputScript: []fakeexec.FakeAction{
- func() ([]byte, []byte, error) {
- ips, ok := podIPMap[kubecontainer.ContainerID{ID: t.id}]
- if !ok {
- return nil, nil, fmt.Errorf("Pod IP %q not found", t.id)
- }
- ipsList := ips.UnsortedList()
- return []byte(ipsList[0]), nil, nil
- },
- },
- }
- fakeCmds = append(fakeCmds, func(cmd string, args ...string) exec.Cmd {
- return fakeexec.InitFakeCmd(&fCmd, cmd, args...)
- })
- }
- fexec := fakeexec.FakeExec{
- CommandScript: fakeCmds,
- LookPathFunc: func(file string) (string, error) {
- return fmt.Sprintf("/fake-bin/%s", file), nil
- },
- }
-
- fhost := nettest.NewFakeHost(nil)
- fakeKubenet := newFakeKubenetPlugin(podIPMap, &fexec, fhost)
-
- for i, tc := range testCases {
- out, err := fakeKubenet.GetPodNetworkStatus("", "", kubecontainer.ContainerID{ID: tc.id})
- if tc.expectError {
- if err == nil {
- t.Errorf("Test case %d expects error but got none", i)
- }
- continue
- } else {
- if err != nil {
-				t.Errorf("Test case %d expects no error but got error: %v", i, err)
- }
- }
- seen := make(map[string]bool)
- allExpected := tc.expectIP.UnsortedList()
- for _, expectedIP := range allExpected {
- for _, outIP := range out.IPs {
- if expectedIP == outIP.String() {
- seen[expectedIP] = true
- break
- }
- }
- }
- if len(tc.expectIP) != len(seen) {
- t.Errorf("Test case %d expects ip %s but got %s", i, tc.expectIP, out.IP.String())
- }
-
- }
-}
-
-// TestTeardownCallsShaper tests that a `TearDown` call does call
-// `shaper.Reset`
-func TestTeardownCallsShaper(t *testing.T) {
- fexec := &fakeexec.FakeExec{
- CommandScript: []fakeexec.FakeCommandAction{},
- LookPathFunc: func(file string) (string, error) {
- return fmt.Sprintf("/fake-bin/%s", file), nil
- },
- }
- fhost := nettest.NewFakeHost(nil)
- fshaper := &bandwidth.FakeShaper{}
- mockcni := &mockcni.MockCNI{}
- ips := make(map[kubecontainer.ContainerID]utilsets.String)
- kubenet := newFakeKubenetPlugin(ips, fexec, fhost)
- kubenet.loConfig = &libcni.NetworkConfig{
- Network: &types.NetConf{
- Name: "loopback-fake",
- Type: "loopback",
- },
- }
- kubenet.cniConfig = mockcni
- kubenet.iptables = ipttest.NewFake()
- kubenet.bandwidthShaper = fshaper
-
- mockcni.On("DelNetwork", mock.AnythingOfType("*context.timerCtx"), mock.AnythingOfType("*libcni.NetworkConfig"), mock.AnythingOfType("*libcni.RuntimeConf")).Return(nil)
-
- details := make(map[string]interface{})
- details[network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE_DETAIL_CIDR] = "10.0.0.1/24"
- kubenet.Event(network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE, details)
-
- existingContainerID := kubecontainer.BuildContainerID("docker", "123")
- kubenet.podIPs[existingContainerID] = utilsets.NewString("10.0.0.1")
-
- if err := kubenet.TearDownPod("namespace", "name", existingContainerID); err != nil {
- t.Fatalf("Unexpected error in TearDownPod: %v", err)
- }
- assert.Equal(t, []string{"10.0.0.1/32"}, fshaper.ResetCIDRs, "shaper.Reset should have been called")
-
- mockcni.AssertExpectations(t)
-}
-
-// TestInit tests that a `Init` call with an MTU sets the MTU
-func TestInit_MTU(t *testing.T) {
- var fakeCmds []fakeexec.FakeCommandAction
- {
- // modprobe br-netfilter
- fCmd := fakeexec.FakeCmd{
- CombinedOutputScript: []fakeexec.FakeAction{
- func() ([]byte, []byte, error) {
- return make([]byte, 0), nil, nil
- },
- },
- }
- fakeCmds = append(fakeCmds, func(cmd string, args ...string) exec.Cmd {
- return fakeexec.InitFakeCmd(&fCmd, cmd, args...)
- })
- }
-
- fexec := &fakeexec.FakeExec{
- CommandScript: fakeCmds,
- LookPathFunc: func(file string) (string, error) {
- return fmt.Sprintf("/fake-bin/%s", file), nil
- },
- }
-
- fhost := nettest.NewFakeHost(nil)
- ips := make(map[kubecontainer.ContainerID]utilsets.String)
- kubenet := newFakeKubenetPlugin(ips, fexec, fhost)
- kubenet.iptables = ipttest.NewFake()
-
- sysctl := sysctltest.NewFake()
- sysctl.Settings["net/bridge/bridge-nf-call-iptables"] = 0
- kubenet.sysctl = sysctl
-
- if err := kubenet.Init(nettest.NewFakeHost(nil), kubeletconfig.HairpinNone, "10.0.0.0/8", 1234); err != nil {
- t.Fatalf("Unexpected error in Init: %v", err)
- }
- assert.Equal(t, 1234, kubenet.mtu, "kubenet.mtu should have been set")
- assert.Equal(t, 1, sysctl.Settings["net/bridge/bridge-nf-call-iptables"], "net/bridge/bridge-nf-call-iptables sysctl should have been set")
-}
-
-// TestTearDownWithoutRuntime tears down a pod without a runtime.
-// This is how kubenet is invoked from the CRI.
-func TestTearDownWithoutRuntime(t *testing.T) {
- testCases := []struct {
- podCIDR []string
- expectedPodCIDR []string
- ip string
- }{
- {
- podCIDR: []string{"10.0.0.0/24"},
- expectedPodCIDR: []string{"10.0.0.0/24"},
- ip: "10.0.0.1",
- },
- {
- podCIDR: []string{"10.0.0.1/24"},
- expectedPodCIDR: []string{"10.0.0.0/24"},
- ip: "10.0.0.1",
- },
- {
- podCIDR: []string{"2001:beef::/48"},
- expectedPodCIDR: []string{"2001:beef::/48"},
- ip: "2001:beef::1",
- },
- {
- podCIDR: []string{"2001:beef::1/48"},
- expectedPodCIDR: []string{"2001:beef::/48"},
- ip: "2001:beef::1",
- },
- }
- for _, tc := range testCases {
-
- fhost := nettest.NewFakeHost(nil)
- fhost.Legacy = false
- mockcni := &mockcni.MockCNI{}
-
- fexec := &fakeexec.FakeExec{
- CommandScript: []fakeexec.FakeCommandAction{},
- LookPathFunc: func(file string) (string, error) {
- return fmt.Sprintf("/fake-bin/%s", file), nil
- },
- }
-
- ips := make(map[kubecontainer.ContainerID]utilsets.String)
- kubenet := newFakeKubenetPlugin(ips, fexec, fhost)
- kubenet.loConfig = &libcni.NetworkConfig{
- Network: &types.NetConf{
- Name: "loopback-fake",
- Type: "loopback",
- },
- }
- kubenet.cniConfig = mockcni
- kubenet.iptables = ipttest.NewFake()
-
- details := make(map[string]interface{})
- details[network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE_DETAIL_CIDR] = strings.Join(tc.podCIDR, ",")
- kubenet.Event(network.NET_PLUGIN_EVENT_POD_CIDR_CHANGE, details)
-
- if len(kubenet.podCIDRs) != len(tc.podCIDR) {
- t.Errorf("generated podCidr: %q, expecting: %q are not of the same length", kubenet.podCIDRs, tc.podCIDR)
- continue
- }
- for idx := range tc.podCIDR {
- if kubenet.podCIDRs[idx].String() != tc.expectedPodCIDR[idx] {
- t.Errorf("generated podCidr: %q, expecting: %q", kubenet.podCIDRs[idx].String(), tc.expectedPodCIDR[idx])
- }
- }
-
- existingContainerID := kubecontainer.BuildContainerID("docker", "123")
- kubenet.podIPs[existingContainerID] = utilsets.NewString(tc.ip)
-
- mockcni.On("DelNetwork", mock.AnythingOfType("*context.timerCtx"), mock.AnythingOfType("*libcni.NetworkConfig"), mock.AnythingOfType("*libcni.RuntimeConf")).Return(nil)
-
- if err := kubenet.TearDownPod("namespace", "name", existingContainerID); err != nil {
- t.Fatalf("Unexpected error in TearDownPod: %v", err)
- }
- // Assert that the CNI DelNetwork made it through and we didn't crash
- // without a runtime.
- mockcni.AssertExpectations(t)
- }
-}
-
-func TestGetRoutesConfig(t *testing.T) {
- for _, test := range []struct {
- cidrs []string
- routes string
- }{
- {
- cidrs: []string{"10.0.0.1/24"},
- routes: `{"dst": "0.0.0.0/0"}`,
- },
- {
- cidrs: []string{"2001:4860:4860::8888/32"},
- routes: `{"dst": "::/0"}`,
- },
- {
- cidrs: []string{"2001:4860:4860::8888/32", "10.0.0.1/24"},
- routes: `{"dst": "0.0.0.0/0"},{"dst": "::/0"}`,
- },
- } {
- var cidrs []*net.IPNet
- for _, c := range test.cidrs {
- _, cidr, err := netutils.ParseCIDRSloppy(c)
- assert.NoError(t, err)
- cidrs = append(cidrs, cidr)
- }
- fakeKubenet := &kubenetNetworkPlugin{podCIDRs: cidrs}
- assert.Equal(t, test.routes, fakeKubenet.getRoutesConfig())
- }
-}
-
-func TestGetRangesConfig(t *testing.T) {
- for _, test := range []struct {
- cidrs []string
- ranges string
- }{
- {
- cidrs: []string{"10.0.0.0/24"},
- ranges: `
-[{
-"subnet": "10.0.0.0/24"
-}]`,
- },
- {
- cidrs: []string{"2001:4860::/32"},
- ranges: `
-[{
-"subnet": "2001:4860::/32"
-}]`,
- },
- {
- cidrs: []string{"10.0.0.0/24", "2001:4860::/32"},
- ranges: `
-[{
-"subnet": "10.0.0.0/24"
-}],
-[{
-"subnet": "2001:4860::/32"
-}]`,
- },
- } {
- var cidrs []*net.IPNet
- for _, c := range test.cidrs {
- _, cidr, err := netutils.ParseCIDRSloppy(c)
- assert.NoError(t, err)
- cidrs = append(cidrs, cidr)
- }
- fakeKubenet := &kubenetNetworkPlugin{podCIDRs: cidrs}
- assert.Equal(t, test.ranges, fakeKubenet.getRangesConfig())
- }
-}
-
-//TODO: add unit test for each implementation of network plugin interface
diff --git a/pkg/kubelet/dockershim/network/kubenet/kubenet_unsupported.go b/pkg/kubelet/dockershim/network/kubenet/kubenet_unsupported.go
deleted file mode 100644
index c9adf1c6d3d..00000000000
--- a/pkg/kubelet/dockershim/network/kubenet/kubenet_unsupported.go
+++ /dev/null
@@ -1,56 +0,0 @@
-//go:build !linux && !dockerless
-// +build !linux,!dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package kubenet
-
-import (
- "fmt"
-
- kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
-)
-
-type kubenetNetworkPlugin struct {
- network.NoopNetworkPlugin
-}
-
-func NewPlugin(networkPluginDirs []string, cacheDir string) network.NetworkPlugin {
- return &kubenetNetworkPlugin{}
-}
-
-func (plugin *kubenetNetworkPlugin) Init(host network.Host, hairpinMode kubeletconfig.HairpinMode, nonMasqueradeCIDR string, mtu int) error {
- return fmt.Errorf("Kubenet is not supported in this build")
-}
-
-func (plugin *kubenetNetworkPlugin) Name() string {
- return "kubenet"
-}
-
-func (plugin *kubenetNetworkPlugin) SetUpPod(namespace string, name string, id kubecontainer.ContainerID, annotations, options map[string]string) error {
- return fmt.Errorf("Kubenet is not supported in this build")
-}
-
-func (plugin *kubenetNetworkPlugin) TearDownPod(namespace string, name string, id kubecontainer.ContainerID) error {
- return fmt.Errorf("Kubenet is not supported in this build")
-}
-
-func (plugin *kubenetNetworkPlugin) GetPodNetworkStatus(namespace string, name string, id kubecontainer.ContainerID) (*network.PodNetworkStatus, error) {
- return nil, fmt.Errorf("Kubenet is not supported in this build")
-}
diff --git a/pkg/kubelet/dockershim/network/metrics/metrics.go b/pkg/kubelet/dockershim/network/metrics/metrics.go
deleted file mode 100644
index 318feabe31a..00000000000
--- a/pkg/kubelet/dockershim/network/metrics/metrics.go
+++ /dev/null
@@ -1,93 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package metrics
-
-import (
- "sync"
- "time"
-
- "k8s.io/component-base/metrics"
- "k8s.io/component-base/metrics/legacyregistry"
-)
-
-const (
- // NetworkPluginOperationsKey is the key for operation count metrics.
- NetworkPluginOperationsKey = "network_plugin_operations_total"
- // NetworkPluginOperationsLatencyKey is the key for the operation latency metrics.
- NetworkPluginOperationsLatencyKey = "network_plugin_operations_duration_seconds"
- // NetworkPluginOperationsErrorsKey is the key for the operations error metrics.
- NetworkPluginOperationsErrorsKey = "network_plugin_operations_errors_total"
-
- // Keep the "kubelet" subsystem for backward compatibility.
- kubeletSubsystem = "kubelet"
-)
-
-var (
- // NetworkPluginOperationsLatency collects operation latency numbers by operation
- // type.
- NetworkPluginOperationsLatency = metrics.NewHistogramVec(
- &metrics.HistogramOpts{
- Subsystem: kubeletSubsystem,
- Name: NetworkPluginOperationsLatencyKey,
- Help: "Latency in seconds of network plugin operations. Broken down by operation type.",
- Buckets: metrics.DefBuckets,
- StabilityLevel: metrics.ALPHA,
- },
- []string{"operation_type"},
- )
-
- // NetworkPluginOperations collects operation counts by operation type.
- NetworkPluginOperations = metrics.NewCounterVec(
- &metrics.CounterOpts{
- Subsystem: kubeletSubsystem,
- Name: NetworkPluginOperationsKey,
- Help: "Cumulative number of network plugin operations by operation type.",
- StabilityLevel: metrics.ALPHA,
- },
- []string{"operation_type"},
- )
-
- // NetworkPluginOperationsErrors collects operation errors by operation type.
- NetworkPluginOperationsErrors = metrics.NewCounterVec(
- &metrics.CounterOpts{
- Subsystem: kubeletSubsystem,
- Name: NetworkPluginOperationsErrorsKey,
- Help: "Cumulative number of network plugin operation errors by operation type.",
- StabilityLevel: metrics.ALPHA,
- },
- []string{"operation_type"},
- )
-)
-
-var registerMetrics sync.Once
-
-// Register all metrics.
-func Register() {
- registerMetrics.Do(func() {
- legacyregistry.MustRegister(NetworkPluginOperationsLatency)
- legacyregistry.MustRegister(NetworkPluginOperations)
- legacyregistry.MustRegister(NetworkPluginOperationsErrors)
- })
-}
-
-// SinceInSeconds gets the time since the specified start in seconds.
-func SinceInSeconds(start time.Time) float64 {
- return time.Since(start).Seconds()
-}
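
A minimal usage sketch, assuming the component-base CounterVec/HistogramVec API used above; the helper name and the "set_up_pod" label value are examples rather than the exact code or label the dockershim used:

```go
// recordExampleOperation registers the metrics once and records a single
// operation's count and latency. Illustrative only.
func recordExampleOperation() {
	Register()
	start := time.Now()

	// ... run the network plugin operation here ...

	NetworkPluginOperations.WithLabelValues("set_up_pod").Inc()
	NetworkPluginOperationsLatency.WithLabelValues("set_up_pod").Observe(SinceInSeconds(start))
}
```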
diff --git a/pkg/kubelet/dockershim/network/network.go b/pkg/kubelet/dockershim/network/network.go
deleted file mode 100644
index 82869c7aaf5..00000000000
--- a/pkg/kubelet/dockershim/network/network.go
+++ /dev/null
@@ -1,30 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package network
-
-// TODO: Consider making this value configurable.
-const DefaultInterfaceName = "eth0"
-
-// CNITimeoutSec is set to be slightly less than 240sec/4mins, which is the default remote runtime request timeout.
-const CNITimeoutSec = 220
-
-// UseDefaultMTU is a marker value that indicates the plugin should determine its own MTU
-// It is the zero value, so a non-initialized value will mean "UseDefault"
-const UseDefaultMTU = 0
diff --git a/pkg/kubelet/dockershim/network/plugins.go b/pkg/kubelet/dockershim/network/plugins.go
deleted file mode 100644
index a3eea046677..00000000000
--- a/pkg/kubelet/dockershim/network/plugins.go
+++ /dev/null
@@ -1,427 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-//go:generate mockgen -copyright_file=$BUILD_TAG_FILE -source=plugins.go -destination=testing/mock_network_plugin.go -package=testing NetworkPlugin
-package network
-
-import (
- "fmt"
- "net"
- "strings"
- "sync"
- "time"
-
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- utilerrors "k8s.io/apimachinery/pkg/util/errors"
- utilsets "k8s.io/apimachinery/pkg/util/sets"
- "k8s.io/apimachinery/pkg/util/validation"
- utilsysctl "k8s.io/component-helpers/node/util/sysctl"
- "k8s.io/klog/v2"
- kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/hostport"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/metrics"
- utilexec "k8s.io/utils/exec"
- netutils "k8s.io/utils/net"
-)
-
-const (
- DefaultPluginName = "kubernetes.io/no-op"
-
- // Called when the node's Pod CIDR is known when using the
- // controller manager's --allocate-node-cidrs=true option
- NET_PLUGIN_EVENT_POD_CIDR_CHANGE = "pod-cidr-change"
- NET_PLUGIN_EVENT_POD_CIDR_CHANGE_DETAIL_CIDR = "pod-cidr"
-)
-
-// NetworkPlugin is an interface to network plugins for the kubelet
-type NetworkPlugin interface {
- // Init initializes the plugin. This will be called exactly once
- // before any other methods are called.
- Init(host Host, hairpinMode kubeletconfig.HairpinMode, nonMasqueradeCIDR string, mtu int) error
-
- // Called on various events like:
- // NET_PLUGIN_EVENT_POD_CIDR_CHANGE
- Event(name string, details map[string]interface{})
-
- // Name returns the plugin's name. This will be used when searching
-	// for a plugin by name, e.g. by InitNetworkPlugin.
- Name() string
-
- // Returns a set of NET_PLUGIN_CAPABILITY_*
- Capabilities() utilsets.Int
-
- // SetUpPod is the method called after the infra container of
- // the pod has been created but before the other containers of the
- // pod are launched.
- SetUpPod(namespace string, name string, podSandboxID kubecontainer.ContainerID, annotations, options map[string]string) error
-
-	// TearDownPod is the method called before a pod's infra container is deleted
- TearDownPod(namespace string, name string, podSandboxID kubecontainer.ContainerID) error
-
- // GetPodNetworkStatus is the method called to obtain the ipv4 or ipv6 addresses of the container
- GetPodNetworkStatus(namespace string, name string, podSandboxID kubecontainer.ContainerID) (*PodNetworkStatus, error)
-
- // Status returns error if the network plugin is in error state
- Status() error
-}
-
-// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-
-// PodNetworkStatus stores the network status of a pod (currently just the primary IP address)
-// This struct represents version "v1beta1"
-type PodNetworkStatus struct {
- metav1.TypeMeta `json:",inline"`
-
- // IP is the primary ipv4/ipv6 address of the pod. Among other things it is the address that -
- // - kube expects to be reachable across the cluster
- // - service endpoints are constructed with
- // - will be reported in the PodStatus.PodIP field (will override the IP reported by docker)
- IP net.IP `json:"ip" description:"Primary IP address of the pod"`
- // IPs is the list of IPs assigned to Pod. IPs[0] == IP. The rest of the list is additional IPs
- IPs []net.IP `json:"ips" description:"list of additional ips (inclusive of IP) assigned to pod"`
-}
-
-// Host is an interface that plugins can use to access the kubelet.
-// TODO(#35457): get rid of this backchannel to the kubelet. The scope of
-// the back channel is restricted to host-ports/testing, and restricted
-// to kubenet. No other network plugin wrapper needs it. Other plugins
-// only require a way to access namespace information and port mapping
-// information, which they can do directly through the embedded interfaces.
-type Host interface {
- // NamespaceGetter is a getter for sandbox namespace information.
- NamespaceGetter
-
- // PortMappingGetter is a getter for sandbox port mapping information.
- PortMappingGetter
-}
-
-// NamespaceGetter is an interface to retrieve namespace information for a given
-// podSandboxID. Typically implemented by runtime shims that are closely coupled to
-// CNI plugin wrappers like kubenet.
-type NamespaceGetter interface {
- // GetNetNS returns network namespace information for the given containerID.
- // Runtimes should *never* return an empty namespace and nil error for
- // a container; if error is nil then the namespace string must be valid.
- GetNetNS(containerID string) (string, error)
-}
-
-// PortMappingGetter is an interface to retrieve port mapping information for a given
-// podSandboxID. Typically implemented by runtime shims that are closely coupled to
-// CNI plugin wrappers like kubenet.
-type PortMappingGetter interface {
- // GetPodPortMappings returns sandbox port mappings information.
- GetPodPortMappings(containerID string) ([]*hostport.PortMapping, error)
-}
-
-// InitNetworkPlugin inits the plugin that matches networkPluginName. Plugins must have unique names.
-func InitNetworkPlugin(plugins []NetworkPlugin, networkPluginName string, host Host, hairpinMode kubeletconfig.HairpinMode, nonMasqueradeCIDR string, mtu int) (NetworkPlugin, error) {
- if networkPluginName == "" {
- // default to the no_op plugin
- plug := &NoopNetworkPlugin{}
- plug.Sysctl = utilsysctl.New()
- if err := plug.Init(host, hairpinMode, nonMasqueradeCIDR, mtu); err != nil {
- return nil, err
- }
- return plug, nil
- }
-
- pluginMap := map[string]NetworkPlugin{}
-
- allErrs := []error{}
- for _, plugin := range plugins {
- name := plugin.Name()
- if errs := validation.IsQualifiedName(name); len(errs) != 0 {
- allErrs = append(allErrs, fmt.Errorf("network plugin has invalid name: %q: %s", name, strings.Join(errs, ";")))
- continue
- }
-
- if _, found := pluginMap[name]; found {
- allErrs = append(allErrs, fmt.Errorf("network plugin %q was registered more than once", name))
- continue
- }
- pluginMap[name] = plugin
- }
-
- chosenPlugin := pluginMap[networkPluginName]
- if chosenPlugin != nil {
- err := chosenPlugin.Init(host, hairpinMode, nonMasqueradeCIDR, mtu)
- if err != nil {
- allErrs = append(allErrs, fmt.Errorf("network plugin %q failed init: %v", networkPluginName, err))
- } else {
- klog.V(1).InfoS("Loaded network plugin", "networkPluginName", networkPluginName)
- }
- } else {
- allErrs = append(allErrs, fmt.Errorf("network plugin %q not found", networkPluginName))
- }
-
- return chosenPlugin, utilerrors.NewAggregate(allErrs)
-}
-
-type NoopNetworkPlugin struct {
- Sysctl utilsysctl.Interface
-}
-
-const sysctlBridgeCallIPTables = "net/bridge/bridge-nf-call-iptables"
-const sysctlBridgeCallIP6Tables = "net/bridge/bridge-nf-call-ip6tables"
-
-func (plugin *NoopNetworkPlugin) Init(host Host, hairpinMode kubeletconfig.HairpinMode, nonMasqueradeCIDR string, mtu int) error {
- // Set bridge-nf-call-iptables=1 to maintain compatibility with older
- // kubernetes versions to ensure the iptables-based kube proxy functions
- // correctly. Other plugins are responsible for setting this correctly
- // depending on whether or not they connect containers to Linux bridges
-	// or use some other mechanism (e.g., an SDN vswitch).
-
- // Ensure the netfilter module is loaded on kernel >= 3.18; previously
- // it was built-in.
- utilexec.New().Command("modprobe", "br-netfilter").CombinedOutput()
- if err := plugin.Sysctl.SetSysctl(sysctlBridgeCallIPTables, 1); err != nil {
- klog.InfoS("can't set sysctl bridge-nf-call-iptables", "err", err)
- }
- if val, err := plugin.Sysctl.GetSysctl(sysctlBridgeCallIP6Tables); err == nil {
- if val != 1 {
- if err = plugin.Sysctl.SetSysctl(sysctlBridgeCallIP6Tables, 1); err != nil {
- klog.InfoS("can't set sysctl bridge-nf-call-ip6tables", "err", err)
- }
- }
- }
-
- return nil
-}
-
-func (plugin *NoopNetworkPlugin) Event(name string, details map[string]interface{}) {
-}
-
-func (plugin *NoopNetworkPlugin) Name() string {
- return DefaultPluginName
-}
-
-func (plugin *NoopNetworkPlugin) Capabilities() utilsets.Int {
- return utilsets.NewInt()
-}
-
-func (plugin *NoopNetworkPlugin) SetUpPod(namespace string, name string, id kubecontainer.ContainerID, annotations, options map[string]string) error {
- return nil
-}
-
-func (plugin *NoopNetworkPlugin) TearDownPod(namespace string, name string, id kubecontainer.ContainerID) error {
- return nil
-}
-
-func (plugin *NoopNetworkPlugin) GetPodNetworkStatus(namespace string, name string, id kubecontainer.ContainerID) (*PodNetworkStatus, error) {
- return nil, nil
-}
-
-func (plugin *NoopNetworkPlugin) Status() error {
- return nil
-}
-
-func getOnePodIP(execer utilexec.Interface, nsenterPath, netnsPath, interfaceName, addrType string) (net.IP, error) {
- // Try to retrieve ip inside container network namespace
- output, err := execer.Command(nsenterPath, fmt.Sprintf("--net=%s", netnsPath), "-F", "--",
- "ip", "-o", addrType, "addr", "show", "dev", interfaceName, "scope", "global").CombinedOutput()
- if err != nil {
- return nil, fmt.Errorf("unexpected command output %s with error: %v", output, err)
- }
-
- lines := strings.Split(string(output), "\n")
- if len(lines) < 1 {
- return nil, fmt.Errorf("unexpected command output %s", output)
- }
- fields := strings.Fields(lines[0])
- if len(fields) < 4 {
- return nil, fmt.Errorf("unexpected address output %s ", lines[0])
- }
- ip, _, err := netutils.ParseCIDRSloppy(fields[3])
- if err != nil {
- return nil, fmt.Errorf("CNI failed to parse ip from output %s due to %v", output, err)
- }
-
- return ip, nil
-}
-
-// GetPodIPs gets the IPs of the pod by inspecting the network info inside the pod's network namespace.
-// TODO (khenidak): the "primary IP" does not really exist in a dual-stack world. For now
-// we default to v4 as the primary.
-func GetPodIPs(execer utilexec.Interface, nsenterPath, netnsPath, interfaceName string) ([]net.IP, error) {
- var (
- list []net.IP
- errs []error
- )
- for _, addrType := range []string{"-4", "-6"} {
- if ip, err := getOnePodIP(execer, nsenterPath, netnsPath, interfaceName, addrType); err == nil {
- list = append(list, ip)
- } else {
- errs = append(errs, err)
- }
- }
-
- if len(list) == 0 {
- return nil, utilerrors.NewAggregate(errs)
- }
- return list, nil
-
-}
-
-type NoopPortMappingGetter struct{}
-
-func (*NoopPortMappingGetter) GetPodPortMappings(containerID string) ([]*hostport.PortMapping, error) {
- return nil, nil
-}
-
-// The PluginManager wraps a kubelet network plugin and provides synchronization
-// for a given pod's network operations. Each pod's setup/teardown/status operations
-// are synchronized against each other, but network operations of other pods can
-// proceed in parallel.
-type PluginManager struct {
- // Network plugin being wrapped
- plugin NetworkPlugin
-
- // Pod list and lock
- podsLock sync.Mutex
- pods map[string]*podLock
-}
-
-func NewPluginManager(plugin NetworkPlugin) *PluginManager {
- metrics.Register()
- return &PluginManager{
- plugin: plugin,
- pods: make(map[string]*podLock),
- }
-}
-
-func (pm *PluginManager) PluginName() string {
- return pm.plugin.Name()
-}
-
-func (pm *PluginManager) Event(name string, details map[string]interface{}) {
- pm.plugin.Event(name, details)
-}
-
-func (pm *PluginManager) Status() error {
- return pm.plugin.Status()
-}
-
-type podLock struct {
- // Count of in-flight operations for this pod; when this reaches zero
- // the lock can be removed from the pod map
- refcount uint
-
- // Lock to synchronize operations for this specific pod
- mu sync.Mutex
-}
-
-// Lock network operations for a specific pod. If that pod is not yet in
-// the pod map, it will be added. The reference count for the pod will
-// be increased.
-func (pm *PluginManager) podLock(fullPodName string) *sync.Mutex {
- pm.podsLock.Lock()
- defer pm.podsLock.Unlock()
-
- lock, ok := pm.pods[fullPodName]
- if !ok {
- lock = &podLock{}
- pm.pods[fullPodName] = lock
- }
- lock.refcount++
- return &lock.mu
-}
-
-// Unlock network operations for a specific pod. The reference count for the
-// pod will be decreased. If the reference count reaches zero, the pod will be
-// removed from the pod map.
-func (pm *PluginManager) podUnlock(fullPodName string) {
- pm.podsLock.Lock()
- defer pm.podsLock.Unlock()
-
- lock, ok := pm.pods[fullPodName]
- if !ok {
- klog.InfoS("Unbalanced pod lock unref for the pod", "podFullName", fullPodName)
- return
- } else if lock.refcount == 0 {
- // This should never ever happen, but handle it anyway
- delete(pm.pods, fullPodName)
- klog.InfoS("Pod lock for the pod still in map with zero refcount", "podFullName", fullPodName)
- return
- }
- lock.refcount--
- lock.mu.Unlock()
- if lock.refcount == 0 {
- delete(pm.pods, fullPodName)
- }
-}
-
-// recordOperation records the operation count and its duration.
-func recordOperation(operation string, start time.Time) {
- metrics.NetworkPluginOperations.WithLabelValues(operation).Inc()
- metrics.NetworkPluginOperationsLatency.WithLabelValues(operation).Observe(metrics.SinceInSeconds(start))
-}
-
-// recordError records an error for the given operation type in the error metric.
-func recordError(operation string) {
- metrics.NetworkPluginOperationsErrors.WithLabelValues(operation).Inc()
-}
-
-func (pm *PluginManager) GetPodNetworkStatus(podNamespace, podName string, id kubecontainer.ContainerID) (*PodNetworkStatus, error) {
- const operation = "get_pod_network_status"
- defer recordOperation(operation, time.Now())
- fullPodName := kubecontainer.BuildPodFullName(podName, podNamespace)
- pm.podLock(fullPodName).Lock()
- defer pm.podUnlock(fullPodName)
-
- netStatus, err := pm.plugin.GetPodNetworkStatus(podNamespace, podName, id)
- if err != nil {
- recordError(operation)
- return nil, fmt.Errorf("networkPlugin %s failed on the status hook for pod %q: %v", pm.plugin.Name(), fullPodName, err)
- }
-
- return netStatus, nil
-}
-
-func (pm *PluginManager) SetUpPod(podNamespace, podName string, id kubecontainer.ContainerID, annotations, options map[string]string) error {
- const operation = "set_up_pod"
- defer recordOperation(operation, time.Now())
- fullPodName := kubecontainer.BuildPodFullName(podName, podNamespace)
- pm.podLock(fullPodName).Lock()
- defer pm.podUnlock(fullPodName)
-
- klog.V(3).InfoS("Calling network plugin to set up the pod", "pod", klog.KRef(podNamespace, podName), "networkPluginName", pm.plugin.Name())
- if err := pm.plugin.SetUpPod(podNamespace, podName, id, annotations, options); err != nil {
- recordError(operation)
- return fmt.Errorf("networkPlugin %s failed to set up pod %q network: %v", pm.plugin.Name(), fullPodName, err)
- }
-
- return nil
-}
-
-func (pm *PluginManager) TearDownPod(podNamespace, podName string, id kubecontainer.ContainerID) error {
- const operation = "tear_down_pod"
- defer recordOperation(operation, time.Now())
- fullPodName := kubecontainer.BuildPodFullName(podName, podNamespace)
- pm.podLock(fullPodName).Lock()
- defer pm.podUnlock(fullPodName)
-
- klog.V(3).InfoS("Calling network plugin to tear down the pod", "pod", klog.KRef(podNamespace, podName), "networkPluginName", pm.plugin.Name())
- if err := pm.plugin.TearDownPod(podNamespace, podName, id); err != nil {
- recordError(operation)
- return fmt.Errorf("networkPlugin %s failed to teardown pod %q network: %v", pm.plugin.Name(), fullPodName, err)
- }
-
- return nil
-}
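The PluginManager above serializes setup/status/teardown per pod via a refcounted per-pod mutex (podLock/podUnlock) while letting different pods proceed in parallel. For reference, a minimal self-contained sketch of that general pattern follows; names such as keyedMutex and entry are illustrative, not identifiers from the original code.

```go
package main

import (
	"fmt"
	"sync"
)

// keyedMutex serializes operations per key while letting different keys
// proceed in parallel, mirroring PluginManager's podLock/podUnlock scheme.
type keyedMutex struct {
	mu    sync.Mutex
	locks map[string]*entry
}

type entry struct {
	refcount uint
	mu       sync.Mutex
}

func newKeyedMutex() *keyedMutex {
	return &keyedMutex{locks: make(map[string]*entry)}
}

// lock bumps the key's refcount and acquires its mutex, creating the entry on demand.
func (k *keyedMutex) lock(key string) {
	k.mu.Lock()
	e, ok := k.locks[key]
	if !ok {
		e = &entry{}
		k.locks[key] = e
	}
	e.refcount++
	k.mu.Unlock()
	e.mu.Lock()
}

// unlock releases the key's mutex and removes the entry once it is unused.
func (k *keyedMutex) unlock(key string) {
	k.mu.Lock()
	defer k.mu.Unlock()
	e, ok := k.locks[key]
	if !ok {
		return // unbalanced unlock; nothing to release
	}
	if e.refcount == 0 {
		// should never happen; drop the stale entry rather than wrap the counter
		delete(k.locks, key)
		return
	}
	e.refcount--
	e.mu.Unlock()
	if e.refcount == 0 {
		delete(k.locks, key)
	}
}

func main() {
	km := newKeyedMutex()
	var wg sync.WaitGroup
	for _, pod := range []string{"default/pod-a", "default/pod-b"} {
		pod := pod
		wg.Add(1)
		go func() {
			defer wg.Done()
			km.lock(pod) // operations on the same pod serialize here
			defer km.unlock(pod)
			fmt.Println("networking op for", pod)
		}()
	}
	wg.Wait()
}
```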
diff --git a/pkg/kubelet/dockershim/network/plugins_test.go b/pkg/kubelet/dockershim/network/plugins_test.go
deleted file mode 100644
index 286cd6f44b7..00000000000
--- a/pkg/kubelet/dockershim/network/plugins_test.go
+++ /dev/null
@@ -1,69 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2020 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package network
-
-import (
- "strings"
- "testing"
- "time"
-
- "k8s.io/component-base/metrics/legacyregistry"
- "k8s.io/component-base/metrics/testutil"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/metrics"
-)
-
-func TestNetworkPluginManagerMetrics(t *testing.T) {
- metrics.Register()
- defer legacyregistry.Reset()
-
- operation := "test_operation"
- recordOperation(operation, time.Now())
- recordError(operation)
-
- cases := []struct {
- metricName string
- want string
- }{
- {
- metricName: "kubelet_network_plugin_operations_total",
- want: `
-# HELP kubelet_network_plugin_operations_total [ALPHA] Cumulative number of network plugin operations by operation type.
-# TYPE kubelet_network_plugin_operations_total counter
-kubelet_network_plugin_operations_total{operation_type="test_operation"} 1
-`,
- },
- {
- metricName: "kubelet_network_plugin_operations_errors_total",
- want: `
-# HELP kubelet_network_plugin_operations_errors_total [ALPHA] Cumulative number of network plugin operation errors by operation type.
-# TYPE kubelet_network_plugin_operations_errors_total counter
-kubelet_network_plugin_operations_errors_total{operation_type="test_operation"} 1
-`,
- },
- }
-
- for _, tc := range cases {
- t.Run(tc.metricName, func(t *testing.T) {
- if err := testutil.GatherAndCompare(legacyregistry.DefaultGatherer, strings.NewReader(tc.want), tc.metricName); err != nil {
- t.Fatal(err)
- }
- })
- }
-}
diff --git a/pkg/kubelet/dockershim/network/testing/fake_host.go b/pkg/kubelet/dockershim/network/testing/fake_host.go
deleted file mode 100644
index c3d415be03a..00000000000
--- a/pkg/kubelet/dockershim/network/testing/fake_host.go
+++ /dev/null
@@ -1,69 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package testing
-
-// Helpers for testing plugins: a fake host is created here that plugins
-// can use in tests.
-
-import (
- "k8s.io/api/core/v1"
- clientset "k8s.io/client-go/kubernetes"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network/hostport"
-)
-
-type fakeNetworkHost struct {
- fakeNamespaceGetter
- FakePortMappingGetter
- kubeClient clientset.Interface
- Legacy bool
-}
-
-func NewFakeHost(kubeClient clientset.Interface) *fakeNetworkHost {
- host := &fakeNetworkHost{kubeClient: kubeClient, Legacy: true}
- return host
-}
-
-func (fnh *fakeNetworkHost) GetPodByName(name, namespace string) (*v1.Pod, bool) {
- return nil, false
-}
-
-func (fnh *fakeNetworkHost) GetKubeClient() clientset.Interface {
- return nil
-}
-
-func (nh *fakeNetworkHost) SupportsLegacyFeatures() bool {
- return nh.Legacy
-}
-
-type fakeNamespaceGetter struct {
- ns string
-}
-
-func (nh *fakeNamespaceGetter) GetNetNS(containerID string) (string, error) {
- return nh.ns, nil
-}
-
-type FakePortMappingGetter struct {
- PortMaps map[string][]*hostport.PortMapping
-}
-
-func (pm *FakePortMappingGetter) GetPodPortMappings(containerID string) ([]*hostport.PortMapping, error) {
- return pm.PortMaps[containerID], nil
-}
diff --git a/pkg/kubelet/dockershim/network/testing/mock_network_plugin.go b/pkg/kubelet/dockershim/network/testing/mock_network_plugin.go
deleted file mode 100644
index 6691d19fb3f..00000000000
--- a/pkg/kubelet/dockershim/network/testing/mock_network_plugin.go
+++ /dev/null
@@ -1,297 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-// Code generated by MockGen. DO NOT EDIT.
-// Source: plugins.go
-
-// Package testing is a generated GoMock package.
-package testing
-
-import (
- gomock "github.com/golang/mock/gomock"
- sets "k8s.io/apimachinery/pkg/util/sets"
- config "k8s.io/kubernetes/pkg/kubelet/apis/config"
- container "k8s.io/kubernetes/pkg/kubelet/container"
- network "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- hostport "k8s.io/kubernetes/pkg/kubelet/dockershim/network/hostport"
- reflect "reflect"
-)
-
-// MockNetworkPlugin is a mock of NetworkPlugin interface
-type MockNetworkPlugin struct {
- ctrl *gomock.Controller
- recorder *MockNetworkPluginMockRecorder
-}
-
-// MockNetworkPluginMockRecorder is the mock recorder for MockNetworkPlugin
-type MockNetworkPluginMockRecorder struct {
- mock *MockNetworkPlugin
-}
-
-// NewMockNetworkPlugin creates a new mock instance
-func NewMockNetworkPlugin(ctrl *gomock.Controller) *MockNetworkPlugin {
- mock := &MockNetworkPlugin{ctrl: ctrl}
- mock.recorder = &MockNetworkPluginMockRecorder{mock}
- return mock
-}
-
-// EXPECT returns an object that allows the caller to indicate expected use
-func (m *MockNetworkPlugin) EXPECT() *MockNetworkPluginMockRecorder {
- return m.recorder
-}
-
-// Init mocks base method
-func (m *MockNetworkPlugin) Init(host network.Host, hairpinMode config.HairpinMode, nonMasqueradeCIDR string, mtu int) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "Init", host, hairpinMode, nonMasqueradeCIDR, mtu)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// Init indicates an expected call of Init
-func (mr *MockNetworkPluginMockRecorder) Init(host, hairpinMode, nonMasqueradeCIDR, mtu interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Init", reflect.TypeOf((*MockNetworkPlugin)(nil).Init), host, hairpinMode, nonMasqueradeCIDR, mtu)
-}
-
-// Event mocks base method
-func (m *MockNetworkPlugin) Event(name string, details map[string]interface{}) {
- m.ctrl.T.Helper()
- m.ctrl.Call(m, "Event", name, details)
-}
-
-// Event indicates an expected call of Event
-func (mr *MockNetworkPluginMockRecorder) Event(name, details interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Event", reflect.TypeOf((*MockNetworkPlugin)(nil).Event), name, details)
-}
-
-// Name mocks base method
-func (m *MockNetworkPlugin) Name() string {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "Name")
- ret0, _ := ret[0].(string)
- return ret0
-}
-
-// Name indicates an expected call of Name
-func (mr *MockNetworkPluginMockRecorder) Name() *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Name", reflect.TypeOf((*MockNetworkPlugin)(nil).Name))
-}
-
-// Capabilities mocks base method
-func (m *MockNetworkPlugin) Capabilities() sets.Int {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "Capabilities")
- ret0, _ := ret[0].(sets.Int)
- return ret0
-}
-
-// Capabilities indicates an expected call of Capabilities
-func (mr *MockNetworkPluginMockRecorder) Capabilities() *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Capabilities", reflect.TypeOf((*MockNetworkPlugin)(nil).Capabilities))
-}
-
-// SetUpPod mocks base method
-func (m *MockNetworkPlugin) SetUpPod(namespace, name string, podSandboxID container.ContainerID, annotations, options map[string]string) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "SetUpPod", namespace, name, podSandboxID, annotations, options)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// SetUpPod indicates an expected call of SetUpPod
-func (mr *MockNetworkPluginMockRecorder) SetUpPod(namespace, name, podSandboxID, annotations, options interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetUpPod", reflect.TypeOf((*MockNetworkPlugin)(nil).SetUpPod), namespace, name, podSandboxID, annotations, options)
-}
-
-// TearDownPod mocks base method
-func (m *MockNetworkPlugin) TearDownPod(namespace, name string, podSandboxID container.ContainerID) error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "TearDownPod", namespace, name, podSandboxID)
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// TearDownPod indicates an expected call of TearDownPod
-func (mr *MockNetworkPluginMockRecorder) TearDownPod(namespace, name, podSandboxID interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "TearDownPod", reflect.TypeOf((*MockNetworkPlugin)(nil).TearDownPod), namespace, name, podSandboxID)
-}
-
-// GetPodNetworkStatus mocks base method
-func (m *MockNetworkPlugin) GetPodNetworkStatus(namespace, name string, podSandboxID container.ContainerID) (*network.PodNetworkStatus, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "GetPodNetworkStatus", namespace, name, podSandboxID)
- ret0, _ := ret[0].(*network.PodNetworkStatus)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// GetPodNetworkStatus indicates an expected call of GetPodNetworkStatus
-func (mr *MockNetworkPluginMockRecorder) GetPodNetworkStatus(namespace, name, podSandboxID interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPodNetworkStatus", reflect.TypeOf((*MockNetworkPlugin)(nil).GetPodNetworkStatus), namespace, name, podSandboxID)
-}
-
-// Status mocks base method
-func (m *MockNetworkPlugin) Status() error {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "Status")
- ret0, _ := ret[0].(error)
- return ret0
-}
-
-// Status indicates an expected call of Status
-func (mr *MockNetworkPluginMockRecorder) Status() *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Status", reflect.TypeOf((*MockNetworkPlugin)(nil).Status))
-}
-
-// MockHost is a mock of Host interface
-type MockHost struct {
- ctrl *gomock.Controller
- recorder *MockHostMockRecorder
-}
-
-// MockHostMockRecorder is the mock recorder for MockHost
-type MockHostMockRecorder struct {
- mock *MockHost
-}
-
-// NewMockHost creates a new mock instance
-func NewMockHost(ctrl *gomock.Controller) *MockHost {
- mock := &MockHost{ctrl: ctrl}
- mock.recorder = &MockHostMockRecorder{mock}
- return mock
-}
-
-// EXPECT returns an object that allows the caller to indicate expected use
-func (m *MockHost) EXPECT() *MockHostMockRecorder {
- return m.recorder
-}
-
-// GetNetNS mocks base method
-func (m *MockHost) GetNetNS(containerID string) (string, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "GetNetNS", containerID)
- ret0, _ := ret[0].(string)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// GetNetNS indicates an expected call of GetNetNS
-func (mr *MockHostMockRecorder) GetNetNS(containerID interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetNetNS", reflect.TypeOf((*MockHost)(nil).GetNetNS), containerID)
-}
-
-// GetPodPortMappings mocks base method
-func (m *MockHost) GetPodPortMappings(containerID string) ([]*hostport.PortMapping, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "GetPodPortMappings", containerID)
- ret0, _ := ret[0].([]*hostport.PortMapping)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// GetPodPortMappings indicates an expected call of GetPodPortMappings
-func (mr *MockHostMockRecorder) GetPodPortMappings(containerID interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPodPortMappings", reflect.TypeOf((*MockHost)(nil).GetPodPortMappings), containerID)
-}
-
-// MockNamespaceGetter is a mock of NamespaceGetter interface
-type MockNamespaceGetter struct {
- ctrl *gomock.Controller
- recorder *MockNamespaceGetterMockRecorder
-}
-
-// MockNamespaceGetterMockRecorder is the mock recorder for MockNamespaceGetter
-type MockNamespaceGetterMockRecorder struct {
- mock *MockNamespaceGetter
-}
-
-// NewMockNamespaceGetter creates a new mock instance
-func NewMockNamespaceGetter(ctrl *gomock.Controller) *MockNamespaceGetter {
- mock := &MockNamespaceGetter{ctrl: ctrl}
- mock.recorder = &MockNamespaceGetterMockRecorder{mock}
- return mock
-}
-
-// EXPECT returns an object that allows the caller to indicate expected use
-func (m *MockNamespaceGetter) EXPECT() *MockNamespaceGetterMockRecorder {
- return m.recorder
-}
-
-// GetNetNS mocks base method
-func (m *MockNamespaceGetter) GetNetNS(containerID string) (string, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "GetNetNS", containerID)
- ret0, _ := ret[0].(string)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// GetNetNS indicates an expected call of GetNetNS
-func (mr *MockNamespaceGetterMockRecorder) GetNetNS(containerID interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetNetNS", reflect.TypeOf((*MockNamespaceGetter)(nil).GetNetNS), containerID)
-}
-
-// MockPortMappingGetter is a mock of PortMappingGetter interface
-type MockPortMappingGetter struct {
- ctrl *gomock.Controller
- recorder *MockPortMappingGetterMockRecorder
-}
-
-// MockPortMappingGetterMockRecorder is the mock recorder for MockPortMappingGetter
-type MockPortMappingGetterMockRecorder struct {
- mock *MockPortMappingGetter
-}
-
-// NewMockPortMappingGetter creates a new mock instance
-func NewMockPortMappingGetter(ctrl *gomock.Controller) *MockPortMappingGetter {
- mock := &MockPortMappingGetter{ctrl: ctrl}
- mock.recorder = &MockPortMappingGetterMockRecorder{mock}
- return mock
-}
-
-// EXPECT returns an object that allows the caller to indicate expected use
-func (m *MockPortMappingGetter) EXPECT() *MockPortMappingGetterMockRecorder {
- return m.recorder
-}
-
-// GetPodPortMappings mocks base method
-func (m *MockPortMappingGetter) GetPodPortMappings(containerID string) ([]*hostport.PortMapping, error) {
- m.ctrl.T.Helper()
- ret := m.ctrl.Call(m, "GetPodPortMappings", containerID)
- ret0, _ := ret[0].([]*hostport.PortMapping)
- ret1, _ := ret[1].(error)
- return ret0, ret1
-}
-
-// GetPodPortMappings indicates an expected call of GetPodPortMappings
-func (mr *MockPortMappingGetterMockRecorder) GetPodPortMappings(containerID interface{}) *gomock.Call {
- mr.mock.ctrl.T.Helper()
- return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPodPortMappings", reflect.TypeOf((*MockPortMappingGetter)(nil).GetPodPortMappings), containerID)
-}
diff --git a/pkg/kubelet/dockershim/network/testing/plugins_test.go b/pkg/kubelet/dockershim/network/testing/plugins_test.go
deleted file mode 100644
index 3a6cc61d77a..00000000000
--- a/pkg/kubelet/dockershim/network/testing/plugins_test.go
+++ /dev/null
@@ -1,252 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package testing
-
-import (
- "fmt"
- "sync"
- "testing"
-
- utilsets "k8s.io/apimachinery/pkg/util/sets"
- sysctltest "k8s.io/component-helpers/node/util/sysctl/testing"
- kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
- kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
- "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
- netutils "k8s.io/utils/net"
-
- "github.com/golang/mock/gomock"
- "github.com/stretchr/testify/assert"
-)
-
-func TestSelectDefaultPlugin(t *testing.T) {
- all_plugins := []network.NetworkPlugin{}
- plug, err := network.InitNetworkPlugin(all_plugins, "", NewFakeHost(nil), kubeletconfig.HairpinNone, "10.0.0.0/8", network.UseDefaultMTU)
- if err != nil {
- t.Fatalf("Unexpected error in selecting default plugin: %v", err)
- }
- if plug == nil {
- t.Fatalf("Failed to select the default plugin.")
- }
- if plug.Name() != network.DefaultPluginName {
- t.Errorf("Failed to select the default plugin. Expected %s. Got %s", network.DefaultPluginName, plug.Name())
- }
-}
-
-func TestInit(t *testing.T) {
- tests := []struct {
- setting string
- expectedLen int
- }{
- {
- setting: "net/bridge/bridge-nf-call-iptables",
- expectedLen: 1,
- },
- {
- setting: "net/bridge/bridge-nf-call-ip6tables",
- expectedLen: 2,
- },
- }
- for _, tt := range tests {
- sysctl := sysctltest.NewFake()
- sysctl.Settings[tt.setting] = 0
- plug := &network.NoopNetworkPlugin{}
- plug.Sysctl = sysctl
- plug.Init(NewFakeHost(nil), kubeletconfig.HairpinNone, "10.0.0.0/8", network.UseDefaultMTU)
- // Verify the sysctl specified is set
- assert.Equal(t, 1, sysctl.Settings[tt.setting], tt.setting+" sysctl should have been set")
- // Verify iptables is always set
- assert.Equal(t, 1, sysctl.Settings["net/bridge/bridge-nf-call-iptables"], "net/bridge/bridge-nf-call-iptables sysctl should have been set")
- // Verify ip6tables is only set if it existed
- assert.Len(t, sysctl.Settings, tt.expectedLen, "length wrong for "+tt.setting)
- }
-}
-
-func TestPluginManager(t *testing.T) {
- ctrl := gomock.NewController(t)
- fnp := NewMockNetworkPlugin(ctrl)
- defer ctrl.Finish()
- pm := network.NewPluginManager(fnp)
-
- fnp.EXPECT().Name().Return("someNetworkPlugin").AnyTimes()
-
- allCreatedWg := sync.WaitGroup{}
- allCreatedWg.Add(1)
- allDoneWg := sync.WaitGroup{}
-
- // 10 pods, 4 setup/status/teardown runs each. Ensure that network locking
- // works and the pod map isn't concurrently accessed
- for i := 0; i < 10; i++ {
- podName := fmt.Sprintf("pod%d", i)
- containerID := kubecontainer.ContainerID{ID: podName}
-
- fnp.EXPECT().SetUpPod("", podName, containerID, nil, nil).Return(nil).Times(4)
- fnp.EXPECT().GetPodNetworkStatus("", podName, containerID).Return(&network.PodNetworkStatus{IP: netutils.ParseIPSloppy("1.2.3.4")}, nil).Times(4)
- fnp.EXPECT().TearDownPod("", podName, containerID).Return(nil).Times(4)
-
- for x := 0; x < 4; x++ {
- allDoneWg.Add(1)
- go func(name string, id kubecontainer.ContainerID, num int) {
- defer allDoneWg.Done()
-
- // Block all goroutines from running until all have
- // been created and are ready. This ensures we
- // have more pod network operations running
- // concurrently.
- allCreatedWg.Wait()
-
- if err := pm.SetUpPod("", name, id, nil, nil); err != nil {
- t.Errorf("Failed to set up pod %q: %v", name, err)
- return
- }
-
- if _, err := pm.GetPodNetworkStatus("", name, id); err != nil {
- t.Errorf("Failed to inspect pod %q: %v", name, err)
- return
- }
-
- if err := pm.TearDownPod("", name, id); err != nil {
- t.Errorf("Failed to tear down pod %q: %v", name, err)
- return
- }
- }(podName, containerID, x)
- }
- }
-	// All goroutines have been created; release them so they run concurrently.
- allCreatedWg.Done()
-
- // Wait for them all to finish
- allDoneWg.Wait()
-}
-
-type hookableFakeNetworkPluginSetupHook func(namespace, name string, id kubecontainer.ContainerID)
-
-type hookableFakeNetworkPlugin struct {
- setupHook hookableFakeNetworkPluginSetupHook
-}
-
-func newHookableFakeNetworkPlugin(setupHook hookableFakeNetworkPluginSetupHook) *hookableFakeNetworkPlugin {
- return &hookableFakeNetworkPlugin{
- setupHook: setupHook,
- }
-}
-
-func (p *hookableFakeNetworkPlugin) Init(host network.Host, hairpinMode kubeletconfig.HairpinMode, nonMasqueradeCIDR string, mtu int) error {
- return nil
-}
-
-func (p *hookableFakeNetworkPlugin) Event(name string, details map[string]interface{}) {
-}
-
-func (p *hookableFakeNetworkPlugin) Name() string {
- return "fakeplugin"
-}
-
-func (p *hookableFakeNetworkPlugin) Capabilities() utilsets.Int {
- return utilsets.NewInt()
-}
-
-func (p *hookableFakeNetworkPlugin) SetUpPod(namespace string, name string, id kubecontainer.ContainerID, annotations, options map[string]string) error {
- if p.setupHook != nil {
- p.setupHook(namespace, name, id)
- }
- return nil
-}
-
-func (p *hookableFakeNetworkPlugin) TearDownPod(string, string, kubecontainer.ContainerID) error {
- return nil
-}
-
-func (p *hookableFakeNetworkPlugin) GetPodNetworkStatus(string, string, kubecontainer.ContainerID) (*network.PodNetworkStatus, error) {
- return &network.PodNetworkStatus{IP: netutils.ParseIPSloppy("10.1.2.3")}, nil
-}
-
-func (p *hookableFakeNetworkPlugin) Status() error {
- return nil
-}
-
-// Ensure that one pod's network operations don't block another's. If the
-// test is successful (e.g., the first pod doesn't block on the second), the
-// test will complete. If unsuccessful, it will hang and get killed.
-func TestMultiPodParallelNetworkOps(t *testing.T) {
- podWg := sync.WaitGroup{}
- podWg.Add(1)
-
- // Can't do this with MockNetworkPlugin because the gomock controller
- // has its own locks which don't allow the parallel network operation
- // to proceed.
- didWait := false
- fakePlugin := newHookableFakeNetworkPlugin(func(podNamespace, podName string, id kubecontainer.ContainerID) {
- if podName == "waiter" {
- podWg.Wait()
- didWait = true
- }
- })
- pm := network.NewPluginManager(fakePlugin)
-
- opsWg := sync.WaitGroup{}
-
- // Start the pod that will wait for the other to complete
- opsWg.Add(1)
- go func() {
- defer opsWg.Done()
-
- podName := "waiter"
- containerID := kubecontainer.ContainerID{ID: podName}
-
-		// Setup will block on the runner pod completing. If network
-		// operations locking isn't correct (e.g., one pod's network operations
-		// block other pods), SetUpPod() will never return.
- if err := pm.SetUpPod("", podName, containerID, nil, nil); err != nil {
- t.Errorf("Failed to set up waiter pod: %v", err)
- return
- }
-
- if err := pm.TearDownPod("", podName, containerID); err != nil {
- t.Errorf("Failed to tear down waiter pod: %v", err)
- return
- }
- }()
-
- opsWg.Add(1)
- go func() {
- defer opsWg.Done()
- // Let other pod proceed
- defer podWg.Done()
-
- podName := "runner"
- containerID := kubecontainer.ContainerID{ID: podName}
-
- if err := pm.SetUpPod("", podName, containerID, nil, nil); err != nil {
- t.Errorf("Failed to set up runner pod: %v", err)
- return
- }
-
- if err := pm.TearDownPod("", podName, containerID); err != nil {
- t.Errorf("Failed to tear down runner pod: %v", err)
- return
- }
- }()
-
- opsWg.Wait()
-
- if !didWait {
- t.Errorf("waiter pod didn't wait for runner pod!")
- }
-}
diff --git a/pkg/kubelet/dockershim/remote/docker_server.go b/pkg/kubelet/dockershim/remote/docker_server.go
deleted file mode 100644
index 433f53890e0..00000000000
--- a/pkg/kubelet/dockershim/remote/docker_server.go
+++ /dev/null
@@ -1,82 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package remote
-
-import (
- "fmt"
- "os"
-
- "google.golang.org/grpc"
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- "k8s.io/klog/v2"
- "k8s.io/kubernetes/pkg/kubelet/dockershim"
- "k8s.io/kubernetes/pkg/kubelet/util"
-)
-
-// maxMsgSize uses 16MB as the default message size limit;
-// the grpc library default is 4MB.
-const maxMsgSize = 1024 * 1024 * 16
-
-// DockerServer is the grpc server of dockershim.
-type DockerServer struct {
- // endpoint is the endpoint to serve on.
- endpoint string
- // service is the docker service which implements runtime and image services.
- service dockershim.CRIService
- // server is the grpc server.
- server *grpc.Server
-}
-
-// NewDockerServer creates the dockershim grpc server.
-func NewDockerServer(endpoint string, s dockershim.CRIService) *DockerServer {
- return &DockerServer{
- endpoint: endpoint,
- service: s,
- }
-}
-
-// Start starts the dockershim grpc server.
-func (s *DockerServer) Start() error {
- // Start the internal service.
- if err := s.service.Start(); err != nil {
- klog.ErrorS(err, "Unable to start docker service")
- return err
- }
-
- klog.V(2).InfoS("Start dockershim grpc server")
- l, err := util.CreateListener(s.endpoint)
- if err != nil {
- return fmt.Errorf("failed to listen on %q: %v", s.endpoint, err)
- }
- // Create the grpc server and register runtime and image services.
- s.server = grpc.NewServer(
- grpc.MaxRecvMsgSize(maxMsgSize),
- grpc.MaxSendMsgSize(maxMsgSize),
- )
- runtimeapi.RegisterRuntimeServiceServer(s.server, s.service)
- runtimeapi.RegisterImageServiceServer(s.server, s.service)
- go func() {
- if err := s.server.Serve(l); err != nil {
- klog.ErrorS(err, "Failed to serve connections")
- os.Exit(1)
- }
- }()
- return nil
-}
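The server above raises the gRPC receive/send limits to 16MB; a client talking to this endpoint would typically raise its own call-option limits to match so that large responses are not rejected client-side. A small hedged sketch follows; the socket path is only an example value, and grpc.WithInsecure matches what the accompanying test uses.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

const maxMsgSize = 1024 * 1024 * 16 // keep in sync with the server-side limit

func main() {
	// Dial the shim endpoint with matching message-size limits so large
	// responses (e.g. image or stats listings) are accepted by the client.
	conn, err := grpc.Dial(
		"unix:///var/run/dockershim.sock", // example endpoint
		grpc.WithInsecure(),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsgSize),
			grpc.MaxCallSendMsgSize(maxMsgSize),
		),
	)
	if err != nil {
		log.Fatalf("failed to dial: %v", err)
	}
	defer conn.Close()
	log.Println("connected to", conn.Target())
}
```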
diff --git a/pkg/kubelet/dockershim/remote/docker_server_test.go b/pkg/kubelet/dockershim/remote/docker_server_test.go
deleted file mode 100644
index 6cecb514aa8..00000000000
--- a/pkg/kubelet/dockershim/remote/docker_server_test.go
+++ /dev/null
@@ -1,174 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2021 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package remote
-
-import (
- "context"
- "io/ioutil"
- "testing"
-
- "github.com/stretchr/testify/assert"
- "google.golang.org/grpc"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-func TestServer(t *testing.T) {
- file, err := ioutil.TempFile("", "docker-server-")
- assert.Nil(t, err)
- endpoint := "unix://" + file.Name()
-
- server := NewDockerServer(endpoint, &fakeCRIService{})
- assert.Nil(t, server.Start())
-
- ctx := context.Background()
- conn, err := grpc.Dial(endpoint, grpc.WithInsecure())
- assert.Nil(t, err)
-
- runtimeClient := runtimeapi.NewRuntimeServiceClient(conn)
- _, err = runtimeClient.Version(ctx, &runtimeapi.VersionRequest{})
- assert.Nil(t, err)
-
- imageClient := runtimeapi.NewImageServiceClient(conn)
- _, err = imageClient.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
- assert.Nil(t, err)
-}
-
-type fakeCRIService struct{}
-
-func (*fakeCRIService) Start() error {
- return nil
-}
-
-func (*fakeCRIService) Version(context.Context, *runtimeapi.VersionRequest) (*runtimeapi.VersionResponse, error) {
- return &runtimeapi.VersionResponse{}, nil
-}
-
-func (*fakeCRIService) RunPodSandbox(context.Context, *runtimeapi.RunPodSandboxRequest) (*runtimeapi.RunPodSandboxResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) StopPodSandbox(context.Context, *runtimeapi.StopPodSandboxRequest) (*runtimeapi.StopPodSandboxResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) RemovePodSandbox(context.Context, *runtimeapi.RemovePodSandboxRequest) (*runtimeapi.RemovePodSandboxResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) PodSandboxStatus(context.Context, *runtimeapi.PodSandboxStatusRequest) (*runtimeapi.PodSandboxStatusResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ListPodSandbox(context.Context, *runtimeapi.ListPodSandboxRequest) (*runtimeapi.ListPodSandboxResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) CreateContainer(context.Context, *runtimeapi.CreateContainerRequest) (*runtimeapi.CreateContainerResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) StartContainer(context.Context, *runtimeapi.StartContainerRequest) (*runtimeapi.StartContainerResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) StopContainer(context.Context, *runtimeapi.StopContainerRequest) (*runtimeapi.StopContainerResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) RemoveContainer(context.Context, *runtimeapi.RemoveContainerRequest) (*runtimeapi.RemoveContainerResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ListContainers(context.Context, *runtimeapi.ListContainersRequest) (*runtimeapi.ListContainersResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ContainerStatus(context.Context, *runtimeapi.ContainerStatusRequest) (*runtimeapi.ContainerStatusResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) UpdateContainerResources(context.Context, *runtimeapi.UpdateContainerResourcesRequest) (*runtimeapi.UpdateContainerResourcesResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ReopenContainerLog(context.Context, *runtimeapi.ReopenContainerLogRequest) (*runtimeapi.ReopenContainerLogResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ExecSync(context.Context, *runtimeapi.ExecSyncRequest) (*runtimeapi.ExecSyncResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) Exec(context.Context, *runtimeapi.ExecRequest) (*runtimeapi.ExecResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) Attach(context.Context, *runtimeapi.AttachRequest) (*runtimeapi.AttachResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) PortForward(context.Context, *runtimeapi.PortForwardRequest) (*runtimeapi.PortForwardResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ContainerStats(context.Context, *runtimeapi.ContainerStatsRequest) (*runtimeapi.ContainerStatsResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ListContainerStats(context.Context, *runtimeapi.ListContainerStatsRequest) (*runtimeapi.ListContainerStatsResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) PodSandboxStats(context.Context, *runtimeapi.PodSandboxStatsRequest) (*runtimeapi.PodSandboxStatsResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ListPodSandboxStats(context.Context, *runtimeapi.ListPodSandboxStatsRequest) (*runtimeapi.ListPodSandboxStatsResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) UpdateRuntimeConfig(context.Context, *runtimeapi.UpdateRuntimeConfigRequest) (*runtimeapi.UpdateRuntimeConfigResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) Status(context.Context, *runtimeapi.StatusRequest) (*runtimeapi.StatusResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ListImages(context.Context, *runtimeapi.ListImagesRequest) (*runtimeapi.ListImagesResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ImageStatus(context.Context, *runtimeapi.ImageStatusRequest) (*runtimeapi.ImageStatusResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) PullImage(context.Context, *runtimeapi.PullImageRequest) (*runtimeapi.PullImageResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) RemoveImage(context.Context, *runtimeapi.RemoveImageRequest) (*runtimeapi.RemoveImageResponse, error) {
- return nil, nil
-}
-
-func (*fakeCRIService) ImageFsInfo(context.Context, *runtimeapi.ImageFsInfoRequest) (*runtimeapi.ImageFsInfoResponse, error) {
- return &runtimeapi.ImageFsInfoResponse{}, nil
-}
diff --git a/pkg/kubelet/dockershim/security_context.go b/pkg/kubelet/dockershim/security_context.go
deleted file mode 100644
index 25a12c450eb..00000000000
--- a/pkg/kubelet/dockershim/security_context.go
+++ /dev/null
@@ -1,207 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
- "strconv"
-
- dockercontainer "github.com/docker/docker/api/types/container"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
- knetwork "k8s.io/kubernetes/pkg/kubelet/dockershim/network"
-)
-
-// applySandboxSecurityContext updates docker sandbox options according to security context.
-func applySandboxSecurityContext(lc *runtimeapi.LinuxPodSandboxConfig, config *dockercontainer.Config, hc *dockercontainer.HostConfig, network *knetwork.PluginManager, separator rune) error {
- if lc == nil {
- return nil
- }
-
- var sc *runtimeapi.LinuxContainerSecurityContext
- if lc.SecurityContext != nil {
- sc = &runtimeapi.LinuxContainerSecurityContext{
- SupplementalGroups: lc.SecurityContext.SupplementalGroups,
- RunAsUser: lc.SecurityContext.RunAsUser,
- RunAsGroup: lc.SecurityContext.RunAsGroup,
- ReadonlyRootfs: lc.SecurityContext.ReadonlyRootfs,
- SelinuxOptions: lc.SecurityContext.SelinuxOptions,
- NamespaceOptions: lc.SecurityContext.NamespaceOptions,
- Privileged: lc.SecurityContext.Privileged,
- }
- }
-
- err := modifyContainerConfig(sc, config)
- if err != nil {
- return err
- }
-
- if err := modifyHostConfig(sc, hc, separator); err != nil {
- return err
- }
- modifySandboxNamespaceOptions(sc.GetNamespaceOptions(), hc, network)
- return nil
-}
-
-// applyContainerSecurityContext updates docker container options according to security context.
-func applyContainerSecurityContext(lc *runtimeapi.LinuxContainerConfig, podSandboxID string, config *dockercontainer.Config, hc *dockercontainer.HostConfig, separator rune) error {
- if lc == nil {
- return nil
- }
-
- err := modifyContainerConfig(lc.SecurityContext, config)
- if err != nil {
- return err
- }
- if err := modifyHostConfig(lc.SecurityContext, hc, separator); err != nil {
- return err
- }
- modifyContainerNamespaceOptions(lc.SecurityContext.GetNamespaceOptions(), podSandboxID, hc)
- return nil
-}
-
-// modifyContainerConfig applies container security context config to dockercontainer.Config.
-func modifyContainerConfig(sc *runtimeapi.LinuxContainerSecurityContext, config *dockercontainer.Config) error {
- if sc == nil {
- return nil
- }
- if sc.RunAsUser != nil {
- config.User = strconv.FormatInt(sc.GetRunAsUser().Value, 10)
- }
- if sc.RunAsUsername != "" {
- config.User = sc.RunAsUsername
- }
-
- user := config.User
- if sc.RunAsGroup != nil {
- if user == "" {
- return fmt.Errorf("runAsGroup is specified without a runAsUser")
- }
- user = fmt.Sprintf("%s:%d", config.User, sc.GetRunAsGroup().Value)
- }
-
- config.User = user
-
- return nil
-}
-
-// modifyHostConfig applies security context config to dockercontainer.HostConfig.
-func modifyHostConfig(sc *runtimeapi.LinuxContainerSecurityContext, hostConfig *dockercontainer.HostConfig, separator rune) error {
- if sc == nil {
- return nil
- }
-
- // Apply supplemental groups.
- for _, group := range sc.SupplementalGroups {
- hostConfig.GroupAdd = append(hostConfig.GroupAdd, strconv.FormatInt(group, 10))
- }
-
- // Apply security context for the container.
- hostConfig.Privileged = sc.Privileged
- hostConfig.ReadonlyRootfs = sc.ReadonlyRootfs
- if sc.Capabilities != nil {
- hostConfig.CapAdd = sc.GetCapabilities().AddCapabilities
- hostConfig.CapDrop = sc.GetCapabilities().DropCapabilities
- }
- if sc.SelinuxOptions != nil {
- hostConfig.SecurityOpt = addSELinuxOptions(
- hostConfig.SecurityOpt,
- sc.SelinuxOptions,
- separator,
- )
- }
-
- // Apply apparmor options.
- apparmorSecurityOpts, err := getApparmorSecurityOpts(sc, separator)
- if err != nil {
- return fmt.Errorf("failed to generate apparmor security options: %v", err)
- }
- hostConfig.SecurityOpt = append(hostConfig.SecurityOpt, apparmorSecurityOpts...)
-
- if sc.NoNewPrivs {
- hostConfig.SecurityOpt = append(hostConfig.SecurityOpt, "no-new-privileges")
- }
-
- if !hostConfig.Privileged {
- hostConfig.MaskedPaths = sc.MaskedPaths
- hostConfig.ReadonlyPaths = sc.ReadonlyPaths
- }
-
- return nil
-}
-
-// modifySandboxNamespaceOptions applies namespace options to the sandbox
-func modifySandboxNamespaceOptions(nsOpts *runtimeapi.NamespaceOption, hostConfig *dockercontainer.HostConfig, network *knetwork.PluginManager) {
- // The sandbox's PID namespace is the one that's shared, so CONTAINER and POD are equivalent for it
- if nsOpts.GetPid() == runtimeapi.NamespaceMode_NODE {
- hostConfig.PidMode = namespaceModeHost
- }
- modifyHostOptionsForSandbox(nsOpts, network, hostConfig)
-}
-
-// modifyContainerNamespaceOptions applies namespace options to the container
-func modifyContainerNamespaceOptions(nsOpts *runtimeapi.NamespaceOption, podSandboxID string, hostConfig *dockercontainer.HostConfig) {
- switch nsOpts.GetPid() {
- case runtimeapi.NamespaceMode_NODE:
- hostConfig.PidMode = namespaceModeHost
- case runtimeapi.NamespaceMode_POD:
- hostConfig.PidMode = dockercontainer.PidMode(fmt.Sprintf("container:%v", podSandboxID))
- case runtimeapi.NamespaceMode_TARGET:
- hostConfig.PidMode = dockercontainer.PidMode(fmt.Sprintf("container:%v", nsOpts.GetTargetId()))
- }
- modifyHostOptionsForContainer(nsOpts, podSandboxID, hostConfig)
-}
-
-// modifyHostOptionsForSandbox applies NetworkMode/IpcMode to the sandbox's dockercontainer.HostConfig.
-func modifyHostOptionsForSandbox(nsOpts *runtimeapi.NamespaceOption, network *knetwork.PluginManager, hc *dockercontainer.HostConfig) {
- if nsOpts.GetIpc() == runtimeapi.NamespaceMode_NODE {
- hc.IpcMode = namespaceModeHost
- }
- if nsOpts.GetNetwork() == runtimeapi.NamespaceMode_NODE {
- hc.NetworkMode = namespaceModeHost
- return
- }
-
- if network == nil {
- hc.NetworkMode = "default"
- return
- }
-
- switch network.PluginName() {
- case "cni":
- fallthrough
- case "kubenet":
- hc.NetworkMode = "none"
- default:
- hc.NetworkMode = "default"
- }
-}
-
-// modifyHostOptionsForContainer applies NetworkMode/IpcMode/UTSMode to the container's dockercontainer.HostConfig.
-func modifyHostOptionsForContainer(nsOpts *runtimeapi.NamespaceOption, podSandboxID string, hc *dockercontainer.HostConfig) {
- sandboxNSMode := fmt.Sprintf("container:%v", podSandboxID)
- hc.NetworkMode = dockercontainer.NetworkMode(sandboxNSMode)
- hc.IpcMode = dockercontainer.IpcMode(sandboxNSMode)
- hc.UTSMode = ""
-
- if nsOpts.GetNetwork() == runtimeapi.NamespaceMode_NODE {
- hc.UTSMode = namespaceModeHost
- }
-}
diff --git a/pkg/kubelet/dockershim/security_context_test.go b/pkg/kubelet/dockershim/security_context_test.go
deleted file mode 100644
index 01f46fa1b52..00000000000
--- a/pkg/kubelet/dockershim/security_context_test.go
+++ /dev/null
@@ -1,494 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
- "strconv"
- "testing"
-
- dockercontainer "github.com/docker/docker/api/types/container"
- "github.com/stretchr/testify/assert"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-func TestModifyContainerConfig(t *testing.T) {
- var uid int64 = 123
- var username = "testuser"
- var gid int64 = 423
-
- cases := []struct {
- name string
- sc *runtimeapi.LinuxContainerSecurityContext
- expected *dockercontainer.Config
- isErr bool
- }{
- {
- name: "container.SecurityContext.RunAsUser set",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- RunAsUser: &runtimeapi.Int64Value{Value: uid},
- },
- expected: &dockercontainer.Config{
- User: strconv.FormatInt(uid, 10),
- },
- isErr: false,
- },
- {
- name: "container.SecurityContext.RunAsUsername set",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- RunAsUsername: username,
- },
- expected: &dockercontainer.Config{
- User: username,
- },
- isErr: false,
- },
- {
- name: "container.SecurityContext.RunAsUsername and container.SecurityContext.RunAsUser set",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- RunAsUsername: username,
- RunAsUser: &runtimeapi.Int64Value{Value: uid},
- },
- expected: &dockercontainer.Config{
- User: username,
- },
- isErr: false,
- },
-
- {
- name: "no RunAsUser value set",
- sc: &runtimeapi.LinuxContainerSecurityContext{},
- expected: &dockercontainer.Config{},
- isErr: false,
- },
- {
- name: "RunAsUser value set, RunAsGroup set",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- RunAsUser: &runtimeapi.Int64Value{Value: uid},
- RunAsGroup: &runtimeapi.Int64Value{Value: gid},
- },
- expected: &dockercontainer.Config{
- User: "123:423",
- },
- isErr: false,
- },
- {
- name: "RunAsUsername value set, RunAsGroup set",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- RunAsUsername: username,
- RunAsGroup: &runtimeapi.Int64Value{Value: gid},
- },
- expected: &dockercontainer.Config{
- User: "testuser:423",
- },
- isErr: false,
- },
- {
- name: "RunAsUser/RunAsUsername not set, RunAsGroup set",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- RunAsGroup: &runtimeapi.Int64Value{Value: gid},
- },
- isErr: true,
- },
- {
- name: "RunAsUser/RunAsUsername both set, RunAsGroup set",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- RunAsUser: &runtimeapi.Int64Value{Value: uid},
- RunAsUsername: username,
- RunAsGroup: &runtimeapi.Int64Value{Value: gid},
- },
- expected: &dockercontainer.Config{
- User: "testuser:423",
- },
- isErr: false,
- },
- }
-
- for _, tc := range cases {
- dockerCfg := &dockercontainer.Config{}
- err := modifyContainerConfig(tc.sc, dockerCfg)
- if tc.isErr {
- assert.Error(t, err)
- } else {
- assert.NoError(t, err)
- assert.Equal(t, tc.expected, dockerCfg, "[Test case %q]", tc.name)
- }
- }
-}
-
-func TestModifyHostConfig(t *testing.T) {
- setNetworkHC := &dockercontainer.HostConfig{}
-
- // When we have Privileged pods, we do not need to use the
- // Masked / Readonly paths.
- setPrivSC := &runtimeapi.LinuxContainerSecurityContext{}
- setPrivSC.Privileged = true
- setPrivSC.MaskedPaths = []string{"/hello/world/masked"}
- setPrivSC.ReadonlyPaths = []string{"/hello/world/readonly"}
- setPrivHC := &dockercontainer.HostConfig{
- Privileged: true,
- }
-
- unsetPrivSC := &runtimeapi.LinuxContainerSecurityContext{}
- unsetPrivSC.Privileged = false
- unsetPrivSC.MaskedPaths = []string{"/hello/world/masked"}
- unsetPrivSC.ReadonlyPaths = []string{"/hello/world/readonly"}
- unsetPrivHC := &dockercontainer.HostConfig{
- Privileged: false,
- MaskedPaths: []string{"/hello/world/masked"},
- ReadonlyPaths: []string{"/hello/world/readonly"},
- }
-
- setCapsHC := &dockercontainer.HostConfig{
- CapAdd: []string{"addCapA", "addCapB"},
- CapDrop: []string{"dropCapA", "dropCapB"},
- }
- setSELinuxHC := &dockercontainer.HostConfig{
- SecurityOpt: []string{
- fmt.Sprintf("%s:%s", selinuxLabelUser('='), "user"),
- fmt.Sprintf("%s:%s", selinuxLabelRole('='), "role"),
- fmt.Sprintf("%s:%s", selinuxLabelType('='), "type"),
- fmt.Sprintf("%s:%s", selinuxLabelLevel('='), "level"),
- },
- }
-
- cases := []struct {
- name string
- sc *runtimeapi.LinuxContainerSecurityContext
- expected *dockercontainer.HostConfig
- }{
- {
- name: "fully set container.SecurityContext",
- sc: fullValidSecurityContext(),
- expected: fullValidHostConfig(),
- },
- {
- name: "empty container.SecurityContext",
- sc: &runtimeapi.LinuxContainerSecurityContext{},
- expected: setNetworkHC,
- },
- {
- name: "container.SecurityContext.Privileged",
- sc: setPrivSC,
- expected: setPrivHC,
- },
- {
- name: "container.SecurityContext.NoPrivileges",
- sc: unsetPrivSC,
- expected: unsetPrivHC,
- },
- {
- name: "container.SecurityContext.Capabilities",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- Capabilities: inputCapabilities(),
- },
- expected: setCapsHC,
- },
- {
- name: "container.SecurityContext.SELinuxOptions",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- SelinuxOptions: inputSELinuxOptions(),
- },
- expected: setSELinuxHC,
- },
- }
-
- for _, tc := range cases {
- dockerCfg := &dockercontainer.HostConfig{}
- modifyHostConfig(tc.sc, dockerCfg, '=')
- assert.Equal(t, tc.expected, dockerCfg, "[Test case %q]", tc.name)
- }
-}
-
-func TestModifyHostConfigWithGroups(t *testing.T) {
- supplementalGroupsSC := &runtimeapi.LinuxContainerSecurityContext{}
- supplementalGroupsSC.SupplementalGroups = []int64{2222}
- supplementalGroupHC := &dockercontainer.HostConfig{}
- supplementalGroupHC.GroupAdd = []string{"2222"}
-
- testCases := []struct {
- name string
- securityContext *runtimeapi.LinuxContainerSecurityContext
- expected *dockercontainer.HostConfig
- }{
- {
- name: "nil",
- securityContext: nil,
- expected: &dockercontainer.HostConfig{},
- },
- {
- name: "SupplementalGroup",
- securityContext: supplementalGroupsSC,
- expected: supplementalGroupHC,
- },
- }
-
- for _, tc := range testCases {
- dockerCfg := &dockercontainer.HostConfig{}
- modifyHostConfig(tc.securityContext, dockerCfg, '=')
- assert.Equal(t, tc.expected, dockerCfg, "[Test case %q]", tc.name)
- }
-}
-
-func TestModifyHostConfigAndNamespaceOptionsForContainer(t *testing.T) {
- priv := true
- sandboxID := "sandbox"
- sandboxNSMode := fmt.Sprintf("container:%v", sandboxID)
- setPrivSC := &runtimeapi.LinuxContainerSecurityContext{}
- setPrivSC.Privileged = priv
- setPrivHC := &dockercontainer.HostConfig{
- Privileged: true,
- IpcMode: dockercontainer.IpcMode(sandboxNSMode),
- NetworkMode: dockercontainer.NetworkMode(sandboxNSMode),
- PidMode: dockercontainer.PidMode(sandboxNSMode),
- }
- setCapsHC := &dockercontainer.HostConfig{
- CapAdd: []string{"addCapA", "addCapB"},
- CapDrop: []string{"dropCapA", "dropCapB"},
- IpcMode: dockercontainer.IpcMode(sandboxNSMode),
- NetworkMode: dockercontainer.NetworkMode(sandboxNSMode),
- PidMode: dockercontainer.PidMode(sandboxNSMode),
- }
- setSELinuxHC := &dockercontainer.HostConfig{
- SecurityOpt: []string{
- fmt.Sprintf("%s:%s", selinuxLabelUser('='), "user"),
- fmt.Sprintf("%s:%s", selinuxLabelRole('='), "role"),
- fmt.Sprintf("%s:%s", selinuxLabelType('='), "type"),
- fmt.Sprintf("%s:%s", selinuxLabelLevel('='), "level"),
- },
- IpcMode: dockercontainer.IpcMode(sandboxNSMode),
- NetworkMode: dockercontainer.NetworkMode(sandboxNSMode),
- PidMode: dockercontainer.PidMode(sandboxNSMode),
- }
-
- cases := []struct {
- name string
- sc *runtimeapi.LinuxContainerSecurityContext
- expected *dockercontainer.HostConfig
- }{
- {
- name: "container.SecurityContext.Privileged",
- sc: setPrivSC,
- expected: setPrivHC,
- },
- {
- name: "container.SecurityContext.Capabilities",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- Capabilities: inputCapabilities(),
- },
- expected: setCapsHC,
- },
- {
- name: "container.SecurityContext.SELinuxOptions",
- sc: &runtimeapi.LinuxContainerSecurityContext{
- SelinuxOptions: inputSELinuxOptions(),
- },
- expected: setSELinuxHC,
- },
- }
-
- for _, tc := range cases {
- dockerCfg := &dockercontainer.HostConfig{}
- modifyHostConfig(tc.sc, dockerCfg, '=')
- modifyContainerNamespaceOptions(tc.sc.GetNamespaceOptions(), sandboxID, dockerCfg)
- assert.Equal(t, tc.expected, dockerCfg, "[Test case %q]", tc.name)
- }
-}
-
-func TestModifySandboxNamespaceOptions(t *testing.T) {
- cases := []struct {
- name string
- nsOpt *runtimeapi.NamespaceOption
- expected *dockercontainer.HostConfig
- }{
- {
- name: "Host Network NamespaceOption",
- nsOpt: &runtimeapi.NamespaceOption{
- Network: runtimeapi.NamespaceMode_NODE,
- },
- expected: &dockercontainer.HostConfig{
- NetworkMode: namespaceModeHost,
- },
- },
- {
- name: "Host IPC NamespaceOption",
- nsOpt: &runtimeapi.NamespaceOption{
- Ipc: runtimeapi.NamespaceMode_NODE,
- },
- expected: &dockercontainer.HostConfig{
- IpcMode: namespaceModeHost,
- NetworkMode: "default",
- },
- },
- {
- name: "Host PID NamespaceOption",
- nsOpt: &runtimeapi.NamespaceOption{
- Pid: runtimeapi.NamespaceMode_NODE,
- },
- expected: &dockercontainer.HostConfig{
- PidMode: namespaceModeHost,
- NetworkMode: "default",
- },
- },
- {
- name: "Pod PID NamespaceOption (for sandbox is same as container ns option)",
- nsOpt: &runtimeapi.NamespaceOption{
- Pid: runtimeapi.NamespaceMode_POD,
- },
- expected: &dockercontainer.HostConfig{
- PidMode: "",
- NetworkMode: "default",
- },
- },
- {
- name: "Target PID NamespaceOption (invalid for sandbox)",
- nsOpt: &runtimeapi.NamespaceOption{
- Pid: runtimeapi.NamespaceMode_TARGET,
- TargetId: "same-container",
- },
- expected: &dockercontainer.HostConfig{
- PidMode: "",
- NetworkMode: "default",
- },
- },
- }
- for _, tc := range cases {
- dockerCfg := &dockercontainer.HostConfig{}
- modifySandboxNamespaceOptions(tc.nsOpt, dockerCfg, nil)
- assert.Equal(t, tc.expected, dockerCfg, "[Test case %q]", tc.name)
- }
-}
-
-func TestModifyContainerNamespaceOptions(t *testing.T) {
- sandboxID := "sandbox"
- sandboxNSMode := fmt.Sprintf("container:%v", sandboxID)
- cases := []struct {
- name string
- nsOpt *runtimeapi.NamespaceOption
- expected *dockercontainer.HostConfig
- }{
- {
- name: "Host Network NamespaceOption",
- nsOpt: &runtimeapi.NamespaceOption{
- Network: runtimeapi.NamespaceMode_NODE,
- },
- expected: &dockercontainer.HostConfig{
- NetworkMode: dockercontainer.NetworkMode(sandboxNSMode),
- IpcMode: dockercontainer.IpcMode(sandboxNSMode),
- UTSMode: namespaceModeHost,
- PidMode: dockercontainer.PidMode(sandboxNSMode),
- },
- },
- {
- name: "Host IPC NamespaceOption",
- nsOpt: &runtimeapi.NamespaceOption{
- Ipc: runtimeapi.NamespaceMode_NODE,
- },
- expected: &dockercontainer.HostConfig{
- NetworkMode: dockercontainer.NetworkMode(sandboxNSMode),
- IpcMode: dockercontainer.IpcMode(sandboxNSMode),
- PidMode: dockercontainer.PidMode(sandboxNSMode),
- },
- },
- {
- name: "Host PID NamespaceOption",
- nsOpt: &runtimeapi.NamespaceOption{
- Pid: runtimeapi.NamespaceMode_NODE,
- },
- expected: &dockercontainer.HostConfig{
- NetworkMode: dockercontainer.NetworkMode(sandboxNSMode),
- IpcMode: dockercontainer.IpcMode(sandboxNSMode),
- PidMode: namespaceModeHost,
- },
- },
- {
- name: "Pod PID NamespaceOption",
- nsOpt: &runtimeapi.NamespaceOption{
- Pid: runtimeapi.NamespaceMode_POD,
- },
- expected: &dockercontainer.HostConfig{
- NetworkMode: dockercontainer.NetworkMode(sandboxNSMode),
- IpcMode: dockercontainer.IpcMode(sandboxNSMode),
- PidMode: dockercontainer.PidMode(sandboxNSMode),
- },
- },
- {
- name: "Target PID NamespaceOption",
- nsOpt: &runtimeapi.NamespaceOption{
- Pid: runtimeapi.NamespaceMode_TARGET,
- TargetId: "some-container",
- },
- expected: &dockercontainer.HostConfig{
- NetworkMode: dockercontainer.NetworkMode(sandboxNSMode),
- IpcMode: dockercontainer.IpcMode(sandboxNSMode),
- PidMode: dockercontainer.PidMode("container:some-container"),
- },
- },
- }
- for _, tc := range cases {
- dockerCfg := &dockercontainer.HostConfig{}
- modifyContainerNamespaceOptions(tc.nsOpt, sandboxID, dockerCfg)
- assert.Equal(t, tc.expected, dockerCfg, "[Test case %q]", tc.name)
- }
-}
-
-func fullValidSecurityContext() *runtimeapi.LinuxContainerSecurityContext {
- return &runtimeapi.LinuxContainerSecurityContext{
- Privileged: true,
- Capabilities: inputCapabilities(),
- SelinuxOptions: inputSELinuxOptions(),
- }
-}
-
-func inputCapabilities() *runtimeapi.Capability {
- return &runtimeapi.Capability{
- AddCapabilities: []string{"addCapA", "addCapB"},
- DropCapabilities: []string{"dropCapA", "dropCapB"},
- }
-}
-
-func inputSELinuxOptions() *runtimeapi.SELinuxOption {
- user := "user"
- role := "role"
- stype := "type"
- level := "level"
-
- return &runtimeapi.SELinuxOption{
- User: user,
- Role: role,
- Type: stype,
- Level: level,
- }
-}
-
-func fullValidHostConfig() *dockercontainer.HostConfig {
- return &dockercontainer.HostConfig{
- Privileged: true,
- CapAdd: []string{"addCapA", "addCapB"},
- CapDrop: []string{"dropCapA", "dropCapB"},
- SecurityOpt: []string{
- fmt.Sprintf("%s:%s", selinuxLabelUser('='), "user"),
- fmt.Sprintf("%s:%s", selinuxLabelRole('='), "role"),
- fmt.Sprintf("%s:%s", selinuxLabelType('='), "type"),
- fmt.Sprintf("%s:%s", selinuxLabelLevel('='), "level"),
- },
- }
-}
diff --git a/pkg/kubelet/dockershim/selinux_util.go b/pkg/kubelet/dockershim/selinux_util.go
deleted file mode 100644
index 4d299a7f934..00000000000
--- a/pkg/kubelet/dockershim/selinux_util.go
+++ /dev/null
@@ -1,89 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "fmt"
-
- runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
-)
-
-// selinuxLabelUser returns the fragment of a Docker security opt that
-// describes the SELinux user. Note that strictly speaking this is not
-// actually the name of the security opt, but a fragment of the whole key-
-// value pair necessary to set the opt.
-func selinuxLabelUser(separator rune) string {
- return fmt.Sprintf("label%cuser", separator)
-}
-
-// selinuxLabelRole returns the fragment of a Docker security opt that
-// describes the SELinux role. Note that strictly speaking this is not
-// actually the name of the security opt, but a fragment of the whole key-
-// value pair necessary to set the opt.
-func selinuxLabelRole(separator rune) string {
- return fmt.Sprintf("label%crole", separator)
-}
-
-// selinuxLabelType returns the fragment of a Docker security opt that
-// describes the SELinux type. Note that strictly speaking this is not
-// actually the name of the security opt, but a fragment of the whole key-
-// value pair necessary to set the opt.
-func selinuxLabelType(separator rune) string {
- return fmt.Sprintf("label%ctype", separator)
-}
-
-// selinuxLabelLevel returns the fragment of a Docker security opt that
-// describes the SELinux level. Note that strictly speaking this is not
-// actually the name of the security opt, but a fragment of the whole key-
-// value pair necessary to set the opt.
-func selinuxLabelLevel(separator rune) string {
- return fmt.Sprintf("label%clevel", separator)
-}
-
-// addSELinuxOptions adds SELinux options to config using the given
-// separator.
-func addSELinuxOptions(config []string, selinuxOpts *runtimeapi.SELinuxOption, separator rune) []string {
- // Note, strictly speaking, we are actually mutating the values of these
- // keys, rather than formatting name and value into a string. Docker re-
- // uses the same option name multiple times (it's just 'label') with
- // different values which are themselves key-value pairs. For example,
- // the SELinux type is represented by the security opt:
- //
- //	labeltype:<selinux_type>
- //
- // In Docker API versions before 1.23, the separator was the `:` rune; in
- // API version 1.23 it changed to the `=` rune.
- config = modifySecurityOption(config, selinuxLabelUser(separator), selinuxOpts.User)
- config = modifySecurityOption(config, selinuxLabelRole(separator), selinuxOpts.Role)
- config = modifySecurityOption(config, selinuxLabelType(separator), selinuxOpts.Type)
- config = modifySecurityOption(config, selinuxLabelLevel(separator), selinuxOpts.Level)
-
- return config
-}
-
-// modifySecurityOption adds the security option of name to the config array
-// with value in the form of name:value.
-func modifySecurityOption(config []string, name, value string) []string {
- if len(value) > 0 {
- config = append(config, fmt.Sprintf("%s:%s", name, value))
- }
-
- return config
-}
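
Illustrative only (not part of the patch): how the removed helpers assembled Docker security opts using the post-1.23 `=` separator. The SELinux values are made up.

    opts := addSELinuxOptions(nil, &runtimeapi.SELinuxOption{
        User:  "system_u",
        Role:  "system_r",
        Type:  "container_t",
        Level: "s0",
    }, '=')
    fmt.Println(opts)
    // [label=user:system_u label=role:system_r label=type:container_t label=level:s0]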
diff --git a/pkg/kubelet/dockershim/selinux_util_test.go b/pkg/kubelet/dockershim/selinux_util_test.go
deleted file mode 100644
index d1e47dbcd14..00000000000
--- a/pkg/kubelet/dockershim/selinux_util_test.go
+++ /dev/null
@@ -1,57 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package dockershim
-
-import (
- "reflect"
- "testing"
-)
-
-func TestModifySecurityOptions(t *testing.T) {
- testCases := []struct {
- name string
- config []string
- optName string
- optVal string
- expected []string
- }{
- {
- name: "Empty val",
- config: []string{"a:b", "c:d"},
- optName: "optA",
- optVal: "",
- expected: []string{"a:b", "c:d"},
- },
- {
- name: "Valid",
- config: []string{"a:b", "c:d"},
- optName: "e",
- optVal: "f",
- expected: []string{"a:b", "c:d", "e:f"},
- },
- }
-
- for _, tc := range testCases {
- actual := modifySecurityOption(tc.config, tc.optName, tc.optVal)
- if !reflect.DeepEqual(tc.expected, actual) {
- t.Errorf("Failed to apply options correctly for tc: %s. Expected: %v but got %v", tc.name, tc.expected, actual)
- }
- }
-}
diff --git a/pkg/kubelet/kubelet.go b/pkg/kubelet/kubelet.go
index 2013c871a60..0e6f5f946de 100644
--- a/pkg/kubelet/kubelet.go
+++ b/pkg/kubelet/kubelet.go
@@ -73,7 +73,6 @@ import (
"k8s.io/kubernetes/pkg/kubelet/configmap"
kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
"k8s.io/kubernetes/pkg/kubelet/cri/remote"
- "k8s.io/kubernetes/pkg/kubelet/cri/streaming"
"k8s.io/kubernetes/pkg/kubelet/events"
"k8s.io/kubernetes/pkg/kubelet/eviction"
"k8s.io/kubernetes/pkg/kubelet/images"
@@ -310,18 +309,7 @@ func PreInitRuntimeService(kubeCfg *kubeletconfiginternal.KubeletConfiguration,
switch containerRuntime {
case kubetypes.DockerContainerRuntime:
- klog.InfoS("Using dockershim is deprecated, please consider using a full-fledged CRI implementation")
- if err := runDockershim(
- kubeCfg,
- kubeDeps,
- crOptions,
- runtimeCgroups,
- remoteRuntimeEndpoint,
- remoteImageEndpoint,
- nonMasqueradeCIDR,
- ); err != nil {
- return err
- }
+ return fmt.Errorf("using dockershim is not supported, please consider using a full-fledged CRI implementation")
case kubetypes.RemoteContainerRuntime:
// No-op.
break
@@ -2440,15 +2428,3 @@ func isSyncPodWorthy(event *pleg.PodLifecycleEvent) bool {
// ContainerRemoved doesn't affect pod state
return event.Type != pleg.ContainerRemoved
}
-
-// Gets the streaming server configuration to use with in-process CRI shims.
-func getStreamingConfig(kubeCfg *kubeletconfiginternal.KubeletConfiguration, kubeDeps *Dependencies, crOptions *config.ContainerRuntimeOptions) *streaming.Config {
- config := &streaming.Config{
- StreamIdleTimeout: kubeCfg.StreamingConnectionIdleTimeout.Duration,
- StreamCreationTimeout: streaming.DefaultConfig.StreamCreationTimeout,
- SupportedRemoteCommandProtocols: streaming.DefaultConfig.SupportedRemoteCommandProtocols,
- SupportedPortForwardProtocols: streaming.DefaultConfig.SupportedPortForwardProtocols,
- }
- config.Addr = net.JoinHostPort("localhost", "0")
- return config
-}
diff --git a/pkg/kubelet/kubelet_dockershim.go b/pkg/kubelet/kubelet_dockershim.go
deleted file mode 100644
index edc486949e8..00000000000
--- a/pkg/kubelet/kubelet_dockershim.go
+++ /dev/null
@@ -1,80 +0,0 @@
-//go:build !dockerless
-// +build !dockerless
-
-/*
-Copyright 2020 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package kubelet
-
-import (
- "k8s.io/klog/v2"
-
- kubeletconfiginternal "k8s.io/kubernetes/pkg/kubelet/apis/config"
- "k8s.io/kubernetes/pkg/kubelet/config"
- "k8s.io/kubernetes/pkg/kubelet/dockershim"
- dockerremote "k8s.io/kubernetes/pkg/kubelet/dockershim/remote"
-)
-
-func runDockershim(kubeCfg *kubeletconfiginternal.KubeletConfiguration,
- kubeDeps *Dependencies,
- crOptions *config.ContainerRuntimeOptions,
- runtimeCgroups string,
- remoteRuntimeEndpoint string,
- remoteImageEndpoint string,
- nonMasqueradeCIDR string) error {
- pluginSettings := dockershim.NetworkPluginSettings{
- HairpinMode: kubeletconfiginternal.HairpinMode(kubeCfg.HairpinMode),
- NonMasqueradeCIDR: nonMasqueradeCIDR,
- PluginName: crOptions.NetworkPluginName,
- PluginConfDir: crOptions.CNIConfDir,
- PluginBinDirString: crOptions.CNIBinDir,
- PluginCacheDir: crOptions.CNICacheDir,
- MTU: int(crOptions.NetworkPluginMTU),
- }
-
- // Create and start the CRI shim running as a grpc server.
- streamingConfig := getStreamingConfig(kubeCfg, kubeDeps, crOptions)
- dockerClientConfig := &dockershim.ClientConfig{
- DockerEndpoint: kubeDeps.DockerOptions.DockerEndpoint,
- RuntimeRequestTimeout: kubeDeps.DockerOptions.RuntimeRequestTimeout,
- ImagePullProgressDeadline: kubeDeps.DockerOptions.ImagePullProgressDeadline,
- }
- ds, err := dockershim.NewDockerService(dockerClientConfig, crOptions.PodSandboxImage, streamingConfig,
- &pluginSettings, runtimeCgroups, kubeCfg.CgroupDriver, crOptions.DockershimRootDirectory)
- if err != nil {
- return err
- }
-
- // The unix socket for kubelet <-> dockershim communication, dockershim start before runtime service init.
- klog.V(5).InfoS("Using remote runtime endpoint and image endpoint", "runtimeEndpoint", remoteRuntimeEndpoint, "imageEndpoint", remoteImageEndpoint)
- klog.V(2).InfoS("Starting the GRPC server for the docker CRI shim.")
-
- dockerServer := dockerremote.NewDockerServer(remoteRuntimeEndpoint, ds)
- if err := dockerServer.Start(); err != nil {
- return err
- }
-
- // Create dockerLegacyService when the logging driver is not supported.
- supported, err := ds.IsCRISupportedLogDriver()
- if err != nil {
- return err
- }
- if !supported {
- kubeDeps.dockerLegacyService = ds
- }
-
- return nil
-}
diff --git a/staging/src/k8s.io/legacy-cloud-providers/aws/aws_fakes.go b/staging/src/k8s.io/legacy-cloud-providers/aws/aws_fakes.go
index 06fdc92493b..c970a6aaa02 100644
--- a/staging/src/k8s.io/legacy-cloud-providers/aws/aws_fakes.go
+++ b/staging/src/k8s.io/legacy-cloud-providers/aws/aws_fakes.go
@@ -30,6 +30,7 @@ import (
"github.com/aws/aws-sdk-go/service/elb"
"github.com/aws/aws-sdk-go/service/elbv2"
"github.com/aws/aws-sdk-go/service/kms"
+ _ "github.com/stretchr/testify/mock"
"k8s.io/klog/v2"
)
diff --git a/test/e2e/framework/.import-restrictions b/test/e2e/framework/.import-restrictions
index a60fb9d7908..1353f40df9d 100644
--- a/test/e2e/framework/.import-restrictions
+++ b/test/e2e/framework/.import-restrictions
@@ -86,16 +86,6 @@ rules:
- k8s.io/kubernetes/pkg/kubelet/config
- k8s.io/kubernetes/pkg/kubelet/configmap
- k8s.io/kubernetes/pkg/kubelet/container
- - k8s.io/kubernetes/pkg/kubelet/dockershim
- - k8s.io/kubernetes/pkg/kubelet/dockershim/cm
- - k8s.io/kubernetes/pkg/kubelet/dockershim/libdocker
- - k8s.io/kubernetes/pkg/kubelet/dockershim/metrics
- - k8s.io/kubernetes/pkg/kubelet/dockershim/network
- - k8s.io/kubernetes/pkg/kubelet/dockershim/network/cni
- - k8s.io/kubernetes/pkg/kubelet/dockershim/network/hostport
- - k8s.io/kubernetes/pkg/kubelet/dockershim/network/kubenet
- - k8s.io/kubernetes/pkg/kubelet/dockershim/network/metrics
- - k8s.io/kubernetes/pkg/kubelet/dockershim/remote
- k8s.io/kubernetes/pkg/kubelet/envvars
- k8s.io/kubernetes/pkg/kubelet/eviction
- k8s.io/kubernetes/pkg/kubelet/eviction/api
diff --git a/vendor/github.com/containerd/containerd/errdefs/errors.go b/vendor/github.com/containerd/containerd/errdefs/errors.go
deleted file mode 100644
index 05a35228ca4..00000000000
--- a/vendor/github.com/containerd/containerd/errdefs/errors.go
+++ /dev/null
@@ -1,93 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-// Package errdefs defines the common errors used throughout containerd
-// packages.
-//
-// Use with errors.Wrap and error.Wrapf to add context to an error.
-//
-// To detect an error class, use the IsXXX functions to tell whether an error
-// is of a certain type.
-//
-// The functions ToGRPC and FromGRPC can be used to map server-side and
-// client-side errors to the correct types.
-package errdefs
-
-import (
- "context"
-
- "github.com/pkg/errors"
-)
-
-// Definitions of common error types used throughout containerd. All containerd
-// errors returned by most packages will map into one of these errors classes.
-// Packages should return errors of these types when they want to instruct a
-// client to take a particular action.
-//
-// For the most part, we just try to provide local grpc errors. Most conditions
-// map very well to those defined by grpc.
-var (
- ErrUnknown = errors.New("unknown") // used internally to represent a missed mapping.
- ErrInvalidArgument = errors.New("invalid argument")
- ErrNotFound = errors.New("not found")
- ErrAlreadyExists = errors.New("already exists")
- ErrFailedPrecondition = errors.New("failed precondition")
- ErrUnavailable = errors.New("unavailable")
- ErrNotImplemented = errors.New("not implemented") // represents not supported and unimplemented
-)
-
-// IsInvalidArgument returns true if the error is due to an invalid argument
-func IsInvalidArgument(err error) bool {
- return errors.Is(err, ErrInvalidArgument)
-}
-
-// IsNotFound returns true if the error is due to a missing object
-func IsNotFound(err error) bool {
- return errors.Is(err, ErrNotFound)
-}
-
-// IsAlreadyExists returns true if the error is due to an already existing
-// metadata item
-func IsAlreadyExists(err error) bool {
- return errors.Is(err, ErrAlreadyExists)
-}
-
-// IsFailedPrecondition returns true if an operation could not proceed to the
-// lack of a particular condition
-func IsFailedPrecondition(err error) bool {
- return errors.Is(err, ErrFailedPrecondition)
-}
-
-// IsUnavailable returns true if the error is due to a resource being unavailable
-func IsUnavailable(err error) bool {
- return errors.Is(err, ErrUnavailable)
-}
-
-// IsNotImplemented returns true if the error is due to not being implemented
-func IsNotImplemented(err error) bool {
- return errors.Is(err, ErrNotImplemented)
-}
-
-// IsCanceled returns true if the error is due to `context.Canceled`.
-func IsCanceled(err error) bool {
- return errors.Is(err, context.Canceled)
-}
-
-// IsDeadlineExceeded returns true if the error is due to
-// `context.DeadlineExceeded`.
-func IsDeadlineExceeded(err error) bool {
- return errors.Is(err, context.DeadlineExceeded)
-}
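
For context, the error-class pattern of this now-unvendored package in a short sketch (the container ID is hypothetical):

    err := errors.Wrapf(errdefs.ErrNotFound, "container %q", "abc123")
    errdefs.IsNotFound(err)    // true; errors.Is unwraps back to ErrNotFound
    errdefs.IsUnavailable(err) // false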
diff --git a/vendor/github.com/containerd/containerd/errdefs/grpc.go b/vendor/github.com/containerd/containerd/errdefs/grpc.go
deleted file mode 100644
index 209f63bd0fc..00000000000
--- a/vendor/github.com/containerd/containerd/errdefs/grpc.go
+++ /dev/null
@@ -1,147 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package errdefs
-
-import (
- "context"
- "strings"
-
- "github.com/pkg/errors"
- "google.golang.org/grpc/codes"
- "google.golang.org/grpc/status"
-)
-
-// ToGRPC will attempt to map the backend containerd error into a grpc error,
-// using the original error message as a description.
-//
-// Further information may be extracted from certain errors depending on their
-// type.
-//
-// If the error is unmapped, the original error will be returned to be handled
-// by the regular grpc error handling stack.
-func ToGRPC(err error) error {
- if err == nil {
- return nil
- }
-
- if isGRPCError(err) {
- // error has already been mapped to grpc
- return err
- }
-
- switch {
- case IsInvalidArgument(err):
- return status.Errorf(codes.InvalidArgument, err.Error())
- case IsNotFound(err):
- return status.Errorf(codes.NotFound, err.Error())
- case IsAlreadyExists(err):
- return status.Errorf(codes.AlreadyExists, err.Error())
- case IsFailedPrecondition(err):
- return status.Errorf(codes.FailedPrecondition, err.Error())
- case IsUnavailable(err):
- return status.Errorf(codes.Unavailable, err.Error())
- case IsNotImplemented(err):
- return status.Errorf(codes.Unimplemented, err.Error())
- case IsCanceled(err):
- return status.Errorf(codes.Canceled, err.Error())
- case IsDeadlineExceeded(err):
- return status.Errorf(codes.DeadlineExceeded, err.Error())
- }
-
- return err
-}
-
-// ToGRPCf maps the error to grpc error codes, assembling the formatting string
-// and combining it with the target error string.
-//
-// This is equivalent to errors.ToGRPC(errors.Wrapf(err, format, args...))
-func ToGRPCf(err error, format string, args ...interface{}) error {
- return ToGRPC(errors.Wrapf(err, format, args...))
-}
-
-// FromGRPC returns the underlying error from a grpc service based on the grpc error code
-func FromGRPC(err error) error {
- if err == nil {
- return nil
- }
-
- var cls error // divide these into error classes, becomes the cause
-
- switch code(err) {
- case codes.InvalidArgument:
- cls = ErrInvalidArgument
- case codes.AlreadyExists:
- cls = ErrAlreadyExists
- case codes.NotFound:
- cls = ErrNotFound
- case codes.Unavailable:
- cls = ErrUnavailable
- case codes.FailedPrecondition:
- cls = ErrFailedPrecondition
- case codes.Unimplemented:
- cls = ErrNotImplemented
- case codes.Canceled:
- cls = context.Canceled
- case codes.DeadlineExceeded:
- cls = context.DeadlineExceeded
- default:
- cls = ErrUnknown
- }
-
- msg := rebaseMessage(cls, err)
- if msg != "" {
- err = errors.Wrap(cls, msg)
- } else {
- err = errors.WithStack(cls)
- }
-
- return err
-}
-
-// rebaseMessage removes the repeats for an error at the end of an error
-// string. This will happen when taking an error over grpc then remapping it.
-//
-// Effectively, we just remove the string of cls from the end of err if it
-// appears there.
-func rebaseMessage(cls error, err error) string {
- desc := errDesc(err)
- clss := cls.Error()
- if desc == clss {
- return ""
- }
-
- return strings.TrimSuffix(desc, ": "+clss)
-}
-
-func isGRPCError(err error) bool {
- _, ok := status.FromError(err)
- return ok
-}
-
-func code(err error) codes.Code {
- if s, ok := status.FromError(err); ok {
- return s.Code()
- }
- return codes.Unknown
-}
-
-func errDesc(err error) string {
- if s, ok := status.FromError(err); ok {
- return s.Message()
- }
- return err.Error()
-}
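
Sketch of the intended round trip between these helpers (illustrative only; the image name is made up):

    in := errors.Wrapf(errdefs.ErrNotFound, "image %q", "busybox")
    g := errdefs.ToGRPC(in)    // status error with codes.NotFound and in's message
    out := errdefs.FromGRPC(g) // remapped onto ErrNotFound, message rebased
    errdefs.IsNotFound(out)    // true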
diff --git a/vendor/github.com/containerd/containerd/log/context.go b/vendor/github.com/containerd/containerd/log/context.go
deleted file mode 100644
index 21599c4fd64..00000000000
--- a/vendor/github.com/containerd/containerd/log/context.go
+++ /dev/null
@@ -1,60 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package log
-
-import (
- "context"
-
- "github.com/sirupsen/logrus"
-)
-
-var (
- // G is an alias for GetLogger.
- //
- // We may want to define this locally to a package to get package tagged log
- // messages.
- G = GetLogger
-
- // L is an alias for the standard logger.
- L = logrus.NewEntry(logrus.StandardLogger())
-)
-
-type (
- loggerKey struct{}
-)
-
-// RFC3339NanoFixed is time.RFC3339Nano with nanoseconds padded using zeros to
-// ensure the formatted time is always the same number of characters.
-const RFC3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
-
-// WithLogger returns a new context with the provided logger. Use in
-// combination with logger.WithField(s) for great effect.
-func WithLogger(ctx context.Context, logger *logrus.Entry) context.Context {
- return context.WithValue(ctx, loggerKey{}, logger)
-}
-
-// GetLogger retrieves the current logger from the context. If no logger is
-// available, the default logger is returned.
-func GetLogger(ctx context.Context) *logrus.Entry {
- logger := ctx.Value(loggerKey{})
-
- if logger == nil {
- return L
- }
-
- return logger.(*logrus.Entry)
-}
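
Typical usage of the removed helpers, roughly (field name and value are illustrative; assumes the context and containerd log packages are imported):

    ctx := log.WithLogger(context.Background(), log.L.WithField("component", "example"))
    log.G(ctx).Info("hello")               // logs with component=example attached
    log.G(context.Background()).Info("hi") // no logger in the context, falls back to log.L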
diff --git a/vendor/github.com/containerd/containerd/platforms/compare.go b/vendor/github.com/containerd/containerd/platforms/compare.go
deleted file mode 100644
index 3ad22a10d0c..00000000000
--- a/vendor/github.com/containerd/containerd/platforms/compare.go
+++ /dev/null
@@ -1,229 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package platforms
-
-import specs "github.com/opencontainers/image-spec/specs-go/v1"
-
-// MatchComparer is able to match and compare platforms to
-// filter and sort platforms.
-type MatchComparer interface {
- Matcher
-
- Less(specs.Platform, specs.Platform) bool
-}
-
-// Only returns a match comparer for a single platform
-// using default resolution logic for the platform.
-//
-// For ARMv8, will also match ARMv7, ARMv6 and ARMv5 (for 32bit runtimes)
-// For ARMv7, will also match ARMv6 and ARMv5
-// For ARMv6, will also match ARMv5
-func Only(platform specs.Platform) MatchComparer {
- platform = Normalize(platform)
- if platform.Architecture == "arm" {
- if platform.Variant == "v8" {
- return orderedPlatformComparer{
- matchers: []Matcher{
- &matcher{
- Platform: platform,
- },
- &matcher{
- Platform: specs.Platform{
- Architecture: platform.Architecture,
- OS: platform.OS,
- OSVersion: platform.OSVersion,
- OSFeatures: platform.OSFeatures,
- Variant: "v7",
- },
- },
- &matcher{
- Platform: specs.Platform{
- Architecture: platform.Architecture,
- OS: platform.OS,
- OSVersion: platform.OSVersion,
- OSFeatures: platform.OSFeatures,
- Variant: "v6",
- },
- },
- &matcher{
- Platform: specs.Platform{
- Architecture: platform.Architecture,
- OS: platform.OS,
- OSVersion: platform.OSVersion,
- OSFeatures: platform.OSFeatures,
- Variant: "v5",
- },
- },
- },
- }
- }
- if platform.Variant == "v7" {
- return orderedPlatformComparer{
- matchers: []Matcher{
- &matcher{
- Platform: platform,
- },
- &matcher{
- Platform: specs.Platform{
- Architecture: platform.Architecture,
- OS: platform.OS,
- OSVersion: platform.OSVersion,
- OSFeatures: platform.OSFeatures,
- Variant: "v6",
- },
- },
- &matcher{
- Platform: specs.Platform{
- Architecture: platform.Architecture,
- OS: platform.OS,
- OSVersion: platform.OSVersion,
- OSFeatures: platform.OSFeatures,
- Variant: "v5",
- },
- },
- },
- }
- }
- if platform.Variant == "v6" {
- return orderedPlatformComparer{
- matchers: []Matcher{
- &matcher{
- Platform: platform,
- },
- &matcher{
- Platform: specs.Platform{
- Architecture: platform.Architecture,
- OS: platform.OS,
- OSVersion: platform.OSVersion,
- OSFeatures: platform.OSFeatures,
- Variant: "v5",
- },
- },
- },
- }
- }
- }
-
- return singlePlatformComparer{
- Matcher: &matcher{
- Platform: platform,
- },
- }
-}
-
-// Ordered returns a platform MatchComparer which matches any of the platforms
-// but orders them in order they are provided.
-func Ordered(platforms ...specs.Platform) MatchComparer {
- matchers := make([]Matcher, len(platforms))
- for i := range platforms {
- matchers[i] = NewMatcher(platforms[i])
- }
- return orderedPlatformComparer{
- matchers: matchers,
- }
-}
-
-// Any returns a platform MatchComparer which matches any of the platforms
-// with no preference for ordering.
-func Any(platforms ...specs.Platform) MatchComparer {
- matchers := make([]Matcher, len(platforms))
- for i := range platforms {
- matchers[i] = NewMatcher(platforms[i])
- }
- return anyPlatformComparer{
- matchers: matchers,
- }
-}
-
-// All is a platform MatchComparer which matches all platforms
-// with preference for ordering.
-var All MatchComparer = allPlatformComparer{}
-
-type singlePlatformComparer struct {
- Matcher
-}
-
-func (c singlePlatformComparer) Less(p1, p2 specs.Platform) bool {
- return c.Match(p1) && !c.Match(p2)
-}
-
-type orderedPlatformComparer struct {
- matchers []Matcher
-}
-
-func (c orderedPlatformComparer) Match(platform specs.Platform) bool {
- for _, m := range c.matchers {
- if m.Match(platform) {
- return true
- }
- }
- return false
-}
-
-func (c orderedPlatformComparer) Less(p1 specs.Platform, p2 specs.Platform) bool {
- for _, m := range c.matchers {
- p1m := m.Match(p1)
- p2m := m.Match(p2)
- if p1m && !p2m {
- return true
- }
- if p1m || p2m {
- return false
- }
- }
- return false
-}
-
-type anyPlatformComparer struct {
- matchers []Matcher
-}
-
-func (c anyPlatformComparer) Match(platform specs.Platform) bool {
- for _, m := range c.matchers {
- if m.Match(platform) {
- return true
- }
- }
- return false
-}
-
-func (c anyPlatformComparer) Less(p1, p2 specs.Platform) bool {
- var p1m, p2m bool
- for _, m := range c.matchers {
- if !p1m && m.Match(p1) {
- p1m = true
- }
- if !p2m && m.Match(p2) {
- p2m = true
- }
- if p1m && p2m {
- return false
- }
- }
- // If one matches, and the other does, sort match first
- return p1m && !p2m
-}
-
-type allPlatformComparer struct{}
-
-func (allPlatformComparer) Match(specs.Platform) bool {
- return true
-}
-
-func (allPlatformComparer) Less(specs.Platform, specs.Platform) bool {
- return false
-}
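
For context, a sketch of Only's ARM fallback behaviour, using MustParse from the sibling file removed below:

    m := platforms.Only(platforms.MustParse("linux/arm/v7"))
    m.Match(platforms.MustParse("linux/arm/v6")) // true: v7 also accepts v6 and v5
    m.Match(platforms.MustParse("linux/arm64"))  // false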
diff --git a/vendor/github.com/containerd/containerd/platforms/cpuinfo.go b/vendor/github.com/containerd/containerd/platforms/cpuinfo.go
deleted file mode 100644
index db65a726b90..00000000000
--- a/vendor/github.com/containerd/containerd/platforms/cpuinfo.go
+++ /dev/null
@@ -1,122 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package platforms
-
-import (
- "bufio"
- "os"
- "runtime"
- "strings"
-
- "github.com/containerd/containerd/errdefs"
- "github.com/containerd/containerd/log"
- "github.com/pkg/errors"
-)
-
-// Present the ARM instruction set architecture, eg: v7, v8
-var cpuVariant string
-
-func init() {
- if isArmArch(runtime.GOARCH) {
- cpuVariant = getCPUVariant()
- } else {
- cpuVariant = ""
- }
-}
-
-// For Linux, the kernel has already detected the ABI, ISA and Features.
-// So we don't need to access the ARM registers to detect platform information
-// by ourselves. We can just parse these information from /proc/cpuinfo
-func getCPUInfo(pattern string) (info string, err error) {
- if !isLinuxOS(runtime.GOOS) {
- return "", errors.Wrapf(errdefs.ErrNotImplemented, "getCPUInfo for OS %s", runtime.GOOS)
- }
-
- cpuinfo, err := os.Open("/proc/cpuinfo")
- if err != nil {
- return "", err
- }
- defer cpuinfo.Close()
-
- // Start to Parse the Cpuinfo line by line. For SMP SoC, we parse
- // the first core is enough.
- scanner := bufio.NewScanner(cpuinfo)
- for scanner.Scan() {
- newline := scanner.Text()
- list := strings.Split(newline, ":")
-
- if len(list) > 1 && strings.EqualFold(strings.TrimSpace(list[0]), pattern) {
- return strings.TrimSpace(list[1]), nil
- }
- }
-
- // Check whether the scanner encountered errors
- err = scanner.Err()
- if err != nil {
- return "", err
- }
-
- return "", errors.Wrapf(errdefs.ErrNotFound, "getCPUInfo for pattern: %s", pattern)
-}
-
-func getCPUVariant() string {
- if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
- // Windows/Darwin only supports v7 for ARM32 and v8 for ARM64 and so we can use
- // runtime.GOARCH to determine the variants
- var variant string
- switch runtime.GOARCH {
- case "arm64":
- variant = "v8"
- case "arm":
- variant = "v7"
- default:
- variant = "unknown"
- }
-
- return variant
- }
-
- variant, err := getCPUInfo("Cpu architecture")
- if err != nil {
- log.L.WithError(err).Error("failure getting variant")
- return ""
- }
-
- switch strings.ToLower(variant) {
- case "8", "aarch64":
- // special case: if running a 32-bit userspace on aarch64, the variant should be "v7"
- if runtime.GOARCH == "arm" {
- variant = "v7"
- } else {
- variant = "v8"
- }
- case "7", "7m", "?(12)", "?(13)", "?(14)", "?(15)", "?(16)", "?(17)":
- variant = "v7"
- case "6", "6tej":
- variant = "v6"
- case "5", "5t", "5te", "5tej":
- variant = "v5"
- case "4", "4t":
- variant = "v4"
- case "3":
- variant = "v3"
- default:
- variant = "unknown"
- }
-
- return variant
-}
diff --git a/vendor/github.com/containerd/containerd/platforms/database.go b/vendor/github.com/containerd/containerd/platforms/database.go
deleted file mode 100644
index 6ede94061eb..00000000000
--- a/vendor/github.com/containerd/containerd/platforms/database.go
+++ /dev/null
@@ -1,114 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package platforms
-
-import (
- "runtime"
- "strings"
-)
-
-// isLinuxOS returns true if the operating system is Linux.
-//
-// The OS value should be normalized before calling this function.
-func isLinuxOS(os string) bool {
- return os == "linux"
-}
-
-// These function are generated from https://golang.org/src/go/build/syslist.go.
-//
-// We use switch statements because they are slightly faster than map lookups
-// and use a little less memory.
-
-// isKnownOS returns true if we know about the operating system.
-//
-// The OS value should be normalized before calling this function.
-func isKnownOS(os string) bool {
- switch os {
- case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
- return true
- }
- return false
-}
-
-// isArmArch returns true if the architecture is ARM.
-//
-// The arch value should be normalized before being passed to this function.
-func isArmArch(arch string) bool {
- switch arch {
- case "arm", "arm64":
- return true
- }
- return false
-}
-
-// isKnownArch returns true if we know about the architecture.
-//
-// The arch value should be normalized before being passed to this function.
-func isKnownArch(arch string) bool {
- switch arch {
- case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
- return true
- }
- return false
-}
-
-func normalizeOS(os string) string {
- if os == "" {
- return runtime.GOOS
- }
- os = strings.ToLower(os)
-
- switch os {
- case "macos":
- os = "darwin"
- }
- return os
-}
-
-// normalizeArch normalizes the architecture.
-func normalizeArch(arch, variant string) (string, string) {
- arch, variant = strings.ToLower(arch), strings.ToLower(variant)
- switch arch {
- case "i386":
- arch = "386"
- variant = ""
- case "x86_64", "x86-64":
- arch = "amd64"
- variant = ""
- case "aarch64", "arm64":
- arch = "arm64"
- switch variant {
- case "8", "v8":
- variant = ""
- }
- case "armhf":
- arch = "arm"
- variant = "v7"
- case "armel":
- arch = "arm"
- variant = "v6"
- case "arm":
- switch variant {
- case "", "7":
- variant = "v7"
- case "5", "6", "8":
- variant = "v" + variant
- }
- }
-
- return arch, variant
-}
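
Within the package, the normalization helpers behave like this (illustrative values):

    arch, variant := normalizeArch("armhf", "")
    fmt.Println(arch, variant)        // arm v7
    fmt.Println(normalizeOS("macOS")) // darwin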
diff --git a/vendor/github.com/containerd/containerd/platforms/defaults.go b/vendor/github.com/containerd/containerd/platforms/defaults.go
deleted file mode 100644
index a14d80e58cb..00000000000
--- a/vendor/github.com/containerd/containerd/platforms/defaults.go
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package platforms
-
-import (
- "runtime"
-
- specs "github.com/opencontainers/image-spec/specs-go/v1"
-)
-
-// DefaultString returns the default string specifier for the platform.
-func DefaultString() string {
- return Format(DefaultSpec())
-}
-
-// DefaultSpec returns the current platform's default platform specification.
-func DefaultSpec() specs.Platform {
- return specs.Platform{
- OS: runtime.GOOS,
- Architecture: runtime.GOARCH,
- // The Variant field will be empty if arch != ARM.
- Variant: cpuVariant,
- }
-}
diff --git a/vendor/github.com/containerd/containerd/platforms/defaults_unix.go b/vendor/github.com/containerd/containerd/platforms/defaults_unix.go
deleted file mode 100644
index e8a7d5ffa0d..00000000000
--- a/vendor/github.com/containerd/containerd/platforms/defaults_unix.go
+++ /dev/null
@@ -1,24 +0,0 @@
-// +build !windows
-
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package platforms
-
-// Default returns the default matcher for the platform.
-func Default() MatchComparer {
- return Only(DefaultSpec())
-}
diff --git a/vendor/github.com/containerd/containerd/platforms/defaults_windows.go b/vendor/github.com/containerd/containerd/platforms/defaults_windows.go
deleted file mode 100644
index 0defbd36c04..00000000000
--- a/vendor/github.com/containerd/containerd/platforms/defaults_windows.go
+++ /dev/null
@@ -1,31 +0,0 @@
-// +build windows
-
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package platforms
-
-import (
- specs "github.com/opencontainers/image-spec/specs-go/v1"
-)
-
-// Default returns the default matcher for the platform.
-func Default() MatchComparer {
- return Ordered(DefaultSpec(), specs.Platform{
- OS: "linux",
- Architecture: "amd64",
- })
-}
diff --git a/vendor/github.com/containerd/containerd/platforms/platforms.go b/vendor/github.com/containerd/containerd/platforms/platforms.go
deleted file mode 100644
index 77d3f184ec1..00000000000
--- a/vendor/github.com/containerd/containerd/platforms/platforms.go
+++ /dev/null
@@ -1,278 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-// Package platforms provides a toolkit for normalizing, matching and
-// specifying container platforms.
-//
-// Centered around OCI platform specifications, we define a string-based
-// specifier syntax that can be used for user input. With a specifier, users
-// only need to specify the parts of the platform that are relevant to their
-// context, providing an operating system or architecture or both.
-//
-// How do I use this package?
-//
-// The vast majority of use cases should simply use the match function with
-// user input. The first step is to parse a specifier into a matcher:
-//
-// m, err := Parse("linux")
-// if err != nil { ... }
-//
-// Once you have a matcher, use it to match against the platform declared by a
-// component, typically from an image or runtime. Since extracting an images
-// platform is a little more involved, we'll use an example against the
-// platform default:
-//
-// if ok := m.Match(Default()); !ok { /* doesn't match */ }
-//
-// This can be composed in loops for resolving runtimes or used as a filter for
-// fetch and select images.
-//
-// More details of the specifier syntax and platform spec follow.
-//
-// Declaring Platform Support
-//
-// Components that have strict platform requirements should use the OCI
-// platform specification to declare their support. Typically, this will be
-// images and runtimes that should make these declaring which platform they
-// support specifically. This looks roughly as follows:
-//
-// type Platform struct {
-// Architecture string
-// OS string
-// Variant string
-// }
-//
-// Most images and runtimes should at least set Architecture and OS, according
-// to their GOARCH and GOOS values, respectively (follow the OCI image
-// specification when in doubt). ARM should set variant under certain
-// discussions, which are outlined below.
-//
-// Platform Specifiers
-//
-// While the OCI platform specifications provide a tool for components to
-// specify structured information, user input typically doesn't need the full
-// context and much can be inferred. To solve this problem, we introduced
-// "specifiers". A specifier has the format
- // `<os>|<arch>|<os>/<arch>[/<variant>]`. The user can provide either the
-// operating system or the architecture or both.
-//
-// An example of a common specifier is `linux/amd64`. If the host has a default
-// of runtime that matches this, the user can simply provide the component that
-// matters. For example, if a image provides amd64 and arm64 support, the
-// operating system, `linux` can be inferred, so they only have to provide
-// `arm64` or `amd64`. Similar behavior is implemented for operating systems,
-// where the architecture may be known but a runtime may support images from
-// different operating systems.
-//
-// Normalization
-//
-// Because not all users are familiar with the way the Go runtime represents
-// platforms, several normalizations have been provided to make this package
-// easier to user.
-//
-// The following are performed for architectures:
-//
-// Value Normalized
-// aarch64 arm64
-// armhf arm
-// armel arm/v6
-// i386 386
-// x86_64 amd64
-// x86-64 amd64
-//
-// We also normalize the operating system `macos` to `darwin`.
-//
-// ARM Support
-//
-// To qualify ARM architecture, the Variant field is used to qualify the arm
-// version. The most common arm version, v7, is represented without the variant
-// unless it is explicitly provided. This is treated as equivalent to armhf. A
-// previous architecture, armel, will be normalized to arm/v6.
-//
-// While these normalizations are provided, their support on arm platforms has
-// not yet been fully implemented and tested.
-package platforms
-
-import (
- "regexp"
- "runtime"
- "strconv"
- "strings"
-
- "github.com/containerd/containerd/errdefs"
- specs "github.com/opencontainers/image-spec/specs-go/v1"
- "github.com/pkg/errors"
-)
-
-var (
- specifierRe = regexp.MustCompile(`^[A-Za-z0-9_-]+$`)
-)
-
-// Matcher matches platforms specifications, provided by an image or runtime.
-type Matcher interface {
- Match(platform specs.Platform) bool
-}
-
-// NewMatcher returns a simple matcher based on the provided platform
-// specification. The returned matcher only looks for equality based on os,
-// architecture and variant.
-//
-// One may implement their own matcher if this doesn't provide the required
-// functionality.
-//
-// Applications should opt to use `Match` over directly parsing specifiers.
-func NewMatcher(platform specs.Platform) Matcher {
- return &matcher{
- Platform: Normalize(platform),
- }
-}
-
-type matcher struct {
- specs.Platform
-}
-
-func (m *matcher) Match(platform specs.Platform) bool {
- normalized := Normalize(platform)
- return m.OS == normalized.OS &&
- m.Architecture == normalized.Architecture &&
- m.Variant == normalized.Variant
-}
-
-func (m *matcher) String() string {
- return Format(m.Platform)
-}
-
-// Parse parses the platform specifier syntax into a platform declaration.
-//
- // Platform specifiers are in the format `<os>|<arch>|<os>/<arch>[/<variant>]`.
-// The minimum required information for a platform specifier is the operating
-// system or architecture. If there is only a single string (no slashes), the
-// value will be matched against the known set of operating systems, then fall
-// back to the known set of architectures. The missing component will be
-// inferred based on the local environment.
-func Parse(specifier string) (specs.Platform, error) {
- if strings.Contains(specifier, "*") {
- // TODO(stevvooe): need to work out exact wildcard handling
- return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: wildcards not yet supported", specifier)
- }
-
- parts := strings.Split(specifier, "/")
-
- for _, part := range parts {
- if !specifierRe.MatchString(part) {
- return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q is an invalid component of %q: platform specifier component must match %q", part, specifier, specifierRe.String())
- }
- }
-
- var p specs.Platform
- switch len(parts) {
- case 1:
- // in this case, we will test that the value might be an OS, then look
- // it up. If it is not known, we'll treat it as an architecture. Since
- // we have very little information about the platform here, we are
- // going to be a little more strict if we don't know about the argument
- // value.
- p.OS = normalizeOS(parts[0])
- if isKnownOS(p.OS) {
- // picks a default architecture
- p.Architecture = runtime.GOARCH
- if p.Architecture == "arm" && cpuVariant != "v7" {
- p.Variant = cpuVariant
- }
-
- return p, nil
- }
-
- p.Architecture, p.Variant = normalizeArch(parts[0], "")
- if p.Architecture == "arm" && p.Variant == "v7" {
- p.Variant = ""
- }
- if isKnownArch(p.Architecture) {
- p.OS = runtime.GOOS
- return p, nil
- }
-
- return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: unknown operating system or architecture", specifier)
- case 2:
- // In this case, we treat as a regular os/arch pair. We don't care
- // about whether or not we know of the platform.
- p.OS = normalizeOS(parts[0])
- p.Architecture, p.Variant = normalizeArch(parts[1], "")
- if p.Architecture == "arm" && p.Variant == "v7" {
- p.Variant = ""
- }
-
- return p, nil
- case 3:
- // we have a fully specified variant, this is rare
- p.OS = normalizeOS(parts[0])
- p.Architecture, p.Variant = normalizeArch(parts[1], parts[2])
- if p.Architecture == "arm64" && p.Variant == "" {
- p.Variant = "v8"
- }
-
- return p, nil
- }
-
- return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: cannot parse platform specifier", specifier)
-}
-
-// MustParse is like Parse but panics if the specifier cannot be parsed.
-// Simplifies initialization of global variables.
-func MustParse(specifier string) specs.Platform {
- p, err := Parse(specifier)
- if err != nil {
- panic("platform: Parse(" + strconv.Quote(specifier) + "): " + err.Error())
- }
- return p
-}
-
-// Format returns a string specifier from the provided platform specification.
-func Format(platform specs.Platform) string {
- if platform.OS == "" {
- return "unknown"
- }
-
- return joinNotEmpty(platform.OS, platform.Architecture, platform.Variant)
-}
-
-func joinNotEmpty(s ...string) string {
- var ss []string
- for _, s := range s {
- if s == "" {
- continue
- }
-
- ss = append(ss, s)
- }
-
- return strings.Join(ss, "/")
-}
-
-// Normalize validates and translates the platform to the canonical value.
-//
-// For example, if "Aarch64" is encountered, we change it to "arm64" or if
-// "x86_64" is encountered, it becomes "amd64".
-func Normalize(platform specs.Platform) specs.Platform {
- platform.OS = normalizeOS(platform.OS)
- platform.Architecture, platform.Variant = normalizeArch(platform.Architecture, platform.Variant)
-
- // these fields are deprecated, remove them
- platform.OSFeatures = nil
- platform.OSVersion = ""
-
- return platform
-}
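
For reference, a minimal sketch (not part of this change) of how the specifier parsing and normalization documented above behave; the import path github.com/containerd/containerd/platforms and the concrete specifier strings are illustrative assumptions:

package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
)

func main() {
	// "x86_64" is normalized to "amd64" (see the table in the package doc).
	p, err := platforms.Parse("linux/x86_64")
	if err != nil {
		panic(err)
	}
	fmt.Println(platforms.Format(p)) // linux/amd64

	// "aarch64" is normalized to "arm64"; a fully specified variant is rare.
	fmt.Println(platforms.Format(platforms.MustParse("linux/aarch64/v8"))) // linux/arm64/v8

	// Matching compares only the normalized os/arch/variant triple.
	m := platforms.NewMatcher(platforms.MustParse("linux/amd64"))
	fmt.Println(m.Match(platforms.MustParse("linux/x86_64"))) // true
}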
diff --git a/vendor/github.com/containernetworking/cni/LICENSE b/vendor/github.com/containernetworking/cni/LICENSE
deleted file mode 100644
index 8f71f43fee3..00000000000
--- a/vendor/github.com/containernetworking/cni/LICENSE
+++ /dev/null
@@ -1,202 +0,0 @@
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "{}"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright {yyyy} {name of copyright owner}
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
diff --git a/vendor/github.com/containernetworking/cni/libcni/api.go b/vendor/github.com/containernetworking/cni/libcni/api.go
deleted file mode 100644
index 7e52bd83873..00000000000
--- a/vendor/github.com/containernetworking/cni/libcni/api.go
+++ /dev/null
@@ -1,673 +0,0 @@
-// Copyright 2015 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package libcni
-
-import (
- "context"
- "encoding/json"
- "fmt"
- "io/ioutil"
- "os"
- "path/filepath"
- "strings"
-
- "github.com/containernetworking/cni/pkg/invoke"
- "github.com/containernetworking/cni/pkg/types"
- "github.com/containernetworking/cni/pkg/utils"
- "github.com/containernetworking/cni/pkg/version"
-)
-
-var (
- CacheDir = "/var/lib/cni"
-)
-
-const (
- CNICacheV1 = "cniCacheV1"
-)
-
-// A RuntimeConf holds the arguments for one invocation of a CNI plugin,
-// excluding the network configuration itself; the one exception is that
-// the `runtimeConfig` derived from the network configuration is included
-// here.
-type RuntimeConf struct {
- ContainerID string
- NetNS string
- IfName string
- Args [][2]string
- // A dictionary of capability-specific data passed by the runtime
- // to plugins as top-level keys in the 'runtimeConfig' dictionary
- // of the plugin's stdin data. libcni will ensure that only keys
- // in this map which match the capabilities of the plugin are passed
- // to the plugin
- CapabilityArgs map[string]interface{}
-
- // DEPRECATED. Will be removed in a future release.
- CacheDir string
-}
-
-type NetworkConfig struct {
- Network *types.NetConf
- Bytes []byte
-}
-
-type NetworkConfigList struct {
- Name string
- CNIVersion string
- DisableCheck bool
- Plugins []*NetworkConfig
- Bytes []byte
-}
-
-type CNI interface {
- AddNetworkList(ctx context.Context, net *NetworkConfigList, rt *RuntimeConf) (types.Result, error)
- CheckNetworkList(ctx context.Context, net *NetworkConfigList, rt *RuntimeConf) error
- DelNetworkList(ctx context.Context, net *NetworkConfigList, rt *RuntimeConf) error
- GetNetworkListCachedResult(net *NetworkConfigList, rt *RuntimeConf) (types.Result, error)
- GetNetworkListCachedConfig(net *NetworkConfigList, rt *RuntimeConf) ([]byte, *RuntimeConf, error)
-
- AddNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) (types.Result, error)
- CheckNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) error
- DelNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) error
- GetNetworkCachedResult(net *NetworkConfig, rt *RuntimeConf) (types.Result, error)
- GetNetworkCachedConfig(net *NetworkConfig, rt *RuntimeConf) ([]byte, *RuntimeConf, error)
-
- ValidateNetworkList(ctx context.Context, net *NetworkConfigList) ([]string, error)
- ValidateNetwork(ctx context.Context, net *NetworkConfig) ([]string, error)
-}
-
-type CNIConfig struct {
- Path []string
- exec invoke.Exec
- cacheDir string
-}
-
-// CNIConfig implements the CNI interface
-var _ CNI = &CNIConfig{}
-
-// NewCNIConfig returns a new CNIConfig object that will search for plugins
-// in the given paths and use the given exec interface to run those plugins,
-// or if the exec interface is not given, will use a default exec handler.
-func NewCNIConfig(path []string, exec invoke.Exec) *CNIConfig {
- return NewCNIConfigWithCacheDir(path, "", exec)
-}
-
-// NewCNIConfigWithCacheDir returns a new CNIConfig object that will search for plugins
-// in the given paths and use the given exec interface to run those plugins,
-// or if the exec interface is not given, will use a default exec handler.
-// The given cache directory will be used for temporary data storage when needed.
-func NewCNIConfigWithCacheDir(path []string, cacheDir string, exec invoke.Exec) *CNIConfig {
- return &CNIConfig{
- Path: path,
- cacheDir: cacheDir,
- exec: exec,
- }
-}
-
-func buildOneConfig(name, cniVersion string, orig *NetworkConfig, prevResult types.Result, rt *RuntimeConf) (*NetworkConfig, error) {
- var err error
-
- inject := map[string]interface{}{
- "name": name,
- "cniVersion": cniVersion,
- }
- // Add previous plugin result
- if prevResult != nil {
- inject["prevResult"] = prevResult
- }
-
- // Ensure every config uses the same name and version
- orig, err = InjectConf(orig, inject)
- if err != nil {
- return nil, err
- }
-
- return injectRuntimeConfig(orig, rt)
-}
-
-// This function takes a libcni RuntimeConf structure and injects values into
-// a "runtimeConfig" dictionary in the CNI network configuration JSON that
-// will be passed to the plugin on stdin.
-//
-// Only "capabilities arguments" passed by the runtime are currently injected.
-// These capabilities arguments are filtered through the plugin's advertised
-// capabilities from its config JSON, and any keys in the CapabilityArgs
-// matching plugin capabilities are added to the "runtimeConfig" dictionary
-// sent to the plugin via JSON on stdin. For example, if the plugin's
-// capabilities include "portMappings", and the CapabilityArgs map includes a
-// "portMappings" key, that key and its value are added to the "runtimeConfig"
-// dictionary to be passed to the plugin's stdin.
-func injectRuntimeConfig(orig *NetworkConfig, rt *RuntimeConf) (*NetworkConfig, error) {
- var err error
-
- rc := make(map[string]interface{})
- for capability, supported := range orig.Network.Capabilities {
- if !supported {
- continue
- }
- if data, ok := rt.CapabilityArgs[capability]; ok {
- rc[capability] = data
- }
- }
-
- if len(rc) > 0 {
- orig, err = InjectConf(orig, map[string]interface{}{"runtimeConfig": rc})
- if err != nil {
- return nil, err
- }
- }
-
- return orig, nil
-}
-
-// ensure we have a usable exec if the CNIConfig was not given one
-func (c *CNIConfig) ensureExec() invoke.Exec {
- if c.exec == nil {
- c.exec = &invoke.DefaultExec{
- RawExec: &invoke.RawExec{Stderr: os.Stderr},
- PluginDecoder: version.PluginDecoder{},
- }
- }
- return c.exec
-}
-
-type cachedInfo struct {
- Kind string `json:"kind"`
- ContainerID string `json:"containerId"`
- Config []byte `json:"config"`
- IfName string `json:"ifName"`
- NetworkName string `json:"networkName"`
- CniArgs [][2]string `json:"cniArgs,omitempty"`
- CapabilityArgs map[string]interface{} `json:"capabilityArgs,omitempty"`
- RawResult map[string]interface{} `json:"result,omitempty"`
- Result types.Result `json:"-"`
-}
-
-// getCacheDir returns the cache directory in this order:
-// 1) global cacheDir from CNIConfig object
-// 2) deprecated cacheDir from RuntimeConf object
-// 3) fall back to default cache directory
-func (c *CNIConfig) getCacheDir(rt *RuntimeConf) string {
- if c.cacheDir != "" {
- return c.cacheDir
- }
- if rt.CacheDir != "" {
- return rt.CacheDir
- }
- return CacheDir
-}
-
-func (c *CNIConfig) getCacheFilePath(netName string, rt *RuntimeConf) (string, error) {
- if netName == "" || rt.ContainerID == "" || rt.IfName == "" {
- return "", fmt.Errorf("cache file path requires network name (%q), container ID (%q), and interface name (%q)", netName, rt.ContainerID, rt.IfName)
- }
- return filepath.Join(c.getCacheDir(rt), "results", fmt.Sprintf("%s-%s-%s", netName, rt.ContainerID, rt.IfName)), nil
-}
-
-func (c *CNIConfig) cacheAdd(result types.Result, config []byte, netName string, rt *RuntimeConf) error {
- cached := cachedInfo{
- Kind: CNICacheV1,
- ContainerID: rt.ContainerID,
- Config: config,
- IfName: rt.IfName,
- NetworkName: netName,
- CniArgs: rt.Args,
- CapabilityArgs: rt.CapabilityArgs,
- }
-
- // We need to get types.Result into cachedInfo as a JSON map.
- // Marshal to []byte, then Unmarshal into cached.RawResult
- data, err := json.Marshal(result)
- if err != nil {
- return err
- }
-
- err = json.Unmarshal(data, &cached.RawResult)
- if err != nil {
- return err
- }
-
- newBytes, err := json.Marshal(&cached)
- if err != nil {
- return err
- }
-
- fname, err := c.getCacheFilePath(netName, rt)
- if err != nil {
- return err
- }
- if err := os.MkdirAll(filepath.Dir(fname), 0700); err != nil {
- return err
- }
-
- return ioutil.WriteFile(fname, newBytes, 0600)
-}
-
-func (c *CNIConfig) cacheDel(netName string, rt *RuntimeConf) error {
- fname, err := c.getCacheFilePath(netName, rt)
- if err != nil {
- // Ignore error
- return nil
- }
- return os.Remove(fname)
-}
-
-func (c *CNIConfig) getCachedConfig(netName string, rt *RuntimeConf) ([]byte, *RuntimeConf, error) {
- var bytes []byte
-
- fname, err := c.getCacheFilePath(netName, rt)
- if err != nil {
- return nil, nil, err
- }
- bytes, err = ioutil.ReadFile(fname)
- if err != nil {
- // Ignore read errors; the cached result may not exist on-disk
- return nil, nil, nil
- }
-
- unmarshaled := cachedInfo{}
- if err := json.Unmarshal(bytes, &unmarshaled); err != nil {
- return nil, nil, fmt.Errorf("failed to unmarshal cached network %q config: %v", netName, err)
- }
- if unmarshaled.Kind != CNICacheV1 {
- return nil, nil, fmt.Errorf("read cached network %q config has wrong kind: %v", netName, unmarshaled.Kind)
- }
-
- newRt := *rt
- if unmarshaled.CniArgs != nil {
- newRt.Args = unmarshaled.CniArgs
- }
- newRt.CapabilityArgs = unmarshaled.CapabilityArgs
-
- return unmarshaled.Config, &newRt, nil
-}
-
-func (c *CNIConfig) getLegacyCachedResult(netName, cniVersion string, rt *RuntimeConf) (types.Result, error) {
- fname, err := c.getCacheFilePath(netName, rt)
- if err != nil {
- return nil, err
- }
- data, err := ioutil.ReadFile(fname)
- if err != nil {
- // Ignore read errors; the cached result may not exist on-disk
- return nil, nil
- }
-
- // Read the version of the cached result
- decoder := version.ConfigDecoder{}
- resultCniVersion, err := decoder.Decode(data)
- if err != nil {
- return nil, err
- }
-
- // Ensure we can understand the result
- result, err := version.NewResult(resultCniVersion, data)
- if err != nil {
- return nil, err
- }
-
- // Convert to the config version to ensure plugins get prevResult
- // in the same version as the config. The cached result version
- // should match the config version unless the config was changed
- // while the container was running.
- result, err = result.GetAsVersion(cniVersion)
- if err != nil && resultCniVersion != cniVersion {
- return nil, fmt.Errorf("failed to convert cached result version %q to config version %q: %v", resultCniVersion, cniVersion, err)
- }
- return result, err
-}
-
-func (c *CNIConfig) getCachedResult(netName, cniVersion string, rt *RuntimeConf) (types.Result, error) {
- fname, err := c.getCacheFilePath(netName, rt)
- if err != nil {
- return nil, err
- }
- fdata, err := ioutil.ReadFile(fname)
- if err != nil {
- // Ignore read errors; the cached result may not exist on-disk
- return nil, nil
- }
-
- cachedInfo := cachedInfo{}
- if err := json.Unmarshal(fdata, &cachedInfo); err != nil || cachedInfo.Kind != CNICacheV1 {
- return c.getLegacyCachedResult(netName, cniVersion, rt)
- }
-
- newBytes, err := json.Marshal(&cachedInfo.RawResult)
- if err != nil {
- return nil, fmt.Errorf("failed to marshal cached network %q config: %v", netName, err)
- }
-
- // Read the version of the cached result
- decoder := version.ConfigDecoder{}
- resultCniVersion, err := decoder.Decode(newBytes)
- if err != nil {
- return nil, err
- }
-
- // Ensure we can understand the result
- result, err := version.NewResult(resultCniVersion, newBytes)
- if err != nil {
- return nil, err
- }
-
- // Convert to the config version to ensure plugins get prevResult
- // in the same version as the config. The cached result version
- // should match the config version unless the config was changed
- // while the container was running.
- result, err = result.GetAsVersion(cniVersion)
- if err != nil && resultCniVersion != cniVersion {
- return nil, fmt.Errorf("failed to convert cached result version %q to config version %q: %v", resultCniVersion, cniVersion, err)
- }
- return result, err
-}
-
-// GetNetworkListCachedResult returns the cached Result of the previous
-// AddNetworkList() operation for a network list, or an error.
-func (c *CNIConfig) GetNetworkListCachedResult(list *NetworkConfigList, rt *RuntimeConf) (types.Result, error) {
- return c.getCachedResult(list.Name, list.CNIVersion, rt)
-}
-
-// GetNetworkCachedResult returns the cached Result of the previous
-// AddNetwork() operation for a network, or an error.
-func (c *CNIConfig) GetNetworkCachedResult(net *NetworkConfig, rt *RuntimeConf) (types.Result, error) {
- return c.getCachedResult(net.Network.Name, net.Network.CNIVersion, rt)
-}
-
-// GetNetworkListCachedConfig copies the input RuntimeConf to output
-// RuntimeConf with fields updated with info from the cached Config.
-func (c *CNIConfig) GetNetworkListCachedConfig(list *NetworkConfigList, rt *RuntimeConf) ([]byte, *RuntimeConf, error) {
- return c.getCachedConfig(list.Name, rt)
-}
-
-// GetNetworkCachedConfig copies the input RuntimeConf to output
-// RuntimeConf with fields updated with info from the cached Config.
-func (c *CNIConfig) GetNetworkCachedConfig(net *NetworkConfig, rt *RuntimeConf) ([]byte, *RuntimeConf, error) {
- return c.getCachedConfig(net.Network.Name, rt)
-}
-
-func (c *CNIConfig) addNetwork(ctx context.Context, name, cniVersion string, net *NetworkConfig, prevResult types.Result, rt *RuntimeConf) (types.Result, error) {
- c.ensureExec()
- pluginPath, err := c.exec.FindInPath(net.Network.Type, c.Path)
- if err != nil {
- return nil, err
- }
- if err := utils.ValidateContainerID(rt.ContainerID); err != nil {
- return nil, err
- }
- if err := utils.ValidateNetworkName(name); err != nil {
- return nil, err
- }
- if err := utils.ValidateInterfaceName(rt.IfName); err != nil {
- return nil, err
- }
-
- newConf, err := buildOneConfig(name, cniVersion, net, prevResult, rt)
- if err != nil {
- return nil, err
- }
-
- return invoke.ExecPluginWithResult(ctx, pluginPath, newConf.Bytes, c.args("ADD", rt), c.exec)
-}
-
-// AddNetworkList executes a sequence of plugins with the ADD command
-func (c *CNIConfig) AddNetworkList(ctx context.Context, list *NetworkConfigList, rt *RuntimeConf) (types.Result, error) {
- var err error
- var result types.Result
- for _, net := range list.Plugins {
- result, err = c.addNetwork(ctx, list.Name, list.CNIVersion, net, result, rt)
- if err != nil {
- return nil, err
- }
- }
-
- if err = c.cacheAdd(result, list.Bytes, list.Name, rt); err != nil {
- return nil, fmt.Errorf("failed to set network %q cached result: %v", list.Name, err)
- }
-
- return result, nil
-}
-
-func (c *CNIConfig) checkNetwork(ctx context.Context, name, cniVersion string, net *NetworkConfig, prevResult types.Result, rt *RuntimeConf) error {
- c.ensureExec()
- pluginPath, err := c.exec.FindInPath(net.Network.Type, c.Path)
- if err != nil {
- return err
- }
-
- newConf, err := buildOneConfig(name, cniVersion, net, prevResult, rt)
- if err != nil {
- return err
- }
-
- return invoke.ExecPluginWithoutResult(ctx, pluginPath, newConf.Bytes, c.args("CHECK", rt), c.exec)
-}
-
-// CheckNetworkList executes a sequence of plugins with the CHECK command
-func (c *CNIConfig) CheckNetworkList(ctx context.Context, list *NetworkConfigList, rt *RuntimeConf) error {
- // CHECK was added in CNI spec version 0.4.0 and higher
- if gtet, err := version.GreaterThanOrEqualTo(list.CNIVersion, "0.4.0"); err != nil {
- return err
- } else if !gtet {
- return fmt.Errorf("configuration version %q does not support the CHECK command", list.CNIVersion)
- }
-
- if list.DisableCheck {
- return nil
- }
-
- cachedResult, err := c.getCachedResult(list.Name, list.CNIVersion, rt)
- if err != nil {
- return fmt.Errorf("failed to get network %q cached result: %v", list.Name, err)
- }
-
- for _, net := range list.Plugins {
- if err := c.checkNetwork(ctx, list.Name, list.CNIVersion, net, cachedResult, rt); err != nil {
- return err
- }
- }
-
- return nil
-}
-
-func (c *CNIConfig) delNetwork(ctx context.Context, name, cniVersion string, net *NetworkConfig, prevResult types.Result, rt *RuntimeConf) error {
- c.ensureExec()
- pluginPath, err := c.exec.FindInPath(net.Network.Type, c.Path)
- if err != nil {
- return err
- }
-
- newConf, err := buildOneConfig(name, cniVersion, net, prevResult, rt)
- if err != nil {
- return err
- }
-
- return invoke.ExecPluginWithoutResult(ctx, pluginPath, newConf.Bytes, c.args("DEL", rt), c.exec)
-}
-
-// DelNetworkList executes a sequence of plugins with the DEL command
-func (c *CNIConfig) DelNetworkList(ctx context.Context, list *NetworkConfigList, rt *RuntimeConf) error {
- var cachedResult types.Result
-
- // Cached result on DEL was added in CNI spec version 0.4.0 and higher
- if gtet, err := version.GreaterThanOrEqualTo(list.CNIVersion, "0.4.0"); err != nil {
- return err
- } else if gtet {
- cachedResult, err = c.getCachedResult(list.Name, list.CNIVersion, rt)
- if err != nil {
- return fmt.Errorf("failed to get network %q cached result: %v", list.Name, err)
- }
- }
-
- for i := len(list.Plugins) - 1; i >= 0; i-- {
- net := list.Plugins[i]
- if err := c.delNetwork(ctx, list.Name, list.CNIVersion, net, cachedResult, rt); err != nil {
- return err
- }
- }
- _ = c.cacheDel(list.Name, rt)
-
- return nil
-}
-
-// AddNetwork executes the plugin with the ADD command
-func (c *CNIConfig) AddNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) (types.Result, error) {
- result, err := c.addNetwork(ctx, net.Network.Name, net.Network.CNIVersion, net, nil, rt)
- if err != nil {
- return nil, err
- }
-
- if err = c.cacheAdd(result, net.Bytes, net.Network.Name, rt); err != nil {
- return nil, fmt.Errorf("failed to set network %q cached result: %v", net.Network.Name, err)
- }
-
- return result, nil
-}
-
-// CheckNetwork executes the plugin with the CHECK command
-func (c *CNIConfig) CheckNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) error {
- // CHECK was added in CNI spec version 0.4.0 and higher
- if gtet, err := version.GreaterThanOrEqualTo(net.Network.CNIVersion, "0.4.0"); err != nil {
- return err
- } else if !gtet {
- return fmt.Errorf("configuration version %q does not support the CHECK command", net.Network.CNIVersion)
- }
-
- cachedResult, err := c.getCachedResult(net.Network.Name, net.Network.CNIVersion, rt)
- if err != nil {
- return fmt.Errorf("failed to get network %q cached result: %v", net.Network.Name, err)
- }
- return c.checkNetwork(ctx, net.Network.Name, net.Network.CNIVersion, net, cachedResult, rt)
-}
-
-// DelNetwork executes the plugin with the DEL command
-func (c *CNIConfig) DelNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) error {
- var cachedResult types.Result
-
- // Cached result on DEL was added in CNI spec version 0.4.0 and higher
- if gtet, err := version.GreaterThanOrEqualTo(net.Network.CNIVersion, "0.4.0"); err != nil {
- return err
- } else if gtet {
- cachedResult, err = c.getCachedResult(net.Network.Name, net.Network.CNIVersion, rt)
- if err != nil {
- return fmt.Errorf("failed to get network %q cached result: %v", net.Network.Name, err)
- }
- }
-
- if err := c.delNetwork(ctx, net.Network.Name, net.Network.CNIVersion, net, cachedResult, rt); err != nil {
- return err
- }
- _ = c.cacheDel(net.Network.Name, rt)
- return nil
-}
-
-// ValidateNetworkList checks that a configuration is reasonably valid.
-// - all the specified plugins exist on disk
-// - every plugin supports the desired version.
-//
-// Returns a list of all capabilities supported by the configuration, or error
-func (c *CNIConfig) ValidateNetworkList(ctx context.Context, list *NetworkConfigList) ([]string, error) {
- version := list.CNIVersion
-
- // holding map for seen caps (in case of duplicates)
- caps := map[string]interface{}{}
-
- errs := []error{}
- for _, net := range list.Plugins {
- if err := c.validatePlugin(ctx, net.Network.Type, version); err != nil {
- errs = append(errs, err)
- }
- for c, enabled := range net.Network.Capabilities {
- if !enabled {
- continue
- }
- caps[c] = struct{}{}
- }
- }
-
- if len(errs) > 0 {
- return nil, fmt.Errorf("%v", errs)
- }
-
- // make caps list
- cc := make([]string, 0, len(caps))
- for c := range caps {
- cc = append(cc, c)
- }
-
- return cc, nil
-}
-
-// ValidateNetwork checks that a configuration is reasonably valid.
-// It uses the same logic as ValidateNetworkList().
-// Returns a list of capabilities.
-func (c *CNIConfig) ValidateNetwork(ctx context.Context, net *NetworkConfig) ([]string, error) {
- caps := []string{}
- for c, ok := range net.Network.Capabilities {
- if ok {
- caps = append(caps, c)
- }
- }
- if err := c.validatePlugin(ctx, net.Network.Type, net.Network.CNIVersion); err != nil {
- return nil, err
- }
- return caps, nil
-}
-
-// validatePlugin checks that an individual plugin's configuration is sane
-func (c *CNIConfig) validatePlugin(ctx context.Context, pluginName, expectedVersion string) error {
- c.ensureExec()
- pluginPath, err := c.exec.FindInPath(pluginName, c.Path)
- if err != nil {
- return err
- }
- if expectedVersion == "" {
- expectedVersion = "0.1.0"
- }
-
- vi, err := invoke.GetVersionInfo(ctx, pluginPath, c.exec)
- if err != nil {
- return err
- }
- for _, vers := range vi.SupportedVersions() {
- if vers == expectedVersion {
- return nil
- }
- }
- return fmt.Errorf("plugin %s does not support config version %q", pluginName, expectedVersion)
-}
-
-// GetVersionInfo reports which versions of the CNI spec are supported by
-// the given plugin.
-func (c *CNIConfig) GetVersionInfo(ctx context.Context, pluginType string) (version.PluginInfo, error) {
- c.ensureExec()
- pluginPath, err := c.exec.FindInPath(pluginType, c.Path)
- if err != nil {
- return nil, err
- }
-
- return invoke.GetVersionInfo(ctx, pluginPath, c.exec)
-}
-
-// =====
-func (c *CNIConfig) args(action string, rt *RuntimeConf) *invoke.Args {
- return &invoke.Args{
- Command: action,
- ContainerID: rt.ContainerID,
- NetNS: rt.NetNS,
- PluginArgs: rt.Args,
- IfName: rt.IfName,
- Path: strings.Join(c.Path, string(os.PathListSeparator)),
- }
-}
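
For reference, a hedged sketch of how a runtime typically drives the libcni API deleted above; the plugin directory, conflist path, and port-mapping values are illustrative assumptions, not part of this change:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// A nil Exec means libcni falls back to its default exec handler.
	cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

	list, err := libcni.ConfListFromFile("/etc/cni/net.d/10-mynet.conflist")
	if err != nil {
		log.Fatal(err)
	}

	rt := &libcni.RuntimeConf{
		ContainerID: "example-container",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
		// Keys matching a plugin's advertised capabilities are injected under
		// "runtimeConfig" in that plugin's stdin data.
		CapabilityArgs: map[string]interface{}{
			"portMappings": []map[string]interface{}{
				{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"},
			},
		},
	}

	result, err := cni.AddNetworkList(context.Background(), list, rt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result) // the result is also cached under /var/lib/cni/results by default
}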
diff --git a/vendor/github.com/containernetworking/cni/libcni/conf.go b/vendor/github.com/containernetworking/cni/libcni/conf.go
deleted file mode 100644
index d8920cf8cd5..00000000000
--- a/vendor/github.com/containernetworking/cni/libcni/conf.go
+++ /dev/null
@@ -1,268 +0,0 @@
-// Copyright 2015 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package libcni
-
-import (
- "encoding/json"
- "fmt"
- "io/ioutil"
- "os"
- "path/filepath"
- "sort"
-)
-
-type NotFoundError struct {
- Dir string
- Name string
-}
-
-func (e NotFoundError) Error() string {
- return fmt.Sprintf(`no net configuration with name "%s" in %s`, e.Name, e.Dir)
-}
-
-type NoConfigsFoundError struct {
- Dir string
-}
-
-func (e NoConfigsFoundError) Error() string {
- return fmt.Sprintf(`no net configurations found in %s`, e.Dir)
-}
-
-func ConfFromBytes(bytes []byte) (*NetworkConfig, error) {
- conf := &NetworkConfig{Bytes: bytes}
- if err := json.Unmarshal(bytes, &conf.Network); err != nil {
- return nil, fmt.Errorf("error parsing configuration: %s", err)
- }
- if conf.Network.Type == "" {
- return nil, fmt.Errorf("error parsing configuration: missing 'type'")
- }
- return conf, nil
-}
-
-func ConfFromFile(filename string) (*NetworkConfig, error) {
- bytes, err := ioutil.ReadFile(filename)
- if err != nil {
- return nil, fmt.Errorf("error reading %s: %s", filename, err)
- }
- return ConfFromBytes(bytes)
-}
-
-func ConfListFromBytes(bytes []byte) (*NetworkConfigList, error) {
- rawList := make(map[string]interface{})
- if err := json.Unmarshal(bytes, &rawList); err != nil {
- return nil, fmt.Errorf("error parsing configuration list: %s", err)
- }
-
- rawName, ok := rawList["name"]
- if !ok {
- return nil, fmt.Errorf("error parsing configuration list: no name")
- }
- name, ok := rawName.(string)
- if !ok {
- return nil, fmt.Errorf("error parsing configuration list: invalid name type %T", rawName)
- }
-
- var cniVersion string
- rawVersion, ok := rawList["cniVersion"]
- if ok {
- cniVersion, ok = rawVersion.(string)
- if !ok {
- return nil, fmt.Errorf("error parsing configuration list: invalid cniVersion type %T", rawVersion)
- }
- }
-
- disableCheck := false
- if rawDisableCheck, ok := rawList["disableCheck"]; ok {
- disableCheck, ok = rawDisableCheck.(bool)
- if !ok {
- return nil, fmt.Errorf("error parsing configuration list: invalid disableCheck type %T", rawDisableCheck)
- }
- }
-
- list := &NetworkConfigList{
- Name: name,
- DisableCheck: disableCheck,
- CNIVersion: cniVersion,
- Bytes: bytes,
- }
-
- var plugins []interface{}
- plug, ok := rawList["plugins"]
- if !ok {
- return nil, fmt.Errorf("error parsing configuration list: no 'plugins' key")
- }
- plugins, ok = plug.([]interface{})
- if !ok {
- return nil, fmt.Errorf("error parsing configuration list: invalid 'plugins' type %T", plug)
- }
- if len(plugins) == 0 {
- return nil, fmt.Errorf("error parsing configuration list: no plugins in list")
- }
-
- for i, conf := range plugins {
- newBytes, err := json.Marshal(conf)
- if err != nil {
- return nil, fmt.Errorf("failed to marshal plugin config %d: %v", i, err)
- }
- netConf, err := ConfFromBytes(newBytes)
- if err != nil {
- return nil, fmt.Errorf("failed to parse plugin config %d: %v", i, err)
- }
- list.Plugins = append(list.Plugins, netConf)
- }
-
- return list, nil
-}
-
-func ConfListFromFile(filename string) (*NetworkConfigList, error) {
- bytes, err := ioutil.ReadFile(filename)
- if err != nil {
- return nil, fmt.Errorf("error reading %s: %s", filename, err)
- }
- return ConfListFromBytes(bytes)
-}
-
-func ConfFiles(dir string, extensions []string) ([]string, error) {
- // In part, adapted from rkt/networking/podenv.go#listFiles
- files, err := ioutil.ReadDir(dir)
- switch {
- case err == nil: // break
- case os.IsNotExist(err):
- return nil, nil
- default:
- return nil, err
- }
-
- confFiles := []string{}
- for _, f := range files {
- if f.IsDir() {
- continue
- }
- fileExt := filepath.Ext(f.Name())
- for _, ext := range extensions {
- if fileExt == ext {
- confFiles = append(confFiles, filepath.Join(dir, f.Name()))
- }
- }
- }
- return confFiles, nil
-}
-
-func LoadConf(dir, name string) (*NetworkConfig, error) {
- files, err := ConfFiles(dir, []string{".conf", ".json"})
- switch {
- case err != nil:
- return nil, err
- case len(files) == 0:
- return nil, NoConfigsFoundError{Dir: dir}
- }
- sort.Strings(files)
-
- for _, confFile := range files {
- conf, err := ConfFromFile(confFile)
- if err != nil {
- return nil, err
- }
- if conf.Network.Name == name {
- return conf, nil
- }
- }
- return nil, NotFoundError{dir, name}
-}
-
-func LoadConfList(dir, name string) (*NetworkConfigList, error) {
- files, err := ConfFiles(dir, []string{".conflist"})
- if err != nil {
- return nil, err
- }
- sort.Strings(files)
-
- for _, confFile := range files {
- conf, err := ConfListFromFile(confFile)
- if err != nil {
- return nil, err
- }
- if conf.Name == name {
- return conf, nil
- }
- }
-
- // Try to load a network configuration file (instead of a list)
- // with the same name, then upconvert.
- singleConf, err := LoadConf(dir, name)
- if err != nil {
- // A little extra logic so the error makes sense
- if _, ok := err.(NoConfigsFoundError); len(files) != 0 && ok {
- // Config lists found but no config files found
- return nil, NotFoundError{dir, name}
- }
-
- return nil, err
- }
- return ConfListFromConf(singleConf)
-}
-
-func InjectConf(original *NetworkConfig, newValues map[string]interface{}) (*NetworkConfig, error) {
- config := make(map[string]interface{})
- err := json.Unmarshal(original.Bytes, &config)
- if err != nil {
- return nil, fmt.Errorf("unmarshal existing network bytes: %s", err)
- }
-
- for key, value := range newValues {
- if key == "" {
- return nil, fmt.Errorf("keys cannot be empty")
- }
-
- if value == nil {
- return nil, fmt.Errorf("key '%s' value must not be nil", key)
- }
-
- config[key] = value
- }
-
- newBytes, err := json.Marshal(config)
- if err != nil {
- return nil, err
- }
-
- return ConfFromBytes(newBytes)
-}
-
-// ConfListFromConf "upconverts" a network config into a NetworkConfigList,
-// with the single network as the only entry in the list.
-func ConfListFromConf(original *NetworkConfig) (*NetworkConfigList, error) {
- // Re-deserialize the config's json, then make a raw map configlist.
- // This may seem a bit strange, but it's to make the Bytes fields
- // actually make sense. Otherwise, the generated json is littered with
- // golang default values.
-
- rawConfig := make(map[string]interface{})
- if err := json.Unmarshal(original.Bytes, &rawConfig); err != nil {
- return nil, err
- }
-
- rawConfigList := map[string]interface{}{
- "name": original.Network.Name,
- "cniVersion": original.Network.CNIVersion,
- "plugins": []interface{}{rawConfig},
- }
-
- b, err := json.Marshal(rawConfigList)
- if err != nil {
- return nil, err
- }
- return ConfListFromBytes(b)
-}
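
A minimal sketch of the configuration-list shape that ConfListFromBytes above expects ("name" and a non-empty "plugins" array are required, and each plugin needs a "type"); the bridge/portmap entries and subnet are illustrative assumptions:

package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	raw := []byte(`{
	  "cniVersion": "0.4.0",
	  "name": "examplenet",
	  "plugins": [
	    {"type": "bridge", "bridge": "cni0", "ipam": {"type": "host-local", "subnet": "10.22.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`)

	list, err := libcni.ConfListFromBytes(raw)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(list.Name, list.CNIVersion, len(list.Plugins)) // examplenet 0.4.0 2
}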
diff --git a/vendor/github.com/containernetworking/cni/pkg/invoke/args.go b/vendor/github.com/containernetworking/cni/pkg/invoke/args.go
deleted file mode 100644
index 3cdb4bc8dad..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/invoke/args.go
+++ /dev/null
@@ -1,128 +0,0 @@
-// Copyright 2015 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package invoke
-
-import (
- "fmt"
- "os"
- "strings"
-)
-
-type CNIArgs interface {
- // For use with os/exec; i.e., return nil to inherit the
- // environment from this process
- // For use in delegation; inherit the environment from this
- // process and allow overrides
- AsEnv() []string
-}
-
-type inherited struct{}
-
-var inheritArgsFromEnv inherited
-
-func (*inherited) AsEnv() []string {
- return nil
-}
-
-func ArgsFromEnv() CNIArgs {
- return &inheritArgsFromEnv
-}
-
-type Args struct {
- Command string
- ContainerID string
- NetNS string
- PluginArgs [][2]string
- PluginArgsStr string
- IfName string
- Path string
-}
-
-// Args implements the CNIArgs interface
-var _ CNIArgs = &Args{}
-
-func (args *Args) AsEnv() []string {
- env := os.Environ()
- pluginArgsStr := args.PluginArgsStr
- if pluginArgsStr == "" {
- pluginArgsStr = stringify(args.PluginArgs)
- }
-
- // Duplicate keys resolve in favor of later values, so the custom values
- // must come last to override anything inherited from the process environment.
- env = append(env,
- "CNI_COMMAND="+args.Command,
- "CNI_CONTAINERID="+args.ContainerID,
- "CNI_NETNS="+args.NetNS,
- "CNI_ARGS="+pluginArgsStr,
- "CNI_IFNAME="+args.IfName,
- "CNI_PATH="+args.Path,
- )
- return dedupEnv(env)
-}
-
-// taken from rkt/networking/net_plugin.go
-func stringify(pluginArgs [][2]string) string {
- entries := make([]string, len(pluginArgs))
-
- for i, kv := range pluginArgs {
- entries[i] = strings.Join(kv[:], "=")
- }
-
- return strings.Join(entries, ";")
-}
-
-// DelegateArgs implements the CNIArgs interface
-// used for delegation to inherit from environments
-// and allow some overrides like CNI_COMMAND
-var _ CNIArgs = &DelegateArgs{}
-
-type DelegateArgs struct {
- Command string
-}
-
-func (d *DelegateArgs) AsEnv() []string {
- env := os.Environ()
-
- // The custom values come last so they override any existing process
- // environment variables with the same key.
- env = append(env,
- "CNI_COMMAND="+d.Command,
- )
- return dedupEnv(env)
-}
-
-// dedupEnv returns a copy of env with any duplicates removed, in favor of later values.
-// Items not of the normal environment "key=value" form are preserved unchanged.
-func dedupEnv(env []string) []string {
- out := make([]string, 0, len(env))
- envMap := map[string]string{}
-
- for _, kv := range env {
- // find the first "=" in the entry; if there is none, keep the entry as-is
- eq := strings.Index(kv, "=")
- if eq < 0 {
- out = append(out, kv)
- continue
- }
- envMap[kv[:eq]] = kv[eq+1:]
- }
-
- for k, v := range envMap {
- out = append(out, fmt.Sprintf("%s=%s", k, v))
- }
-
- return out
-}
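
A small sketch of how Args.AsEnv above maps one invocation onto CNI_* environment variables; the container ID, netns path, and K8S_POD_* plugin args are illustrative assumptions:

package main

import (
	"fmt"
	"strings"

	"github.com/containernetworking/cni/pkg/invoke"
)

func main() {
	args := &invoke.Args{
		Command:     "ADD",
		ContainerID: "example-container",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
		Path:        "/opt/cni/bin",
		// Stringified into CNI_ARGS as "K8S_POD_NAMESPACE=default;K8S_POD_NAME=web".
		PluginArgs: [][2]string{{"K8S_POD_NAMESPACE", "default"}, {"K8S_POD_NAME", "web"}},
	}

	// AsEnv appends the CNI_* variables after os.Environ() and de-duplicates,
	// so they win over any variables of the same name already in the process.
	for _, kv := range args.AsEnv() {
		if strings.HasPrefix(kv, "CNI_") {
			fmt.Println(kv)
		}
	}
}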
diff --git a/vendor/github.com/containernetworking/cni/pkg/invoke/delegate.go b/vendor/github.com/containernetworking/cni/pkg/invoke/delegate.go
deleted file mode 100644
index 8defe4dd398..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/invoke/delegate.go
+++ /dev/null
@@ -1,80 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package invoke
-
-import (
- "context"
- "os"
- "path/filepath"
-
- "github.com/containernetworking/cni/pkg/types"
-)
-
-func delegateCommon(delegatePlugin string, exec Exec) (string, Exec, error) {
- if exec == nil {
- exec = defaultExec
- }
-
- paths := filepath.SplitList(os.Getenv("CNI_PATH"))
- pluginPath, err := exec.FindInPath(delegatePlugin, paths)
- if err != nil {
- return "", nil, err
- }
-
- return pluginPath, exec, nil
-}
-
-// DelegateAdd calls the given delegate plugin with the CNI ADD action and
-// JSON configuration
-func DelegateAdd(ctx context.Context, delegatePlugin string, netconf []byte, exec Exec) (types.Result, error) {
- pluginPath, realExec, err := delegateCommon(delegatePlugin, exec)
- if err != nil {
- return nil, err
- }
-
- // DelegateAdd will override the original "CNI_COMMAND" env from process with ADD
- return ExecPluginWithResult(ctx, pluginPath, netconf, delegateArgs("ADD"), realExec)
-}
-
-// DelegateCheck calls the given delegate plugin with the CNI CHECK action and
-// JSON configuration
-func DelegateCheck(ctx context.Context, delegatePlugin string, netconf []byte, exec Exec) error {
- pluginPath, realExec, err := delegateCommon(delegatePlugin, exec)
- if err != nil {
- return err
- }
-
- // DelegateCheck will override the original CNI_COMMAND env from process with CHECK
- return ExecPluginWithoutResult(ctx, pluginPath, netconf, delegateArgs("CHECK"), realExec)
-}
-
-// DelegateDel calls the given delegate plugin with the CNI DEL action and
-// JSON configuration
-func DelegateDel(ctx context.Context, delegatePlugin string, netconf []byte, exec Exec) error {
- pluginPath, realExec, err := delegateCommon(delegatePlugin, exec)
- if err != nil {
- return err
- }
-
- // DelegateDel will override the original CNI_COMMAND env from process with DEL
- return ExecPluginWithoutResult(ctx, pluginPath, netconf, delegateArgs("DEL"), realExec)
-}
-
-// return CNIArgs used by delegation
-func delegateArgs(action string) *DelegateArgs {
- return &DelegateArgs{
- Command: action,
- }
-}
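
A hedged sketch of the delegation helpers above, as a meta-plugin might call them; it assumes the process runs inside a CNI invocation (CNI_PATH and the other CNI_* variables set by the runtime), and the bridge netconf is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containernetworking/cni/pkg/invoke"
)

func main() {
	// The delegate is looked up in the directories from CNI_PATH; the inherited
	// environment is reused with CNI_COMMAND forced to ADD.
	netconf := []byte(`{"cniVersion":"0.4.0","name":"examplenet","type":"bridge"}`)

	result, err := invoke.DelegateAdd(context.Background(), "bridge", netconf, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}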
diff --git a/vendor/github.com/containernetworking/cni/pkg/invoke/exec.go b/vendor/github.com/containernetworking/cni/pkg/invoke/exec.go
deleted file mode 100644
index 8e6d30b8290..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/invoke/exec.go
+++ /dev/null
@@ -1,144 +0,0 @@
-// Copyright 2015 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package invoke
-
-import (
- "context"
- "fmt"
- "os"
-
- "github.com/containernetworking/cni/pkg/types"
- "github.com/containernetworking/cni/pkg/version"
-)
-
-// Exec is an interface that encapsulates all operations that deal with finding
-// and executing a CNI plugin. Tests may provide a fake implementation
-// to avoid writing fake plugins to temporary directories during the test.
-type Exec interface {
- ExecPlugin(ctx context.Context, pluginPath string, stdinData []byte, environ []string) ([]byte, error)
- FindInPath(plugin string, paths []string) (string, error)
- Decode(jsonBytes []byte) (version.PluginInfo, error)
-}
-
-// For example, a testcase could pass an instance of the following fakeExec
-// object to ExecPluginWithResult() to verify the incoming stdin and environment
-// and provide a tailored response:
-//
-//import (
-// "encoding/json"
-// "path"
-// "strings"
-//)
-//
-//type fakeExec struct {
-// version.PluginDecoder
-//}
-//
-//func (f *fakeExec) ExecPlugin(pluginPath string, stdinData []byte, environ []string) ([]byte, error) {
-// net := &types.NetConf{}
-// err := json.Unmarshal(stdinData, net)
-// if err != nil {
-// return nil, fmt.Errorf("failed to unmarshal configuration: %v", err)
-// }
-// pluginName := path.Base(pluginPath)
-// if pluginName != net.Type {
-// return nil, fmt.Errorf("plugin name %q did not match config type %q", pluginName, net.Type)
-// }
-// for _, e := range environ {
-// // Check environment for forced failure request
-// parts := strings.Split(e, "=")
-// if len(parts) > 0 && parts[0] == "FAIL" {
-// return nil, fmt.Errorf("failed to execute plugin %s", pluginName)
-// }
-// }
-// return []byte("{\"CNIVersion\":\"0.4.0\"}"), nil
-//}
-//
-//func (f *fakeExec) FindInPath(plugin string, paths []string) (string, error) {
-// if len(paths) > 0 {
-// return path.Join(paths[0], plugin), nil
-// }
-// return "", fmt.Errorf("failed to find plugin %s in paths %v", plugin, paths)
-//}
-
-func ExecPluginWithResult(ctx context.Context, pluginPath string, netconf []byte, args CNIArgs, exec Exec) (types.Result, error) {
- if exec == nil {
- exec = defaultExec
- }
-
- stdoutBytes, err := exec.ExecPlugin(ctx, pluginPath, netconf, args.AsEnv())
- if err != nil {
- return nil, err
- }
-
- // Plugin must return result in same version as specified in netconf
- versionDecoder := &version.ConfigDecoder{}
- confVersion, err := versionDecoder.Decode(netconf)
- if err != nil {
- return nil, err
- }
-
- return version.NewResult(confVersion, stdoutBytes)
-}
-
-func ExecPluginWithoutResult(ctx context.Context, pluginPath string, netconf []byte, args CNIArgs, exec Exec) error {
- if exec == nil {
- exec = defaultExec
- }
- _, err := exec.ExecPlugin(ctx, pluginPath, netconf, args.AsEnv())
- return err
-}
-
-// GetVersionInfo returns the version information available about the plugin.
-// For recent-enough plugins, it uses the information returned by the VERSION
-// command. For older plugins which do not recognize that command, it reports
-// version 0.1.0
-func GetVersionInfo(ctx context.Context, pluginPath string, exec Exec) (version.PluginInfo, error) {
- if exec == nil {
- exec = defaultExec
- }
- args := &Args{
- Command: "VERSION",
-
- // set fake values required by plugins built against an older version of skel
- NetNS: "dummy",
- IfName: "dummy",
- Path: "dummy",
- }
- stdin := []byte(fmt.Sprintf(`{"cniVersion":%q}`, version.Current()))
- stdoutBytes, err := exec.ExecPlugin(ctx, pluginPath, stdin, args.AsEnv())
- if err != nil {
- if err.Error() == "unknown CNI_COMMAND: VERSION" {
- return version.PluginSupports("0.1.0"), nil
- }
- return nil, err
- }
-
- return exec.Decode(stdoutBytes)
-}
-
-// DefaultExec is an object that implements the Exec interface which looks
-// for and executes plugins from disk.
-type DefaultExec struct {
- *RawExec
- version.PluginDecoder
-}
-
-// DefaultExec implements the Exec interface
-var _ Exec = &DefaultExec{}
-
-var defaultExec = &DefaultExec{
- RawExec: &RawExec{Stderr: os.Stderr},
-}
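
A brief sketch of querying a plugin's supported spec versions via GetVersionInfo above; the plugin path is an illustrative assumption, and a nil Exec selects the package default:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containernetworking/cni/pkg/invoke"
)

func main() {
	// Plugins too old to understand VERSION are reported as supporting 0.1.0 only.
	info, err := invoke.GetVersionInfo(context.Background(), "/opt/cni/bin/bridge", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(info.SupportedVersions())
}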
diff --git a/vendor/github.com/containernetworking/cni/pkg/invoke/find.go b/vendor/github.com/containernetworking/cni/pkg/invoke/find.go
deleted file mode 100644
index e62029eb788..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/invoke/find.go
+++ /dev/null
@@ -1,48 +0,0 @@
-// Copyright 2015 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package invoke
-
-import (
- "fmt"
- "os"
- "path/filepath"
- "strings"
-)
-
-// FindInPath returns the full path of the plugin by searching in the provided paths
-func FindInPath(plugin string, paths []string) (string, error) {
- if plugin == "" {
- return "", fmt.Errorf("no plugin name provided")
- }
-
- if strings.ContainsRune(plugin, os.PathSeparator) {
- return "", fmt.Errorf("invalid plugin name: %s", plugin)
- }
-
- if len(paths) == 0 {
- return "", fmt.Errorf("no paths provided")
- }
-
- for _, path := range paths {
- for _, fe := range ExecutableFileExtensions {
- fullpath := filepath.Join(path, plugin) + fe
- if fi, err := os.Stat(fullpath); err == nil && fi.Mode().IsRegular() {
- return fullpath, nil
- }
- }
- }
-
- return "", fmt.Errorf("failed to find plugin %q in path %s", plugin, paths)
-}
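
A short sketch of FindInPath above; the directory list is an illustrative assumption:

package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/pkg/invoke"
)

func main() {
	// Returns the first regular file named "bridge" (plus any platform-specific
	// executable extension) found in the listed directories.
	path, err := invoke.FindInPath("bridge", []string{"/opt/cni/bin", "/usr/lib/cni"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(path)
}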
diff --git a/vendor/github.com/containernetworking/cni/pkg/invoke/os_unix.go b/vendor/github.com/containernetworking/cni/pkg/invoke/os_unix.go
deleted file mode 100644
index 9bcfb455367..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/invoke/os_unix.go
+++ /dev/null
@@ -1,20 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// +build darwin dragonfly freebsd linux netbsd openbsd solaris
-
-package invoke
-
-// Valid file extensions for plugin executables.
-var ExecutableFileExtensions = []string{""}
diff --git a/vendor/github.com/containernetworking/cni/pkg/invoke/os_windows.go b/vendor/github.com/containernetworking/cni/pkg/invoke/os_windows.go
deleted file mode 100644
index 7665125b133..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/invoke/os_windows.go
+++ /dev/null
@@ -1,18 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package invoke
-
-// Valid file extensions for plugin executables.
-var ExecutableFileExtensions = []string{".exe", ""}
diff --git a/vendor/github.com/containernetworking/cni/pkg/invoke/raw_exec.go b/vendor/github.com/containernetworking/cni/pkg/invoke/raw_exec.go
deleted file mode 100644
index 5ab5cc88576..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/invoke/raw_exec.go
+++ /dev/null
@@ -1,88 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package invoke
-
-import (
- "bytes"
- "context"
- "encoding/json"
- "fmt"
- "io"
- "os/exec"
- "strings"
- "time"
-
- "github.com/containernetworking/cni/pkg/types"
-)
-
-type RawExec struct {
- Stderr io.Writer
-}
-
-func (e *RawExec) ExecPlugin(ctx context.Context, pluginPath string, stdinData []byte, environ []string) ([]byte, error) {
-	stdout := &bytes.Buffer{}
-	stderr := &bytes.Buffer{}
-
-	// Retry the command on "text file busy" errors
-	for i := 0; i <= 5; i++ {
-		// An exec.Cmd can only be started once, so build a fresh command
-		// (and reset the output buffers) for every attempt.
-		stdout.Reset()
-		stderr.Reset()
-		c := exec.CommandContext(ctx, pluginPath)
-		c.Env = environ
-		c.Stdin = bytes.NewBuffer(stdinData)
-		c.Stdout = stdout
-		c.Stderr = stderr
-
-		err := c.Run()
-
-		// Command succeeded
-		if err == nil {
-			break
-		}
-
-		// If the plugin binary is still being written to disk, wait a
-		// second and try again
-		if strings.Contains(err.Error(), "text file busy") {
-			time.Sleep(time.Second)
-			continue
-		}
-
-		// Return any error other than the transient "text file busy" failure
-		return nil, e.pluginErr(err, stdout.Bytes(), stderr.Bytes())
-	}
-
- // Copy stderr to caller's buffer in case plugin printed to both
- // stdout and stderr for some reason. Ignore failures as stderr is
- // only informational.
- if e.Stderr != nil && stderr.Len() > 0 {
- _, _ = stderr.WriteTo(e.Stderr)
- }
- return stdout.Bytes(), nil
-}
-
-func (e *RawExec) pluginErr(err error, stdout, stderr []byte) error {
- emsg := types.Error{}
- if len(stdout) == 0 {
- if len(stderr) == 0 {
- emsg.Msg = fmt.Sprintf("netplugin failed with no error message: %v", err)
- } else {
- emsg.Msg = fmt.Sprintf("netplugin failed: %q", string(stderr))
- }
- } else if perr := json.Unmarshal(stdout, &emsg); perr != nil {
- emsg.Msg = fmt.Sprintf("netplugin failed but error parsing its diagnostic message %q: %v", string(stdout), perr)
- }
- return &emsg
-}
-
-func (e *RawExec) FindInPath(plugin string, paths []string) (string, error) {
- return FindInPath(plugin, paths)
-}
diff --git a/vendor/github.com/containernetworking/cni/pkg/types/020/types.go b/vendor/github.com/containernetworking/cni/pkg/types/020/types.go
deleted file mode 100644
index 36f31678a8e..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/types/020/types.go
+++ /dev/null
@@ -1,126 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package types020
-
-import (
- "encoding/json"
- "fmt"
- "io"
- "net"
- "os"
-
- "github.com/containernetworking/cni/pkg/types"
-)
-
-const ImplementedSpecVersion string = "0.2.0"
-
-var SupportedVersions = []string{"", "0.1.0", ImplementedSpecVersion}
-
-// Compatibility types for CNI version 0.1.0 and 0.2.0
-
-func NewResult(data []byte) (types.Result, error) {
- result := &Result{}
- if err := json.Unmarshal(data, result); err != nil {
- return nil, err
- }
- return result, nil
-}
-
-func GetResult(r types.Result) (*Result, error) {
- // We expect version 0.1.0/0.2.0 results
- result020, err := r.GetAsVersion(ImplementedSpecVersion)
- if err != nil {
- return nil, err
- }
- result, ok := result020.(*Result)
- if !ok {
- return nil, fmt.Errorf("failed to convert result")
- }
- return result, nil
-}
-
-// Result is what gets returned from the plugin (via stdout) to the caller
-type Result struct {
- CNIVersion string `json:"cniVersion,omitempty"`
- IP4 *IPConfig `json:"ip4,omitempty"`
- IP6 *IPConfig `json:"ip6,omitempty"`
- DNS types.DNS `json:"dns,omitempty"`
-}
-
-func (r *Result) Version() string {
- return ImplementedSpecVersion
-}
-
-func (r *Result) GetAsVersion(version string) (types.Result, error) {
- for _, supportedVersion := range SupportedVersions {
- if version == supportedVersion {
- r.CNIVersion = version
- return r, nil
- }
- }
-	return nil, fmt.Errorf("cannot convert result to version %q; supported versions are %q", version, SupportedVersions)
-}
-
-func (r *Result) Print() error {
- return r.PrintTo(os.Stdout)
-}
-
-func (r *Result) PrintTo(writer io.Writer) error {
- data, err := json.MarshalIndent(r, "", " ")
- if err != nil {
- return err
- }
- _, err = writer.Write(data)
- return err
-}
-
-// IPConfig contains values necessary to configure an interface
-type IPConfig struct {
- IP net.IPNet
- Gateway net.IP
- Routes []types.Route
-}
-
-// net.IPNet is not JSON (un)marshallable so this duality is needed
-// for our custom IPNet type
-
-// JSON (un)marshallable types
-type ipConfig struct {
- IP types.IPNet `json:"ip"`
- Gateway net.IP `json:"gateway,omitempty"`
- Routes []types.Route `json:"routes,omitempty"`
-}
-
-func (c *IPConfig) MarshalJSON() ([]byte, error) {
- ipc := ipConfig{
- IP: types.IPNet(c.IP),
- Gateway: c.Gateway,
- Routes: c.Routes,
- }
-
- return json.Marshal(ipc)
-}
-
-func (c *IPConfig) UnmarshalJSON(data []byte) error {
- ipc := ipConfig{}
- if err := json.Unmarshal(data, &ipc); err != nil {
- return err
- }
-
- c.IP = net.IPNet(ipc.IP)
- c.Gateway = ipc.Gateway
- c.Routes = ipc.Routes
- return nil
-}
diff --git a/vendor/github.com/containernetworking/cni/pkg/types/args.go b/vendor/github.com/containernetworking/cni/pkg/types/args.go
deleted file mode 100644
index 4eac6489947..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/types/args.go
+++ /dev/null
@@ -1,112 +0,0 @@
-// Copyright 2015 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package types
-
-import (
- "encoding"
- "fmt"
- "reflect"
- "strings"
-)
-
-// UnmarshallableBool is a defined bool type; it exists because methods
-// cannot be declared on the builtin bool type
-type UnmarshallableBool bool
-
-// UnmarshalText implements the encoding.TextUnmarshaler interface.
-// Returns boolean true if the string is "1" or "[Tt]rue"
-// Returns boolean false if the string is "0" or "[Ff]alse"
-func (b *UnmarshallableBool) UnmarshalText(data []byte) error {
- s := strings.ToLower(string(data))
- switch s {
- case "1", "true":
- *b = true
- case "0", "false":
- *b = false
- default:
- return fmt.Errorf("boolean unmarshal error: invalid input %s", s)
- }
- return nil
-}
-
-// UnmarshallableString is a defined string type used for text unmarshalling
-type UnmarshallableString string
-
-// UnmarshalText implements the encoding.TextUnmarshaler interface.
-// Returns the string
-func (s *UnmarshallableString) UnmarshalText(data []byte) error {
- *s = UnmarshallableString(data)
- return nil
-}
-
-// CommonArgs contains the IgnoreUnknown argument
-// and must be embedded by all Arg structs
-type CommonArgs struct {
- IgnoreUnknown UnmarshallableBool `json:"ignoreunknown,omitempty"`
-}
-
-// GetKeyField is a helper function that returns the named field from a
-// Value that represents a pointer to a struct
-func GetKeyField(keyString string, v reflect.Value) reflect.Value {
- return v.Elem().FieldByName(keyString)
-}
-
-// UnmarshalableArgsError is used to indicate error unmarshalling args
-// from the args-string in the form "K=V;K2=V2;..."
-type UnmarshalableArgsError struct {
- error
-}
-
-// LoadArgs parses args from a string in the form "K=V;K2=V2;..."
-func LoadArgs(args string, container interface{}) error {
- if args == "" {
- return nil
- }
-
- containerValue := reflect.ValueOf(container)
-
- pairs := strings.Split(args, ";")
- unknownArgs := []string{}
- for _, pair := range pairs {
- kv := strings.Split(pair, "=")
- if len(kv) != 2 {
- return fmt.Errorf("ARGS: invalid pair %q", pair)
- }
- keyString := kv[0]
- valueString := kv[1]
- keyField := GetKeyField(keyString, containerValue)
- if !keyField.IsValid() {
- unknownArgs = append(unknownArgs, pair)
- continue
- }
- keyFieldIface := keyField.Addr().Interface()
- u, ok := keyFieldIface.(encoding.TextUnmarshaler)
- if !ok {
- return UnmarshalableArgsError{fmt.Errorf(
- "ARGS: cannot unmarshal into field '%s' - type '%s' does not implement encoding.TextUnmarshaler",
- keyString, reflect.TypeOf(keyFieldIface))}
- }
- err := u.UnmarshalText([]byte(valueString))
- if err != nil {
-			return fmt.Errorf("ARGS: error parsing value of pair %q: %v", pair, err)
- }
- }
-
- isIgnoreUnknown := GetKeyField("IgnoreUnknown", containerValue).Bool()
- if len(unknownArgs) > 0 && !isIgnoreUnknown {
- return fmt.Errorf("ARGS: unknown args %q", unknownArgs)
- }
- return nil
-}
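The removed LoadArgs helper parses the CNI_ARGS-style "K=V;K2=V2" string into an arbitrary struct via reflection; keys must match exported field names, and unknown keys are rejected unless IgnoreUnknown is set. A minimal sketch under those assumptions; the EnvArgs struct and the argument string are illustrative, not part of this change.

package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/types"
)

// EnvArgs is an illustrative args struct; LoadArgs matches K=V keys
// against its exported field names.
type EnvArgs struct {
	types.CommonArgs
	ExtraInterface types.UnmarshallableString
	Debug          types.UnmarshallableBool
}

func main() {
	args := &EnvArgs{}
	// The unknown key Foo is tolerated only because IgnoreUnknown=true is present.
	if err := types.LoadArgs("IgnoreUnknown=true;ExtraInterface=eth1;Debug=1;Foo=bar", args); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println(args.ExtraInterface, bool(args.Debug))
}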
diff --git a/vendor/github.com/containernetworking/cni/pkg/types/current/types.go b/vendor/github.com/containernetworking/cni/pkg/types/current/types.go
deleted file mode 100644
index 754cc6e722e..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/types/current/types.go
+++ /dev/null
@@ -1,276 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package current
-
-import (
- "encoding/json"
- "fmt"
- "io"
- "net"
- "os"
-
- "github.com/containernetworking/cni/pkg/types"
- "github.com/containernetworking/cni/pkg/types/020"
-)
-
-const ImplementedSpecVersion string = "0.4.0"
-
-var SupportedVersions = []string{"0.3.0", "0.3.1", ImplementedSpecVersion}
-
-func NewResult(data []byte) (types.Result, error) {
- result := &Result{}
- if err := json.Unmarshal(data, result); err != nil {
- return nil, err
- }
- return result, nil
-}
-
-func GetResult(r types.Result) (*Result, error) {
- resultCurrent, err := r.GetAsVersion(ImplementedSpecVersion)
- if err != nil {
- return nil, err
- }
- result, ok := resultCurrent.(*Result)
- if !ok {
- return nil, fmt.Errorf("failed to convert result")
- }
- return result, nil
-}
-
-var resultConverters = []struct {
- versions []string
- convert func(types.Result) (*Result, error)
-}{
- {types020.SupportedVersions, convertFrom020},
- {SupportedVersions, convertFrom030},
-}
-
-func convertFrom020(result types.Result) (*Result, error) {
- oldResult, err := types020.GetResult(result)
- if err != nil {
- return nil, err
- }
-
- newResult := &Result{
- CNIVersion: ImplementedSpecVersion,
- DNS: oldResult.DNS,
- Routes: []*types.Route{},
- }
-
- if oldResult.IP4 != nil {
- newResult.IPs = append(newResult.IPs, &IPConfig{
- Version: "4",
- Address: oldResult.IP4.IP,
- Gateway: oldResult.IP4.Gateway,
- })
- for _, route := range oldResult.IP4.Routes {
- newResult.Routes = append(newResult.Routes, &types.Route{
- Dst: route.Dst,
- GW: route.GW,
- })
- }
- }
-
- if oldResult.IP6 != nil {
- newResult.IPs = append(newResult.IPs, &IPConfig{
- Version: "6",
- Address: oldResult.IP6.IP,
- Gateway: oldResult.IP6.Gateway,
- })
- for _, route := range oldResult.IP6.Routes {
- newResult.Routes = append(newResult.Routes, &types.Route{
- Dst: route.Dst,
- GW: route.GW,
- })
- }
- }
-
- return newResult, nil
-}
-
-func convertFrom030(result types.Result) (*Result, error) {
- newResult, ok := result.(*Result)
- if !ok {
- return nil, fmt.Errorf("failed to convert result")
- }
- newResult.CNIVersion = ImplementedSpecVersion
- return newResult, nil
-}
-
-func NewResultFromResult(result types.Result) (*Result, error) {
- version := result.Version()
- for _, converter := range resultConverters {
- for _, supportedVersion := range converter.versions {
- if version == supportedVersion {
- return converter.convert(result)
- }
- }
- }
-	return nil, fmt.Errorf("unsupported CNI result version %q", version)
-}
-
-// Result is what gets returned from the plugin (via stdout) to the caller
-type Result struct {
- CNIVersion string `json:"cniVersion,omitempty"`
- Interfaces []*Interface `json:"interfaces,omitempty"`
- IPs []*IPConfig `json:"ips,omitempty"`
- Routes []*types.Route `json:"routes,omitempty"`
- DNS types.DNS `json:"dns,omitempty"`
-}
-
-// Convert to the older 0.2.0 CNI spec Result type
-func (r *Result) convertTo020() (*types020.Result, error) {
- oldResult := &types020.Result{
- CNIVersion: types020.ImplementedSpecVersion,
- DNS: r.DNS,
- }
-
- for _, ip := range r.IPs {
- // Only convert the first IP address of each version as 0.2.0
- // and earlier cannot handle multiple IP addresses
- if ip.Version == "4" && oldResult.IP4 == nil {
- oldResult.IP4 = &types020.IPConfig{
- IP: ip.Address,
- Gateway: ip.Gateway,
- }
- } else if ip.Version == "6" && oldResult.IP6 == nil {
- oldResult.IP6 = &types020.IPConfig{
- IP: ip.Address,
- Gateway: ip.Gateway,
- }
- }
-
- if oldResult.IP4 != nil && oldResult.IP6 != nil {
- break
- }
- }
-
- for _, route := range r.Routes {
- is4 := route.Dst.IP.To4() != nil
- if is4 && oldResult.IP4 != nil {
- oldResult.IP4.Routes = append(oldResult.IP4.Routes, types.Route{
- Dst: route.Dst,
- GW: route.GW,
- })
- } else if !is4 && oldResult.IP6 != nil {
- oldResult.IP6.Routes = append(oldResult.IP6.Routes, types.Route{
- Dst: route.Dst,
- GW: route.GW,
- })
- }
- }
-
- if oldResult.IP4 == nil && oldResult.IP6 == nil {
- return nil, fmt.Errorf("cannot convert: no valid IP addresses")
- }
-
- return oldResult, nil
-}
-
-func (r *Result) Version() string {
- return ImplementedSpecVersion
-}
-
-func (r *Result) GetAsVersion(version string) (types.Result, error) {
- switch version {
- case "0.3.0", "0.3.1", ImplementedSpecVersion:
- r.CNIVersion = version
- return r, nil
- case types020.SupportedVersions[0], types020.SupportedVersions[1], types020.SupportedVersions[2]:
- return r.convertTo020()
- }
- return nil, fmt.Errorf("cannot convert version 0.3.x to %q", version)
-}
-
-func (r *Result) Print() error {
- return r.PrintTo(os.Stdout)
-}
-
-func (r *Result) PrintTo(writer io.Writer) error {
- data, err := json.MarshalIndent(r, "", " ")
- if err != nil {
- return err
- }
- _, err = writer.Write(data)
- return err
-}
-
-// Convert returns the result unchanged, since it is already in the current CNI version format
-func (r *Result) Convert() (*Result, error) {
- return r, nil
-}
-
-// Interface contains values about the created interfaces
-type Interface struct {
- Name string `json:"name"`
- Mac string `json:"mac,omitempty"`
- Sandbox string `json:"sandbox,omitempty"`
-}
-
-func (i *Interface) String() string {
- return fmt.Sprintf("%+v", *i)
-}
-
-// Int returns a pointer to the int value passed in. Used to
-// set the IPConfig.Interface field.
-func Int(v int) *int {
- return &v
-}
-
-// IPConfig contains values necessary to configure an IP address on an interface
-type IPConfig struct {
- // IP version, either "4" or "6"
- Version string
-	// Index into the Result struct's Interfaces list
- Interface *int
- Address net.IPNet
- Gateway net.IP
-}
-
-func (i *IPConfig) String() string {
- return fmt.Sprintf("%+v", *i)
-}
-
-// JSON (un)marshallable types
-type ipConfig struct {
- Version string `json:"version"`
- Interface *int `json:"interface,omitempty"`
- Address types.IPNet `json:"address"`
- Gateway net.IP `json:"gateway,omitempty"`
-}
-
-func (c *IPConfig) MarshalJSON() ([]byte, error) {
- ipc := ipConfig{
- Version: c.Version,
- Interface: c.Interface,
- Address: types.IPNet(c.Address),
- Gateway: c.Gateway,
- }
-
- return json.Marshal(ipc)
-}
-
-func (c *IPConfig) UnmarshalJSON(data []byte) error {
- ipc := ipConfig{}
- if err := json.Unmarshal(data, &ipc); err != nil {
- return err
- }
-
- c.Version = ipc.Version
- c.Interface = ipc.Interface
- c.Address = net.IPNet(ipc.Address)
- c.Gateway = ipc.Gateway
- return nil
-}
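The removed "current" result type can down-convert itself to the legacy 0.2.0 shape through GetAsVersion, keeping only the first IPv4/IPv6 address of the richer result. A minimal sketch, assuming the upstream module; the addresses are illustrative.

package main

import (
	"fmt"
	"net"

	"github.com/containernetworking/cni/pkg/types"
	"github.com/containernetworking/cni/pkg/types/current"
)

func main() {
	addr, err := types.ParseCIDR("10.1.2.3/24")
	if err != nil {
		panic(err)
	}

	res := &current.Result{
		CNIVersion: "0.4.0",
		IPs: []*current.IPConfig{{
			Version: "4",
			Address: *addr,
			Gateway: net.ParseIP("10.1.2.1"),
		}},
		DNS: types.DNS{Nameservers: []string{"10.1.0.10"}},
	}

	// Legacy spec versions are handed to the 0.2.0 converter internally.
	legacy, err := res.GetAsVersion("0.2.0")
	if err != nil {
		panic(err)
	}
	_ = legacy.Print() // prints the 0.2.0-shaped JSON to stdout
}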
diff --git a/vendor/github.com/containernetworking/cni/pkg/types/types.go b/vendor/github.com/containernetworking/cni/pkg/types/types.go
deleted file mode 100644
index 3fa757a5d22..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/types/types.go
+++ /dev/null
@@ -1,207 +0,0 @@
-// Copyright 2015 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package types
-
-import (
- "encoding/json"
- "fmt"
- "io"
- "net"
- "os"
-)
-
-// IPNet is like net.IPNet but adds JSON marshalling and unmarshalling
-type IPNet net.IPNet
-
-// ParseCIDR takes a string like "10.2.3.1/24" and
-// returns a *net.IPNet with IP "10.2.3.1" and a /24 mask
-func ParseCIDR(s string) (*net.IPNet, error) {
- ip, ipn, err := net.ParseCIDR(s)
- if err != nil {
- return nil, err
- }
-
- ipn.IP = ip
- return ipn, nil
-}
-
-func (n IPNet) MarshalJSON() ([]byte, error) {
- return json.Marshal((*net.IPNet)(&n).String())
-}
-
-func (n *IPNet) UnmarshalJSON(data []byte) error {
- var s string
- if err := json.Unmarshal(data, &s); err != nil {
- return err
- }
-
- tmp, err := ParseCIDR(s)
- if err != nil {
- return err
- }
-
- *n = IPNet(*tmp)
- return nil
-}
-
-// NetConf describes a network.
-type NetConf struct {
- CNIVersion string `json:"cniVersion,omitempty"`
-
- Name string `json:"name,omitempty"`
- Type string `json:"type,omitempty"`
- Capabilities map[string]bool `json:"capabilities,omitempty"`
- IPAM IPAM `json:"ipam,omitempty"`
- DNS DNS `json:"dns"`
-
- RawPrevResult map[string]interface{} `json:"prevResult,omitempty"`
- PrevResult Result `json:"-"`
-}
-
-type IPAM struct {
- Type string `json:"type,omitempty"`
-}
-
-// NetConfList describes an ordered list of networks.
-type NetConfList struct {
- CNIVersion string `json:"cniVersion,omitempty"`
-
- Name string `json:"name,omitempty"`
- DisableCheck bool `json:"disableCheck,omitempty"`
- Plugins []*NetConf `json:"plugins,omitempty"`
-}
-
-type ResultFactoryFunc func([]byte) (Result, error)
-
-// Result is an interface that provides the result of plugin execution
-type Result interface {
- // The highest CNI specification result version the result supports
- // without having to convert
- Version() string
-
- // Returns the result converted into the requested CNI specification
- // result version, or an error if conversion failed
- GetAsVersion(version string) (Result, error)
-
- // Prints the result in JSON format to stdout
- Print() error
-
- // Prints the result in JSON format to provided writer
- PrintTo(writer io.Writer) error
-}
-
-func PrintResult(result Result, version string) error {
- newResult, err := result.GetAsVersion(version)
- if err != nil {
- return err
- }
- return newResult.Print()
-}
-
-// DNS contains values interesting for DNS resolvers
-type DNS struct {
- Nameservers []string `json:"nameservers,omitempty"`
- Domain string `json:"domain,omitempty"`
- Search []string `json:"search,omitempty"`
- Options []string `json:"options,omitempty"`
-}
-
-type Route struct {
- Dst net.IPNet
- GW net.IP
-}
-
-func (r *Route) String() string {
- return fmt.Sprintf("%+v", *r)
-}
-
-// Well known error codes
-// see https://github.com/containernetworking/cni/blob/master/SPEC.md#well-known-error-codes
-const (
- ErrUnknown uint = iota // 0
- ErrIncompatibleCNIVersion // 1
- ErrUnsupportedField // 2
- ErrUnknownContainer // 3
- ErrInvalidEnvironmentVariables // 4
- ErrIOFailure // 5
- ErrDecodingFailure // 6
- ErrInvalidNetworkConfig // 7
- ErrTryAgainLater uint = 11
- ErrInternal uint = 999
-)
-
-type Error struct {
- Code uint `json:"code"`
- Msg string `json:"msg"`
- Details string `json:"details,omitempty"`
-}
-
-func NewError(code uint, msg, details string) *Error {
- return &Error{
- Code: code,
- Msg: msg,
- Details: details,
- }
-}
-
-func (e *Error) Error() string {
- details := ""
- if e.Details != "" {
- details = fmt.Sprintf("; %v", e.Details)
- }
- return fmt.Sprintf("%v%v", e.Msg, details)
-}
-
-func (e *Error) Print() error {
- return prettyPrint(e)
-}
-
-// net.IPNet is not JSON (un)marshallable so this duality is needed
-// for our custom IPNet type
-
-// JSON (un)marshallable types
-type route struct {
- Dst IPNet `json:"dst"`
- GW net.IP `json:"gw,omitempty"`
-}
-
-func (r *Route) UnmarshalJSON(data []byte) error {
- rt := route{}
- if err := json.Unmarshal(data, &rt); err != nil {
- return err
- }
-
- r.Dst = net.IPNet(rt.Dst)
- r.GW = rt.GW
- return nil
-}
-
-func (r Route) MarshalJSON() ([]byte, error) {
- rt := route{
- Dst: IPNet(r.Dst),
- GW: r.GW,
- }
-
- return json.Marshal(rt)
-}
-
-func prettyPrint(obj interface{}) error {
- data, err := json.MarshalIndent(obj, "", " ")
- if err != nil {
- return err
- }
- _, err = os.Stdout.Write(data)
- return err
-}
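The removed types package wraps net.IPNet so routes serialize in CIDR notation, and exposes NewError for spec-style error codes. A minimal sketch of both, assuming the upstream module; the values are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/containernetworking/cni/pkg/types"
)

func main() {
	dst, err := types.ParseCIDR("0.0.0.0/0")
	if err != nil {
		panic(err)
	}

	// Route's custom MarshalJSON serializes the embedded net.IPNet in CIDR form.
	data, err := json.Marshal(types.Route{Dst: *dst})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // {"dst":"0.0.0.0/0"}

	// NewError builds a spec-style error carrying one of the well-known codes.
	e := types.NewError(types.ErrInvalidNetworkConfig, "missing network name", "")
	fmt.Println(e.Error())
}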
diff --git a/vendor/github.com/containernetworking/cni/pkg/utils/utils.go b/vendor/github.com/containernetworking/cni/pkg/utils/utils.go
deleted file mode 100644
index b8ec3887459..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/utils/utils.go
+++ /dev/null
@@ -1,84 +0,0 @@
-// Copyright 2019 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package utils
-
-import (
- "bytes"
- "fmt"
- "regexp"
- "unicode"
-
- "github.com/containernetworking/cni/pkg/types"
-)
-
-const (
-	// cniValidNameChars is the regexp fragment used to validate the characters
-	// allowed in containerID and networkName
- cniValidNameChars = `[a-zA-Z0-9][a-zA-Z0-9_.\-]`
-
-	// maxInterfaceNameLength is the maximum length of a valid interface name
- maxInterfaceNameLength = 15
-)
-
-var cniReg = regexp.MustCompile(`^` + cniValidNameChars + `*$`)
-
-// ValidateContainerID will validate that the supplied containerID is not empty and does not contain invalid characters
-func ValidateContainerID(containerID string) *types.Error {
-
- if containerID == "" {
- return types.NewError(types.ErrUnknownContainer, "missing containerID", "")
- }
- if !cniReg.MatchString(containerID) {
- return types.NewError(types.ErrInvalidEnvironmentVariables, "invalid characters in containerID", containerID)
- }
- return nil
-}
-
-// ValidateNetworkName will validate that the supplied networkName does not contain invalid characters
-func ValidateNetworkName(networkName string) *types.Error {
-
- if networkName == "" {
-		return types.NewError(types.ErrInvalidNetworkConfig, "missing network name", "")
- }
- if !cniReg.MatchString(networkName) {
- return types.NewError(types.ErrInvalidNetworkConfig, "invalid characters found in network name", networkName)
- }
- return nil
-}
-
-// ValidateInterfaceName will validate the interface name based on the four rules below
-// 1. The name must not be empty
-// 2. The name must be less than 16 characters
-// 3. The name must not be "." or ".."
-// 4. The name must not contain / or : or any whitespace characters
-// ref to https://github.com/torvalds/linux/blob/master/net/core/dev.c#L1024
-func ValidateInterfaceName(ifName string) *types.Error {
- if len(ifName) == 0 {
- return types.NewError(types.ErrInvalidEnvironmentVariables, "interface name is empty", "")
- }
- if len(ifName) > maxInterfaceNameLength {
- return types.NewError(types.ErrInvalidEnvironmentVariables, "interface name is too long", fmt.Sprintf("interface name should be less than %d characters", maxInterfaceNameLength+1))
- }
- if ifName == "." || ifName == ".." {
- return types.NewError(types.ErrInvalidEnvironmentVariables, "interface name is . or ..", "")
- }
- for _, r := range bytes.Runes([]byte(ifName)) {
- if r == '/' || r == ':' || unicode.IsSpace(r) {
- return types.NewError(types.ErrInvalidEnvironmentVariables, "interface name contains / or : or whitespace characters", "")
- }
- }
-
- return nil
-}
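The removed utils validators are typically run on the container ID, network name, and interface name before a plugin is invoked. A minimal sketch, assuming the upstream module; the names being checked are illustrative.

package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/utils"
)

func main() {
	if e := utils.ValidateContainerID("a1b2c3d4"); e != nil {
		fmt.Println("bad containerID:", e.Error())
		return
	}
	if e := utils.ValidateNetworkName("pod-network"); e != nil {
		fmt.Println("bad network name:", e.Error())
		return
	}
	// "eth0" passes; "eth0:1", ".", or a 16+ character name would be rejected.
	if e := utils.ValidateInterfaceName("eth0"); e != nil {
		fmt.Println("bad interface name:", e.Error())
		return
	}
	fmt.Println("all names valid")
}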
diff --git a/vendor/github.com/containernetworking/cni/pkg/version/conf.go b/vendor/github.com/containernetworking/cni/pkg/version/conf.go
deleted file mode 100644
index 3cca58bbeb8..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/version/conf.go
+++ /dev/null
@@ -1,37 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package version
-
-import (
- "encoding/json"
- "fmt"
-)
-
-// ConfigDecoder can decode the CNI version available in network config data
-type ConfigDecoder struct{}
-
-func (*ConfigDecoder) Decode(jsonBytes []byte) (string, error) {
- var conf struct {
- CNIVersion string `json:"cniVersion"`
- }
- err := json.Unmarshal(jsonBytes, &conf)
- if err != nil {
- return "", fmt.Errorf("decoding version from network config: %s", err)
- }
- if conf.CNIVersion == "" {
- return "0.1.0", nil
- }
- return conf.CNIVersion, nil
-}
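The removed ConfigDecoder extracts cniVersion from raw network config bytes, defaulting to 0.1.0 when the field is absent. A minimal sketch, assuming the upstream module; the config is illustrative.

package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/version"
)

func main() {
	decoder := &version.ConfigDecoder{}
	// An absent cniVersion field is reported as the implicit "0.1.0".
	v, err := decoder.Decode([]byte(`{"name": "pod-network", "type": "bridge"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // 0.1.0
}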
diff --git a/vendor/github.com/containernetworking/cni/pkg/version/plugin.go b/vendor/github.com/containernetworking/cni/pkg/version/plugin.go
deleted file mode 100644
index 1df427243f3..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/version/plugin.go
+++ /dev/null
@@ -1,144 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package version
-
-import (
- "encoding/json"
- "fmt"
- "io"
- "strconv"
- "strings"
-)
-
-// PluginInfo reports information about CNI versioning
-type PluginInfo interface {
- // SupportedVersions returns one or more CNI spec versions that the plugin
- // supports. If input is provided in one of these versions, then the plugin
- // promises to use the same CNI version in its response
- SupportedVersions() []string
-
- // Encode writes this CNI version information as JSON to the given Writer
- Encode(io.Writer) error
-}
-
-type pluginInfo struct {
- CNIVersion_ string `json:"cniVersion"`
- SupportedVersions_ []string `json:"supportedVersions,omitempty"`
-}
-
-// pluginInfo implements the PluginInfo interface
-var _ PluginInfo = &pluginInfo{}
-
-func (p *pluginInfo) Encode(w io.Writer) error {
- return json.NewEncoder(w).Encode(p)
-}
-
-func (p *pluginInfo) SupportedVersions() []string {
- return p.SupportedVersions_
-}
-
-// PluginSupports returns a new PluginInfo that will report the given versions
-// as supported
-func PluginSupports(supportedVersions ...string) PluginInfo {
- if len(supportedVersions) < 1 {
- panic("programmer error: you must support at least one version")
- }
- return &pluginInfo{
- CNIVersion_: Current(),
- SupportedVersions_: supportedVersions,
- }
-}
-
-// PluginDecoder can decode the response returned by a plugin's VERSION command
-type PluginDecoder struct{}
-
-func (*PluginDecoder) Decode(jsonBytes []byte) (PluginInfo, error) {
- var info pluginInfo
- err := json.Unmarshal(jsonBytes, &info)
- if err != nil {
- return nil, fmt.Errorf("decoding version info: %s", err)
- }
- if info.CNIVersion_ == "" {
- return nil, fmt.Errorf("decoding version info: missing field cniVersion")
- }
- if len(info.SupportedVersions_) == 0 {
- if info.CNIVersion_ == "0.2.0" {
- return PluginSupports("0.1.0", "0.2.0"), nil
- }
- return nil, fmt.Errorf("decoding version info: missing field supportedVersions")
- }
- return &info, nil
-}
-
-// ParseVersion parses a version string like "3.0.1" or "0.4.5" into major,
-// minor, and micro numbers or returns an error
-func ParseVersion(version string) (int, int, int, error) {
- var major, minor, micro int
- if version == "" {
- return -1, -1, -1, fmt.Errorf("invalid version %q: the version is empty", version)
- }
-
- parts := strings.Split(version, ".")
- if len(parts) >= 4 {
- return -1, -1, -1, fmt.Errorf("invalid version %q: too many parts", version)
- }
-
- major, err := strconv.Atoi(parts[0])
- if err != nil {
- return -1, -1, -1, fmt.Errorf("failed to convert major version part %q: %v", parts[0], err)
- }
-
- if len(parts) >= 2 {
- minor, err = strconv.Atoi(parts[1])
- if err != nil {
- return -1, -1, -1, fmt.Errorf("failed to convert minor version part %q: %v", parts[1], err)
- }
- }
-
- if len(parts) >= 3 {
- micro, err = strconv.Atoi(parts[2])
- if err != nil {
- return -1, -1, -1, fmt.Errorf("failed to convert micro version part %q: %v", parts[2], err)
- }
- }
-
- return major, minor, micro, nil
-}
-
-// GreaterThanOrEqualTo takes two string versions, parses them into major/minor/micro
-// numbers, and compares them to determine whether the first version is greater
-// than or equal to the second
-func GreaterThanOrEqualTo(version, otherVersion string) (bool, error) {
- firstMajor, firstMinor, firstMicro, err := ParseVersion(version)
- if err != nil {
- return false, err
- }
-
- secondMajor, secondMinor, secondMicro, err := ParseVersion(otherVersion)
- if err != nil {
- return false, err
- }
-
- if firstMajor > secondMajor {
- return true, nil
- } else if firstMajor == secondMajor {
- if firstMinor > secondMinor {
- return true, nil
- } else if firstMinor == secondMinor && firstMicro >= secondMicro {
- return true, nil
- }
- }
- return false, nil
-}
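The removed version helpers above split a spec version into major/minor/micro parts and compare two versions component-wise. A minimal sketch, assuming the upstream module; the version strings are illustrative.

package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/version"
)

func main() {
	major, minor, micro, err := version.ParseVersion("0.4.0")
	if err != nil {
		panic(err)
	}
	fmt.Println(major, minor, micro) // 0 4 0

	// True: 0.4.0 >= 0.3.1 when compared major, then minor, then micro.
	ok, err := version.GreaterThanOrEqualTo("0.4.0", "0.3.1")
	if err != nil {
		panic(err)
	}
	fmt.Println(ok)
}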
diff --git a/vendor/github.com/containernetworking/cni/pkg/version/reconcile.go b/vendor/github.com/containernetworking/cni/pkg/version/reconcile.go
deleted file mode 100644
index 25c3810b2aa..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/version/reconcile.go
+++ /dev/null
@@ -1,49 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package version
-
-import "fmt"
-
-type ErrorIncompatible struct {
- Config string
- Supported []string
-}
-
-func (e *ErrorIncompatible) Details() string {
- return fmt.Sprintf("config is %q, plugin supports %q", e.Config, e.Supported)
-}
-
-func (e *ErrorIncompatible) Error() string {
- return fmt.Sprintf("incompatible CNI versions: %s", e.Details())
-}
-
-type Reconciler struct{}
-
-func (r *Reconciler) Check(configVersion string, pluginInfo PluginInfo) *ErrorIncompatible {
- return r.CheckRaw(configVersion, pluginInfo.SupportedVersions())
-}
-
-func (*Reconciler) CheckRaw(configVersion string, supportedVersions []string) *ErrorIncompatible {
- for _, supportedVersion := range supportedVersions {
- if configVersion == supportedVersion {
- return nil
- }
- }
-
- return &ErrorIncompatible{
- Config: configVersion,
- Supported: supportedVersions,
- }
-}
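The removed Reconciler checks whether a config's cniVersion is among a plugin's supported versions, returning a typed ErrorIncompatible otherwise. A minimal sketch, assuming the upstream module; the versions are illustrative.

package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/version"
)

func main() {
	r := &version.Reconciler{}
	// A nil result means the config version is among the supported versions.
	if err := r.CheckRaw("0.4.0", []string{"0.3.1", "0.4.0"}); err != nil {
		fmt.Println("incompatible:", err.Error())
		return
	}
	fmt.Println("compatible")
}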
diff --git a/vendor/github.com/containernetworking/cni/pkg/version/version.go b/vendor/github.com/containernetworking/cni/pkg/version/version.go
deleted file mode 100644
index 8f3508e61f3..00000000000
--- a/vendor/github.com/containernetworking/cni/pkg/version/version.go
+++ /dev/null
@@ -1,83 +0,0 @@
-// Copyright 2016 CNI authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package version
-
-import (
- "encoding/json"
- "fmt"
-
- "github.com/containernetworking/cni/pkg/types"
- "github.com/containernetworking/cni/pkg/types/020"
- "github.com/containernetworking/cni/pkg/types/current"
-)
-
-// Current reports the version of the CNI spec implemented by this library
-func Current() string {
- return "0.4.0"
-}
-
-// Legacy PluginInfo describes a plugin that is backwards compatible with the
-// CNI spec version 0.1.0. In particular, a runtime compiled against the 0.1.0
-// library ought to work correctly with a plugin that reports support for
-// Legacy versions.
-//
-// Any future CNI spec versions which meet this definition should be added to
-// this list.
-var Legacy = PluginSupports("0.1.0", "0.2.0")
-var All = PluginSupports("0.1.0", "0.2.0", "0.3.0", "0.3.1", "0.4.0")
-
-var resultFactories = []struct {
- supportedVersions []string
- newResult types.ResultFactoryFunc
-}{
- {current.SupportedVersions, current.NewResult},
- {types020.SupportedVersions, types020.NewResult},
-}
-
-// NewResult finds a Result object matching the requested version (if any) and
-// asks that object to parse the plugin result, returning an error if parsing failed.
-func NewResult(version string, resultBytes []byte) (types.Result, error) {
- reconciler := &Reconciler{}
- for _, resultFactory := range resultFactories {
- err := reconciler.CheckRaw(version, resultFactory.supportedVersions)
- if err == nil {
- // Result supports this version
- return resultFactory.newResult(resultBytes)
- }
- }
-
- return nil, fmt.Errorf("unsupported CNI result version %q", version)
-}
-
-// ParsePrevResult parses a prevResult in a NetConf structure and sets
-// the NetConf's PrevResult member to the parsed Result object.
-func ParsePrevResult(conf *types.NetConf) error {
- if conf.RawPrevResult == nil {
- return nil
- }
-
- resultBytes, err := json.Marshal(conf.RawPrevResult)
- if err != nil {
- return fmt.Errorf("could not serialize prevResult: %v", err)
- }
-
- conf.RawPrevResult = nil
- conf.PrevResult, err = NewResult(conf.CNIVersion, resultBytes)
- if err != nil {
- return fmt.Errorf("could not parse prevResult: %v", err)
- }
-
- return nil
-}
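The removed ParsePrevResult helper lifts the raw prevResult map in a NetConf into a typed Result chosen by cniVersion. A minimal sketch, assuming the upstream module; the config JSON is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/containernetworking/cni/pkg/types"
	"github.com/containernetworking/cni/pkg/version"
)

func main() {
	raw := []byte(`{
	  "cniVersion": "0.4.0",
	  "name": "pod-network",
	  "type": "bridge",
	  "prevResult": {"ips": [{"version": "4", "address": "10.1.2.3/24"}]}
	}`)

	conf := &types.NetConf{}
	if err := json.Unmarshal(raw, conf); err != nil {
		panic(err)
	}

	// Moves RawPrevResult into a typed Result selected by cniVersion.
	if err := version.ParsePrevResult(conf); err != nil {
		panic(err)
	}
	fmt.Println(conf.PrevResult.Version()) // 0.4.0
}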
diff --git a/vendor/github.com/docker/distribution/registry/api/errcode/errors.go b/vendor/github.com/docker/distribution/registry/api/errcode/errors.go
deleted file mode 100644
index 6d9bb4b62af..00000000000
--- a/vendor/github.com/docker/distribution/registry/api/errcode/errors.go
+++ /dev/null
@@ -1,267 +0,0 @@
-package errcode
-
-import (
- "encoding/json"
- "fmt"
- "strings"
-)
-
-// ErrorCoder is the base interface for ErrorCode and Error allowing
-// users of each to just call ErrorCode to get the real ID of each
-type ErrorCoder interface {
- ErrorCode() ErrorCode
-}
-
-// ErrorCode represents the error type. The errors are serialized via strings
-// and the integer format may change and should *never* be exported.
-type ErrorCode int
-
-var _ error = ErrorCode(0)
-
-// ErrorCode just returns itself
-func (ec ErrorCode) ErrorCode() ErrorCode {
- return ec
-}
-
-// Error returns the ID/Value
-func (ec ErrorCode) Error() string {
- // NOTE(stevvooe): Cannot use message here since it may have unpopulated args.
- return strings.ToLower(strings.Replace(ec.String(), "_", " ", -1))
-}
-
-// Descriptor returns the descriptor for the error code.
-func (ec ErrorCode) Descriptor() ErrorDescriptor {
- d, ok := errorCodeToDescriptors[ec]
-
- if !ok {
- return ErrorCodeUnknown.Descriptor()
- }
-
- return d
-}
-
-// String returns the canonical identifier for this error code.
-func (ec ErrorCode) String() string {
- return ec.Descriptor().Value
-}
-
-// Message returned the human-readable error message for this error code.
-func (ec ErrorCode) Message() string {
- return ec.Descriptor().Message
-}
-
-// MarshalText encodes the receiver into UTF-8-encoded text and returns the
-// result.
-func (ec ErrorCode) MarshalText() (text []byte, err error) {
- return []byte(ec.String()), nil
-}
-
-// UnmarshalText decodes the form generated by MarshalText.
-func (ec *ErrorCode) UnmarshalText(text []byte) error {
- desc, ok := idToDescriptors[string(text)]
-
- if !ok {
- desc = ErrorCodeUnknown.Descriptor()
- }
-
- *ec = desc.Code
-
- return nil
-}
-
-// WithMessage creates a new Error struct based on the passed-in info and
-// overrides the Message property.
-func (ec ErrorCode) WithMessage(message string) Error {
- return Error{
- Code: ec,
- Message: message,
- }
-}
-
-// WithDetail creates a new Error struct based on the passed-in info and
-// sets the Detail property appropriately
-func (ec ErrorCode) WithDetail(detail interface{}) Error {
- return Error{
- Code: ec,
- Message: ec.Message(),
- }.WithDetail(detail)
-}
-
-// WithArgs creates a new Error struct and sets the Args slice
-func (ec ErrorCode) WithArgs(args ...interface{}) Error {
- return Error{
- Code: ec,
- Message: ec.Message(),
- }.WithArgs(args...)
-}
-
-// Error provides a wrapper around ErrorCode with extra Details provided.
-type Error struct {
- Code ErrorCode `json:"code"`
- Message string `json:"message"`
- Detail interface{} `json:"detail,omitempty"`
-
- // TODO(duglin): See if we need an "args" property so we can do the
- // variable substitution right before showing the message to the user
-}
-
-var _ error = Error{}
-
-// ErrorCode returns the ID/Value of this Error
-func (e Error) ErrorCode() ErrorCode {
- return e.Code
-}
-
-// Error returns a human readable representation of the error.
-func (e Error) Error() string {
- return fmt.Sprintf("%s: %s", e.Code.Error(), e.Message)
-}
-
-// WithDetail will return a new Error, based on the current one, but with
-// some Detail info added
-func (e Error) WithDetail(detail interface{}) Error {
- return Error{
- Code: e.Code,
- Message: e.Message,
- Detail: detail,
- }
-}
-
-// WithArgs uses the passed-in list of interface{} as the substitution
-// variables in the Error's Message string, but returns a new Error
-func (e Error) WithArgs(args ...interface{}) Error {
- return Error{
- Code: e.Code,
- Message: fmt.Sprintf(e.Code.Message(), args...),
- Detail: e.Detail,
- }
-}
-
-// ErrorDescriptor provides relevant information about a given error code.
-type ErrorDescriptor struct {
- // Code is the error code that this descriptor describes.
- Code ErrorCode
-
-	// Value provides a unique, string key, often capitalized with
- // underscores, to identify the error code. This value is used as the
- // keyed value when serializing api errors.
- Value string
-
-	// Message is a short, human readable description of the error condition
- // included in API responses.
- Message string
-
-	// Description provides a complete account of the error's purpose, suitable
- // for use in documentation.
- Description string
-
- // HTTPStatusCode provides the http status code that is associated with
- // this error condition.
- HTTPStatusCode int
-}
-
-// ParseErrorCode returns the value by the string error code.
-// `ErrorCodeUnknown` will be returned if the error is not known.
-func ParseErrorCode(value string) ErrorCode {
- ed, ok := idToDescriptors[value]
- if ok {
- return ed.Code
- }
-
- return ErrorCodeUnknown
-}
-
-// Errors provides the envelope for multiple errors and a few sugar methods
-// for use within the application.
-type Errors []error
-
-var _ error = Errors{}
-
-func (errs Errors) Error() string {
- switch len(errs) {
- case 0:
- return ""
- case 1:
- return errs[0].Error()
- default:
- msg := "errors:\n"
- for _, err := range errs {
- msg += err.Error() + "\n"
- }
- return msg
- }
-}
-
-// Len returns the current number of errors.
-func (errs Errors) Len() int {
- return len(errs)
-}
-
-// MarshalJSON converts a slice of error, ErrorCode, or Error values into a
-// slice of Error structs and then serializes the result
-func (errs Errors) MarshalJSON() ([]byte, error) {
- var tmpErrs struct {
- Errors []Error `json:"errors,omitempty"`
- }
-
- for _, daErr := range errs {
- var err Error
-
- switch daErr.(type) {
- case ErrorCode:
- err = daErr.(ErrorCode).WithDetail(nil)
- case Error:
- err = daErr.(Error)
- default:
- err = ErrorCodeUnknown.WithDetail(daErr)
-
- }
-
-		// If the Error struct was set up but the Message field was left
-		// empty (""), grab the message from the ErrorCode
- msg := err.Message
- if msg == "" {
- msg = err.Code.Message()
- }
-
- tmpErrs.Errors = append(tmpErrs.Errors, Error{
- Code: err.Code,
- Message: msg,
- Detail: err.Detail,
- })
- }
-
- return json.Marshal(tmpErrs)
-}
-
-// UnmarshalJSON deserializes []Error and then converts each entry into an
-// Error or a bare ErrorCode
-func (errs *Errors) UnmarshalJSON(data []byte) error {
- var tmpErrs struct {
- Errors []Error
- }
-
- if err := json.Unmarshal(data, &tmpErrs); err != nil {
- return err
- }
-
- var newErrs Errors
- for _, daErr := range tmpErrs.Errors {
- // If Message is empty or exactly matches the Code's message string
- // then just use the Code, no need for a full Error struct
- if daErr.Detail == nil && (daErr.Message == "" || daErr.Message == daErr.Code.Message()) {
-			// Errors without details are converted to a bare ErrorCode
- newErrs = append(newErrs, daErr.Code)
- } else {
-			// Errors with details are kept as full Error structs
- newErrs = append(newErrs, Error{
- Code: daErr.Code,
- Message: daErr.Message,
- Detail: daErr.Detail,
- })
- }
- }
-
- *errs = newErrs
- return nil
-}
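The removed errcode package wraps registry errors in an envelope whose MarshalJSON normalizes ErrorCode and Error values into the {"errors":[...]} shape. A minimal sketch, assuming the upstream module; the detail payload is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/distribution/registry/api/errcode"
)

func main() {
	errs := errcode.Errors{
		errcode.ErrorCodeUnauthorized.WithDetail("token expired"),
		errcode.ErrorCodeUnsupported,
	}

	// Both the bare ErrorCode and the detailed Error end up as full
	// {"code","message","detail"} entries in the envelope.
	data, err := json.Marshal(errs)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}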
diff --git a/vendor/github.com/docker/distribution/registry/api/errcode/handler.go b/vendor/github.com/docker/distribution/registry/api/errcode/handler.go
deleted file mode 100644
index d77e70473e7..00000000000
--- a/vendor/github.com/docker/distribution/registry/api/errcode/handler.go
+++ /dev/null
@@ -1,40 +0,0 @@
-package errcode
-
-import (
- "encoding/json"
- "net/http"
-)
-
-// ServeJSON attempts to serve the errcode in a JSON envelope. It marshals err
-// and sets the content-type header to 'application/json'. It will handle
-// ErrorCoder and Errors, and if necessary will create an envelope.
-func ServeJSON(w http.ResponseWriter, err error) error {
- w.Header().Set("Content-Type", "application/json; charset=utf-8")
- var sc int
-
- switch errs := err.(type) {
- case Errors:
- if len(errs) < 1 {
- break
- }
-
- if err, ok := errs[0].(ErrorCoder); ok {
- sc = err.ErrorCode().Descriptor().HTTPStatusCode
- }
- case ErrorCoder:
- sc = errs.ErrorCode().Descriptor().HTTPStatusCode
- err = Errors{err} // create an envelope.
- default:
- // We just have an unhandled error type, so just place in an envelope
- // and move along.
- err = Errors{err}
- }
-
- if sc == 0 {
- sc = http.StatusInternalServerError
- }
-
- w.WriteHeader(sc)
-
- return json.NewEncoder(w).Encode(err)
-}
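The removed ServeJSON helper writes such an envelope to an HTTP response, choosing the status code from the error's descriptor. A minimal sketch, assuming the upstream module; the route and listen address are illustrative.

package main

import (
	"net/http"

	"github.com/docker/distribution/registry/api/errcode"
)

func main() {
	http.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		// ServeJSON picks the HTTP status from the error's descriptor
		// (401 here) and writes the JSON error envelope.
		_ = errcode.ServeJSON(w, errcode.ErrorCodeUnauthorized)
	})
	_ = http.ListenAndServe(":8080", nil)
}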
diff --git a/vendor/github.com/docker/distribution/registry/api/errcode/register.go b/vendor/github.com/docker/distribution/registry/api/errcode/register.go
deleted file mode 100644
index d1e8826c6d7..00000000000
--- a/vendor/github.com/docker/distribution/registry/api/errcode/register.go
+++ /dev/null
@@ -1,138 +0,0 @@
-package errcode
-
-import (
- "fmt"
- "net/http"
- "sort"
- "sync"
-)
-
-var (
- errorCodeToDescriptors = map[ErrorCode]ErrorDescriptor{}
- idToDescriptors = map[string]ErrorDescriptor{}
- groupToDescriptors = map[string][]ErrorDescriptor{}
-)
-
-var (
- // ErrorCodeUnknown is a generic error that can be used as a last
- // resort if there is no situation-specific error message that can be used
- ErrorCodeUnknown = Register("errcode", ErrorDescriptor{
- Value: "UNKNOWN",
- Message: "unknown error",
- Description: `Generic error returned when the error does not have an
- API classification.`,
- HTTPStatusCode: http.StatusInternalServerError,
- })
-
- // ErrorCodeUnsupported is returned when an operation is not supported.
- ErrorCodeUnsupported = Register("errcode", ErrorDescriptor{
- Value: "UNSUPPORTED",
- Message: "The operation is unsupported.",
- Description: `The operation was unsupported due to a missing
- implementation or invalid set of parameters.`,
- HTTPStatusCode: http.StatusMethodNotAllowed,
- })
-
- // ErrorCodeUnauthorized is returned if a request requires
- // authentication.
- ErrorCodeUnauthorized = Register("errcode", ErrorDescriptor{
- Value: "UNAUTHORIZED",
- Message: "authentication required",
- Description: `The access controller was unable to authenticate
- the client. Often this will be accompanied by a
- Www-Authenticate HTTP response header indicating how to
- authenticate.`,
- HTTPStatusCode: http.StatusUnauthorized,
- })
-
- // ErrorCodeDenied is returned if a client does not have sufficient
- // permission to perform an action.
- ErrorCodeDenied = Register("errcode", ErrorDescriptor{
- Value: "DENIED",
- Message: "requested access to the resource is denied",
- Description: `The access controller denied access for the
- operation on a resource.`,
- HTTPStatusCode: http.StatusForbidden,
- })
-
- // ErrorCodeUnavailable provides a common error to report unavailability
- // of a service or endpoint.
- ErrorCodeUnavailable = Register("errcode", ErrorDescriptor{
- Value: "UNAVAILABLE",
- Message: "service unavailable",
- Description: "Returned when a service is not available",
- HTTPStatusCode: http.StatusServiceUnavailable,
- })
-
- // ErrorCodeTooManyRequests is returned if a client attempts too many
- // times to contact a service endpoint.
- ErrorCodeTooManyRequests = Register("errcode", ErrorDescriptor{
- Value: "TOOMANYREQUESTS",
- Message: "too many requests",
- Description: `Returned when a client attempts to contact a
- service too many times`,
- HTTPStatusCode: http.StatusTooManyRequests,
- })
-)
-
-var nextCode = 1000
-var registerLock sync.Mutex
-
-// Register will make the passed-in error known to the environment and
-// return a new ErrorCode
-func Register(group string, descriptor ErrorDescriptor) ErrorCode {
- registerLock.Lock()
- defer registerLock.Unlock()
-
- descriptor.Code = ErrorCode(nextCode)
-
- if _, ok := idToDescriptors[descriptor.Value]; ok {
- panic(fmt.Sprintf("ErrorValue %q is already registered", descriptor.Value))
- }
- if _, ok := errorCodeToDescriptors[descriptor.Code]; ok {
- panic(fmt.Sprintf("ErrorCode %v is already registered", descriptor.Code))
- }
-
- groupToDescriptors[group] = append(groupToDescriptors[group], descriptor)
- errorCodeToDescriptors[descriptor.Code] = descriptor
- idToDescriptors[descriptor.Value] = descriptor
-
- nextCode++
- return descriptor.Code
-}
-
-type byValue []ErrorDescriptor
-
-func (a byValue) Len() int { return len(a) }
-func (a byValue) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
-func (a byValue) Less(i, j int) bool { return a[i].Value < a[j].Value }
-
-// GetGroupNames returns the list of Error group names that are registered
-func GetGroupNames() []string {
- keys := []string{}
-
- for k := range groupToDescriptors {
- keys = append(keys, k)
- }
- sort.Strings(keys)
- return keys
-}
-
-// GetErrorCodeGroup returns the named group of error descriptors
-func GetErrorCodeGroup(name string) []ErrorDescriptor {
- desc := groupToDescriptors[name]
- sort.Sort(byValue(desc))
- return desc
-}
-
-// GetErrorAllDescriptors returns a slice of all ErrorDescriptors that are
-// registered, irrespective of what group they're in
-func GetErrorAllDescriptors() []ErrorDescriptor {
- result := []ErrorDescriptor{}
-
- for _, group := range GetGroupNames() {
- result = append(result, GetErrorCodeGroup(group)...)
- }
- sort.Sort(byValue(result))
- return result
-}
diff --git a/vendor/github.com/docker/docker/AUTHORS b/vendor/github.com/docker/docker/AUTHORS
deleted file mode 100644
index dffacff1120..00000000000
--- a/vendor/github.com/docker/docker/AUTHORS
+++ /dev/null
@@ -1,2175 +0,0 @@
-# This file lists all individuals having contributed content to the repository.
-# For how it is generated, see `hack/generate-authors.sh`.
-
-Aanand Prasad
-Aaron Davidson
-Aaron Feng
-Aaron Hnatiw
-Aaron Huslage
-Aaron L. Xu
-Aaron Lehmann
-Aaron Welch
-Aaron.L.Xu
-Abel Muiño
-Abhijeet Kasurde
-Abhinandan Prativadi
-Abhinav Ajgaonkar
-Abhishek Chanda
-Abhishek Sharma
-Abin Shahab
-Adam Avilla
-Adam Dobrawy
-Adam Eijdenberg
-Adam Kunk
-Adam Miller
-Adam Mills
-Adam Pointer
-Adam Singer
-Adam Walz
-Addam Hardy
-Aditi Rajagopal
-Aditya
-Adnan Khan
-Adolfo Ochagavía
-Adria Casas
-Adrian Moisey
-Adrian Mouat
-Adrian Oprea
-Adrien Folie
-Adrien Gallouët
-Ahmed Kamal
-Ahmet Alp Balkan
-Aidan Feldman
-Aidan Hobson Sayers
-AJ Bowen
-Ajey Charantimath
-ajneu
-Akash Gupta
-Akhil Mohan
-Akihiro Matsushima
-Akihiro Suda
-Akim Demaille
-Akira Koyasu
-Akshay Karle
-Al Tobey
-alambike
-Alan Hoyle
-Alan Scherger
-Alan Thompson
-Albert Callarisa
-Albert Zhang
-Albin Kerouanton
-Alejandro González Hevia
-Aleksa Sarai
-Aleksandrs Fadins
-Alena Prokharchyk
-Alessandro Boch
-Alessio Biancalana
-Alex Chan
-Alex Chen
-Alex Coventry
-Alex Crawford
-Alex Ellis
-Alex Gaynor
-Alex Goodman
-Alex Olshansky
-Alex Samorukov
-Alex Warhawk
-Alexander Artemenko
-Alexander Boyd
-Alexander Larsson
-Alexander Midlash
-Alexander Morozov
-Alexander Shopov
-Alexandre Beslic
-Alexandre Garnier
-Alexandre González
-Alexandre Jomin
-Alexandru Sfirlogea
-Alexei Margasov
-Alexey Guskov
-Alexey Kotlyarov
-Alexey Shamrin
-Alexis THOMAS
-Alfred Landrum
-Ali Dehghani
-Alicia Lauerman
-Alihan Demir
-Allen Madsen
-Allen Sun
-almoehi
-Alvaro Saurin
-Alvin Deng
-Alvin Richards
-amangoel
-Amen Belayneh
-Amir Goldstein
-Amit Bakshi
-Amit Krishnan
-Amit Shukla
-Amr Gawish
-Amy Lindburg
-Anand Patil
-AnandkumarPatel
-Anatoly Borodin
-Anca Iordache
-Anchal Agrawal
-Anda Xu
-Anders Janmyr
-Andre Dublin <81dublin@gmail.com>
-Andre Granovsky
-Andrea Denisse Gómez
-Andrea Luzzardi
-Andrea Turli
-Andreas Elvers
-Andreas Köhler
-Andreas Savvides
-Andreas Tiefenthaler
-Andrei Gherzan
-Andrei Vagin
-Andrew C. Bodine
-Andrew Clay Shafer
-Andrew Duckworth
-Andrew France
-Andrew Gerrand
-Andrew Guenther
-Andrew He
-Andrew Hsu
-Andrew Kuklewicz
-Andrew Macgregor
-Andrew Macpherson
-Andrew Martin
-Andrew McDonnell
-Andrew Munsell
-Andrew Pennebaker
-Andrew Po
-Andrew Weiss
-Andrew Williams
-Andrews Medina
-Andrey Kolomentsev
-Andrey Petrov
-Andrey Stolbovsky
-André Martins
-andy
-Andy Chambers
-andy diller
-Andy Goldstein
-Andy Kipp
-Andy Rothfusz
-Andy Smith
-Andy Wilson
-Anes Hasicic
-Anil Belur
-Anil Madhavapeddy
-Ankit Jain
-Ankush Agarwal
-Anonmily
-Anran Qiao
-Anshul Pundir
-Anthon van der Neut
-Anthony Baire
-Anthony Bishopric
-Anthony Dahanne
-Anthony Sottile
-Anton Löfgren
-Anton Nikitin
-Anton Polonskiy
-Anton Tiurin
-Antonio Murdaca
-Antonis Kalipetis
-Antony Messerli
-Anuj Bahuguna
-Anusha Ragunathan
-apocas
-Arash Deshmeh
-ArikaChen
-Arko Dasgupta
-Arnaud Lefebvre
-Arnaud Porterie
-Arnaud Rebillout
-Arthur Barr
-Arthur Gautier
-Artur Meyster
-Arun Gupta
-Asad Saeeduddin
-Asbjørn Enge
-averagehuman
-Avi Das
-Avi Kivity
-Avi Miller
-Avi Vaid
-ayoshitake
-Azat Khuyiyakhmetov
-Bardia Keyoumarsi
-Barnaby Gray
-Barry Allard
-Bartłomiej Piotrowski
-Bastiaan Bakker
-bdevloed
-Ben Bonnefoy
-Ben Firshman
-Ben Golub
-Ben Gould
-Ben Hall
-Ben Sargent
-Ben Severson
-Ben Toews
-Ben Wiklund
-Benjamin Atkin
-Benjamin Baker
-Benjamin Boudreau
-Benjamin Yolken
-Benny Ng
-Benoit Chesneau
-Bernerd Schaefer
-Bernhard M. Wiedemann
-Bert Goethals
-Bertrand Roussel
-Bevisy Zhang
-Bharath Thiruveedula
-Bhiraj Butala
-Bhumika Bayani
-Bilal Amarni
-Bill Wang
-Bily Zhang
-Bin Liu
-Bingshen Wang
-Blake Geno
-Boaz Shuster
-bobby abbott
-Boqin Qin