Commit Graph

18947 Commits

Author SHA1 Message Date
Wojciech Tyczynski
449aade4ee Implement runtime.Object <-> map[string]interface{} conversion 2017-02-10 10:53:27 +01:00
Kubernetes Submit Queue
f9215e8fb3 Merge pull request #41058 from liggitt/v1-tokenreview
Automatic merge from submit-queue (batch tested with PRs 41112, 41201, 41058, 40650, 40926)

Promote TokenReview to v1

Peer to https://github.com/kubernetes/kubernetes/pull/40709

We have multiple features that depend on this API:

- [webhook authentication](https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication)
- [kubelet delegated authentication](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authentication)
- add-on API server delegated authentication

The API has been in use since 1.3 in beta status (v1beta1) with negligible changes:
- Added a status field for reporting errors evaluating the token

This PR promotes the existing v1beta1 API to v1 with no changes

Because the API does not persist data (it is a query/response-style API), there are no data migration concerns.

This positions us to promote the features that depend on this API to stable in 1.7
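A minimal sketch of the query/response shape, assuming the generated v1 Go types and clientset; import paths and the exact Create signature depend on the client-go version in use:

```go
// reviewToken submits a bearer token for evaluation and prints the result.
// Status.Error is the field added during beta for reporting errors
// evaluating the token.
func reviewToken(client kubernetes.Interface, bearerToken string) error {
	review := &authenticationv1.TokenReview{
		Spec: authenticationv1.TokenReviewSpec{Token: bearerToken},
	}
	result, err := client.AuthenticationV1().TokenReviews().Create(review)
	if err != nil {
		return err
	}
	fmt.Printf("authenticated=%t user=%q error=%q\n",
		result.Status.Authenticated, result.Status.User.Username, result.Status.Error)
	return nil
}
```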

cc @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-misc

```release-note
The authentication.k8s.io API group was promoted to v1
```
2017-02-10 01:40:44 -08:00
Kubernetes Submit Queue
64f34eb5fa Merge pull request #41201 from deads2k/api-01-round-trip
Automatic merge from submit-queue (batch tested with PRs 41112, 41201, 41058, 40650, 40926)

make round trip testing generic

RoundTrip testing is associated with a scheme, and everyone who writes an API will want to do it.  In the end, we should wire each API group separately into a test scheme and have them all call this general function.  Once `kubeadm` is out of the main scheme, we'll be able to remove the one really ugly hack.
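A simplified sketch of what a round trip test checks for a single object, assuming the apimachinery `runtime` codec helpers; the generic helper added here instead walks every type registered in the scheme and uses semantic equality:

```go
// Per-object illustration only; not the generic helper itself.
func roundTrip(t *testing.T, codec runtime.Codec, obj runtime.Object) {
	data, err := runtime.Encode(codec, obj)
	if err != nil {
		t.Fatalf("encode failed: %v", err)
	}
	decoded, err := runtime.Decode(codec, data)
	if err != nil {
		t.Fatalf("decode failed: %v", err)
	}
	if !reflect.DeepEqual(obj, decoded) {
		t.Errorf("round trip changed the object:\ngot:  %#v\nwant: %#v", decoded, obj)
	}
}
```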

@luxas @sttts @kubernetes/sig-apimachinery-pr-reviews @smarterclayton
2017-02-10 01:40:42 -08:00
Kubernetes Submit Queue
673d061c56 Merge pull request #40838 from kow3ns/ss-fixes
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)

StatefulSet hardening

**What this PR does / why we need it**:

This PR contains the following changes to StatefulSet. Only one change affects the semantics of how the controller operates (this is described in #38418), and this change only brings the controller into conformance with its documented behavior.

1. pcb and pcb controller are removed and their functionality is encapsulated in StatefulPodControlInterface. This class models the design of controller.PodControlInterface and provides an abstraction over clientset.Interface, which is useful for testing purposes.
2. IdentityMappers has been removed to clarify what properties of a Pod are mutated by the controller. All mutations are performed in the UpdateStatefulPod method of the StatefulPodControlInterface.
3. The statefulSetIterator and petQueue classes are removed. These classes sorted Pods by CreationTimestamp. This is brittle and not resilient to clock skew. The current control loop, which implements the same logic, is in stateful_set_control.go. The Pods are now sorted and considered by their ordinal indices, as is outlined in the documentation.
4. StatefulSetController now checks to see if the Pods matching a StatefulSet's Selector also match the Name of the StatefulSet. This will make the controller resilient to overlapping controllers, and will be enhanced by the addition of ControllerRefs.
5. The total lines of production code have been reduced, and the total number of unit tests has been increased. All new code has 100% unit coverage giving the module 83% coverage. Tests for StatefulSetController have been added, but it is not practical to achieve greater coverage in unit testing for this code (the e2e tests for StatefulSet cover these areas).
6. Issue #38418 is fixed in that StatefulSet will ensure that all Pods that are predecessors of another Pod are Running and Ready prior to launching a new Pod (see the sketch after this list). This removes the potential for deadlock when a Pod needs to be rescheduled while its predecessor is hung in Pending or Initializing.
7. All references to pet have been removed from the code and comments.
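An illustrative sketch of the ordering guarantee in item 6 (not the controller's actual code), assuming the core v1 Pod types: the Pod with ordinal i is only created once every predecessor is Running and Ready.

```go
// canCreatePod reports whether the Pod at the given ordinal may be created:
// all Pods with smaller ordinals must already be Running and Ready.
func canCreatePod(ordinal int, podsByOrdinal []*v1.Pod) bool {
	for i := 0; i < ordinal; i++ {
		if podsByOrdinal[i] == nil || !isRunningAndReady(podsByOrdinal[i]) {
			return false
		}
	}
	return true
}

func isRunningAndReady(pod *v1.Pod) bool {
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}
```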

**Which issue this PR fixes**
 fixes #38418,#36859
**Special notes for your reviewer**:

**Release note**:

```release-note
Fixes issue #38418 which, under some circumstances, could cause StatefulSet to deadlock.
Mitigates issue #36859. StatefulSet only acts on Pods whose identity matches the StatefulSet, providing a partial mitigation for overlapping controllers.
```
2017-02-10 00:04:49 -08:00
Kubernetes Submit Queue
45d122dd6b Merge pull request #36033 from DirectXMan12/feature/hpa-v2
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)

HPA v2 (API Changes)

**Release note**:
```release-note
Introduces a new alpha version of the Horizontal Pod Autoscaler including expanded support for specifying metrics.
```

Implements the API changes for kubernetes/features#117.

This implements #34754, which is the new design for the Horizontal Pod Autoscaler.  It includes improved support for custom metrics (and/or arbitrary metrics) as well as expanded support for resource metrics.  The new HPA object is introduced in the API group "autoscaling/v2alpha1".

Note that the improved custom metric support is currently limited to per-pod metrics from Heapster -- attempting to use the new "object metrics" will simply result in an error.  This will change once #34586 is merged and implemented.
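A hedged sketch of the expanded metric specification, assuming the alpha autoscaling Go types; the import alias `autoscalingv2alpha1` and the exact field names are assumptions and may differ from what merged:

```go
// Illustrative only: one HPA that scales on CPU utilization plus a per-pod
// custom metric, instead of the single CPU target allowed by autoscaling/v1.
minReplicas, targetCPU := int32(2), int32(80)
hpa := autoscalingv2alpha1.HorizontalPodAutoscaler{
	ObjectMeta: metav1.ObjectMeta{Name: "frontend"},
	Spec: autoscalingv2alpha1.HorizontalPodAutoscalerSpec{
		ScaleTargetRef: autoscalingv2alpha1.CrossVersionObjectReference{
			Kind: "Deployment", Name: "frontend",
		},
		MinReplicas: &minReplicas,
		MaxReplicas: 10,
		Metrics: []autoscalingv2alpha1.MetricSpec{
			{
				Type: autoscalingv2alpha1.ResourceMetricSourceType,
				Resource: &autoscalingv2alpha1.ResourceMetricSource{
					Name:                     "cpu",
					TargetAverageUtilization: &targetCPU,
				},
			},
			{
				// per-pod custom metric, currently served via Heapster
				Type: autoscalingv2alpha1.PodsMetricSourceType,
				Pods: &autoscalingv2alpha1.PodsMetricSource{
					MetricName:         "queue_length",
					TargetAverageValue: resource.MustParse("10"),
				},
			},
		},
	},
}
```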
2017-02-10 00:04:48 -08:00
Kubernetes Submit Queue
8188c3cca4 Merge pull request #40796 from wojtek-t/use_node_ttl_in_secret_manager
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)

Implement TTL controller and use the ttl annotation attached to node in secret manager

For every secret attached to a pod as a volume, the kubelet tries to refresh it every sync period. Currently the kubelet has a TTL cache of its pods' secrets, with the TTL set to 1 minute. In the large clusters we are targeting (5k nodes, 30 pods/node), given that each pod has a secret associated with the ServiceAccount from its namespace, and with a large enough number of namespaces (where on each node (almost) every pod is from a different namespace), that results in ~30 GETs per node to refresh all secrets every minute, which gives ~2500 QPS of secret GETs to the apiserver.

The apiserver cannot keep up with that very easily.

The desired solution would be to watch for secret changes, but for security reasons we don't want a node watching all secrets, and it is not currently possible to watch only the secrets attached to pods on a given node.

So as a temporary solution, we are introducing an annotation that acts as a suggestion to the kubelet for the TTL of secrets in its cache, and a very simple controller that sets this annotation based on the cluster size (the larger the cluster, the bigger the TTL).
This workaround means that only very local changes are needed in the kubelet, we get a well-separated and very simple controller, and once watching "my secrets" becomes possible it will be easy to remove the controller and switch to that. It will also allow us to reach our scalability goals.
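A hedged sketch of the idea; the thresholds and the annotation key below are assumptions, not the controller's actual values:

```go
// The controller derives a suggested secret-cache TTL from the node count and
// records it as a node annotation; the kubelet's secret manager reads the
// annotation to size its cache TTL.
const ttlAnnotationKey = "node.alpha.kubernetes.io/ttl" // assumed key

func suggestedSecretTTL(nodeCount int) time.Duration {
	switch {
	case nodeCount > 2000:
		return 5 * time.Minute
	case nodeCount > 500:
		return 1 * time.Minute
	default:
		return 0 // kubelet falls back to its built-in default
	}
}

// The controller would then set, for each node:
//   node.Annotations[ttlAnnotationKey] = strconv.Itoa(int(ttl.Seconds()))
```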

@dchen1107 @thockin @liggitt
2017-02-10 00:04:44 -08:00
Kubernetes Submit Queue
85b4d2e5cf Merge pull request #36592 from andrewsykim/36273-set-all-node-conditions-unknown-when-node-unreachable
Automatic merge from submit-queue (batch tested with PRs 40917, 41181, 41123, 36592, 41183)

Set all node conditions to Unknown when node is unreachable

**What this PR does / why we need it**:
Sets all node conditions to Unknown when the node does not report status / is unreachable

**Which issue this PR fixes** 
fixes https://github.com/kubernetes/kubernetes/issues/36273
2017-02-09 23:10:47 -08:00
Kubernetes Submit Queue
76b39431d3 Merge pull request #41147 from derekwaynecarr/improve-eviction-logs
Automatic merge from submit-queue (batch tested with PRs 41074, 41147, 40854, 41167, 40045)

Add debug logging to eviction manager

**What this PR does / why we need it**:
This PR adds debug logging to eviction manager.

We need it to help users understand when and why the eviction manager is or is not making decisions, and to support information gathering during support cases.
2017-02-09 17:41:41 -08:00
Kubernetes Submit Queue
f5c07157a8 Merge pull request #41092 from yujuhong/cri-docker1_10
Automatic merge from submit-queue (batch tested with PRs 41037, 40118, 40959, 41084, 41092)

CRI node e2e: add tests for docker 1.10
2017-02-09 16:44:44 -08:00
Kubernetes Submit Queue
d2ada4bbd3 Merge pull request #41084 from ncdc/shared-informers-03-certs
Automatic merge from submit-queue (batch tested with PRs 41037, 40118, 40959, 41084, 41092)

Switch CSR controller to use shared informer

Switch the CSR controller to use a shared informer. Originally part of #40097 but I'm splitting that up into multiple PRs.

I have added a test to try to ensure we don't mutate the cache. It could use some fleshing out for additional coverage but it gets the initial job done, I think.
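A rough sketch of the shared-informer wiring this switches to, assuming the generated informer factory; the handler method names are illustrative of the pattern rather than the controller's actual code:

```go
// Multiple controllers reuse the same underlying cache, so objects read from
// it must be treated as read-only (deep-copy before mutating) -- which is
// what the added test guards against.
csrInformer := informerFactory.Certificates().V1beta1().CertificateSigningRequests()
csrInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc:    cc.addCSR, // illustrative handler names
	UpdateFunc: cc.updateCSR,
	DeleteFunc: cc.deleteCSR,
})
informerFactory.Start(stopCh)
```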

cc @mikedanese @deads2k @liggitt @sttts @kubernetes/sig-scalability-pr-reviews
2017-02-09 16:44:43 -08:00
Kubernetes Submit Queue
052f3b9d4c Merge pull request #40118 from vmware/FixdetachVolumeOnNodeOff
Automatic merge from submit-queue (batch tested with PRs 41037, 40118, 40959, 41084, 41092)

Fix for detach volume when node is not present/ powered off

Fixes #33061 
When a VM is reported as no longer present in the cloud provider and is deleted by the node controller, there is no attempt to detach the respective volumes. This can happen, for example, when a VM is powered off or paused and its pods are migrated to other nodes. In the case of vSphere, the VM cannot be started again because it still holds mount points to volumes that are now mounted to other VMs.

In order to re-join this node, you have to manually detach these volumes from the powered-off VM before starting it.

The current fix will make sure the mount points are deleted when the VM is powered off. Since all the mount points are deleted, the VM can be powered on again.

This is a workaround proposal only. I still don't see Kubernetes issuing a detach request to the vSphere cloud provider, which should be the case. (Details in the original issue #33061.)

@luomiao @kerneltime @pdhamdhere @jingxu97 @saad-ali
2017-02-09 16:44:40 -08:00
Kubernetes Submit Queue
75887829bc Merge pull request #41136 from deads2k/apiserver-10-example
Automatic merge from submit-queue (batch tested with PRs 41121, 40048, 40502, 41136, 40759)

add k8s.io/sample-apiserver to demonstrate how to build an aggregated API server

builds on https://github.com/kubernetes/kubernetes/pull/41093

This creates a sample API server in a separate staging repo to guarantee no cheating with `k8s.io/kubernetes` dependencies.  The sample is run during integration tests (simple tests on it so far) to ensure that it continues to run.

@sttts @kubernetes/sig-api-machinery-misc ptal
@pwittrock @pmorie @kris-nova an aggregated API server example that will stay up to date.
2017-02-09 14:27:48 -08:00
Kubernetes Submit Queue
ce998a9fa7 Merge pull request #40365 from shiywang/attach
Automatic merge from submit-queue (batch tested with PRs 41145, 38771, 41003, 41089, 40365)

Add `kubectl attach` support for multiple types

To address this issue: https://github.com/kubernetes/kubernetes/issues/24857
the new `kubectl attach` supports three scenarios depending on the arguments (a parsing sketch follows the list):
1. `kubectl attach POD` :   if only one argument is provided, we assume it's a pod name
2. `kubectl attach TYPE NAME` : if two arguments are provided, we assume the first is a resource we [support](4770162fd3/pkg/kubectl/cmd/util/factory_object_mapping.go (L285)) and the second is that resource's name
3. `kubectl attach TYPE/NAME` : if a single argument is provided and it contains `/`, we split it into type and name, as above
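An illustrative sketch of the argument handling described above (not the actual kubectl code); resource defaulting and validation are omitted:

```go
// parseAttachArgs maps the three supported argument shapes to a resource
// type and a name.
func parseAttachArgs(args []string) (resource, name string, err error) {
	switch len(args) {
	case 1:
		if strings.Contains(args[0], "/") { // TYPE/NAME
			parts := strings.SplitN(args[0], "/", 2)
			return parts[0], parts[1], nil
		}
		return "pods", args[0], nil // bare pod name
	case 2:
		return args[0], args[1], nil // TYPE NAME
	default:
		return "", "", fmt.Errorf("expected POD, TYPE/NAME, or TYPE NAME")
	}
}
```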

Are there any other scenarios I haven't considered?

For now the first scenario is compatible with the previous behavior, and `make test` passes.
I will write some unit tests for the second and third scenarios if you think I'm doing this the right way.
@pwittrock @kargakis @fabianofranz @ymqytw @AdoHe
2017-02-09 13:34:56 -08:00
Kubernetes Submit Queue
85fd324b62 Merge pull request #41003 from xingzhou/create-rolebinding-bug
Automatic merge from submit-queue (batch tested with PRs 41145, 38771, 41003, 41089, 40365)

Remove useless param from kubectl create rolebinding

The `force` param is not used by the
`kubectl create rolebinding` and `kubectl create clusterrolebinding`
commands, so remove it.
2017-02-09 13:34:49 -08:00
Kubernetes Submit Queue
5c4cec21c3 Merge pull request #38771 from derekwaynecarr/fix_drain_typo
Automatic merge from submit-queue (batch tested with PRs 41145, 38771, 41003, 41089, 40365)

Fix typo in drain command
2017-02-09 13:34:47 -08:00
Kubernetes Submit Queue
641315f859 Merge pull request #41145 from kargakis/cleanup-test-fix
Automatic merge from submit-queue

Do not cleanup already deleted replica sets and add more logging around it

For https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/3569

@ncdc will make the output of the test cleaner
2017-02-09 13:34:24 -08:00
Kubernetes Submit Queue
1c33a1cde2 Merge pull request #41164 from janetkuo/controller-approver
Automatic merge from submit-queue

Add janetkuo to approvers for controllers
2017-02-09 11:20:29 -08:00
Kubernetes Submit Queue
310b1e2e2c Merge pull request #41062 from jsafrane/gce-owners
Automatic merge from submit-queue

Add OWNERS file for GCE cloud provider

The GCE cloud provider does not have an OWNERS file, so all PRs need to be approved by an owner of pkg/cloudprovider, which is currently only @mikedanese. Adding more options would help speed up reviews.

Feel free to add/remove names; this first version is just my qualified guess. It's hard to distinguish generic Kubernetes refactoring from real cloud provider work in the git log.

```release-note
NONE
```
2017-02-09 11:20:21 -08:00
David Ashpole
b224f83c37 Revert "[Kubelet] Delay deletion of pod from the API server until volumes are deleted" 2017-02-09 08:45:18 -08:00
Kenneth Owens
4d99b4d825 StatefulSet refactoring and semantics fix
1. pcb and pcb controller are removed and their functionality is
encapsulated in StatefulPodControlInterface.
2. IdentityMappers has been removed to clarify what properties of a Pod are
mutated by the controller. All mutations are performed in the
UpdateStatefulPod method of the StatefulPodControlInterface.
3. The statefulSetIterator and petQueue classes are removed. These classes
sorted Pods by CreationTimestamp. This is brittle and not resilient to
clock skew. The current control loop, which implements the same logic,
is in stateful_set_control.go. The Pods are now sorted and considered by
their ordinal indices, as is outlined in the documentation.
4. StatefulSetController now checks to see if the Pods matching a
StatefulSet's Selector also match the Name of the StatefulSet. This will
make the controller resilient to overlapping controllers, and will be enhanced by
the addition of ControllerRefs.
2017-02-09 08:42:28 -08:00
deads2k
0a74353118 make round trip testing generic 2017-02-09 11:42:22 -05:00
Wojciech Tyczynski
dcf8a85fdf Add integration test for ttlcontroller. 2017-02-09 14:50:24 +01:00
Wojciech Tyczynski
6c0535a939 Use secret TTL annotation in secret manager 2017-02-09 13:53:32 +01:00
Wojciech Tyczynski
3aebc4c003 Implement ttl controller 2017-02-09 13:53:32 +01:00
Michail Kargakis
97c9e7fe07 Do not cleanup replicasets already marked for deletion 2017-02-09 10:31:25 +01:00
Michail Kargakis
ff83eb58eb Add more logs during the cleanup phase of a deployment 2017-02-09 10:31:15 +01:00
Janet Kuo
16ce097b04 Add janetkuo to approvers for controllers 2017-02-08 14:37:25 -08:00
Kubernetes Submit Queue
42d8d4ca88 Merge pull request #40948 from freehan/cri-hostport
Automatic merge from submit-queue (batch tested with PRs 40873, 40948, 39580, 41065, 40815)

[CRI] Enable Hostport Feature for Dockershim

Commits:
1. Refactor common hostport util logics and add more tests

2. Add HostportManager which can ADD/DEL hostports instead of a complete sync.

3. Add Interface for retrieving portMappings information of a pod in Network Host interface.
Implement GetPodPortMappings interface in dockerService. 

4. Teach kubenet to use HostportManager
2017-02-08 14:14:43 -08:00
Derek Carr
0171121486 Add debug logging to eviction manager 2017-02-08 15:01:12 -05:00
Kubernetes Submit Queue
1be74e39fa Merge pull request #41095 from dashpole/deletion_pod_lifecycle
Automatic merge from submit-queue

[Kubelet] Delay deletion of pod from the API server until volumes are deleted

Previous PR that was reverted: #40239.

To summarize the conclusion of the previous PR after reverting:
- The status manager has the most up-to-date status, but the volume manager uses the status from the pod manager, which is only as up-to-date as the API server.
- Because of this, the previous change required an additional round trip between the kubelet and API server.
- When few pods are being added or deleted, this is only a minor issue.  However, when under heavy load, the QPS limit to the API server causes this round trip to take ~60 seconds, which is an unacceptable increase in latency. Take a look at the graphs in #40239 to see the effect of QPS changes on timing.
- To remedy this, the volume manager looks at the status from the status manager, which eliminates the round trip.

cc: @vishh @derekwaynecarr @sjenning @jingxu97 @kubernetes/sig-storage-misc
2017-02-08 10:45:16 -08:00
Yu-Ju Hong
f96611ac45 dockershim: set the default cgroup driver 2017-02-08 10:22:19 -08:00
Kubernetes Submit Queue
59f7af1c67 Merge pull request #41138 from deads2k/owners-07-controllers
Automatic merge from submit-queue

add deads2k to approvers for controllers

I've done significant maintenance on these for a while and introduced new patterns like shared informers and rate limited work queues.
2017-02-08 09:49:16 -08:00
Kubernetes Submit Queue
4166c0e4a0 Merge pull request #38683 from bruceauyeung/k8s-branch-fix-possible-nil-error
Automatic merge from submit-queue

avoid repeated length calculation and some other code improvements

**What this PR does / why we need it**:
1. in function `ParsePairs`, calculating `invalidBuf`'s length over and over again incurs a performance penalty. An `invalidBufNonEmpty` bool value can fix this.
2. `pairArg` is not a format string and there are no other arguments for `fmt.Sprintf`, so I removed the `fmt.Sprintf` call.
3. in function `DumpReaderToFile`, we must check for a nil error before the defer statement, otherwise `f.Close()` may be called on a nil `f` (see the sketch after this list)
4. add nil checks into `GetWideFlag` function
5. some other minor code improvements for better readability.
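A minimal sketch of the fix in item 3 (the function name mirrors the one discussed, but this is not the exact kubectl code): the error from `os.Create` is checked before `Close` is deferred, so `f` cannot be nil when `Close` runs.

```go
// dumpReaderToFile copies the reader's contents to a new file at path.
func dumpReaderToFile(reader io.Reader, path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err // nothing deferred yet, so no nil f.Close()
	}
	defer f.Close() // safe: f is guaranteed non-nil here
	_, err = io.Copy(f, reader)
	return err
}
```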


Signed-off-by: bruceauyeung <ouyang.qinhua@zte.com.cn>
2017-02-08 09:49:10 -08:00
Minhan Xia
be9eca6b51 teach kubenet to use hostport_manager 2017-02-08 09:35:04 -08:00
Minhan Xia
bd05e1af2b add portmapping getter into network host 2017-02-08 09:35:04 -08:00
Minhan Xia
8e7219cbb4 add hostport manager 2017-02-08 09:26:52 -08:00
Minhan Xia
aabdaa984f refactor hostport logic 2017-02-08 09:26:52 -08:00
Andy Goldstein
e5fc73a4f1 Switch CSR controller to use shared informer 2017-02-08 11:01:34 -05:00
Kubernetes Submit Queue
4ed86f5d46 Merge pull request #41076 from gyliu513/port-forward
Automatic merge from submit-queue

Removed a space in portforward.go.

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-02-08 07:59:10 -08:00
David Ashpole
67cb2704c5 delete volumes before pod deletion 2017-02-08 07:34:49 -08:00
deads2k
390266f9b0 add deads2k to approvers for controllers 2017-02-08 10:16:38 -05:00
deads2k
a463540d47 remove duplication of RESTOptionsGetter for kube 2017-02-08 09:08:58 -05:00
deads2k
470cb9d2c9 streamline etcd options for aggregated api server 2017-02-08 09:07:47 -05:00
Michail Kargakis
38195704be Add more logs in the progress check path 2017-02-08 13:15:28 +01:00
Kubernetes Submit Queue
c6f30d3750 Merge pull request #41105 from Random-Liu/fix-kuberuntime-log
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)

Let ReadLogs return when there is a read error.

Fixes a bug in kuberuntime log.

Today, @yujuhong found that once we cancel `kubectl logs -f` with `Ctrl+C`, kuberuntime will keep complaining:
```
27939 kuberuntime_logs.go:192] Failed with err write tcp 10.240.0.4:10250->10.240.0.2:53913: write: broken pipe when writing log for log file "/var/log/pods/5bb76510-ed71-11e6-ad02-42010af00002/busybox_0.log": &{timestamp:{sec:63622095387 nsec:625309193 loc:0x484c440} stream:stdout log:[84 117 101 32 70 101 98 32 32 55 32 50 48 58 49 54 58 50 55 32 85 84 67 32 50 48 49 55 10]}
```

This is because kuberuntime keeps writing to the connection even though it is already closed. Actually, kuberuntime should return and report the error whenever there is a write error.
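A hedged sketch of the fix (variable names are illustrative): once a write to the log consumer fails, stop following the file and surface the error instead of logging it and continuing.

```go
// Inside the loop that copies decoded log lines to the caller's stream:
if _, err := stdout.Write(line); err != nil {
	return fmt.Errorf("failed to write log for log file %q: %v", path, err)
}
```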

Ref the [docker code](3a4ae1f661/pkg/stdcopy/stdcopy.go (L159-L167))

I'm still creating the cluster and verifying this fix. Will post the result here after that.

/cc @yujuhong @kubernetes/sig-node-bugs
2017-02-08 00:49:51 -08:00
Kubernetes Submit Queue
44980eb55c Merge pull request #40756 from vmware/e2eTestsUpdate
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)

e2e tests for vSphere cloud provider

**What this PR does / why we need it**:
This PR contains changes to existing e2e volume provisioning test cases so they can run on the vSphere cloud provider.

**Following is the summary of changes made in  existing e2e test cases**

**Added test/e2e/persistent_volumes-vsphere.go**
- This test verifies that deleting a PVC before the pod, or deleting the PV before the pod, does not cause pod deletion to fail on PD detach.

**test/e2e/volume_provisioning.go**
- This test creates a StorageClass and a claim with dynamic provisioning and alpha dynamic provisioning annotations, and verifies that the required volumes are created. The test also verifies that the created volume is readable and retains data.
- Added vsphere as supported cloud provider. Also set pluginName to "kubernetes.io/vsphere-volume" for vsphere cloud provider.

**test/e2e/volumes.go**
- Added test spec for vsphere
- This test creates the requested volume, mounts it on the pod, writes some random content at /opt/0/index.html, and verifies the file contents are correct to make sure we don't see content from previous test runs.
- This test also passes "1234" as the fsGroup when mounting the volume and verifies the fsGroup is set correctly.

**added  test/e2e/vsphere_utils.go** 
- Added function verifyVSphereDiskAttached - Verify the persistent disk attached to the node.
- Added function waitForVSphereDiskToDetach - Wait until the vSphere VMDK is detached from the given node, or time out after 5 minutes
- Added getVSpherePersistentVolumeSpec - create vsphere volume spec with given VMDK volume path, Reclaim Policy and labels
- Added getVSpherePersistentVolumeClaimSpec - get vsphere persistent volume claim spec with given selector labels
- createVSphereVolume - function to create vmdk volume

**Following is the summary of new e2e tests added with this PR**

**test/e2e/vsphere_volume_placement.go**
- contains volume placement tests using node label selector
-  Test Back-to-back pod creation/deletion with the same volume source on the same worker node
- Test Back-to-back pod creation/deletion with the same volume source attach/detach to different worker nodes

**test/e2e/pv_reclaimpolicy.go**
- contains tests for PV/PVC - Reclaiming Policy
- Test verifies that the persistent volume is deleted when the reclaimPolicy on the PV is set to delete and the associated claim is deleted
- Test also verifies that the persistent volume is retained when the reclaimPolicy on the PV is set to retain and the associated claim is deleted

**test/e2e/pvc_label_selector.go** 
- This is a functional test for the Selector-Label Volume Binding feature.
- Verify that the volume with the matching label is bound to the PVC.

Other changes
Updated  pkg/cloudprovider/providers/vsphere/BUILD  and test/e2e/BUILD 


**Which issue this PR fixes**
fixes #41087

**Special notes for your reviewer**:
Updated tests were executed on kubernetes v1.4.8 release on vsphere.
Test steps are provided in comments 


@kerneltime @BaluDontu
2017-02-08 00:49:47 -08:00
Kubernetes Submit Queue
79e333afe0 Merge pull request #40823 from liggitt/edit-tests
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)

Add unit tests for interactive edit command

Before updating edit to use unstructured objects and use generic JSON patching, we need better test coverage of the existing paths. This adds unit tests for the interactive edit scenarios. 

This PR adds:
* Simple framework for recording tests for interactive edit:
  * record.go is a tiny test server that records editor and API inputs as test expectations, and editor and API outputs as playback stubs
  * record_editor.sh is a shell script that sends the before/after of an interactive `vi` edit to the test server
  * record_testcase.sh (see README) starts up the test server, sets up a kubeconfig to proxy to the test server, sets EDITOR to invoke record_editor.sh, then opens a shell that lets you use `kubectl edit` normally
* Adds test cases for the following scenarios:
  - [x] no-op edit (open and close without making changes)
  - [x] try to edit a missing object
  - [x] edit single item successfully
  - [x] edit list of items successfully
  - [x] edit a single item, submit with an error, re-edit, submit fixed successfully
  - [x] edit list of items, submit some with errors and some good, re-edit errors, submit fixed
  - [x] edit trying to change immutable things like name/version/kind, ensure preconditions prevent submission
  - [x] edit in "create mode" successfully (`kubectl create -f ... --edit`)
  - [x] edit in "create mode" introducing errors (`kubectl create -f ... --edit`)
* Fixes a bug with edit printing errors to stdout (caught when testing stdout/stderr against expected output)

Follow-ups:
- [ ] clean up edit code path
- [ ] switch edit to use unstructured objects
- [ ] make edit fall back to jsonmerge for objects without registered go structs (TPR, unknown versions of pods, etc)
- [ ] add tests:
  - [ ] edit TPR
  - [ ] edit mix of TPR and known objects
  - [ ] edit known object with extra field from server
  - [ ] edit known object with new version from server
2017-02-08 00:49:46 -08:00
Kubernetes Submit Queue
1b7bdde40b Merge pull request #38796 from chentao1596/kubelet-cni-log
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)

kubelet/network-cni-plugin: modify the log's info

**What this PR does / why we need it**:

  Checking the startup logs of the kubelet, I can always find an error like this:
    "E1215 10:19:24.891724    2752 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d"
 
 It will appear whether or not I use a CNI network plugin.
After analyzing the code, I think it should be a warning log, because it does not trigger any action such as exit or abort, and is simply ignored when no valid plugins exist.
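A small sketch of the proposed change, assuming the kubelet's glog logger: log the missing-CNI-config case at warning level instead of error, since it is tolerated and simply ignored.

```go
// before: glog.Errorf("error updating cni config: %s", err)
glog.Warningf("Unable to update cni config: %s", err)
```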

 thank you!
2017-02-08 00:49:44 -08:00
shiywang
919cb19706 kubectl attach support for multiple types
hot fix

add unit test and statefulSet

update example

remove package

change to ResourceNames

remove some code

remove strings

add fake testing func for AttachablePodForObject

minor change

add test.obj nil check

update testfile

gofmt

update

add fallthough

revert back
2017-02-08 16:10:55 +08:00
Kubernetes Submit Queue
01c45f7de1 Merge pull request #41085 from deads2k/apiserver-07-move-runtime-config
Automatic merge from submit-queue (batch tested with PRs 41061, 40888, 40664, 41020, 41085)

move --runtime-config to kubeapiserver

`--runtime-config` is only useful if you have a lot of API groups in one server.  If you have a single API group in your server (the vast majority of aggregated API servers), then the flag is unneeded and relatively complex.  This moves it closer to the point of use.

@sttts
2017-02-07 23:06:43 -08:00