Merge remote-tracking branch 'upstream/master'
@@ -98,7 +98,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
}
```

[Download example](namespace-dev.json)
[Download example](namespace-dev.json?raw=true)
<!-- END MUNGE: EXAMPLE namespace-dev.json -->

Create the development namespace using kubectl.

@@ -50,6 +50,7 @@ Code conventions

- so pkg/controllers/autoscaler/foo.go should say `package autoscaler` not `package autoscalercontroller`.
- Unless there's a good reason, the `package foo` line should match the name of the directory in which the .go file exists.
- Importers can use a different name if they need to disambiguate.
- Locks should be called `lock` and should never be embedded (always `lock sync.Mutex`). When multiple locks are present, give each lock a distinct name following Go conventions - `stateLock`, `mapLock` etc.
- API conventions
  - [API changes](api_changes.md)
  - [API conventions](api-conventions.md)

@@ -96,7 +96,7 @@ git push -f origin myfeature

### Creating a pull request

1. Visit http://github.com/$YOUR_GITHUB_USERNAME/kubernetes
1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes
2. Click the "Compare and pull request" button next to your "myfeature" branch.
3. Check out the pull request [process](pull-requests.md) for more details

Binary file not shown. (Image updated: 88 KiB before, 112 KiB after.)
@@ -76,7 +76,7 @@ using [cluster/aws/config-default.sh](http://releases.k8s.io/HEAD/cluster/aws/co

This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed,
as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
tokens are written in `~/.kube/kubeconfig`, they will be necessary to use the CLI or the HTTP Basic Auth.
tokens are written in `~/.kube/config`, they will be necessary to use the CLI or the HTTP Basic Auth.

By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu.
You can override the variables defined in [config-default.sh](http://releases.k8s.io/HEAD/cluster/aws/config-default.sh) to change this behavior as follows:

@@ -43,7 +43,7 @@ Getting started on Microsoft Azure

## Prerequisites

** Azure Prerequisites**
**Azure Prerequisites**

1. You need an Azure account. Visit http://azure.microsoft.com/ to get started.
2. Install and configure the Azure cross-platform command-line interface. http://azure.microsoft.com/en-us/documentation/articles/xplat-cli/

@@ -62,7 +62,7 @@ fed-node = 192.168.121.65

**Prepare the hosts:**

* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=update-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.

```sh
@@ -73,7 +73,7 @@ spec:
      'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```

[Download example](../../examples/blog-logging/counter-pod.yaml)
[Download example](../../examples/blog-logging/counter-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->

This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let’s create the pod in the default
@@ -182,6 +182,7 @@ spec:
      mountPath: /varlog
    - name: containers
      mountPath: /var/lib/docker/containers
      readOnly: true
  terminationGracePeriodSeconds: 30
  volumes:
  - name: varlog
@@ -192,7 +193,7 @@ spec:
      path: /var/lib/docker/containers
```

[Download example](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
[Download example](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->

This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.

@@ -30,7 +30,7 @@ exists, it will output details for every resource that has a name prefixed with
Possible resource types include (case insensitive): pods (po), services (svc),
replicationcontrollers (rc), nodes (no), events (ev), limitranges (limits),
persistentvolumes (pv), persistentvolumeclaims (pvc), resourcequotas (quota),
namespaces (ns) or secrets.
namespaces (ns), serviceaccounts or secrets.


.SH OPTIONS

@@ -19,7 +19,7 @@ Display one or many resources.
Possible resource types include (case insensitive): pods (po), services (svc),
replicationcontrollers (rc), nodes (no), events (ev), componentstatuses (cs),
limitranges (limits), persistentvolumes (pv), persistentvolumeclaims (pvc),
resourcequotas (quota), namespaces (ns), endpoints (ep) or secrets.
resourcequotas (quota), namespaces (ns), endpoints (ep), serviceaccounts or secrets.

.PP
By specifying the output as 'template' and providing a Go template as the value

BIN docs/proposals/Kubemark_architecture.png (new file, 30 KiB; binary file not shown)

docs/proposals/api-group.md (new file, 152 lines)
@@ -0,0 +1,152 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/proposals/api-group.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Supporting multiple API groups

## Goal

1. Breaking the monolithic v1 API into modular groups and allowing groups to be enabled/disabled individually. This allows us to break the monolithic API server into smaller components in the future.

2. Supporting different versions in different groups. This allows different groups to evolve at different speeds.

3. Supporting identically named kinds to exist in different groups. This is useful when we experiment with new features of an API in the experimental group while supporting the stable API in the original group at the same time.

4. Exposing the API groups and versions supported by the server. This is required to develop a dynamic client.

5. Laying the basis for [API Plugin](../../docs/design/extending-api.md).

6. Keeping the user interaction easy. For example, we should allow users to omit the group name when using kubectl if there is no ambiguity.

## Bookkeeping for groups

1. No changes to TypeMeta:

   Currently many internal structures, such as RESTMapper and Scheme, are indexed and retrieved by APIVersion. For a fast implementation targeting the v1.1 deadline, we will concatenate group with version, in the form of "group/version", and use it where a version string is expected, so that much code can be reused. This implies we will not add a new field to TypeMeta; we will use TypeMeta.APIVersion to hold "group/version".

   For backward compatibility, v1 objects belong to the group with an empty name, so existing v1 config files will remain valid.

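A small sketch of the convention above; the helper name and layout here are illustrative only and not part of the proposed code. It splits a TypeMeta.APIVersion value into its group and version parts, with a bare `v1` falling into the legacy empty group:

```go
package main

import (
	"fmt"
	"strings"
)

// splitAPIVersion is a hypothetical helper illustrating the "group/version"
// convention: new-style values look like "experimental/v1alpha1", while a
// bare "v1" maps to the empty group for backward compatibility.
func splitAPIVersion(apiVersion string) (group, version string) {
	if i := strings.Index(apiVersion, "/"); i >= 0 {
		return apiVersion[:i], apiVersion[i+1:]
	}
	return "", apiVersion
}

func main() {
	for _, v := range []string{"v1", "experimental/v1alpha1"} {
		g, ver := splitAPIVersion(v)
		fmt.Printf("APIVersion=%q -> group=%q version=%q\n", v, g, ver)
	}
}
```
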
2. /pkg/conversion#Scheme:

   The key of /pkg/conversion#Scheme.versionMap for versioned types will be "group/version". For now, the internal version types of all groups will be registered to versionMap[""], as we don't have any identically named kinds in different groups yet. In the near future, internal version types will be registered to versionMap["group/"], and pkg/conversion#Scheme.InternalVersion will have type []string.

   We will need a mechanism to express if two kinds in different groups (e.g., compute/pods and experimental/pods) are convertible, and auto-generate the conversions if they are.

3. meta.RESTMapper:

   Each group will have its own RESTMapper (of type DefaultRESTMapper), and these mappers will be registered to pkg/api#RESTMapper (of type MultiRESTMapper).

   To support identically named kinds in different groups, we need to expand the input of RESTMapper.VersionAndKindForResource from (resource string) to (group, resource string). If group is not specified and there is ambiguity (i.e., the resource exists in multiple groups), an error should be returned to force the user to specify the group.

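The toy lookup below sketches the ambiguity rule from point 3, using a plain map in place of the real DefaultRESTMapper/MultiRESTMapper machinery; the function name, its return value, and the sample data are all illustrative assumptions (the actual method would also return the version and kind).

```go
package main

import "fmt"

// toyGroupsByResource stands in for the per-group RESTMappers: it records
// which groups serve a given resource name (sample data only).
var toyGroupsByResource = map[string][]string{
	"pods": {"", "experimental"}, // identically named kind in two groups
	"jobs": {"experimental"},
}

// groupForResource mimics the proposed (group, resource) lookup: an empty
// group is only acceptable when the resource is unambiguous.
func groupForResource(group, resource string) (string, error) {
	groups, ok := toyGroupsByResource[resource]
	if !ok {
		return "", fmt.Errorf("no group registered for resource %q", resource)
	}
	if group != "" {
		for _, g := range groups {
			if g == group {
				return g, nil
			}
		}
		return "", fmt.Errorf("resource %q is not in group %q", resource, group)
	}
	if len(groups) > 1 {
		return "", fmt.Errorf("resource %q is ambiguous; specify one of the groups %v", resource, groups)
	}
	return groups[0], nil
}

func main() {
	if _, err := groupForResource("", "pods"); err != nil {
		fmt.Println("expected error:", err)
	}
	g, _ := groupForResource("experimental", "pods")
	fmt.Printf("resolved to group %q\n", g)
}
```
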
## Server-side implementation

1. resource handlers' URL:

   We will force the URL to be in the form of prefix/group/version/...

   Prefix is used to differentiate API paths from other paths like /healthz. All groups will use the same prefix="apis", except when backward compatibility requires otherwise. No "/" is allowed in prefix, group, or version. Specifically,

   * for /api/v1, we set the prefix="api" (which is populated from cmd/kube-apiserver/app#APIServer.APIPrefix), group="", version="v1", so the URL remains /api/v1.

   * for new kube API groups, we will set the prefix="apis" (we will add a field in type APIServer to hold this prefix), group=GROUP_NAME, version=VERSION. For example, the URL of the experimental resources will be /apis/experimental/v1alpha1.

   * for OpenShift v1 API, because it's currently registered at /oapi/v1, to be backward compatible, OpenShift may set prefix="oapi", group="".

   * for other new third-party APIs, they should also use the prefix="apis" and choose the group and version. This can be done through the thirdparty API plugin mechanism in [13000](http://pr.k8s.io/13000). (A short sketch of the resulting paths follows this list.)

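A minimal sketch of the path layout spelled out above, assuming only the prefix/group/version convention; the resource name in the second example is a placeholder, not a claim about what the experimental group contains.

```go
package main

import (
	"fmt"
	"path"
)

// resourcePath illustrates the prefix/group/version layout: the legacy v1 API
// keeps prefix="api" with an empty group, so its URL stays /api/v1/..., while
// new groups live under prefix="apis".
func resourcePath(prefix, group, version, resource string) string {
	if group == "" {
		return "/" + path.Join(prefix, version, resource)
	}
	return "/" + path.Join(prefix, group, version, resource)
}

func main() {
	fmt.Println(resourcePath("api", "", "v1", "pods"))                             // /api/v1/pods
	fmt.Println(resourcePath("apis", "experimental", "v1alpha1", "someresources")) // placeholder resource name
}
```
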
2. supporting API discovery:

   * At /prefix (e.g., /apis), API server will return the supported groups and their versions using pkg/api/unversioned#APIVersions type, setting the Versions field to "group/version". This is backward compatible, because currently API server does return "v1" encoded in pkg/api/unversioned#APIVersions at /api. (We will also rename the JSON field name from `versions` to `apiVersions`, to be consistent with pkg/api#TypeMeta.APIVersion field)

   * At /prefix/group, API server will return all supported versions of the group. We will create a new type VersionList (name is open to discussion) in pkg/api/unversioned as the API.

   * At /prefix/group/version, API server will return all supported resources in this group, and whether each resource is namespaced. We will create a new type APIResourceList (name is open to discussion) in pkg/api/unversioned as the API, as sketched below.

   We will design how to handle deeper paths in other proposals.

   * At /swaggerapi/swagger-version/prefix/group/version, API server will return the Swagger spec of that group/version in `swagger-version` (e.g. we may support both Swagger v1.2 and v2.0).

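The structs below sketch what the discovery payloads described above might look like. VersionList and APIResourceList are explicitly named as open to discussion in this proposal, so the field layout here is an assumption, not a settled API; only the `apiVersions` field rename and the "group/version" values come from the text.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// APIVersions mirrors the described /prefix response: versions carried as
// "group/version" strings under the renamed `apiVersions` JSON field.
type APIVersions struct {
	Versions []string `json:"apiVersions"`
}

// VersionList is a guess at the /prefix/group response (name open to discussion).
type VersionList struct {
	Group    string   `json:"group"`
	Versions []string `json:"versions"`
}

// APIResourceList is a guess at the /prefix/group/version response: each
// resource plus whether it is namespaced.
type APIResource struct {
	Name       string `json:"name"`
	Namespaced bool   `json:"namespaced"`
}

type APIResourceList struct {
	GroupVersion string        `json:"groupVersion"`
	Resources    []APIResource `json:"resources"`
}

func main() {
	// Roughly what a client might see at /apis and /apis/<group>/<version>
	// under this sketch (placeholder resource name).
	root := APIVersions{Versions: []string{"experimental/v1alpha1"}}
	resources := APIResourceList{
		GroupVersion: "experimental/v1alpha1",
		Resources:    []APIResource{{Name: "someresources", Namespaced: true}},
	}
	for _, v := range []interface{}{root, resources} {
		out, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(out))
	}
}
```
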
3. handling common API objects:

   * top-level common API objects:

     To handle the top-level API objects that are used by all groups, we either have to register them to all schemes, or we can choose not to encode them to a version. We plan to take the latter approach and place such types in a new package called `unversioned`, because many of the common top-level objects, such as APIVersions, VersionList, and APIResourceList, which are used in the API discovery, and pkg/api#Status, are part of the protocol between client and server, and do not belong to the domain-specific parts of the API, which will evolve independently over time.

     Types in the unversioned package will not have the APIVersion field, but may retain the Kind field.

     For backward compatibility, when handling the Status, the server will encode it to v1 if the client expects the Status to be encoded in v1, otherwise the server will send the unversioned#Status. If an error occurs before the version can be determined, the server will send the unversioned#Status.

   * non-top-level common API objects:

     Assuming object o belonging to group X is used as a field in an object belonging to group Y, currently genconversion will generate the conversion functions for o in package Y. Hence, we don't need any special treatment for non-top-level common API objects.

     TypeMeta is an exception, because it is a common object that is used by objects in all groups but does not logically belong to any group. We plan to move it to the package `unversioned`.

## Client-side implementation

1. clients:

   Currently we have structured (pkg/client/unversioned#ExperimentalClient, pkg/client/unversioned#Client) and unstructured (pkg/kubectl/resource#Helper) clients. The structured clients are not scalable because each of them implements a specific interface, e.g., [here](../../pkg/client/unversioned/client.go#L32). Only the unstructured clients are scalable. We should either auto-generate the code for structured clients or migrate to use the unstructured clients as much as possible.

   We should also move the unstructured client to pkg/client/.

2. Spelling the URL:

   The URL is in the form of prefix/group/version/. The prefix is hard-coded in the client/unversioned.Config (see [here](../../pkg/client/unversioned/experimental.go#L101)). The client should be able to figure out `group` and `version` using the RESTMapper. For a third-party client which does not have access to the RESTMapper, it should discover the mapping of `group`, `version` and `kind` by querying the server as described in point 2 of #server-side-implementation.

3. kubectl:

   kubectl should accept arguments like `group/resource` and `group/resource/name` (see the argument-parsing sketch below). Nevertheless, the user can omit the `group`; kubectl shall then rely on RESTMapper.VersionAndKindForResource() to figure out the default group/version of the resource. For example, for resources (like `node`) that exist in both the k8s v1 API and a k8s modularized API (like `infra/v2`), we should set kubectl default to use one of them. If there is no default group, kubectl should return an error for the ambiguity.

   When kubectl is used with a single resource type, the --api-version and --output-version flags of kubectl should accept values in the form of `group/version`, and they should work as they do today. For multi-resource operations, we will disable these two flags initially.

   Currently, by setting pkg/client/unversioned/clientcmd/api/v1#Config.NamedCluster[x].Cluster.APIVersion ([here](../../pkg/client/unversioned/clientcmd/api/v1/types.go#L58)), the user can configure the default apiVersion used by kubectl to talk to the server. It does not make sense to set a global version used by kubectl when there are multiple groups, so we plan to deprecate this field. We may extend the version negotiation function to negotiate the preferred version of each group. Details will be in another proposal.

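The toy parser below sketches the kubectl argument shapes mentioned in point 3; it is illustrative only. Note that a two-segment argument could also be today's `resource/name` form, which the real implementation would have to disambiguate via the RESTMapper.

```go
package main

import (
	"fmt"
	"strings"
)

// parseResourceArg is a toy version of the argument handling described above:
// arguments may look like "resource", "group/resource" or
// "group/resource/name". When the group is omitted, the RESTMapper would be
// asked for a default, and ambiguity becomes an error.
func parseResourceArg(arg string) (group, resource, name string, err error) {
	parts := strings.Split(arg, "/")
	switch len(parts) {
	case 1:
		return "", parts[0], "", nil // group left for the RESTMapper to default
	case 2:
		return parts[0], parts[1], "", nil // treated as group/resource in this sketch
	case 3:
		return parts[0], parts[1], parts[2], nil
	default:
		return "", "", "", fmt.Errorf("unrecognized argument %q", arg)
	}
}

func main() {
	for _, arg := range []string{"pods", "experimental/someresources", "experimental/someresources/foo"} {
		g, r, n, _ := parseResourceArg(arg)
		fmt.Printf("%-35s group=%q resource=%q name=%q\n", arg, g, r, n)
	}
}
```
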
## OpenShift integration

OpenShift can take a similar approach to break up its monolithic v1 API: keeping the v1 objects where they are, and gradually adding groups.

For the v1 objects in OpenShift, they should keep doing what they do now: they should remain registered to the Scheme.versionMap["v1"] scheme, and they should keep being added to originMapper.

For new OpenShift groups, they should do the same as native Kubernetes groups would do: each group should register to Scheme.versionMap["group/version"], and each should have a separate RESTMapper and register it with the MultiRESTMapper.

To expose a list of the supported OpenShift groups to clients, OpenShift just has to call pkg/cmd/server/origin#initAPIVersionRoute() as it does now, passing in the supported "group/versions" instead of "versions".


## Future work

1. Dependencies between groups: we need an interface to register the dependencies between groups. It is not our priority now as the use cases are not clear yet.

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

docs/proposals/kubemark.md (new file, 190 lines)
@@ -0,0 +1,190 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/proposals/kubemark.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubemark proposal

## Goal of this document

This document describes a design of Kubemark - a system that allows performance testing of a Kubernetes cluster. It describes the
assumptions, high-level design and discusses possible solutions for lower-level problems. It is supposed to be a starting point for more
detailed discussion.

## Current state and objective

Currently performance testing happens on ‘live’ clusters of up to 100 Nodes. It takes quite a while to start such a cluster or to push
updates to all Nodes, and it uses quite a lot of resources. At this scale the amount of wasted time and used resources is still acceptable.
In the next quarter or two we’re targeting a 1000 Node cluster, which will push it way beyond the ‘acceptable’ level. Additionally we want to
enable people without many resources to run scalability tests on bigger clusters than they can afford at a given time. Having the ability to
cheaply run scalability tests will enable us to run some set of them on "normal" test clusters, which in turn would mean the ability to run
them on every PR.

This means that we need a system that will allow for realistic performance testing on a (much) smaller number of “real” machines. The first
assumption we make is that Nodes are independent, i.e. the number of existing Nodes does not impact the performance of a single Node. This is not
entirely true, as the number of Nodes can increase the latency of various components on the Master machine, which in turn may increase the latency of Node
operations, but we’re not interested in measuring this effect here. Instead we want to measure how the number of Nodes and the load imposed by
Node daemons affect the performance of Master components.

## Kubemark architecture overview

The high-level idea behind Kubemark is to write a library that allows running artificial "Hollow" Nodes that will be able to simulate the
behavior of a real Kubelet and KubeProxy in a single, lightweight binary. Hollow components will need to correctly respond to Controllers
(via API server), and preferably, in the fullness of time, be able to ‘replay’ previously recorded real traffic (this is out of scope for
the initial version). To teach Hollow components to replay recorded traffic they will need to store data specifying when a given Pod/Container
should die (e.g. observed lifetime). Such data can be extracted e.g. from etcd Raft logs, or it can be reconstructed from Events. In the
initial version we only want them to be able to fool Master components and put some configurable (in what way TBD) load on them.

When we have the Hollow Node ready, we’ll be able to test performance of Master components by creating a real Master Node, with API server,
Controllers, etcd and whatnot, and create a number of Hollow Nodes that will register to the running Master.

To make Kubemark easier to maintain as the system evolves, Hollow components will reuse real "production" code for Kubelet and KubeProxy, but
will mock all the backends with no-op or very simple mocks. We believe that this approach is better in the long run than writing a special
"performance-test-aimed" separate version of them. This may take more time to create an initial version, but we think the maintenance cost will
be noticeably smaller.

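A minimal sketch of the "production code with mocked backends" idea; every type and function here is invented for illustration (the real Kubelet backends and their interfaces are considerably richer), it only shows the shape of injecting a no-op implementation behind an interface.

```go
package main

import "fmt"

// ContainerRuntime is a hypothetical, trimmed-down stand-in for the kind of
// backend interface a real Kubelet talks to.
type ContainerRuntime interface {
	RunContainer(name string) error
}

// noopRuntime is the "no-op or very simple mock" flavour of backend: it
// accepts every request and does nothing.
type noopRuntime struct{}

func (noopRuntime) RunContainer(name string) error {
	fmt.Println("pretending to run container", name)
	return nil
}

// hollowKubelet stands in for a real Kubelet control loop wired to the no-op
// backend instead of Docker.
type hollowKubelet struct {
	runtime ContainerRuntime
}

func (k *hollowKubelet) syncPod(pod string) error {
	// A real Kubelet would do much more; the hollow one only has to look
	// plausible to the master.
	return k.runtime.RunContainer(pod)
}

func main() {
	k := &hollowKubelet{runtime: noopRuntime{}}
	_ = k.syncPod("nginx-1234")
}
```
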
### Option 1

For the initial version we will teach Master components to use the port number to identify Kubelet/KubeProxy. This will allow running those
components on non-default ports, and at the same time will allow running multiple Hollow Nodes on a single machine. During setup we will
generate credentials for cluster communication and pass them to HollowKubelet/HollowProxy to use. Master will treat all HollowNodes as
normal ones.


*Kubemark architecture diagram for option 1*

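A toy illustration of the port-per-hollow-node idea from Option 1 above; the ports, handler and helper are placeholders rather than part of the design, and a real HollowKubelet would of course do far more than serve a health check.

```go
package main

import (
	"fmt"
	"net/http"
)

// startToyHollowKubelet stands in for a HollowKubelet: it only serves a
// trivial /healthz endpoint on its own port, illustrating how several hollow
// nodes could share one machine by differing only in port number.
func startToyHollowKubelet(port int) {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	go http.ListenAndServe(fmt.Sprintf(":%d", port), mux)
}

func main() {
	// Ports are arbitrary for the sketch; the real setup would also hand each
	// hollow node generated credentials for talking to the master.
	for i := 0; i < 3; i++ {
		startToyHollowKubelet(10250 + i)
	}
	select {} // keep the toy servers alive
}
```
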
### Option 2

As a second (equivalent) option we will run Kubemark on top of a 'real' Kubernetes cluster, where both Master and Hollow Nodes will be Pods.
In this option we'll be able to use Kubernetes mechanisms to streamline setup, e.g. by using Kubernetes networking to ensure unique IPs for
Hollow Nodes, or using Secrets to distribute Kubelet credentials. The downside of this configuration is that it's likely that some noise
will appear in Kubemark results, from either CPU/Memory pressure from other things running on Nodes (e.g. FluentD, or Kubelet) or from running
the cluster over an overlay network. We believe that it'll be possible to turn off cluster monitoring for Kubemark runs, so that the impact
of real Node daemons will be minimized, but we don't know what the impact of using a higher-level networking stack will be. Running a
comparison will be an interesting test in itself.

### Discussion

Before taking a closer look at the steps necessary to set up a minimal Hollow cluster it's hard to tell which approach will be simpler. It's
quite possible that the initial version will end up as a hybrid between running the Hollow cluster directly on top of VMs and running the
Hollow cluster on top of a Kubernetes cluster that is running on top of VMs, e.g. running Nodes as Pods in a Kubernetes cluster and the Master
directly on top of a VM.

## Things to simulate

In real Kubernetes on a single Node we run two daemons that communicate with the Master in some way: Kubelet and KubeProxy.

### KubeProxy

As a replacement for KubeProxy we'll use HollowProxy, which will be a real KubeProxy with injected no-op mocks everywhere it makes sense.

### Kubelet

As a replacement for Kubelet we'll use HollowKubelet, which will be a real Kubelet with injected no-op or simple mocks everywhere it makes
sense.

Kubelet also exposes a cadvisor endpoint which is scraped by Heapster and a healthz endpoint read by supervisord, and we have FluentD running as a
Pod on each Node that exports logs to Elasticsearch (or Google Cloud Logging). Both Heapster and Elasticsearch are running in Pods in the
cluster, so they do not add any load on the Master components by themselves. There can be other systems that scrape Heapster through the proxy running
on the Master, which adds additional load, but they're not part of the default setup, so in the first version we won't simulate this behavior.

In the first version we’ll assume that all started Pods will run indefinitely if not explicitly deleted. In the future we can add a model
of short-running batch jobs, but in the initial version we’ll assume only serving-like Pods.

### Heapster

In addition to system components we run Heapster as a part of the cluster monitoring setup. Heapster currently watches Events, Pods and Nodes
through the API server. In the test setup we can use the real Heapster for watching the API server, with the piece that scrapes cAdvisor
data from Kubelets mocked out.

### Elasticsearch and Fluentd

Similarly to Heapster, Elasticsearch runs outside the Master machine but generates some traffic on it. The Fluentd “daemon” running on the Master
periodically sends the Docker logs it gathered to the Elasticsearch running on one of the Nodes. In the initial version we omit Elasticsearch,
as it produces only a constant small load on the Master Node that does not change with the size of the cluster.

## Necessary work

There are three more or less independent things that need to be worked on:
- HollowNode implementation: creating a library/binary that will be able to listen to Watches and respond in a correct fashion with Status
updates. This also involves creation of a CloudProvider that can produce such Hollow Nodes, or making sure that HollowNodes can correctly
self-register in a no-provider Master.
- Kubemark setup: figuring out the networking model and the number of Hollow Nodes that will be allowed to run on a single “machine”, writing
setup/run/teardown scripts (in [option 1](#option-1)), or figuring out how to run Master and Hollow Nodes on top of Kubernetes
(in [option 2](#option-2)).
- Creating a Player component that will send requests to the API server, putting a load on the cluster. This involves creating a way to
specify the desired workload. This task is
very well isolated from the rest, as it is about sending requests to the real API server. Because of that we can discuss requirements
separately.

## Concerns

Network performance most likely won't be a problem for the initial version if running directly on VMs rather than on top of a Kubernetes
cluster, as Kubemark will be running on the standard networking stack (no cloud-provider software routes or overlay network is needed, as we
don't need custom routing between Pods). Similarly we don't think that running Kubemark on Kubernetes virtualized cluster networking will
cause a noticeable performance impact, but it requires testing.

On the other hand, when adding additional features it may turn out that we need to simulate the Kubernetes Pod network. In such a case, when running
'pure' Kubemark we may try one of the following:
- running an overlay network like Flannel or OVS instead of using cloud provider routes,
- writing a simple network multiplexer to multiplex communications from the Hollow Kubelets/KubeProxies on the machine.

In the case of Kubemark on Kubernetes it may turn out that we run into a problem with adding yet another layer of network virtualization, but we
don't need to solve this problem now.

## Work plan

- Teach/make sure that Master can talk to multiple Kubelets on the same machine [option 1](#option-1):
  - make sure that Master can talk to a Kubelet on a non-default port,
  - make sure that Master can talk to all Kubelets on different ports,
- Write the HollowNode library:
  - new HollowProxy,
  - new HollowKubelet,
  - new HollowNode combining the two,
  - make sure that Master can talk to two HollowKubelets running on the same machine,
- Make sure that we can run the Hollow cluster on top of Kubernetes [option 2](#option-2),
- Write a Player that will automatically put some predefined load on Master <- this is the moment when it’s possible to play with it and it is useful by itself for
scalability tests. Alternatively we can just use the current density/load tests,
- Benchmark our machines - see how many Watch clients we can have before everything explodes,
- See how many HollowNodes we can run on a single machine by attaching them to the real master <- this is the moment it starts to be useful,
- Update kube-up/kube-down scripts to enable creating “HollowClusters”/write new scripts/something, integrate HollowCluster with Elasticsearch/Heapster equivalents,
- Allow passing custom configuration to the Player.

## Future work

In the future we want to add the following capabilities to the Kubemark system:
- replaying real traffic reconstructed from the recorded Events stream,
- simulating scraping of things running on Nodes through the Master proxy.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

@@ -108,7 +108,7 @@ spec:
  restartPolicy: Never
```

[Download example](downward-api/dapi-pod.yaml)
[Download example](downward-api/dapi-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->

@@ -178,7 +178,7 @@ spec:
        fieldPath: metadata.annotations
```

[Download example](downward-api/volume/dapi-volume.yaml)
[Download example](downward-api/volume/dapi-volume.yaml?raw=true)
<!-- END MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->

Some more thorough examples:

@@ -100,7 +100,7 @@ kubectl
* [kubectl stop](kubectl_stop.md) - Deprecated: Gracefully shut down a resource by name or filename.
* [kubectl version](kubectl_version.md) - Print the client and server version information.

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.476725335 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.165115265 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -119,7 +119,7 @@ $ kubectl annotate pods foo description-

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-02 06:24:17.720533039 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.16095949 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -76,7 +76,7 @@ kubectl api-versions

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.476265479 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.164255617 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -98,7 +98,7 @@ $ kubectl attach 123456-7890 -c ruby-container -i -t

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.471309711 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.155651469 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -76,7 +76,7 @@ kubectl cluster-info

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.476078738 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.163962347 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -94,7 +94,7 @@ kubectl config SUBCOMMAND
* [kubectl config use-context](kubectl_config_use-context.md) - Sets the current-context in a kubeconfig file
* [kubectl config view](kubectl_config_view.md) - displays Merged kubeconfig settings or a specified kubeconfig file.

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.475888484 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.163685546 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -96,7 +96,7 @@ $ kubectl config set-cluster e2e --insecure-skip-tls-verify=true

* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.474677631 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.161700827 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -89,7 +89,7 @@ $ kubectl config set-context gce --user=cluster-admin

* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.475093212 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.162402642 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -109,7 +109,7 @@ $ kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admi

* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.474882527 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.162045132 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -78,7 +78,7 @@ kubectl config set PROPERTY_NAME PROPERTY_VALUE

* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.475281504 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.162716308 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -77,7 +77,7 @@ kubectl config unset PROPERTY_NAME

* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.475473658 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.163015642 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -76,7 +76,7 @@ kubectl config use-context CONTEXT_NAME

* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.475674294 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.163336177 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -103,7 +103,7 @@ $ kubectl config view -o template --template='{{range .users}}{{ if eq .name "e2

* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-08-29 13:01:26.775349034 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.161359997 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -96,7 +96,7 @@ $ cat pod.json | kubectl create -f -

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.469492371 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.152429973 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -119,7 +119,7 @@ $ kubectl delete pods --all

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.470182255 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.153952299 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -51,7 +51,7 @@ exists, it will output details for every resource that has a name prefixed with
Possible resource types include (case insensitive): pods (po), services (svc),
replicationcontrollers (rc), nodes (no), events (ev), limitranges (limits),
persistentvolumes (pv), persistentvolumeclaims (pvc), resourcequotas (quota),
namespaces (ns) or secrets.
namespaces (ns), serviceaccounts or secrets.

```
kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME)
@@ -119,7 +119,7 @@ $ kubectl describe pods frontend

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.469291072 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.152057668 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -99,7 +99,7 @@ $ kubectl exec 123456-7890 -c ruby-container -i -t -- bash -il

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.471517301 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.156052759 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -121,7 +121,7 @@ $ kubectl expose rc streamer --port=4100 --protocol=udp --name=video-stream

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 09:05:42.928698484 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.159044239 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -43,7 +43,7 @@ Display one or many resources.
Possible resource types include (case insensitive): pods (po), services (svc),
replicationcontrollers (rc), nodes (no), events (ev), componentstatuses (cs),
limitranges (limits), persistentvolumes (pv), persistentvolumeclaims (pvc),
resourcequotas (quota), namespaces (ns), endpoints (ep) or secrets.
resourcequotas (quota), namespaces (ns), endpoints (ep), serviceaccounts or secrets.

By specifying the output as 'template' and providing a Go template as the value
of the --template flag, you can filter the attributes of the fetched resource(s).
@@ -132,7 +132,7 @@ $ kubectl get rc/web service/frontend pods/web-pod-13je7

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-08-29 13:01:26.761418557 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.151532564 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -120,7 +120,7 @@ $ kubectl label pods foo bar-

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-08-29 13:01:26.773776248 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.160594172 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -98,7 +98,7 @@ $ kubectl logs -f 123456-7890 ruby-container

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.470591683 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.154570214 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -79,7 +79,7 @@ kubectl namespace [namespace]

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.470380367 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.154262869 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -102,7 +102,7 @@ kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.469927571 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.153568922 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -99,7 +99,7 @@ $ kubectl port-forward mypod 0:5000

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.471732563 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.156433376 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -121,7 +121,7 @@ $ kubectl proxy --api-prefix=/k8s-api

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.472010935 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.156927042 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -110,7 +110,7 @@ kubectl replace --force -f ./pod.json

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.469727962 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.153166598 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -118,7 +118,7 @@ $ kubectl rolling-update frontend --image=image:v2

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-08-29 13:01:26.768458355 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.154895732 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -133,7 +133,7 @@ $ kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-07 06:40:12.142439604 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.15783835 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -108,7 +108,7 @@ $ kubectl scale --replicas=5 rc/foo rc/bar

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.471116954 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.155304524 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -110,7 +110,7 @@ $ kubectl stop -f path/to/resources

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.47250815 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.158360787 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -82,7 +82,7 @@ kubectl version

* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-09-03 21:06:22.476464324 +0000 UTC
###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.164581808 +0000 UTC

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()

@@ -58,7 +58,7 @@ spec:
      'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```

[Download example](../../examples/blog-logging/counter-pod.yaml)
[Download example](../../examples/blog-logging/counter-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->

we can run the pod:

@@ -219,7 +219,7 @@ appropriate backend without the clients knowing anything about Kubernetes or



By default, the choice of backend is random. Client-IP based session affinity
By default, the choice of backend is round robin. Client-IP based session affinity
can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
default is `"None"`).

@@ -64,7 +64,7 @@ spec:
    - containerPort: 80
```

[Download example](pod.yaml)
[Download example](pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod.yaml -->

You can see your cluster's pods:

@@ -116,7 +116,7 @@ spec:
        - containerPort: 80
```

[Download example](replication.yaml)
[Download example](replication.yaml?raw=true)
<!-- END MUNGE: EXAMPLE replication.yaml -->

To delete the replication controller (and the pods it created):

@@ -165,7 +165,7 @@ spec:
      emptyDir: {}
```

[Download example](pod-redis.yaml)
[Download example](pod-redis.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-redis.yaml -->

Notes:

@@ -86,7 +86,7 @@ spec:
    - containerPort: 80
```

[Download example](pod-nginx-with-label.yaml)
[Download example](pod-nginx-with-label.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-nginx-with-label.yaml -->

Create the labeled pod ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):

@@ -142,7 +142,7 @@ spec:
        - containerPort: 80
```

[Download example](replication-controller.yaml)
[Download example](replication-controller.yaml?raw=true)
<!-- END MUNGE: EXAMPLE replication-controller.yaml -->

#### Replication Controller Management

@@ -195,7 +195,7 @@ spec:
    app: nginx
```

[Download example](service.yaml)
[Download example](service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE service.yaml -->

#### Service Management

@@ -311,7 +311,7 @@ spec:
    - containerPort: 80
```

[Download example](pod-with-http-healthcheck.yaml)
[Download example](pod-with-http-healthcheck.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-with-http-healthcheck.yaml -->

For more information about health checking, see [Container Probes](../pod-states.md#container-probes).