Merge remote-tracking branch 'upstream/master'
@@ -222,7 +222,7 @@ you are doing [manual node administration](#manual-node-administration), then yo
 capacity when adding a node.
 
 The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
-checks that the sum of the limits of containers on the node is no greater than than the node capacity. It
+checks that the sum of the limits of containers on the node is no greater than the node capacity. It
 includes all containers started by kubelet, but not containers started directly by docker, nor
 processes not in containers.
 
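Since the scheduler compares the sum of container limits against the node's capacity, you can sanity-check that headroom yourself. A quick sketch, assuming a running cluster (the node name is illustrative, and the exact `Capacity` output format varies by version):

```console
$ kubectl get nodes
$ kubectl describe node my-node-1 | grep -A 3 Capacity
```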
Binary file not shown. (Image changed: 70 KiB before, 94 KiB after.)
@@ -60,7 +60,7 @@ Instead of a single Timestamp, each event object [contains](http://releases.k8s.
 
 Each binary that generates events:
  * Maintains a historical record of previously generated events:
-    * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [`pkg/client/unversioned/record/events_cache.go`](../../pkg/client/unversioned/record/events_cache.go).
+    * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [`pkg/client/record/events_cache.go`](../../pkg/client/record/events_cache.go).
     * The key in the cache is generated from the event object minus timestamps/count/transient fields, specifically the following events fields are used to construct a unique key for an event:
       * `event.Source.Component`
       * `event.Source.Host`
@@ -38,7 +38,7 @@ with a number of existing API types and with the [API
 conventions](api-conventions.md). If creating a new API
 type/resource, we also recommend that you first send a PR containing
 just a proposal for the new API types, and that you initially target
-the experimental API (pkg/expapi).
+the experimental API (pkg/apis/experimental).
 
 The Kubernetes API has two major components - the internal structures and
 the versioned APIs. The versioned APIs are intended to be stable, while the
@@ -399,10 +399,10 @@ The conversion code resides with each versioned API. There are two files:
   functions
 - `pkg/api/<version>/conversion_generated.go` containing auto-generated
   conversion functions
-- `pkg/expapi/<version>/conversion.go` containing manually written conversion
-  functions
-- `pkg/expapi/<version>/conversion_generated.go` containing auto-generated
-  conversion functions
+- `pkg/apis/experimental/<version>/conversion.go` containing manually written
+  conversion functions
+- `pkg/apis/experimental/<version>/conversion_generated.go` containing
+  auto-generated conversion functions
 
 Since auto-generated conversion functions are using manually written ones,
 those manually written should be named with a defined convention, i.e. a function
@@ -437,7 +437,7 @@ of your versioned api objects.
 
 The deep copy code resides with each versioned API:
 - `pkg/api/<version>/deep_copy_generated.go` containing auto-generated copy functions
-- `pkg/expapi/<version>/deep_copy_generated.go` containing auto-generated copy functions
+- `pkg/apis/experimental/<version>/deep_copy_generated.go` containing auto-generated copy functions
 
 To regenerate them:
 - run
@@ -446,6 +446,23 @@ To regenerate them:
 hack/update-generated-deep-copies.sh
 ```
 
+## Making a new API Group
+
+This section is under construction, as we make the tooling completely generic.
+
+At the moment, you'll have to make a new directory under pkg/apis/; copy the
+directory structure from pkg/apis/experimental. Add the new group/version to all
+of the hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh files
+in the appropriate places--it should just require adding your new group/version
+to a bash array (see the sketch below). You will also need to make sure your new types are imported by
+the generation commands (cmd/gendeepcopy/ & cmd/genconversion). These
+instructions may not be complete and will be updated as we gain experience.
+
+Adding API groups outside of the pkg/apis/ directory is not currently supported,
+but is clearly desirable. The deep copy & conversion generators need to work by
+parsing go files instead of by reflection; then they will be easy to point at
+arbitrary directories: see issue [#13775](http://issue.k8s.io/13775).
+
 ## Update the fuzzer
 
 Part of our testing regimen for APIs is to "fuzz" (fill with random values) API
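The new section above says registering a group/version "should just require adding your new group/version to a bash array" in the hack scripts. A hypothetical sketch of that edit; the array name here is invented for illustration and differs per script:

```sh
# Hypothetical excerpt from one of the hack/{verify,update}-generated-*.sh
# scripts; check each script for its actual array name.
GROUP_VERSIONS=(
  "api/v1"
  "apis/experimental/v1alpha1"
  "apis/mynewgroup/v1alpha1"  # newly added group/version
)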
@@ -108,7 +108,7 @@ Once the playbook as finished, it will print out the IP of the Kubernetes master
 
 SSH to it using the key that was created and using the _core_ user and you can list the machines in your cluster:
 
-    $ ssh -i ~/.ssh/id_rsa_k8s core@<maste IP>
+    $ ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
     $ fleetctl list-machines
     MACHINE           IP              METADATA
     a017c422...       <node #1 IP>    role=node
@@ -42,7 +42,7 @@ Running Kubernetes locally via Docker
 - [Step Three: Run the service proxy](#step-three-run-the-service-proxy)
 - [Test it out](#test-it-out)
 - [Run an application](#run-an-application)
-- [Expose it as a service:](#expose-it-as-a-service)
+- [Expose it as a service](#expose-it-as-a-service)
 - [A note on turning down your cluster](#a-note-on-turning-down-your-cluster)
 
 ### Overview
@@ -128,7 +128,7 @@ On OS/X you will need to set up port forwarding via ssh:
 boot2docker ssh -L8080:localhost:8080
 ```
 
-List the nodes in your cluster by running::
+List the nodes in your cluster by running:
 
 ```sh
 kubectl get nodes
@@ -149,7 +149,7 @@ If you are running different Kubernetes clusters, you may need to specify `-s ht
 kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
 ```
 
-now run `docker ps` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
+Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
 
 ### Expose it as a service
 
@@ -164,7 +164,7 @@ NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
 nginx        10.0.93.211   <none>        80/TCP    run=nginx   1h
 ```
 
-If `CLUSTER_IP` is blank run the following command to obtain it. Know issue #10836
+If `CLUSTER_IP` is blank, run the following command to obtain it. Known issue [#10836](https://github.com/kubernetes/kubernetes/issues/10836)
 
 ```sh
 kubectl get svc nginx
@@ -123,7 +123,7 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 KUBE_API_ARGS=""
 ```
 
-* Edit /etc/etcd/etcd.conf,let the etcd to listen all the ip instead of 127.0.0.1, if not, you will get the error like "connection refused"
+* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of only 127.0.0.1; otherwise you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.4.6, which used 4001 and 7001).
 
 ```sh
 ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
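Per the note above about etcd 2.0's port change on Fedora 22, the corresponding etcd.conf entries would presumably look like this. A sketch; keep port 4001 only for the older etcd 0.4.x:

```sh
# etcd 2.0 defaults to client port 2379 (and peer port 2380);
# binding to 0.0.0.0 makes it reachable from other nodes.
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
```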
@@ -132,6 +132,24 @@ However the gcloud bundled kubectl version may be older than the one downloaded
 get.k8s.io install script. We recommend you use the downloaded binary to avoid
 potential issues with client/server version skew.
 
+#### Enabling bash completion of the Kubernetes command line tools
+
+You may find it useful to enable `kubectl` bash completion:
+
+```
+$ source ./contrib/completions/bash/kubectl
+```
+
+**Note**: This will last for the duration of your bash session. If you want to make this permanent, you need to add this line to your bash profile.
+
+Alternatively, on most Linux distributions you can also move the completions file to your bash_completion.d directory like this:
+
+```
+$ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
+```
+
+but then you have to update it when you update kubectl.
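To make the completion permanent, as the note above suggests, the line added to your bash profile might look like this (a sketch; the checkout path is illustrative):

```sh
# ~/.bashrc or ~/.bash_profile -- adjust to where your Kubernetes checkout lives
source "$HOME/kubernetes/contrib/completions/bash/kubectl"
```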
+
 ### Getting started with your cluster
 
 #### Inspect your cluster
 
@@ -38,36 +38,31 @@ We still have [a bunch of work](http://issue.k8s.io/8262) to do to make the expe
 
 ### **Prerequisite**
 
-- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed on your machine and should be enabled. The minimum version required at this moment (2015/05/28) is [215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
+- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed on the machine and should be enabled. The minimum version required at this moment (2015/09/01) is 219.
 *(Note that systemd is not required by rkt itself, we are using it here to monitor and manage the pods launched by kubelet.)*
 
 - Install the latest rkt release according to the instructions [here](https://github.com/coreos/rkt).
-The minimum version required for now is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).
-
-- Make sure the `rkt metadata service` is running because it is necessary for running pod in private network mode.
-More details about the networking of rkt can be found in the [documentation](https://github.com/coreos/rkt/blob/master/Documentation/networking.md).
-
-To start the `rkt metadata service`, you can simply run:
-
-```console
-$ sudo rkt metadata-service
-```
-
-If you want the service to be running as a systemd service, then:
-
-```console
-$ sudo systemd-run rkt metadata-service
-```
-
-Alternatively, you can use the [rkt-metadata.service](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.service) and [rkt-metadata.socket](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.socket) to start the service.
+The minimum version required for now is [v0.8.0](https://github.com/coreos/rkt/releases/tag/v0.8.0).
+
+- Note that for rkt versions later than v0.7.0, the `metadata service` is not required for running pods in private networks, so rkt pods will no longer register with the metadata service by default.
 
 ### Local cluster
 
-To use rkt as the container runtime, you just need to set the environment variable `CONTAINER_RUNTIME`:
+To use rkt as the container runtime, we need to supply `--container-runtime=rkt` and `--rkt-path=$PATH_TO_RKT_BINARY` to kubelet. Additionally, we can provide the `--rkt-stage1-image` flag
+as well to select which [stage1 image](https://github.com/coreos/rkt/blob/master/Documentation/running-lkvm-stage1.md) we want to use.
+
+If you are using the [hack/local-up-cluster.sh](../../../hack/local-up-cluster.sh) script to launch the local cluster, then you can edit the environment variables `CONTAINER_RUNTIME`, `RKT_PATH` and `RKT_STAGE1_IMAGE` to
+set these flags:
 
 ```console
 $ export CONTAINER_RUNTIME=rkt
+$ export RKT_PATH=$PATH_TO_RKT_BINARY
+$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
 ```
 
 Then we can launch the local cluster using the script:
 
 ```console
 $ hack/local-up-cluster.sh
 ```
 
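Outside of local-up-cluster.sh, the same configuration maps onto a direct kubelet invocation. A sketch with illustrative paths; other required kubelet flags are omitted:

```console
$ sudo kubelet --container-runtime=rkt \
    --rkt-path=/usr/local/bin/rkt \
    --rkt-stage1-image=/usr/local/rkt/stage1.aci ...
```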
@@ -85,7 +80,7 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
 You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
 
 ```console
-$ export KUBE_RKT_VERSION=0.5.6
+$ export KUBE_RKT_VERSION=0.8.0
 ```
 
 Then you can launch the cluster by:
@@ -109,7 +104,7 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
 You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
 
 ```console
-$ export KUBE_RKT_VERSION=0.5.6
+$ export KUBE_RKT_VERSION=0.8.0
 ```
 
 You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
@@ -134,6 +129,46 @@ See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try ou
 For more complete applications, please look in the [examples directory](../../../examples/).
 
+
+### Debugging
+
+Here are several tips for when you run into issues.
+
+##### Check logs
+
+By default, the log verbosity level is 2. In order to see more logs related to rkt, we can set the verbosity level to 4.
+For a local cluster, we can set the environment variable: `LOG_LEVEL=4`.
+If the cluster is using salt, we can edit the [logging.sls](../../../cluster/saltbase/pillar/logging.sls) in the saltbase.
+
+##### Check rkt pod status
+
+To check the pods' status, we can use rkt commands such as `rkt list`, `rkt status`, `rkt image list`, etc.
+More information about the rkt command line can be found [here](https://github.com/coreos/rkt/blob/master/Documentation/commands.md).
+
+##### Check journal logs
+
+As we use systemd to launch rkt pods (by creating service files which will run `rkt run-prepared`), we can check the pods' logs
+using `journalctl`:
+
+- Check the running state of the systemd service:
+
+```console
+$ sudo journalctl -u $SERVICE_FILE
+```
+
+where `$SERVICE_FILE` is the name of the service file created for the pod; you can find it in the kubelet logs.
+
+##### Check the log of the container in the pod:
+
+```console
+$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
+```
+
+where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.
+
+##### Check Kubernetes events and logs
+
+Besides the above tricks, Kubernetes also provides handy tools for debugging pods. More information can be found [here](../../../docs/user-guide/application-troubleshooting.md).
 
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
 []()
 <!-- END MUNGE: GENERATED_ANALYTICS -->
 
@@ -246,7 +246,8 @@ kubernetes/cluster/ubuntu/build.sh
 sudo cp -f binaries/minion/* /usr/bin
 
 # Get the iptables based kube-proxy recommended for this demo
-sudo wget https://github.com/projectcalico/calico-kubernetes/releases/download/v0.1.1/kube-proxy -P /usr/bin/
+wget https://github.com/projectcalico/calico-kubernetes/releases/download/v0.1.1/kube-proxy
+sudo cp kube-proxy /usr/bin/
 sudo chmod +x /usr/bin/kube-proxy
 ```
 
@@ -28,6 +28,10 @@ JSON and YAML formats are accepted.
 \fB\-o\fP, \fB\-\-output\fP=""
     Output mode. Use "\-o name" for shorter output (resource/name).
 
+.PP
+\fB\-\-schema\-cache\-dir\fP="/tmp/kubectl.schema"
+    If non\-empty, load/store cached API schemas in this directory, default is '/tmp/kubectl.schema'
+
 .PP
 \fB\-\-validate\fP=true
     If true, use a schema to validate the input before sending it
 
@@ -50,6 +50,10 @@ re\-use the labels from the resource it exposes.
 \fB\-l\fP, \fB\-\-labels\fP=""
     Labels to apply to the service created by this call.
 
+.PP
+\fB\-\-load\-balancer\-ip\fP=""
+    IP to assign to the Load Balancer. If empty, an ephemeral IP will be created and used (cloud\-provider specific).
+
 .PP
 \fB\-\-name\fP=""
     The name for the newly created object.
 
@@ -46,6 +46,10 @@ Please refer to the models in
 \fB\-o\fP, \fB\-\-output\fP=""
     Output mode. Use "\-o name" for shorter output (resource/name).
 
+.PP
+\fB\-\-schema\-cache\-dir\fP="/tmp/kubectl.schema"
+    If non\-empty, load/store cached API schemas in this directory, default is '/tmp/kubectl.schema'
+
 .PP
 \fB\-\-timeout\fP=0
     Only relevant during a force replace. The length of time to wait before giving up on a delete of the old resource, zero means determine a timeout from the size of the object
 
@@ -60,6 +60,10 @@ existing replication controller and overwrite at least one (common) label in its
 \fB\-\-rollback\fP=false
     If true, this is a request to abort an existing rollout that is partially rolled out. It effectively reverses current and next and runs a rollout
 
+.PP
+\fB\-\-schema\-cache\-dir\fP="/tmp/kubectl.schema"
+    If non\-empty, load/store cached API schemas in this directory, default is '/tmp/kubectl.schema'
+
 .PP
 \fB\-a\fP, \fB\-\-show\-all\fP=false
     When printing, show all resources (default hide terminated pods.)
 
@@ -50,6 +50,10 @@ Creates a replication controller to manage the created container(s).
 \fB\-l\fP, \fB\-\-labels\fP=""
     Labels to apply to the pod(s).
 
+.PP
+\fB\-\-limits\fP=""
+    The resource requirement limits for this container. For example, 'cpu=200m,memory=512Mi'
+
 .PP
 \fB\-\-no\-headers\fP=false
     When using the default output, don't print headers.
@@ -76,6 +80,10 @@ Creates a replication controller to manage the created container(s).
 \fB\-r\fP, \fB\-\-replicas\fP=1
     Number of replicas to create for this container. Default is 1.
 
+.PP
+\fB\-\-requests\fP=""
+    The resource requirement requests for this container. For example, 'cpu=100m,memory=256Mi'
+
 .PP
 \fB\-\-restart\fP="Always"
     The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to 'Always' a replication controller is created for this pod, if set to OnFailure or Never, only the Pod is created and \-\-replicas must be 1. Default 'Always'
 
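The `--requests` and `--limits` flags documented above combine naturally on one `kubectl run` invocation. A usage sketch (image and values illustrative):

```console
$ kubectl run nginx --image=nginx --replicas=2 \
    --requests='cpu=100m,memory=256Mi' \
    --limits='cpu=200m,memory=512Mi'
```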
@@ -166,7 +166,7 @@ the same time, we can introduce an additional etcd event type:
 Thus, we need to create the EtcdResync event, extend watch.Interface and
 its implementations to support it and handle those events appropriately
 in places like
-[Reflector](../../pkg/client/unversioned/cache/reflector.go)
+[Reflector](../../pkg/client/cache/reflector.go)
 
 However, this might turn out to be unnecessary optimization if apiserver
 will always keep up (which is possible in the new design). We will work
 
@@ -88,7 +88,7 @@ use the full image name (e.g. gcr.io/my_project/image:tag).
 
 All pods in a cluster will have read access to images in this registry.
 
-The kubelet kubelet will authenticate to GCR using the instance's
+The kubelet will authenticate to GCR using the instance's
 Google service account. The service account on the instance
 will have a `https://www.googleapis.com/auth/devstorage.read_only`,
 so it can pull from the project's GCR, but not push.
 
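Because all pods get read access, pulling from the project's registry is just a matter of using the full image name, as the surrounding text describes. A sketch (project and image are illustrative):

```console
$ kubectl run my-app --image=gcr.io/my_project/image:tag
```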
@@ -61,6 +61,7 @@ $ cat pod.json | kubectl create -f -
 ```
 -f, --filename=[]: Filename, directory, or URL to file to use to create the resource
 -o, --output="": Output mode. Use "-o name" for shorter output (resource/name).
+--schema-cache-dir="/tmp/kubectl.schema": If non-empty, load/store cached API schemas in this directory, default is '/tmp/kubectl.schema'
 --validate[=true]: If true, use a schema to validate the input before sending it
 ```
 
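The newly documented `--schema-cache-dir` flag in use; a sketch (the alternate cache path is illustrative):

```console
$ kubectl create -f ./pod.json --schema-cache-dir=/var/tmp/kubectl.schema
```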
@@ -96,7 +97,7 @@ $ cat pod.json | kubectl create -f -
 
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.152429973 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-09-11 20:48:33.289761103 +0000 UTC
 
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
 []()
 
@@ -45,7 +45,7 @@ selector for a new Service on the specified port. If no labels are specified, th
 re-use the labels from the resource it exposes.
 
 ```
-kubectl expose (-f FILENAME | TYPE NAME) --port=port [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [----external-ip=external-ip-of-service] [--type=type]
+kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [----external-ip=external-ip-of-service] [--type=type]
 ```
 
 ### Examples
@@ -73,6 +73,7 @@ $ kubectl expose rc streamer --port=4100 --protocol=udp --name=video-stream
 -f, --filename=[]: Filename, directory, or URL to a file identifying the resource to expose a service
 --generator="service/v2": The name of the API generator to use. There are 2 generators: 'service/v1' and 'service/v2'. The only difference between them is that service port in v1 is named 'default', while it is left unnamed in v2. Default is 'service/v2'.
 -l, --labels="": Labels to apply to the service created by this call.
+--load-balancer-ip="": IP to assign to the Load Balancer. If empty, an ephemeral IP will be created and used (cloud-provider specific).
 --name="": The name for the newly created object.
 --no-headers[=false]: When using the default output, don't print headers.
 -o, --output="": Output format. One of: json|yaml|wide|name|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=... See golang template [http://golang.org/pkg/text/template/#pkg-overview] and jsonpath template [http://releases.k8s.io/HEAD/docs/user-guide/jsonpath.md].
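A usage sketch of the `--load-balancer-ip` flag added above (controller name and IP illustrative; honored only where the cloud provider supports specifying the IP):

```console
$ kubectl expose rc nginx --port=80 --type=LoadBalancer \
    --load-balancer-ip=78.11.24.19
```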
@@ -121,7 +122,7 @@ $ kubectl expose rc streamer --port=4100 --protocol=udp --name=video-stream
 
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.159044239 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-09-11 03:36:48.458259032 +0000 UTC
 
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
 []()
 
@@ -74,6 +74,7 @@ kubectl replace --force -f ./pod.json
 --force[=false]: Delete and re-create the specified resource
 --grace-period=-1: Only relevant during a force replace. Period of time in seconds given to the old resource to terminate gracefully. Ignored if negative.
 -o, --output="": Output mode. Use "-o name" for shorter output (resource/name).
+--schema-cache-dir="/tmp/kubectl.schema": If non-empty, load/store cached API schemas in this directory, default is '/tmp/kubectl.schema'
 --timeout=0: Only relevant during a force replace. The length of time to wait before giving up on a delete of the old resource, zero means determine a timeout from the size of the object
 --validate[=true]: If true, use a schema to validate the input before sending it
 ```
@@ -110,7 +111,7 @@ kubectl replace --force -f ./pod.json
 
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.153166598 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-09-11 20:48:33.290279625 +0000 UTC
 
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
 []()
 
@@ -78,6 +78,7 @@ $ kubectl rolling-update frontend --image=image:v2
 --output-version="": Output the formatted object with the given version (default api-version).
 --poll-interval=3s: Time delay between polling for replication controller status after the update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
 --rollback[=false]: If true, this is a request to abort an existing rollout that is partially rolled out. It effectively reverses current and next and runs a rollout
+--schema-cache-dir="/tmp/kubectl.schema": If non-empty, load/store cached API schemas in this directory, default is '/tmp/kubectl.schema'
 -a, --show-all[=false]: When printing, show all resources (default hide terminated pods.)
 --sort-by="": If non-empty, sort list types using this field specification. The field specification is expressed as a JSONPath expression (e.g. 'ObjectMeta.Name'). The field in the API resource specified by this JSONPath expression must be an integer or a string.
 --template="": Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
@@ -118,7 +119,7 @@ $ kubectl rolling-update frontend --image=image:v2
 
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-09-10 18:53:03.154895732 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-09-11 20:48:33.293748592 +0000 UTC
 
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
 []()
 
@@ -87,12 +87,14 @@ $ kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
 --hostport=-1: The host port mapping for the container port. To demonstrate a single-machine container.
 --image="": The image for the container to run.
 -l, --labels="": Labels to apply to the pod(s).
+--limits="": The resource requirement limits for this container. For example, 'cpu=200m,memory=512Mi'
 --no-headers[=false]: When using the default output, don't print headers.
 -o, --output="": Output format. One of: json|yaml|wide|name|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=... See golang template [http://golang.org/pkg/text/template/#pkg-overview] and jsonpath template [http://releases.k8s.io/HEAD/docs/user-guide/jsonpath.md].
 --output-version="": Output the formatted object with the given version (default api-version).
 --overrides="": An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
 --port=-1: The port that this container exposes.
 -r, --replicas=1: Number of replicas to create for this container. Default is 1.
+--requests="": The resource requirement requests for this container. For example, 'cpu=100m,memory=256Mi'
 --restart="Always": The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to 'Always' a replication controller is created for this pod, if set to OnFailure or Never, only the Pod is created and --replicas must be 1. Default 'Always'
 -a, --show-all[=false]: When printing, show all resources (default hide terminated pods.)
 --sort-by="": If non-empty, sort list types using this field specification. The field specification is expressed as a JSONPath expression (e.g. 'ObjectMeta.Name'). The field in the API resource specified by this JSONPath expression must be an integer or a string.
 
@@ -79,7 +79,7 @@ Note that replication controllers may themselves have labels and would generally
 
 Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
 
-Similarly, deleting a replication controller does not affect the pods it created. Its `replicas` field must first be set to 0 in order to delete the pods controlled. (Note that the client tool, kubectl, provides a single operation, [stop](kubectl/kubectl_stop.md) to delete both the replication controller and the pods it controls. However, there is no such operation in the API at the moment)
+Similarly, deleting a replication controller using the API does not affect the pods it created. Its `replicas` field must first be set to `0` in order to delete the pods controlled. (Note that the client tool, `kubectl`, provides a single operation, [delete](kubectl/kubectl_delete.md), to delete both the replication controller and the pods it controls. If you want to leave the pods running when deleting a replication controller, specify `--cascade=false`. However, there is no such operation in the API at the moment.)
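The `kubectl delete` behavior described above, sketched (controller name illustrative):

```console
# Delete the replication controller and the pods it manages:
$ kubectl delete rc nginx

# Delete only the controller, leaving its pods running:
$ kubectl delete rc nginx --cascade=false
```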
 
 ## Responsibilities of the replication controller
 
@@ -144,7 +144,7 @@ secrets/build-robot-secret
 Now you can confirm that the newly built secret is populated with an API token for the "build-robot" service account.
 
 ```console
-kubectl describe secrets/build-robot-secret
+$ kubectl describe secrets/build-robot-secret
 Name:         build-robot-secret
 Namespace:    default
 Labels:       <none>
 
@@ -433,6 +433,7 @@ information about the provisioned balancer will be published in the `Service`'s
             }
         ],
         "clusterIP": "10.0.171.239",
+        "loadBalancerIP": "78.11.24.19",
         "type": "LoadBalancer"
     },
     "status": {
@@ -448,7 +449,11 @@ information about the provisioned balancer will be published in the `Service`'s
 ```
 
 Traffic from the external load balancer will be directed at the backend `Pods`,
-though exactly how that works depends on the cloud provider.
+though exactly how that works depends on the cloud provider. Some cloud providers allow
+the `loadBalancerIP` to be specified. In those cases, the load balancer will be created
+with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified,
+an ephemeral IP will be assigned to the load balancer. If the `loadBalancerIP` is specified but the
+cloud provider does not support the feature, the field will be ignored.
 
 ## Shortcomings
 