Purge cluster/kubectl.sh from nearly all docs.

Mark cluster/kubectl.sh as deprecated.
Brendan Burns
2015-06-05 14:50:11 -07:00
parent 6a979704b7
commit 9e198a6ed9
22 changed files with 149 additions and 140 deletions

@@ -26,7 +26,7 @@ services, and replication controllers used by the cluster.
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
```shell
-$ cluster/kubectl.sh get namespaces
+$ kubectl get namespaces
NAME LABELS
default <none>
```
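(Aside, not part of this commit.) If the table view above is too terse, the full namespace object can be printed with an output flag; a minimal sketch, assuming the `-o yaml` output format is available:
```shell
# Hedged aside, not from the commit: dump the default namespace as YAML.
kubectl get namespace default -o yaml
```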
@@ -66,19 +66,19 @@ Use the file [`examples/kubernetes-namespaces/namespace-dev.json`](namespace-dev
Create the development namespace using kubectl.
```shell
-$ cluster/kubectl.sh create -f examples/kubernetes-namespaces/namespace-dev.json
+$ kubectl create -f examples/kubernetes-namespaces/namespace-dev.json
```
And then let's create the production namespace using kubectl.
```shell
-$ cluster/kubectl.sh create -f examples/kubernetes-namespaces/namespace-prod.json
+$ kubectl create -f examples/kubernetes-namespaces/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.
```shell
-$ cluster/kubectl.sh get namespaces
+$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
development name=development Active
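(Aside, not part of this commit; the namespace listing above is truncated in this hunk.) The diff references `examples/kubernetes-namespaces/namespace-dev.json` without showing its contents. A minimal sketch of what such a manifest might look like, fed to kubectl from stdin rather than a file; the exact fields are an assumption, though the `name=development` label matches the listing above:
```shell
# Hedged sketch: a hypothetical namespace manifest, not the repo's actual
# file (assumes the v1 Namespace schema and that `kubectl create -f -`
# reads from stdin).
cat <<'EOF' | kubectl create -f -
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "development",
    "labels": { "name": "development" }
  }
}
EOF
```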
@@ -126,8 +126,8 @@ users:
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
```shell
-$ cluster/kubectl.sh config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
-$ cluster/kubectl.sh config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
+$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
+$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
```
The above commands provided two request contexts you can switch between depending on which namespace you
@@ -136,13 +136,13 @@ wish to work against.
Let's switch to operate in the development namespace.
```shell
-$ cluster/kubectl.sh config use-context dev
+$ kubectl config use-context dev
```
You can verify your current context by doing the following:
```shell
-$ cluster/kubectl.sh config view
+$ kubectl config view
apiVersion: v1
clusters:
- cluster:
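(Aside, not part of this commit; the `kubectl config view` output above is truncated in this hunk.) Kubeconfig files carry a `current-context` field, so the active context can also be pulled out directly; a minimal sketch:
```shell
# Hedged aside, not from the commit: show only the active context.
kubectl config view | grep current-context
```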
@@ -184,17 +184,17 @@ At this point, all requests we make to the Kubernetes cluster from the command l
Let's create some content.
```shell
-$ cluster/kubectl.sh run snowflake --image=kubernetes/serve_hostname --replicas=2
+$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a replication controller with a replica count of 2, running a pod called snowflake with a basic container that simply serves its hostname.
```shell
-cluster/kubectl.sh get rc
+kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
-$ cluster/kubectl.sh get pods
+$ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
snowflake-mbrfi 10.244.2.4 kubernetes-minion-ilqx/104.197.8.214 run=snowflake Running About an hour
snowflake kubernetes/serve_hostname Running About an hour
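(Aside, not part of this commit; the pod listing above is truncated in this hunk.) The context only sets a default namespace; the same resources can be reached from any context by naming the namespace explicitly, as in this sketch:
```shell
# Hedged aside, not from the commit: list the snowflake pods without
# switching contexts by passing --namespace on the command line.
kubectl get pods --namespace=development
# The production namespace is still empty at this point.
kubectl get pods --namespace=production
```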
@@ -207,29 +207,29 @@ And this is great, developers are able to do what they want, and they do not hav
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
```shell
-$ cluster/kubectl.sh config use-context prod
+$ kubectl config use-context prod
```
The production namespace should be empty.
```shell
-$ cluster/kubectl.sh get rc
+$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-$ cluster/kubectl.sh get pods
+$ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
```
Production likes to run cattle, so let's create some cattle pods.
```shell
-$ cluster/kubectl.sh run cattle --image=kubernetes/serve_hostname --replicas=5
+$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
-$ cluster/kubectl.sh get rc
+$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
cattle cattle kubernetes/serve_hostname run=cattle 5
-$ cluster/kubectl.sh get pods
+$ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
cattle-1kyvj 10.244.0.4 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour