Add munger to verify kubectl -f targets, fix docs

Tim Hockin
2015-07-15 17:20:39 -07:00
parent 596a8a40d1
commit f7512d007b
47 changed files with 377 additions and 122 deletions
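The munger itself is not shown in the hunks below, but its job is implied by them: every `kubectl ... -f <target>` in the docs must name a file that actually exists relative to the repository root. Here is a minimal, hypothetical sketch of such a checker; the tool name, the regex, and the opt-out rules (URLs, `<placeholders>`, absolute paths, and `./`-prefixed user-local files, as the `your_new_pod.json` hunk suggests) are assumptions, not code from this commit.

```go
// verify-kubectl-targets: a hypothetical sketch of a doc munger that scans
// markdown files for `kubectl <verb> -f <target>` and reports any target
// that does not exist under the repository root.
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

// Matches "kubectl <verb> -f <target>" anywhere in a line.
var kubectlFileRE = regexp.MustCompile(`kubectl\s+\S+\s+-f\s+(\S+)`)

// skippable reports whether a target cannot be statically verified:
// URLs, <placeholders>, absolute paths, and "./"-prefixed local files.
func skippable(target string) bool {
	return strings.Contains(target, "://") ||
		strings.Contains(target, "<") ||
		strings.HasPrefix(target, "/") ||
		strings.HasPrefix(target, "./")
}

// checkFile prints one line for each unverifiable -f target in mdPath.
func checkFile(repoRoot, mdPath string) error {
	f, err := os.Open(mdPath)
	if err != nil {
		return err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for line := 1; scanner.Scan(); line++ {
		m := kubectlFileRE.FindStringSubmatch(scanner.Text())
		if m == nil || skippable(m[1]) {
			continue
		}
		if _, err := os.Stat(filepath.Join(repoRoot, m[1])); err != nil {
			fmt.Printf("%s:%d: -f target %q not found in repo\n", mdPath, line, m[1])
		}
	}
	return scanner.Err()
}

func main() {
	// Usage: verify-kubectl-targets <repo-root> <file.md>...
	if len(os.Args) < 3 {
		fmt.Fprintln(os.Stderr, "usage: verify-kubectl-targets <repo-root> <file.md>...")
		os.Exit(2)
	}
	for _, md := range os.Args[2:] {
		if err := checkFile(os.Args[1], md); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
```

Under a rule like this, each path fix below makes its doc runnable from the repo root, e.g. `kubectl create -f examples/cassandra/cassandra.yaml`.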

View File

@@ -27,7 +27,7 @@ Create a volume in the same region as your node, add your volume
information in the pod description file aws-ebs-web.yaml, then create
the pod:
```shell
-$ kubectl create -f aws-ebs-web.yaml
+$ kubectl create -f examples/aws_ebs/aws-ebs-web.yaml
```
Add some data to the volume if it is empty:
```shell

View File

@@ -104,13 +104,13 @@ The important thing to note here is the ```selector```. It is a query over label
Create this service as follows:
```sh
-$ kubectl create -f cassandra-service.yaml
+$ kubectl create -f examples/cassandra/cassandra-service.yaml
```
Now that the service is running, we can create the first Cassandra pod using the specification above.
```sh
-$ kubectl create -f cassandra.yaml
+$ kubectl create -f examples/cassandra/cassandra.yaml
```
After a few moments, you should be able to see the pod running, plus its single container:
@@ -208,7 +208,7 @@ Most of this replication controller definition is identical to the Cassandra pod
Create this controller:
```sh
-$ kubectl create -f cassandra-controller.yaml
+$ kubectl create -f examples/cassandra/cassandra-controller.yaml
```
This is not that interesting by itself, since we haven't done anything new yet. Now it will get interesting.
@@ -267,13 +267,13 @@ For those of you who are impatient, here is the summary of the commands we ran i
```sh
# create a service to track all cassandra nodes
-kubectl create -f cassandra-service.yaml
+kubectl create -f examples/cassandra/cassandra-service.yaml
# create a single cassandra node
-kubectl create -f cassandra.yaml
+kubectl create -f examples/cassandra/cassandra.yaml
# create a replication controller to replicate cassandra nodes
-kubectl create -f cassandra-controller.yaml
+kubectl create -f examples/cassandra/cassandra-controller.yaml
# scale up to 2 nodes
kubectl scale rc cassandra --replicas=2

View File

@@ -125,13 +125,13 @@ data:
```
which can be used to create the secret in your namespace:
```
-kubectl create -f apiserver-secret.yaml --namespace=mytunes
+kubectl create -f examples/elasticsearch/apiserver-secret.yaml --namespace=mytunes
secrets/apiserver-secret
```
Now you are ready to create the replication controller which will then create the pods:
```
-$ kubectl create -f music-rc.yaml --namespace=mytunes
+$ kubectl create -f examples/elasticsearch/music-rc.yaml --namespace=mytunes
replicationcontrollers/music-db
```
@@ -156,7 +156,7 @@ spec:
```
Let's create the service with an external load balancer:
```
-$ kubectl create -f music-service.yaml --namespace=mytunes
+$ kubectl create -f examples/elasticsearch/music-service.yaml --namespace=mytunes
services/music-server
```

View File

@@ -35,7 +35,7 @@ Currently, you can look at:
Example from command line (the DNS lookup looks better from a web browser):
```
-$ kubectl create -f pod.json
+$ kubectl create -f examples/explorer/pod.json
$ kubectl proxy &
Starting to serve on localhost:8001

View File

@@ -93,7 +93,7 @@ spec:
Change to the `<kubernetes>/examples/guestbook` directory if you're not already there. Create the redis master pod in your Kubernetes cluster by running:
```shell
-$ kubectl create -f redis-master-controller.yaml
+$ kubectl create -f examples/guestbook/redis-master-controller.yaml
replicationcontrollers/redis-master
```
@@ -208,7 +208,7 @@ spec:
Create the service by running:
```shell
-$ kubectl create -f redis-master-service.yaml
+$ kubectl create -f examples/guestbook/redis-master-service.yaml
services/redis-master
```
Then check the list of services, which should include the redis-master:
@@ -276,7 +276,7 @@ spec:
and create the replication controller by running:
```shell
-$ kubectl create -f redis-slave-controller.yaml
+$ kubectl create -f examples/guestbook/redis-slave-controller.yaml
replicationcontrollers/redis-slave
$ kubectl get rc
@@ -324,7 +324,7 @@ This time the selector for the service is `name=redis-slave`, because that ident
Now that you have created the service specification, create it in your cluster by running:
```shell
-$ kubectl create -f redis-slave-service.yaml
+$ kubectl create -f examples/guestbook/redis-slave-service.yaml
services/redis-slave
$ kubectl get services
@@ -367,7 +367,7 @@ spec:
Using this file, you can turn up your frontend with:
```shell
-$ kubectl create -f frontend-controller.yaml
+$ kubectl create -f examples/guestbook/frontend-controller.yaml
replicationcontrollers/frontend
```
@@ -476,7 +476,7 @@ To do this, uncomment the `type: LoadBalancer` line in the `frontend-service.yam
Create the service like this:
```shell
-$ kubectl create -f frontend-service.yaml
+$ kubectl create -f examples/guestbook/frontend-service.yaml
services/frontend
```

View File

@@ -69,7 +69,7 @@ The important thing to note here is the `selector`. It is a query over labels, t
Create this service as follows:
```sh
-$ kubectl create -f hazelcast-service.yaml
+$ kubectl create -f examples/hazelcast/hazelcast-service.yaml
```
### Adding replicated nodes
@@ -124,7 +124,7 @@ Last but not least, we set `DNS_DOMAIN` environment variable according to your K
Create this controller:
```sh
-$ kubectl create -f hazelcast-controller.yaml
+$ kubectl create -f examples/hazelcast/hazelcast-controller.yaml
```
After the controller successfully provisions the pod, you can query the service endpoints:
@@ -230,10 +230,10 @@ For those of you who are impatient, here is the summary of the commands we ran i
```sh
# create a service to track all hazelcast nodes
-kubectl create -f hazelcast-service.yaml
+kubectl create -f examples/hazelcast/hazelcast-service.yaml
# create a replication controller to replicate hazelcast nodes
-kubectl create -f hazelcast-controller.yaml
+kubectl create -f examples/hazelcast/hazelcast-controller.yaml
# scale up to 2 nodes
kubectl scale rc hazelcast --replicas=2

View File

@@ -40,7 +40,7 @@ You need a [running kubernetes cluster](../../docs/getting-started-guides/) for
$ kubectl create -f /tmp/secret.json
secrets/nginxsecret
-$ kubectl create -f nginx-app.yaml
+$ kubectl create -f examples/https-nginx/nginx-app.yaml
services/nginxsvc
replicationcontrollers/my-nginx

View File

@@ -52,7 +52,7 @@ mkfs.ext4 /dev/<name of device>
Once your pod description is created, run it on the Kubernetes master:
```console
-kubectl create -f your_new_pod.json
+kubectl create -f ./your_new_pod.json
```
Here is my command and output:

View File

@@ -135,14 +135,14 @@ gcloud compute disks create --size=200GB mongo-disk
Now you can start Mongo using that disk:
```
-kubectl create -f mongo-pod.json
-kubectl create -f mongo-service.json
+kubectl create -f examples/meteor/mongo-pod.json
+kubectl create -f examples/meteor/mongo-service.json
```
Wait until Mongo has started completely, then start up your Meteor app:
```
-kubectl create -f meteor-controller.json
-kubectl create -f meteor-service.json
+kubectl create -f examples/meteor/meteor-controller.json
+kubectl create -f examples/meteor/meteor-service.json
```
Note that [`meteor-service.json`](meteor-service.json) creates a load balancer, so

View File

@@ -122,7 +122,7 @@ Note that we've defined a volume mount for `/var/lib/mysql`, and specified a vol
Once you've edited the file to set your database password, create the pod as follows, where `<kubernetes>` is the path to your Kubernetes installation:
```shell
-$ kubectl create -f mysql.yaml
+$ kubectl create -f examples/mysql-wordpress-pd/mysql.yaml
```
It may take a short period before the new pod reaches the `Running` state.
@@ -171,7 +171,7 @@ spec:
Start the service like this:
```shell
-$ kubectl create -f mysql-service.yaml
+$ kubectl create -f examples/mysql-wordpress-pd/mysql-service.yaml
```
You can see what services are running via:
@@ -221,7 +221,7 @@ spec:
Create the pod:
```shell
-$ kubectl create -f wordpress.yaml
+$ kubectl create -f examples/mysql-wordpress-pd/wordpress.yaml
```
And list the pods to check that the status of the new pod changes
@@ -260,7 +260,7 @@ Note also that we've set the service port to 80. We'll return to that shortly.
Start the service:
```shell
-$ kubectl create -f wordpress-service.yaml
+$ kubectl create -f examples/mysql-wordpress-pd/wordpress-service.yaml
```
and see it in the list of services:
@@ -307,8 +307,8 @@ Set up your WordPress blog and play around with it a bit. Then, take down its p
If you are just experimenting, you can take down and bring up only the pods:
```shell
-$ kubectl delete -f wordpress.yaml
-$ kubectl delete -f mysql.yaml
+$ kubectl delete -f examples/mysql-wordpress-pd/wordpress.yaml
+$ kubectl delete -f examples/mysql-wordpress-pd/mysql.yaml
```
When you restart the pods again (using the `create` operation as described above), their services will pick up the new pods based on their labels.

View File

@@ -39,7 +39,7 @@ RethinkDB will discover peers using endpoints provided by the Kubernetes service,
so first create a service so that the following pod can query its endpoints
```shell
-$kubectl create -f driver-service.yaml
+$kubectl create -f examples/rethinkdb/driver-service.yaml
```
Check it out:
@@ -56,7 +56,7 @@ rethinkdb-driver db=influxdb db=rethinkdb 10.0.27.114 28015/TCP
Start the first server in the cluster:
```shell
-$kubectl create -f rc.yaml
+$kubectl create -f examples/rethinkdb/rc.yaml
```
Actually, you can start as many servers as you want at one time; just modify `replicas` in `rc.yaml`.
@@ -99,8 +99,8 @@ Admin
You need a separate pod (labeled role:admin) to access the Web Admin UI
```shell
-kubectl create -f admin-pod.yaml
-kubectl create -f admin-service.yaml
+kubectl create -f examples/rethinkdb/admin-pod.yaml
+kubectl create -f examples/rethinkdb/admin-service.yaml
```
find the service