Qualified all references to "controller" so that references to "replication controller" are unambiguous. Fixes #9404
Also ran hacks/run-gendocs.sh
@@ -131,7 +131,7 @@ Of course, a single node cluster isn't particularly interesting. The real power
In Kubernetes a _[Replication Controller](../../docs/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
-Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Cassandra Pod.
+Replication controllers will "adopt" existing pods that match their selector query, so let's create a replication controller with a single replica to adopt our existing Cassandra pod.
```yaml
apiVersion: v1beta3
@@ -177,7 +177,7 @@ spec:
emptyDir: {}
```
-The bulk of the replication controller config is actually identical to the Cassandra pod declaration above, it simply gives the controller a recipe to use when creating new pods. The other parts are the ```replicaSelector``` which contains the controller's selector query, and the ```replicas``` parameter which specifies the desired number of replicas, in this case 1.
+Most of this replication controller definition is identical to the Cassandra pod definition above; it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute, which contains the controller's selector query, and the ```replicas``` attribute, which specifies the desired number of replicas, in this case 1.
Create this controller:
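The create command itself falls outside this hunk. As a rough sketch, the step would look something like the following; the definition filename and the `name=cassandra` label are illustrative assumptions, not content from this diff:

```bash
# Create the replication controller from its definition file
# (filename assumed for illustration)
$ kubectl create -f cassandra-controller.yaml

# The controller "adopts" pods that match its selector query, so
# listing pods by that label should show the pre-existing Cassandra pod
# (label key and value assumed)
$ kubectl get pods -l name=cassandra
```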
@@ -38,7 +38,7 @@ I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001
Now visit the [demo website](http://localhost:8001/static). You won't see anything much quite yet.
-### Step Two: Run the controller
+### Step Two: Run the replication controller
Now we will turn up two replicas of an image. They all serve on internal port 80.
```bash
@@ -47,7 +47,7 @@ $ ./kubectl create -f examples/update-demo/nautilus-rc.yaml
After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up. A cute little nautilus.
-### Step Three: Try scaling the controller
+### Step Three: Try scaling the replication controller
Now we will increase the number of replicas from two to four:
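The scaling command falls outside this hunk; a sketch using present-day kubectl syntax follows. The controller name `update-demo-nautilus` is inferred from the nautilus example above, and `kubectl scale` may differ from the exact command the original document showed:

```bash
# Raise the desired replica count; the replication controller
# creates two more pods to reach the new target of four
$ ./kubectl scale rc update-demo-nautilus --replicas=4
```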
@@ -76,7 +76,7 @@ Watch the [demo website](http://localhost:8001/static/index.html), it will updat
$ ./kubectl stop rc update-demo-kitten
```
-This will first 'stop' the replication controller by turning the target number of replicas to 0. It'll then delete that controller.
+This first stops the replication controller by turning the target number of replicas to 0 and then deletes the controller.
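As a sketch, those two phases expressed as explicit commands would look roughly like this (present-day equivalents, not the original tool's internals):

```bash
# Phase 1: scale to zero so the controller deletes its pods
$ ./kubectl scale rc update-demo-kitten --replicas=0

# Phase 2: delete the now-empty replication controller itself
$ ./kubectl delete rc update-demo-kitten
```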
### Step Six: Cleanup
@@ -4,11 +4,11 @@ metadata:
name: nginx-controller
spec:
replicas: 2
-# selector identifies the set of Pods that this
-# replicaController is responsible for managing
+# selector identifies the set of pods that this
+# replication controller is responsible for managing
selector:
name: nginx
-# podTemplate defines the 'cookie cutter' used for creating
+# template defines the 'cookie cutter' used for creating
# new pods when necessary
template:
metadata:
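The hunk cuts off partway into the pod template. For context, a complete minimal definition in this style might look like the sketch below; the template's labels, container image, and port are assumptions rather than content from this diff:

```yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  # selector identifies the set of pods that this
  # replication controller is responsible for managing
  selector:
    name: nginx
  # template defines the 'cookie cutter' used for creating
  # new pods when necessary
  template:
    metadata:
      labels:
        # these labels must match the selector above, or the
        # controller would not manage the pods it creates
        name: nginx    # assumed
    spec:
      containers:
        - name: nginx
          image: nginx          # image assumed
          ports:
            - containerPort: 80 # port assumed
```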