From 5cf5445d24af7a511956d6da95b3fb2a955dbe31 Mon Sep 17 00:00:00 2001
From: Tim Hockin
Date: Sun, 12 Jul 2015 21:15:58 -0700
Subject: [PATCH] Change 'minion' to 'node' in docs

---
 docs/devel/developer-guides/vagrant.md          | 16 ++++++++--------
 .../centos/centos_manual_config.md              |  8 ++++----
 .../coreos/azure/README.md                      |  2 +-
 .../kubernetes-cluster-main-nodes-template.yml  |  4 ++--
 .../fedora/fedora_ansible_config.md             |  2 +-
 .../fedora/flannel_multi_node_cluster.md        |  2 +-
 docs/getting-started-guides/gce.md              |  2 +-
 docs/getting-started-guides/juju.md             |  2 +-
 docs/getting-started-guides/ubuntu.md           |  2 +-
 docs/getting-started-guides/vagrant.md          | 12 ++++++------
 docs/user-guide/kubectl/kubectl_patch.md        |  2 +-
 examples/celery-rabbitmq/README.md              |  2 +-
 .../high-availability/etc_kubernetes_kubelet    |  2 +-
 examples/high-availability/provision.sh         |  2 +-
 14 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/docs/devel/developer-guides/vagrant.md b/docs/devel/developer-guides/vagrant.md
index 1edf07a669f..1316e26b0ec 100644
--- a/docs/devel/developer-guides/vagrant.md
+++ b/docs/devel/developer-guides/vagrant.md
@@ -27,7 +27,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
 
 ### Setup
 
-By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
+By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
 
 ```sh
 cd kubernetes
@@ -77,7 +77,7 @@ vagrant ssh master
 [vagrant@kubernetes-master ~] $ sudo systemctl status nginx
 ```
 
-To view the services on any of the kubernetes-minion(s):
+To view the services on any of the nodes:
 
 ```sh
 vagrant ssh minion-1
 [vagrant@kubernetes-minion-1] $ sudo systemctl status docker
 ```
@@ -312,20 +312,20 @@ cat ~/.kubernetes_vagrant_auth
 
 #### I just created the cluster, but I do not see my container running!
 
-If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
+If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
 
 #### I changed Kubernetes code, but it's not running!
 
 Are you sure there was no build error? After running `$ vagrant provision`, scroll up and ensure that each Salt state was completed successfully on each box in the cluster. It's very likely you see a build error due to an error in your source files!
 
-#### I have brought Vagrant up but the minions won't validate!
+#### I have brought Vagrant up but the nodes won't validate!
 
-Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the minions (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
+Are you sure you built a release first? Did you install `net-tools`? For more clues, log in to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
 
-#### I want to change the number of minions!
+#### I want to change the number of nodes!
 
-You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this, by setting `NUM_MINIONS` to 1 like so:
+You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_MINIONS` to 1, like so:
 
 ```sh
 export NUM_MINIONS=1
@@ -340,7 +340,7 @@ Just set it to the number of megabytes you would like the machines to have. For
 export KUBERNETES_MEMORY=2048
 ```
 
-If you need more granular control, you can set the amount of memory for the master and minions independently. For example:
+If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
 
 ```sh
 export KUBERNETES_MASTER_MEMORY=1536
diff --git a/docs/getting-started-guides/centos/centos_manual_config.md b/docs/getting-started-guides/centos/centos_manual_config.md
index e20ee9d2504..b10c36283e1 100644
--- a/docs/getting-started-guides/centos/centos_manual_config.md
+++ b/docs/getting-started-guides/centos/centos_manual_config.md
@@ -25,7 +25,7 @@ You need two machines with CentOS installed on them.
 ## Starting a cluster
 This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
 
-This guide will only get ONE minion working. Multiple minions requires a functional [networking configuration](../../networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node working. Multiple nodes require a functional [networking configuration](../../networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
 
 The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker.
 
@@ -71,7 +71,7 @@ yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86
 yum -y install --enablerepo=virt7-testing kubernetes
 ```
 
-* Add master and minion to /etc/hosts on all machines (not needed if hostnames already in DNS)
+* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
 
 ```
 echo "192.168.121.9 centos-master
@@ -94,7 +94,7 @@ KUBE_LOG_LEVEL="--v=0"
 KUBE_ALLOW_PRIV="--allow_privileged=false"
 ```
 
-* Disable the firewall on both the master and minion, as docker does not play well with other firewall rule managers
+* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
 
 ```
 systemctl disable iptables-services firewalld
@@ -115,7 +115,7 @@ KUBE_API_PORT="--port=8080"
 # How the replication controller and scheduler find the kube-apiserver
 KUBE_MASTER="--master=http://centos-master:8080"
 
-# Port nodes listen on
+# Port kubelets listen on
 KUBELET_PORT="--kubelet_port=10250"
 
 # Address range to use for services
diff --git a/docs/getting-started-guides/coreos/azure/README.md b/docs/getting-started-guides/coreos/azure/README.md
index c39d7893292..e51301f6dd5 100644
--- a/docs/getting-started-guides/coreos/azure/README.md
+++ b/docs/getting-started-guides/coreos/azure/README.md
@@ -56,7 +56,7 @@ Now, all you need to do is:
 ./create-kubernetes-cluster.js
 ```
 
-This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, 1 kubernetes master and 2 kubernetes nodes. The `kube-00` VM will be the master, your work loads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more bigger VMs later.
+This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes, 1 kubernetes master, and 2 kubernetes nodes. The `kube-00` VM will be the master; your workloads should be deployed only on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add larger VMs later.
 
 ![VMs in Azure](initial_cluster.png)
 
diff --git a/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml b/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml
index a40930cc44f..814642522d7 100644
--- a/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml
+++ b/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml
@@ -17,7 +17,7 @@ write_files:
     owner: root
     content: |
       #!/bin/sh -xe
-      minion_id="${1}"
+      node_id="${1}"
       master_url="${2}"
       env_label="${3}"
       until healthcheck=$(curl --fail --silent "${master_url}/healthz")
@@ -31,7 +31,7 @@ write_files:
         "metadata": {
           "name": "%s",
           "labels": { "environment": "%s" }
-        }' "${minion_id}" "${env_label}" \
+        }' "${node_id}" "${env_label}" \
           | /opt/bin/kubectl create -s "${master_url}" -f -
 
   - path: /etc/kubernetes/manifests/fluentd.manifest
diff --git a/docs/getting-started-guides/fedora/fedora_ansible_config.md b/docs/getting-started-guides/fedora/fedora_ansible_config.md
index 13e05135df5..bd55b353618 100644
--- a/docs/getting-started-guides/fedora/fedora_ansible_config.md
+++ b/docs/getting-started-guides/fedora/fedora_ansible_config.md
@@ -120,7 +120,7 @@ ansible-playbook -i inventory keys.yml
 
 If you already have configured your network and docker will use it correctly, skip to [setting up the cluster](#setting-up-the-cluster)
 
-The ansible scripts are quite hacky configuring the network, you can see the [README](https://github.com/eparis/kubernetes-ansible) for details, or you can simply enter in variants of the 'kube_service_addresses' (in the all.yaml file) as `kube_ip_addr` entries in the minions field, as shown in the next section.
+The ansible scripts are quite hacky in configuring the network; you can see the [README](https://github.com/eparis/kubernetes-ansible) for details, or you can simply enter variants of the 'kube_service_addresses' (in the all.yaml file) as `kube_ip_addr` entries in the nodes field, as shown in the next section.
 
 **Configure the ip addresses which should be used to run pods on each machine**
 
diff --git a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
index bf9239198df..955a6a691a8 100644
--- a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
+++ b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
@@ -25,7 +25,7 @@ Kubernetes multiple nodes cluster with flannel on Fedora
 
 ## Introduction
 
-This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes (minions). Make sure that all nodes (minions) have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes (minions) are running docker, kube-proxy and kubelet services. Now install flannel on kubernetes nodes (minions). flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
+This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow the fedora [getting started guide](fedora_manual_config.md) to set up 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on the kubernetes nodes. flannel runs on each node to set up a unique class-C container network, configuring the overlay network that docker uses.
 
 ## Prerequisites
 1. You need 2 or more machines with Fedora installed.
diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md
index 79085de7647..229667badc6 100644
--- a/docs/getting-started-guides/gce.md
+++ b/docs/getting-started-guides/gce.md
@@ -198,7 +198,7 @@ Also ensure that-- as listed in the [Prerequsites section](#prerequisites)-- you
 
 #### Cluster initialization hang
 
-If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and minion VMs and looking at logs such as `/var/log/startupscript.log`.
+If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
 
 **Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.
 
diff --git a/docs/getting-started-guides/juju.md b/docs/getting-started-guides/juju.md
index 301282a820f..f9e335c4be8 100644
--- a/docs/getting-started-guides/juju.md
+++ b/docs/getting-started-guides/juju.md
@@ -213,7 +213,7 @@ Kubernetes Bundle on Github
 
 - [Bundle Repository](https://github.com/whitmo/bundle-kubernetes)
   * [Kubernetes master charm](https://github.com/whitmo/charm-kubernetes-master)
-  * [Kubernetes minion charm](https://github.com/whitmo/charm-kubernetes)
+  * [Kubernetes node charm](https://github.com/whitmo/charm-kubernetes)
 - [Bundle Documentation](http://whitmo.github.io/bundle-kubernetes)
 - [More about Juju](https://juju.ubuntu.com)
 
diff --git a/docs/getting-started-guides/ubuntu.md b/docs/getting-started-guides/ubuntu.md
index 41658df0dc8..d57d440819d 100644
--- a/docs/getting-started-guides/ubuntu.md
+++ b/docs/getting-started-guides/ubuntu.md
@@ -84,7 +84,7 @@ The first variable `nodes` defines all your cluster nodes, MASTER node comes fir
 
 Then the `roles ` variable defines the role of above machine in the same order, "ai" stands for machine acts as both master and node, "a" stands for master, "i" stands for node. So they are just defined the k8s cluster as the table above described.
 
-The `NUM_MINIONS` variable defines the total number of minions.
+The `NUM_MINIONS` variable defines the total number of nodes.
 
 The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure that you do have a valid private ip range defined here, because some IaaS provider may reserve private ips. You can use below three private network range according to rfc1918. Besides you'd better not choose the one that conflicts with your own private network range.
diff --git a/docs/getting-started-guides/vagrant.md b/docs/getting-started-guides/vagrant.md
index 556ff73ea7d..84a78c08139 100644
--- a/docs/getting-started-guides/vagrant.md
+++ b/docs/getting-started-guides/vagrant.md
@@ -54,7 +54,7 @@ curl -sS https://get.k8s.io | bash
 
 The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
 
-By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
+By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
 
 ```sh
 cd kubernetes
@@ -75,14 +75,14 @@ export KUBERNETES_PROVIDER=vagrant
 
 By default, each VM in the cluster is running Fedora.
 
-To access the master or any minion:
+To access the master or any node:
 
 ```sh
 vagrant ssh master
 vagrant ssh minion-1
 ```
 
-If you are running more than one minion, you can access the others by:
+If you are running more than one node, you can access the others by:
 
 ```sh
 vagrant ssh minion-2
@@ -110,7 +110,7 @@ vagrant ssh master
 [root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
 ```
 
-To view the services on any of the kubernetes-minion(s):
+To view the services on any of the nodes:
 
 ```sh
 vagrant ssh minion-1
 [vagrant@kubernetes-master ~] $ sudo su
@@ -307,7 +307,7 @@ cat ~/.kubernetes_vagrant_auth
 
 #### I just created the cluster, but I do not see my container running!
 
-If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
+If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
 
 #### I want to make changes to Kubernetes code!
 
@@ -319,7 +319,7 @@ Log on to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion
 
 #### I want to change the number of nodes!
 
-You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this, by setting `NUM_MINIONS` to 1 like so:
+You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_MINIONS` to 1, like so:
 
 ```sh
 export NUM_MINIONS=1
diff --git a/docs/user-guide/kubectl/kubectl_patch.md b/docs/user-guide/kubectl/kubectl_patch.md
index a652f7df13d..37c650943fe 100644
--- a/docs/user-guide/kubectl/kubectl_patch.md
+++ b/docs/user-guide/kubectl/kubectl_patch.md
@@ -74,6 +74,6 @@ kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
 ### SEE ALSO
 
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-07-13 16:38:17.586247279 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-07-13 18:16:24.525726093 +0000 UTC
 
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_patch.md?pixel)]()
diff --git a/examples/celery-rabbitmq/README.md b/examples/celery-rabbitmq/README.md
index 87ea4ecf5af..6997a215f7a 100644
--- a/examples/celery-rabbitmq/README.md
+++ b/examples/celery-rabbitmq/README.md
@@ -24,7 +24,7 @@ At the end of the example, we will have:
 
 ## Prerequisites
 
-You should already have turned up a Kubernetes cluster. To get the most of this example, ensure that Kubernetes will create more than one minion (e.g. by setting your `NUM_MINIONS` environment variable to 2 or more).
+You should already have turned up a Kubernetes cluster. To get the most out of this example, ensure that Kubernetes will create more than one node (e.g. by setting your `NUM_MINIONS` environment variable to 2 or more).
 
 ## Step 1: Start the RabbitMQ service
 
diff --git a/examples/high-availability/etc_kubernetes_kubelet b/examples/high-availability/etc_kubernetes_kubelet
index 86c17b67422..6c963f416c7 100644
--- a/examples/high-availability/etc_kubernetes_kubelet
+++ b/examples/high-availability/etc_kubernetes_kubelet
@@ -1,5 +1,5 @@
 ###
-# kubernetes kubelet (minion) config
+# kubernetes kubelet config
 
 # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
 KUBELET_ADDRESS=""
diff --git a/examples/high-availability/provision.sh b/examples/high-availability/provision.sh
index a7c95c08c17..dacfebdd924 100755
--- a/examples/high-availability/provision.sh
+++ b/examples/high-availability/provision.sh
@@ -142,7 +142,7 @@ function install_components {
 
   ### precaution to make sure etcd is writable, flush iptables.
   iptables -F
-  ### minions: these will each run their own api server.
+  ### nodes: these will each run their own api server.
   else
     ### Make sure etcd running, flannel needs it.
     test_etcd
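This patch renames only user-facing prose: identifiers such as the `NUM_MINIONS` environment variable, the `kubernetes-minion-N` VM/host names, and Salt's own "minion" terminology intentionally keep their old spelling. A check along the following lines can help confirm that no other prose occurrences were missed; this is a sketch, and the paths and exclusion patterns are assumptions for illustration rather than part of the patch:

```sh
# List remaining 'minion' mentions in the touched doc trees, filtering out
# names that are expected to survive the rename (the NUM_MINIONS variable,
# minion-N host names, and lines about the salt minion daemon), so each
# leftover hit can be reviewed by hand.
grep -rn 'minion' docs/ examples/ \
  | grep -v 'NUM_MINIONS' \
  | grep -v 'minion-[0-9]' \
  | grep -iv 'salt'
```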