Fix trailing whitespace in all docs
@@ -39,7 +39,7 @@ crafting your own customized cluster. We'll guide you in picking a solution tha
## Picking the Right Solution

If you just want to "kick the tires" on Kubernetes, we recommend the [local Docker-based](docker.md) solution.

The local Docker-based solution is one of several [Local cluster](#local-machine-solutions) solutions
that are quick to set up, but are limited to running on one machine.
@@ -50,9 +50,9 @@ solution is the easiest to create and maintain.
[Turn-key cloud solutions](#turn-key-cloud-solutions) require only a few commands to create
and cover a wider range of cloud providers.

[Custom solutions](#custom-solutions) require more effort to set up but cover an even
wider range of environments; they vary from step-by-step instructions to general advice for setting up
a Kubernetes cluster from scratch.

### Local-machine Solutions
@@ -117,8 +117,8 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [Offline](coreos/bare_metal_offline.md) (no internet required. Uses CoreOS and Flannel)
- [fedora/fedora_ansible_config.md](fedora/fedora_ansible_config.md)
- [Fedora single node](fedora/fedora_manual_config.md)
- [Fedora multi node](fedora/flannel_multi_node_cluster.md)
- [Centos](centos/centos_manual_config.md)
- [Ubuntu](ubuntu.md)
- [Docker Multi Node](docker-multinode.md)
@@ -215,7 +215,7 @@ kubectl get pods
Record the **Host** of the pod, which should be the private IP address.

Gather the public IP address for the worker node.

```bash
aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>'
```
@@ -60,7 +60,7 @@ centos-minion = 192.168.121.65
```

**Prepare the hosts:**

* Create a virt7-testing repo on all hosts - centos-{master,minion} - with the following information.

```
@@ -175,7 +175,7 @@ KUBELET_HOSTNAME="--hostname_override=centos-minion"
# Add your own!
KUBELET_ARGS=""
```

* Start the appropriate services on node (centos-minion), for example as sketched below.
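
A minimal sketch of what that step usually involves, assuming the node runs kubelet, kube-proxy, and docker as systemd units (the unit list is an assumption; match it to your install):

```sh
# Restart, enable, and check each assumed node service
for SERVICE in kube-proxy kubelet docker; do
    systemctl restart $SERVICE
    systemctl enable $SERVICE
    systemctl status $SERVICE
done
```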
@@ -68,8 +68,8 @@ Or create a `~/.cloudstack.ini` file:
[cloudstack]
endpoint = <your cloudstack api endpoint>
key = <your api access key>
secret = <your api secret key>
method = post

We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
@@ -104,7 +104,7 @@ Check the tasks and templates in `roles/k8s` if you want to modify anything.
Once the playbook has finished, it will print out the IP of the Kubernetes master:

    TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********

SSH to it with the key that was created, as the _core_ user, and you can list the machines in your cluster:
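
A sketch of those two steps; the key path and master IP are placeholders, and listing the cluster machines with CoreOS's `fleetctl` is an assumption about this setup:

```sh
# SSH to the master as the core user with the key created earlier (placeholder path/IP)
ssh -i <path-to-created-key> core@<k8s-master-ip>
# On the master, list the machines in the CoreOS cluster
fleetctl list-machines
```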
@@ -59,13 +59,13 @@ Deploy a CoreOS running Kubernetes environment. This particular guild is made to
## High Level Design

1. Manage the tftp directory
  * /tftpboot/(coreos)(centos)(RHEL)
  * /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file
2. Update the pxelinux link for each install
3. Update the DHCP config to reflect the host needing deployment
4. Set up nodes to deploy CoreOS, creating an etcd cluster.
5. Have no access to the public [etcd discovery tool](https://discovery.etcd.io/).
6. Install the CoreOS slaves to become Kubernetes nodes.

## This Guide's variables
@@ -115,7 +115,7 @@ To setup CentOS PXELINUX environment there is a complete [guide here](http://doc
timeout 15
ONTIMEOUT local
display boot.msg

MENU TITLE Main Menu

LABEL local
@@ -126,7 +126,7 @@ Now you should have a working PXELINUX setup to image CoreOS nodes. You can veri
## Adding CoreOS to PXE

This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment.

1. Find or create the TFTP root directory that everything will be based off of.
  * For this document we will assume `/tftpboot/` is our root directory.
@@ -170,9 +170,9 @@ This section describes how to setup the CoreOS images to live alongside a pre-ex
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-slave.yml
MENU END

This configuration file will now boot from the local drive but have the option to PXE image CoreOS.

## DHCP configuration

This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images.
@@ -186,7 +186,7 @@ This section covers configuring the DHCP server to hand out our new images. In t
next-server 10.20.30.242;
option broadcast-address 10.20.30.255;
filename "<other default image>";

...
# http://www.syslinux.org/wiki/index.php/PXELINUX
host core_os_master {
@@ -194,7 +194,7 @@ This section covers configuring the DHCP server to hand out our new images. In t
option routers 10.20.30.1;
fixed-address 10.20.30.40;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
host core_os_slave {
hardware ethernet d0:00:67:13:0d:01;
@@ -217,7 +217,7 @@ We will be specifying the node configuration later in the guide.
## Kubernetes

To deploy our configuration we need to create an `etcd` master. To do so we want to PXE CoreOS with a specific cloud-config.yml. There are two options here:
1. Template the cloud-config file and programmatically create new static configs for different cluster setups.
2. Have a service discovery protocol running in our stack to do auto discovery.
@@ -427,7 +427,7 @@ On the PXE server make and fill in the variables `vi /var/www/html/coreos/pxe-cl
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-controller-manager.service
command: start
content: |
[Unit]
@@ -535,7 +535,7 @@ On the PXE server make and fill in the variables `vi /var/www/html/coreos/pxe-cl
command: start
content: |
[Unit]
After=network-online.target
Wants=network-online.target
Description=flannel is an etcd backed overlay network for containers
[Service]
@@ -44,7 +44,7 @@ Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/n
* Provision the master node
* Capture the master node private IP address
* Edit node.yaml
* Provision one or more worker nodes

### AWS
@@ -79,7 +79,7 @@ curl <insert-ip-from-above-here>
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.

### Scaling

Now try to scale up the nginx you created before:
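
A minimal sketch of that scale-up, assuming the replication controller created earlier is named `nginx` (the name and replica count are assumptions):

```sh
# Scale the nginx replication controller to 3 replicas, then verify
kubectl scale rc nginx --replicas=3
kubectl get pods
```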
@@ -80,7 +80,7 @@ docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1
### Test it out

At this point you should have a running Kubernetes cluster. You can test this by downloading the kubectl
binary
([OS X](https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/darwin/amd64/kubectl))
([linux](https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl))
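
A quick sketch of that test using the linux/amd64 URL above (substitute the darwin URL on OS X); pointing at `localhost:8080` assumes you run this on the master, where the apiserver listens on that port:

```sh
# Download kubectl v1.0.1, make it executable, and list the cluster's nodes
wget https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl
chmod +x kubectl
./kubectl -s http://localhost:8080 get nodes
```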
@@ -60,9 +60,9 @@ fed-node = 192.168.121.65
```

**Prepare the hosts:**

* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=update-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.

```sh
@@ -262,10 +262,10 @@ works with [Amazon Web Service](https://jujucharms.com/docs/stable/config-aws),
[Vmware vSphere](https://jujucharms.com/docs/stable/config-vmware).

If you do not see your favorite cloud provider listed, many clouds can be
configured for [manual provisioning](https://jujucharms.com/docs/stable/config-manual).

The Kubernetes bundle has been tested on GCE and AWS and found to work with
version 1.0.0.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -92,7 +92,7 @@ NAME READY STATUS RESTARTS AG
counter 1/1 Running 0 5m
```

This step may take a few minutes to download the ubuntu:14.04 image, during which the pod status will be shown as `Pending`.

One of the nodes is now running the counter pod:
@@ -92,7 +92,7 @@ steps that existing cluster setup scripts are making.
## Designing and Preparing

### Learning

1. You should be familiar with using Kubernetes already. We suggest you set
up a temporary cluster by following one of the other Getting Started Guides.
@@ -108,7 +108,7 @@ an interface for managing TCP Load Balancers, Nodes (Instances) and Networking R
The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to
create a custom cluster without implementing a cloud provider (for example if using
bare-metal), and not all parts of the interface need to be implemented, depending
on how flags are set on various components.

### Nodes
@@ -220,13 +220,13 @@ all the necessary binaries.
#### Selecting Images

You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so
you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler,
we recommend that you run these as containers, so you need an image to be built.

You have several choices for Kubernetes images:
- Use images hosted on Google Container Registry (GCR):
  - e.g. `gcr.io/google_containers/kube-apiserver:$TAG`, where `TAG` is the latest
    release tag, which can be found on the [latest releases page](https://github.com/GoogleCloudPlatform/kubernetes/releases/latest).
  - Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy (a short sketch follows this list).
- Build your own images.
  - Useful if you are using a private registry.
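
A short sketch of the GCR option; the tag value here is only an example of the kind of release tag you would copy from the releases page:

```sh
# Pin one release tag for every component (example value; use the tag from the releases page)
TAG=v1.0.1
# Pull the apiserver image named above; the other components follow the same naming pattern
docker pull gcr.io/google_containers/kube-apiserver:$TAG
```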
@@ -294,7 +294,7 @@ You will end up with the following files (we will use these variables later on)
#### Preparing Credentials

The admin user (and any other users) need:
- a token or a password to identify them.
  - tokens are just long alphanumeric strings, e.g. 32 chars. See
  - `TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)`
@@ -318,7 +318,7 @@ The kubeconfig file for the administrator can be created as follows:
- `kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER`
- `kubectl config use-context $CONTEXT_NAME`

Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how
many distinct files to make:
1. Use the same credential as the admin
   - This is simplest to set up.
@@ -355,7 +355,7 @@ guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
## Configuring and Installing Base Software on Nodes

This section discusses how to configure machines to be Kubernetes nodes.

You should run three daemons on every node:
- docker or rkt
@@ -395,7 +395,7 @@ so that kube-proxy can manage iptables instead of docker.
  - if you are using an overlay network, consult those instructions.
- `--mtu=`
  - may be required when using Flannel, because of the extra packet size due to udp encapsulation
- `--insecure-registry $CLUSTER_SUBNET`
  - to connect to a private registry, if you set one up, without using SSL.

You may want to increase the number of open files for docker:
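
One minimal sketch of raising that limit, assuming a Debian/Ubuntu-style install where the docker init script reads `/etc/default/docker`; the exact file and mechanism are distribution specific (systemd-based distributions use a `LimitNOFILE=` drop-in instead):

```sh
# Raise docker's open-file limit (file location and value are assumptions)
echo "DOCKER_NOFILE=1000000" >> /etc/default/docker
```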
@@ -412,7 +412,7 @@ installation, by following examples given in the Docker documentation.
The minimum version required is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).

[systemd](http://www.freedesktop.org/wiki/Software/systemd/) is required on your node to run rkt. The
minimum version required to match rkt v0.5.6 is
[systemd 215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).

[rkt metadata service](https://github.com/coreos/rkt/blob/master/Documentation/networking.md) is also required
@@ -444,7 +444,7 @@ Arguments to consider:
All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not
strictly required, but being consistent is easier.) Obtain a binary as described for
kubelet.

Arguments to consider:
- If following the HTTPS security approach:
@@ -456,7 +456,7 @@ Arguments to consider:
### Networking

Each node needs to be allocated its own CIDR range for pod networking.
Call this `NODE_X_POD_CIDR`.

A bridge called `cbr0` needs to be created on each node. The bridge is explained
further in the [networking documentation](../admin/networking.md). The bridge itself
@@ -498,7 +498,7 @@ NOTE: This is environment specific. Some environments will not need
any masquerading at all. Others, such as GCE, will not allow pod IPs to send
traffic to the internet, but have no problem with them inside your GCE Project.

### Other

- Enable auto-upgrades for your OS package manager, if desired.
- Configure log rotation for all node components (e.g. using [logrotate](http://linux.die.net/man/8/logrotate)).
@@ -529,7 +529,7 @@ You will need to run one or more instances of etcd.
- Recommended approach: run one etcd instance, with its log written to a directory backed
  by durable storage (RAID, GCE PD) (see the sketch after this list)
- Alternative: run 3 or 5 etcd instances.
  - Log can be written to non-durable storage because storage is replicated.
  - run a single apiserver which connects to one of the etcd nodes.
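
For the recommended single-instance option, the invocation looks roughly like this; the data directory and client URLs are assumptions (etcd 2.x flags), not prescribed values:

```sh
# One etcd instance, data directory on durable storage (path and URLs are assumptions)
etcd --data-dir=/var/etcd/data \
  --listen-client-urls=http://127.0.0.1:4001 \
  --advertise-client-urls=http://127.0.0.1:4001
```
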
See [cluster-troubleshooting](../admin/cluster-troubleshooting.md) for more discussion on factors affecting cluster
availability.
@@ -49,7 +49,7 @@ On the Master:
On each Node:
- `kube-proxy`
- `kube-kubelet`
- `calico-node`

## Prerequisites
@@ -191,7 +191,7 @@ node-X | 192.168.X.1/24
#### Start docker on cbr0

The Docker daemon must be started and told to use the already configured cbr0 instead of the usual docker0, as well as to disable IP masquerading and modification of iptables.

1.) Edit the ubuntu-15.04 docker.service for systemd at: `/lib/systemd/system/docker.service`
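
The daemon flags that correspond to those requirements look roughly like this; the exact `ExecStart` line depends on your Docker version and unit file, so treat it as a sketch rather than the file's actual contents:

```
# Relevant flags: use cbr0, and leave iptables/masquerading to kube-proxy
# (the docker invocation itself varies by version; "-d" is the pre-1.8 daemon form)
ExecStart=/usr/bin/docker -d -H fd:// --bridge=cbr0 --iptables=false --ip-masq=false
```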
@@ -49,7 +49,7 @@ This document describes how to deploy Kubernetes on ubuntu nodes, including 1 Ku
## Prerequisites

*1 The nodes have docker version 1.2+ and bridge-utils installed to manipulate the linux bridge*

*2 All machines can communicate with each other; no need to connect to the Internet (a private docker registry should be used in this case)*
@@ -57,7 +57,7 @@ This document describes how to deploy Kubernetes on ubuntu nodes, including 1 Ku
*4 Dependencies of this guide: etcd-2.0.12, flannel-0.4.0, k8s-1.0.1, but it may work with higher versions*

*5 All the remote servers can be logged into via ssh without a password, using key authentication*


### Starting a Cluster
@@ -80,7 +80,7 @@ Please make sure that there are `kube-apiserver`, `kube-controller-manager`, `ku
An example cluster is listed below:

| IP Address | Role |
|------------|------|
| 10.10.103.223 | node |
| 10.10.103.162 | node |
@@ -112,13 +112,13 @@ The `SERVICE_CLUSTER_IP_RANGE` variable defines the Kubernetes service IP range.
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network; it should not conflict with the `SERVICE_CLUSTER_IP_RANGE` above.
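
For illustration, the two settings discussed above take this shape; the values are assumptions and only need to be unused, non-overlapping ranges in your environment:

```sh
# Example values only: a service IP range and a flannel overlay range that do not overlap
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
```
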
After all the above variables are set correctly, we can use the following command in the cluster/ directory to bring up the whole cluster.

`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh`

The scripts automatically scp binaries and config files to all the machines and start the k8s services on them. The only thing you need to do is to type the sudo password when prompted. The current machine name is shown below, so you will not type in the wrong password.
@@ -135,9 +135,9 @@ If all things goes right, you will see the below message from console
**All done!**

You can also use the `kubectl` command to see if the newly created cluster is working correctly. The `kubectl` binary is under the `cluster/ubuntu/binaries` directory. You can move it into your PATH and then use the commands below smoothly.

For example, use `$ kubectl get nodes` to see if all your nodes are in Ready status. It may take some time for the nodes to become ready, as shown below.

```console
NAME LABELS STATUS
@@ -192,19 +192,19 @@ We are working on these features which we'd like to let everybody know:
#### Troubleshooting

Generally, what this approach does is quite simple:

1. Download and copy binaries and configuration files to the proper directories on every node

2. Configure `etcd` using the IPs provided by the user

3. Create and start the flannel network

So, if you see a problem, **check the etcd configuration first**

Please try:

1. Check `/var/log/upstart/etcd.log` for suspicious etcd log entries

2. Check `/etc/default/etcd`; as we do not have much input validation, a correct config should look like:
@@ -212,11 +212,11 @@ Please try:
ETCD_OPTS="-name infra1 -initial-advertise-peer-urls <http://ip_of_this_node:2380> -listen-peer-urls <http://ip_of_this_node:2380> -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=<http://ip_of_this_node:2380>,infra2=<http://ip_of_another_node:2380>,infra3=<http://ip_of_another_node:2380> -initial-cluster-state new"
```

3. You can use the command
`$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh` to bring down the cluster, and run
`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh` to start it again.

4. You can also customize your own settings in `/etc/default/{component_name}` after a successful configuration.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->