Remove all docs which are moving to http://kubernetes.github.io
All .md files now are only a pointer to where they likely are on the new site. All other files are untouched.
This commit is contained in:
@@ -31,245 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Kubernetes on Azure with CoreOS and [Weave](http://weave.works)
|
||||
---------------------------------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Let's go!](#lets-go)
|
||||
- [Deploying the workload](#deploying-the-workload)
|
||||
- [Scaling](#scaling)
|
||||
- [Exposing the app to the outside world](#exposing-the-app-to-the-outside-world)
|
||||
- [Next steps](#next-steps)
|
||||
- [Tear down...](#tear-down)
|
||||
|
||||
## Introduction
|
||||
|
||||
In this guide I will demonstrate how to deploy a Kubernetes cluster to the Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking in a transparent yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. You need an Azure account.
|
||||
|
||||
## Let's go!
|
||||
|
||||
To get started, you need to checkout the code:
|
||||
|
||||
```sh
|
||||
git clone https://github.com/kubernetes/kubernetes
|
||||
cd kubernetes/docs/getting-started-guides/coreos/azure/
|
||||
```
|
||||
|
||||
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used the Azure CLI, you should have it already.
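A quick way to confirm both tools are available (any reasonably recent versions should do):

```sh
node --version
npm --version
```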
|
||||
|
||||
First, you need to install some of the dependencies with
|
||||
|
||||
```sh
|
||||
npm install
|
||||
```
|
||||
|
||||
Now, all you need to do is:
|
||||
|
||||
```sh
|
||||
./azure-login.js -u <your_username>
|
||||
./create-kubernetes-cluster.js
|
||||
```
|
||||
|
||||
This script will provision a cluster suitable for production use: a ring of 3 dedicated etcd nodes, 1 Kubernetes master and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
|
||||
If you need to pass Azure-specific options to the creation script, you can do so via additional environment variables, e.g.:
|
||||
|
||||
```
|
||||
AZ_SUBSCRIPTION=<id> AZ_LOCATION="East US" ./create-kubernetes-cluster.js
|
||||
# or
|
||||
AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
|
||||
```
|
||||
|
||||
|
||||
|
||||
Once the creation of Azure VMs has finished, you should see the following:
|
||||
|
||||
```console
|
||||
...
|
||||
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
|
||||
azure_wrapper/info: The hosts in this deployment are:
|
||||
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
|
||||
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
|
||||
```
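If you want to double-check what was provisioned, the classic Azure CLI that these scripts rely on can list the VMs for you (a sketch; names will differ per deployment):

```sh
azure vm list
```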
|
||||
|
||||
Let's log in to the master node like so:
|
||||
|
||||
```sh
|
||||
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
|
||||
```
|
||||
|
||||
> Note: the config file name will be different; make sure to use the one you see.
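A quick way to find yours (a sketch):

```sh
ls ./output/*_ssh_conf
```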
|
||||
|
||||
Check there are 2 nodes in the cluster:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
kube-01 kubernetes.io/hostname=kube-01 Ready
|
||||
kube-02 kubernetes.io/hostname=kube-02 Ready
|
||||
```
|
||||
|
||||
## Deploying the workload
|
||||
|
||||
Let's follow the Guestbook example now:
|
||||
|
||||
```sh
|
||||
kubectl create -f ~/guestbook-example
|
||||
```
|
||||
|
||||
You need to wait for the pods to get deployed; run the following and wait for `STATUS` to change from `Pending` to `Running`.
|
||||
|
||||
```sh
|
||||
kubectl get pods --watch
|
||||
```
|
||||
|
||||
> Note: most of this time will be spent downloading the Docker container images on each of the nodes.
|
||||
|
||||
Eventually you should see:
|
||||
|
||||
```console
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
frontend-0a9xi 1/1 Running 0 4m
|
||||
frontend-4wahe 1/1 Running 0 4m
|
||||
frontend-6l36j 1/1 Running 0 4m
|
||||
redis-master-talmr 1/1 Running 0 4m
|
||||
redis-slave-12zfd 1/1 Running 0 4m
|
||||
redis-slave-3nbce 1/1 Running 0 4m
|
||||
```
|
||||
|
||||
## Scaling
|
||||
|
||||
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
|
||||
|
||||
You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/kubernetes/docs/getting-started-guides/coreos/azure/`).
|
||||
|
||||
First, let's set the size of the new VMs:
|
||||
|
||||
```sh
|
||||
export AZ_VM_SIZE=Large
|
||||
```
|
||||
|
||||
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
|
||||
...
|
||||
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
|
||||
azure_wrapper/info: The hosts in this deployment are:
|
||||
[ 'etcd-00',
|
||||
'etcd-01',
|
||||
'etcd-02',
|
||||
'kube-00',
|
||||
'kube-01',
|
||||
'kube-02',
|
||||
'kube-03',
|
||||
'kube-04' ]
|
||||
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
|
||||
```
|
||||
|
||||
> Note: this step has created new files in `./output`.
|
||||
|
||||
Back on `kube-00`:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
kube-01 kubernetes.io/hostname=kube-01 Ready
|
||||
kube-02 kubernetes.io/hostname=kube-02 Ready
|
||||
kube-03 kubernetes.io/hostname=kube-03 Ready
|
||||
kube-04 kubernetes.io/hostname=kube-04 Ready
|
||||
```
|
||||
|
||||
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
|
||||
|
||||
First, double-check how many replication controllers there are:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
|
||||
redis-master master redis name=redis-master 1
|
||||
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
|
||||
```
|
||||
|
||||
As there are 4 nodes, let's scale proportionally:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
|
||||
scaled
|
||||
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
|
||||
scaled
|
||||
```
|
||||
|
||||
Check what you have now:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
|
||||
redis-master master redis name=redis-master 1
|
||||
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
|
||||
```
|
||||
|
||||
You will now have more instances of the front-end Guestbook app and of the Redis slaves; and if you look up all pods labeled `name=frontend`, you should see one running on each node.
|
||||
|
||||
```console
|
||||
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
frontend-0a9xi 1/1 Running 0 22m
|
||||
frontend-4wahe 1/1 Running 0 22m
|
||||
frontend-6l36j 1/1 Running 0 22m
|
||||
frontend-z9oxo 1/1 Running 0 41s
|
||||
```
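To confirm that the newest pod really landed on one of the freshly added nodes, you can describe it and look at its Node field (the pod name below is taken from the example output above; substitute one from your own listing):

```sh
kubectl describe pod frontend-z9oxo | grep -i node
```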
|
||||
|
||||
## Exposing the app to the outside world
|
||||
|
||||
There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet.
|
||||
|
||||
```
|
||||
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
|
||||
Guestbook app is on port 31605, will map it to port 80 on kube-00
|
||||
info: Executing command vm endpoint create
|
||||
+ Getting virtual machines
|
||||
+ Reading network configuration
|
||||
+ Updating network configuration
|
||||
info: vm endpoint create command OK
|
||||
info: Executing command vm endpoint show
|
||||
+ Getting virtual machines
|
||||
data: Name : tcp-80-31605
|
||||
data: Local port : 31605
|
||||
data: Protcol : tcp
|
||||
data: Virtual IP Address : 137.117.156.164
|
||||
data: Direct server return : Disabled
|
||||
info: vm endpoint show command OK
|
||||
```
|
||||
|
||||
You should then be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
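For example, a quick check from your local machine (using the example IP above; yours will differ):

```sh
curl http://137.117.156.164/
```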
|
||||
|
||||
## Next steps
|
||||
|
||||
You now have a full-blown cluster running in Azure. Congratulations!
|
||||
|
||||
You should probably try deploying other [example apps](../../../../examples/) or write your own ;)
|
||||
|
||||
## Tear down...
|
||||
|
||||
If you don't want to keep running up the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you have seen.
|
||||
|
||||
```sh
|
||||
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
|
||||
```
|
||||
|
||||
> Note: make sure to use the _latest state file_, as a new one is created after scaling.
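A quick way to pick out the newest state file (a sketch):

```sh
ls -t ./output/*_deployment.yml | head -1
```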
|
||||
|
||||
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/azure/README/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,203 +32,8 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
Bare Metal Kubernetes on CoreOS with Calico Networking
|
||||
------------------------------------------
|
||||
This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/bare_metal_calico/
|
||||
|
||||
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
|
||||
|
||||
Specifically, this guide will have you do the following:
|
||||
- Deploy a Kubernetes master node on CoreOS using cloud-config.
|
||||
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
|
||||
- Configure `kubectl` to access your cluster.
|
||||
|
||||
The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.
|
||||
|
||||
## Prerequisites and Assumptions
|
||||
|
||||
- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
|
||||
- 1 Kubernetes Master
|
||||
- 2 Kubernetes Nodes
|
||||
- Your nodes should have IP connectivity to each other and the internet.
|
||||
- This guide assumes a DHCP server on your network to assign server IPs.
|
||||
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
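For example, once the cluster is up you could switch to a non-overlapping pool along these lines (a sketch only; the replacement CIDR is just an illustration, and the exact subcommands are covered in the pool documentation linked above):

```
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add 172.16.0.0/16 --nat-outgoing
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool remove 192.168.0.0/16
```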
|
||||
|
||||
## Cloud-config
|
||||
|
||||
This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.
|
||||
|
||||
We'll use two cloud-config files:
|
||||
- `master-config.yaml`: cloud-config for the Kubernetes master
|
||||
- `node-config.yaml`: cloud-config for each Kubernetes node
|
||||
|
||||
## Download CoreOS
|
||||
|
||||
Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
|
||||
|
||||
## Configure the Kubernetes Master
|
||||
|
||||
1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.
|
||||
|
||||
2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.
|
||||
|
||||
3. Replace the following variables in the `master-config.yaml` file.
|
||||
|
||||
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/)
|
||||
|
||||
4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).
|
||||
|
||||
5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.
|
||||
|
||||
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
|
||||
|
||||
```
|
||||
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
|
||||
```
|
||||
|
||||
6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.
|
||||
|
||||
### Configure TLS
|
||||
|
||||
The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.
|
||||
|
||||
1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.
|
||||
|
||||
2. Send the three files to your master host (using `scp` for example).
|
||||
|
||||
3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
|
||||
|
||||
```
|
||||
# Move keys
|
||||
sudo mkdir -p /etc/kubernetes/ssl/
|
||||
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
|
||||
|
||||
# Set Permissions
|
||||
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
|
||||
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
|
||||
```
|
||||
|
||||
4. Restart the kubelet to pick up the changes:
|
||||
|
||||
```
|
||||
sudo systemctl restart kubelet
|
||||
```
|
||||
|
||||
## Configure the compute nodes
|
||||
|
||||
The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster.
|
||||
|
||||
1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.
|
||||
|
||||
2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.
|
||||
|
||||
3. Replace the following placeholders in the `node-config.yaml` file to match your deployment.
|
||||
|
||||
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
|
||||
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
|
||||
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
|
||||
|
||||
4. Replace the following placeholders with the contents of their respective files.
|
||||
|
||||
- `<CA_CERT>`: Complete contents of `ca.pem`
|
||||
- `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`
|
||||
|
||||
> **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.
|
||||
|
||||
> **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
|
||||
>
|
||||
> ```
|
||||
> - path: /etc/kubernetes/ssl/ca.pem
|
||||
> owner: core
|
||||
> permissions: 0644
|
||||
> content: |
|
||||
> <CA_CERT>
|
||||
> ```
|
||||
>
|
||||
> should look like this once the certificate is in place:
|
||||
>
|
||||
> ```
|
||||
> - path: /etc/kubernetes/ssl/ca.pem
|
||||
> owner: core
|
||||
> permissions: 0644
|
||||
> content: |
|
||||
> -----BEGIN CERTIFICATE-----
|
||||
> MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
|
||||
> ...<snip>...
|
||||
> QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
|
||||
> -----END CERTIFICATE-----
|
||||
> ```
|
||||
|
||||
5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.
|
||||
|
||||
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
|
||||
|
||||
```
|
||||
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
|
||||
```
|
||||
|
||||
6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.
|
||||
|
||||
## Configure Kubeconfig
|
||||
|
||||
To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.
|
||||
|
||||
```
|
||||
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
|
||||
kubectl config set-credentials calico-admin --certificate-authority=<CA_CERT_PATH> --client-key=<ADMIN_KEY_PATH> --client-certificate=<ADMIN_CERT_PATH>
|
||||
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
|
||||
kubectl config use-context calico
|
||||
```
|
||||
|
||||
Check your work with `kubectl get nodes`.
|
||||
|
||||
## Install the DNS Addon
|
||||
|
||||
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.
|
||||
|
||||
```
|
||||
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
|
||||
```
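Once created, you can check that the DNS pods come up. The namespace below assumes the manifest places them in `kube-system`, the usual location for cluster add-ons; if you see nothing there, check the default namespace instead:

```
kubectl get pods --namespace=kube-system
```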
|
||||
|
||||
## Install the Kubernetes UI Addon (Optional)
|
||||
|
||||
The Kubernetes UI can be installed using `kubectl` to run the following manifest file.
|
||||
|
||||
```
|
||||
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
|
||||
```
|
||||
|
||||
## Launch other Services With Calico-Kubernetes
|
||||
|
||||
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](../../../examples/) to set up other services on your cluster.
|
||||
|
||||
## Connectivity to outside the cluster
|
||||
|
||||
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
|
||||
|
||||
### NAT on the nodes
|
||||
|
||||
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
|
||||
|
||||
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
|
||||
|
||||
```
|
||||
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
|
||||
```
|
||||
|
||||
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
|
||||
|
||||
```
|
||||
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
|
||||
```
|
||||
|
||||
### NAT at the border router
|
||||
|
||||
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
|
||||
|
||||
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
|
||||
|
||||
[](https://github.com/igrigorik/ga-beacon)
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
||||
|
@@ -31,676 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Bare Metal CoreOS with Kubernetes (OFFLINE)
|
||||
------------------------------------------
|
||||
Deploy a CoreOS environment running Kubernetes. This particular guide is meant to help those working on an OFFLINE system, whether for testing a POC before the real deal or because your applications are restricted to a totally offline environment.
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [High Level Design](#high-level-design)
|
||||
- [This Guide's variables](#this-guides-variables)
|
||||
- [Setup PXELINUX CentOS](#setup-pxelinux-centos)
|
||||
- [Adding CoreOS to PXE](#adding-coreos-to-pxe)
|
||||
- [DHCP configuration](#dhcp-configuration)
|
||||
- [Kubernetes](#kubernetes)
|
||||
- [Cloud Configs](#cloud-configs)
|
||||
- [master.yml](#masteryml)
|
||||
- [node.yml](#nodeyml)
|
||||
- [New pxelinux.cfg file](#new-pxelinuxcfg-file)
|
||||
- [Specify the pxelinux targets](#specify-the-pxelinux-targets)
|
||||
- [Creating test pod](#creating-test-pod)
|
||||
- [Helping commands for debugging](#helping-commands-for-debugging)
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. Installed *CentOS 6* for PXE server
|
||||
2. At least two bare metal nodes to work with
|
||||
|
||||
## High Level Design
|
||||
|
||||
1. Manage the tftp directory
|
||||
* /tftpboot/(coreos)(centos)(RHEL)
|
||||
* /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file
|
||||
2. Update the pxelinux link for each install
|
||||
3. Update the DHCP config to reflect the host needing deployment
|
||||
4. Set up nodes to deploy CoreOS, creating an etcd cluster.
|
||||
5. Assume no access to the public [etcd discovery tool](https://discovery.etcd.io/).
|
||||
6. Install the CoreOS slaves to become Kubernetes nodes.
|
||||
|
||||
## This Guide's variables
|
||||
|
||||
| Node Description | MAC | IP |
|
||||
| :---------------------------- | :---------------: | :---------: |
|
||||
| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
|
||||
| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 |
|
||||
| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 |
|
||||
|
||||
|
||||
## Setup PXELINUX CentOS
|
||||
|
||||
To set up a CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version.
|
||||
|
||||
1. Install packages needed on CentOS
|
||||
|
||||
sudo yum install tftp-server dhcp syslinux
|
||||
|
||||
2. `vi /etc/xinetd.d/tftp` to enable the tftp service and change `disable` to 'no'
|
||||
disable = no
|
||||
|
||||
3. Copy over the syslinux images we will need.
|
||||
|
||||
su -
|
||||
mkdir -p /tftpboot
|
||||
cd /tftpboot
|
||||
cp /usr/share/syslinux/pxelinux.0 /tftpboot
|
||||
cp /usr/share/syslinux/menu.c32 /tftpboot
|
||||
cp /usr/share/syslinux/memdisk /tftpboot
|
||||
cp /usr/share/syslinux/mboot.c32 /tftpboot
|
||||
cp /usr/share/syslinux/chain.c32 /tftpboot
|
||||
|
||||
/sbin/service dhcpd start
|
||||
/sbin/service xinetd start
|
||||
/sbin/chkconfig tftp on
|
||||
|
||||
4. Set up the default boot menu
|
||||
|
||||
mkdir /tftpboot/pxelinux.cfg
|
||||
touch /tftpboot/pxelinux.cfg/default
|
||||
|
||||
5. Edit the menu `vi /tftpboot/pxelinux.cfg/default`
|
||||
|
||||
default menu.c32
|
||||
prompt 0
|
||||
timeout 15
|
||||
ONTIMEOUT local
|
||||
display boot.msg
|
||||
|
||||
MENU TITLE Main Menu
|
||||
|
||||
LABEL local
|
||||
MENU LABEL Boot local hard drive
|
||||
LOCALBOOT 0
|
||||
|
||||
Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers.
|
||||
|
||||
## Adding CoreOS to PXE
|
||||
|
||||
This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment.
|
||||
|
||||
1. Find or create the TFTP root directory that everything will be based on.
|
||||
* For this document we will assume `/tftpboot/` is our root directory.
|
||||
2. Once we have our tftp root directory, create a new directory structure for our CoreOS images.
|
||||
3. Download the CoreOS PXE files provided by the CoreOS team.
|
||||
|
||||
MY_TFTPROOT_DIR=/tftpboot
|
||||
mkdir -p $MY_TFTPROOT_DIR/images/coreos/
|
||||
cd $MY_TFTPROOT_DIR/images/coreos/
|
||||
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
|
||||
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
|
||||
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
|
||||
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
|
||||
gpg --verify coreos_production_pxe.vmlinuz.sig
|
||||
gpg --verify coreos_production_pxe_image.cpio.gz.sig
|
||||
|
||||
4. Edit the menu `vi /tftpboot/pxelinux.cfg/default` again
|
||||
|
||||
default menu.c32
|
||||
prompt 0
|
||||
timeout 300
|
||||
ONTIMEOUT local
|
||||
display boot.msg
|
||||
|
||||
MENU TITLE Main Menu
|
||||
|
||||
LABEL local
|
||||
MENU LABEL Boot local hard drive
|
||||
LOCALBOOT 0
|
||||
|
||||
MENU BEGIN CoreOS Menu
|
||||
|
||||
LABEL coreos-master
|
||||
MENU LABEL CoreOS Master
|
||||
KERNEL images/coreos/coreos_production_pxe.vmlinuz
|
||||
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-single-master.yml
|
||||
|
||||
LABEL coreos-slave
|
||||
MENU LABEL CoreOS Slave
|
||||
KERNEL images/coreos/coreos_production_pxe.vmlinuz
|
||||
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-slave.yml
|
||||
MENU END
|
||||
|
||||
This configuration file will now boot from the local drive by default but offer the option to PXE-image CoreOS.
|
||||
|
||||
## DHCP configuration
|
||||
|
||||
This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images.
|
||||
|
||||
1. Add the `filename` to the _host_ or _subnet_ sections.
|
||||
|
||||
filename "/tftpboot/pxelinux.0";
|
||||
|
||||
2. At this point we want to make pxelinux configuration files that will be the templates for the different CoreOS deployments.
|
||||
|
||||
subnet 10.20.30.0 netmask 255.255.255.0 {
|
||||
next-server 10.20.30.242;
|
||||
option broadcast-address 10.20.30.255;
|
||||
filename "<other default image>";
|
||||
|
||||
...
|
||||
# http://www.syslinux.org/wiki/index.php/PXELINUX
|
||||
host core_os_master {
|
||||
hardware ethernet d0:00:67:13:0d:00;
|
||||
option routers 10.20.30.1;
|
||||
fixed-address 10.20.30.40;
|
||||
option domain-name-servers 10.20.30.242;
|
||||
filename "/pxelinux.0";
|
||||
}
|
||||
host core_os_slave {
|
||||
hardware ethernet d0:00:67:13:0d:01;
|
||||
option routers 10.20.30.1;
|
||||
fixed-address 10.20.30.41;
|
||||
option domain-name-servers 10.20.30.242;
|
||||
filename "/pxelinux.0";
|
||||
}
|
||||
host core_os_slave2 {
|
||||
hardware ethernet d0:00:67:13:0d:02;
|
||||
option routers 10.20.30.1;
|
||||
fixed-address 10.20.30.42;
|
||||
option domain-name-servers 10.20.30.242;
|
||||
filename "/pxelinux.0";
|
||||
}
|
||||
...
|
||||
}
|
||||
|
||||
We will be specifying the node configuration later in the guide.
|
||||
|
||||
## Kubernetes
|
||||
|
||||
To deploy our configuration we need to create an `etcd` master. To do so we want to PXE-boot CoreOS with a specific cloud-config.yml. There are two options here:
|
||||
1. Template the cloud-config file and programmatically create new static configs for different cluster setups.
|
||||
2. Run a service discovery protocol in our stack to do auto discovery.
|
||||
|
||||
For this demo we just make a static, single `etcd` server to host our Kubernetes and `etcd` master servers.
|
||||
|
||||
Since we are OFFLINE, most of the helper processes in CoreOS and Kubernetes are limited. For our setup we will have to download the Kubernetes binaries and serve them up in our local environment.
|
||||
|
||||
An easy solution is to host a small web server on the DHCP/TFTP host for all our binaries to make them available to the local CoreOS PXE machines.
|
||||
|
||||
To get this up and running we are going to set up a simple `apache` server to serve the binaries needed to bootstrap Kubernetes.
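If `httpd` is not already installed on the CentOS PXE host, install and enable it first (a sketch, assuming CentOS 6):

    yum install -y httpd
    /sbin/service httpd start
    /sbin/chkconfig httpd on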
|
||||
|
||||
This is on the PXE server from the previous section:
|
||||
|
||||
rm /etc/httpd/conf.d/welcome.conf
|
||||
cd /var/www/html/
|
||||
wget -O kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64
|
||||
wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate
|
||||
wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate
|
||||
|
||||
This gives us the binaries we need to run Kubernetes. In the future this would need to be enhanced to pull updates from the Internet.
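You can sanity-check that the web server is actually serving the binaries from another machine on the network (using the same `<PXE_SERVER_IP>` placeholder as the rest of this guide):

    curl -I http://<PXE_SERVER_IP>/kubectl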
|
||||
|
||||
Now for the good stuff!
|
||||
|
||||
## Cloud Configs
|
||||
|
||||
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.
|
||||
|
||||
These are based on the work found here: [master.yml](cloud-configs/master.yaml), [node.yml](cloud-configs/node.yaml)
|
||||
|
||||
To make the setup work, you need to replace a few placeholders:
|
||||
|
||||
- Replace `<PXE_SERVER_IP>` with your PXE server ip address (e.g. 10.20.30.242)
|
||||
- Replace `<MASTER_SERVER_IP>` with the Kubernetes master ip address (e.g. 10.20.30.40)
|
||||
- If you run a private docker registry, replace `rdocker.example.com` with your docker registry dns name.
|
||||
- If you use a proxy, replace `rproxy.example.com` with your proxy server (and port)
|
||||
- Add your own SSH public key(s) to the cloud config at the end
|
||||
|
||||
### master.yml
|
||||
|
||||
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-master.yml`.
|
||||
|
||||
|
||||
#cloud-config
|
||||
---
|
||||
write_files:
|
||||
- path: /opt/bin/waiter.sh
|
||||
owner: root
|
||||
content: |
|
||||
#! /usr/bin/bash
|
||||
until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done
|
||||
- path: /opt/bin/kubernetes-download.sh
|
||||
owner: root
|
||||
permissions: 0755
|
||||
content: |
|
||||
#! /usr/bin/bash
|
||||
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubectl"
|
||||
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubernetes"
|
||||
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubecfg"
|
||||
chmod +x /opt/bin/*
|
||||
- path: /etc/profile.d/opt-path.sh
|
||||
owner: root
|
||||
permissions: 0755
|
||||
content: |
|
||||
#! /usr/bin/bash
|
||||
PATH=$PATH:/opt/bin
|
||||
coreos:
|
||||
units:
|
||||
- name: 10-eno1.network
|
||||
runtime: true
|
||||
content: |
|
||||
[Match]
|
||||
Name=eno1
|
||||
[Network]
|
||||
DHCP=yes
|
||||
- name: 20-nodhcp.network
|
||||
runtime: true
|
||||
content: |
|
||||
[Match]
|
||||
Name=en*
|
||||
[Network]
|
||||
DHCP=none
|
||||
- name: get-kube-tools.service
|
||||
runtime: true
|
||||
command: start
|
||||
content: |
|
||||
[Service]
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStart=/opt/bin/kubernetes-download.sh
|
||||
RemainAfterExit=yes
|
||||
Type=oneshot
|
||||
- name: setup-network-environment.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Setup Network Environment
|
||||
Documentation=https://github.com/kelseyhightower/setup-network-environment
|
||||
Requires=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
|
||||
ExecStart=/opt/bin/setup-network-environment
|
||||
RemainAfterExit=yes
|
||||
Type=oneshot
|
||||
- name: etcd.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=etcd
|
||||
Requires=setup-network-environment.service
|
||||
After=setup-network-environment.service
|
||||
[Service]
|
||||
EnvironmentFile=/etc/network-environment
|
||||
User=etcd
|
||||
PermissionsStartOnly=true
|
||||
ExecStart=/usr/bin/etcd \
|
||||
--name ${DEFAULT_IPV4} \
|
||||
--addr ${DEFAULT_IPV4}:4001 \
|
||||
--bind-addr 0.0.0.0 \
|
||||
--cluster-active-size 1 \
|
||||
--data-dir /var/lib/etcd \
|
||||
--http-read-timeout 86400 \
|
||||
--peer-addr ${DEFAULT_IPV4}:7001 \
|
||||
--snapshot true
|
||||
Restart=always
|
||||
RestartSec=10s
|
||||
- name: fleet.socket
|
||||
command: start
|
||||
content: |
|
||||
[Socket]
|
||||
ListenStream=/var/run/fleet.sock
|
||||
- name: fleet.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=fleet daemon
|
||||
Wants=etcd.service
|
||||
After=etcd.service
|
||||
Wants=fleet.socket
|
||||
After=fleet.socket
|
||||
[Service]
|
||||
Environment="FLEET_ETCD_SERVERS=http://127.0.0.1:4001"
|
||||
Environment="FLEET_METADATA=role=master"
|
||||
ExecStart=/usr/bin/fleetd
|
||||
Restart=always
|
||||
RestartSec=10s
|
||||
- name: etcd-waiter.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=etcd waiter
|
||||
Wants=network-online.target
|
||||
Wants=etcd.service
|
||||
After=etcd.service
|
||||
After=network-online.target
|
||||
Before=flannel.service
|
||||
Before=setup-network-environment.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/waiter.sh
|
||||
ExecStart=/usr/bin/bash /opt/bin/waiter.sh
|
||||
RemainAfterExit=true
|
||||
Type=oneshot
|
||||
- name: flannel.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Wants=etcd-waiter.service
|
||||
After=etcd-waiter.service
|
||||
Requires=etcd.service
|
||||
After=etcd.service
|
||||
After=network-online.target
|
||||
Wants=network-online.target
|
||||
Description=flannel is an etcd backed overlay network for containers
|
||||
[Service]
|
||||
Type=notify
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
|
||||
ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16", "Backend": {"Type": "vxlan"}}'
|
||||
ExecStart=/opt/bin/flanneld
|
||||
- name: kube-apiserver.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes API Server
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=etcd.service
|
||||
After=etcd.service
|
||||
[Service]
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-apiserver
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
|
||||
ExecStart=/opt/bin/kube-apiserver \
|
||||
--address=0.0.0.0 \
|
||||
--port=8080 \
|
||||
--service-cluster-ip-range=10.100.0.0/16 \
|
||||
--etcd-servers=http://127.0.0.1:4001 \
|
||||
--logtostderr=true
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- name: kube-controller-manager.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Controller Manager
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=kube-apiserver.service
|
||||
After=kube-apiserver.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-controller-manager
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
|
||||
ExecStart=/opt/bin/kube-controller-manager \
|
||||
--master=127.0.0.1:8080 \
|
||||
--logtostderr=true
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- name: kube-scheduler.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Scheduler
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=kube-apiserver.service
|
||||
After=kube-apiserver.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-scheduler
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
|
||||
ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- name: kube-register.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Registration Service
|
||||
Documentation=https://github.com/kelseyhightower/kube-register
|
||||
Requires=kube-apiserver.service
|
||||
After=kube-apiserver.service
|
||||
Requires=fleet.service
|
||||
After=fleet.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-register
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register
|
||||
ExecStart=/opt/bin/kube-register \
|
||||
--metadata=role=node \
|
||||
--fleet-endpoint=unix:///var/run/fleet.sock \
|
||||
--healthz-port=10248 \
|
||||
--api-endpoint=http://127.0.0.1:8080
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
update:
|
||||
group: stable
|
||||
reboot-strategy: off
|
||||
ssh_authorized_keys:
|
||||
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
|
||||
|
||||
|
||||
### node.yml
|
||||
|
||||
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-slave.yml`.
|
||||
|
||||
#cloud-config
|
||||
---
|
||||
write_files:
|
||||
- path: /etc/default/docker
|
||||
content: |
|
||||
DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"'
|
||||
coreos:
|
||||
units:
|
||||
- name: 10-eno1.network
|
||||
runtime: true
|
||||
content: |
|
||||
[Match]
|
||||
Name=eno1
|
||||
[Network]
|
||||
DHCP=yes
|
||||
- name: 20-nodhcp.network
|
||||
runtime: true
|
||||
content: |
|
||||
[Match]
|
||||
Name=en*
|
||||
[Network]
|
||||
DHCP=none
|
||||
- name: etcd.service
|
||||
mask: true
|
||||
- name: docker.service
|
||||
drop-ins:
|
||||
- name: 50-insecure-registry.conf
|
||||
content: |
|
||||
[Service]
|
||||
Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com"
|
||||
- name: fleet.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=fleet daemon
|
||||
Wants=fleet.socket
|
||||
After=fleet.socket
|
||||
[Service]
|
||||
Environment="FLEET_ETCD_SERVERS=http://<MASTER_SERVER_IP>:4001"
|
||||
Environment="FLEET_METADATA=role=node"
|
||||
ExecStart=/usr/bin/fleetd
|
||||
Restart=always
|
||||
RestartSec=10s
|
||||
- name: flannel.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
After=network-online.target
|
||||
Wants=network-online.target
|
||||
Description=flannel is an etcd backed overlay network for containers
|
||||
[Service]
|
||||
Type=notify
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
|
||||
ExecStart=/opt/bin/flanneld -etcd-endpoints http://<MASTER_SERVER_IP>:4001
|
||||
- name: docker.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
After=flannel.service
|
||||
Wants=flannel.service
|
||||
Description=Docker Application Container Engine
|
||||
Documentation=http://docs.docker.io
|
||||
[Service]
|
||||
EnvironmentFile=-/etc/default/docker
|
||||
EnvironmentFile=/run/flannel/subnet.env
|
||||
ExecStartPre=/bin/mount --make-rprivate /
|
||||
ExecStart=/usr/bin/docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS}
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- name: setup-network-environment.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Setup Network Environment
|
||||
Documentation=https://github.com/kelseyhightower/setup-network-environment
|
||||
Requires=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
|
||||
ExecStart=/opt/bin/setup-network-environment
|
||||
RemainAfterExit=yes
|
||||
Type=oneshot
|
||||
- name: kube-proxy.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Proxy
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=setup-network-environment.service
|
||||
After=setup-network-environment.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-proxy
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
|
||||
ExecStart=/opt/bin/kube-proxy \
|
||||
--etcd-servers=http://<MASTER_SERVER_IP>:4001 \
|
||||
--logtostderr=true
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- name: kube-kubelet.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Kubelet
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=setup-network-environment.service
|
||||
After=setup-network-environment.service
|
||||
[Service]
|
||||
EnvironmentFile=/etc/network-environment
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kubelet
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
|
||||
ExecStart=/opt/bin/kubelet \
|
||||
--address=0.0.0.0 \
|
||||
--port=10250 \
|
||||
--hostname-override=${DEFAULT_IPV4} \
|
||||
--api-servers=<MASTER_SERVER_IP>:8080 \
|
||||
--healthz-bind-address=0.0.0.0 \
|
||||
--healthz-port=10248 \
|
||||
--logtostderr=true
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
update:
|
||||
group: stable
|
||||
reboot-strategy: off
|
||||
ssh_authorized_keys:
|
||||
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
|
||||
|
||||
|
||||
## New pxelinux.cfg file
|
||||
|
||||
Create a pxelinux target file for a _slave_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-slave`
|
||||
|
||||
default coreos
|
||||
prompt 1
|
||||
timeout 15
|
||||
|
||||
display boot.msg
|
||||
|
||||
label coreos
|
||||
menu default
|
||||
kernel images/coreos/coreos_production_pxe.vmlinuz
|
||||
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
|
||||
|
||||
And one for the _master_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-master`
|
||||
|
||||
default coreos
|
||||
prompt 1
|
||||
timeout 15
|
||||
|
||||
display boot.msg
|
||||
|
||||
label coreos
|
||||
menu default
|
||||
kernel images/coreos/coreos_production_pxe.vmlinuz
|
||||
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
|
||||
|
||||
## Specify the pxelinux targets
|
||||
|
||||
Now that we have our new targets set up for master and slave, we want to map the specific hosts to those targets. We will do this using the pxelinux mechanism of pointing a specific MAC address at a specific pxelinux.cfg file.
|
||||
|
||||
Refer to the MAC address table at the beginning of this guide. More detailed documentation can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX).
|
||||
|
||||
cd /tftpboot/pxelinux.cfg
|
||||
ln -s coreos-node-master 01-d0-00-67-13-0d-00
|
||||
ln -s coreos-node-slave 01-d0-00-67-13-0d-01
|
||||
ln -s coreos-node-slave 01-d0-00-67-13-0d-02
|
||||
|
||||
|
||||
Reboot these servers to get the images PXEd and ready for running containers!
|
||||
|
||||
## Creating test pod
|
||||
|
||||
Now that CoreOS with Kubernetes is up and running, let's spin up some Kubernetes pods to demonstrate the system.
|
||||
|
||||
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
|
||||
|
||||
For more complete applications, please look in the [examples directory](../../../examples/).
|
||||
|
||||
## Helping commands for debugging
|
||||
|
||||
List all keys in etcd:
|
||||
|
||||
etcdctl ls --recursive
|
||||
|
||||
List fleet machines
|
||||
|
||||
fleetctl list-machines
|
||||
|
||||
Check system status of services on master:
|
||||
|
||||
systemctl status kube-apiserver
|
||||
systemctl status kube-controller-manager
|
||||
systemctl status kube-scheduler
|
||||
systemctl status kube-register
|
||||
|
||||
Check system status of services on a node:
|
||||
|
||||
systemctl status kube-kubelet
|
||||
systemctl status docker.service
|
||||
|
||||
List Kubernetes pods and nodes:
|
||||
|
||||
kubectl get pods
|
||||
kubectl get nodes
|
||||
|
||||
|
||||
Kill all pods:
|
||||
|
||||
for i in `kubectl get pods | awk '{print $1}'`; do kubectl delete pod $i; done
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/bare_metal_offline/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,203 +32,8 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
# CoreOS Multinode Cluster
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/coreos_multinode_cluster/
|
||||
|
||||
Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster.
|
||||
|
||||
> **Attention**: This requires at least CoreOS version **[695.0.0][coreos695]**, which includes `etcd2`.
|
||||
|
||||
[coreos695]: https://coreos.com/releases/#695.0.0
|
||||
|
||||
## Overview
|
||||
|
||||
* Provision the master node
|
||||
* Capture the master node private IP address
|
||||
* Edit node.yaml
|
||||
* Provision one or more worker nodes
|
||||
|
||||
### AWS
|
||||
|
||||
*Attention:* Replace `<ami_image_id>` below with a [suitable version of the CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).
|
||||
|
||||
#### Provision the Master
|
||||
|
||||
```sh
|
||||
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
|
||||
```
|
||||
|
||||
```sh
|
||||
aws ec2 run-instances \
|
||||
--image-id <ami_image_id> \
|
||||
--key-name <keypair> \
|
||||
--region us-west-2 \
|
||||
--security-groups kubernetes \
|
||||
--instance-type m3.medium \
|
||||
--user-data file://master.yaml
|
||||
```
|
||||
|
||||
#### Capture the private IP address
|
||||
|
||||
```sh
|
||||
aws ec2 describe-instances --instance-id <master-instance-id>
|
||||
```
|
||||
|
||||
#### Edit node.yaml
|
||||
|
||||
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
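A quick way to do the substitution in place (the address below is only an example; use the private IP you captured above):

```sh
# example address only; substitute your master's private IP
sed -i 's/<master-private-ip>/10.240.0.3/g' node.yaml
```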
|
||||
|
||||
#### Provision worker nodes
|
||||
|
||||
```sh
|
||||
aws ec2 run-instances \
|
||||
--count 1 \
|
||||
--image-id <ami_image_id> \
|
||||
--key-name <keypair> \
|
||||
--region us-west-2 \
|
||||
--security-groups kubernetes \
|
||||
--instance-type m3.medium \
|
||||
--user-data file://node.yaml
|
||||
```
|
||||
|
||||
### Google Compute Engine (GCE)
|
||||
|
||||
*Attention:* Replace `<gce_image_id>` below with a [suitable version of the CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
|
||||
|
||||
#### Provision the Master
|
||||
|
||||
```sh
|
||||
gcloud compute instances create master \
|
||||
--image-project coreos-cloud \
|
||||
--image <gce_image_id> \
|
||||
--boot-disk-size 200GB \
|
||||
--machine-type n1-standard-1 \
|
||||
--zone us-central1-a \
|
||||
--metadata-from-file user-data=master.yaml
|
||||
```
|
||||
|
||||
#### Capture the private IP address
|
||||
|
||||
```sh
|
||||
gcloud compute instances list
|
||||
```
|
||||
|
||||
#### Edit node.yaml
|
||||
|
||||
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
|
||||
|
||||
#### Provision worker nodes
|
||||
|
||||
```sh
|
||||
gcloud compute instances create node1 \
|
||||
--image-project coreos-cloud \
|
||||
--image <gce_image_id> \
|
||||
--boot-disk-size 200GB \
|
||||
--machine-type n1-standard-1 \
|
||||
--zone us-central1-a \
|
||||
--metadata-from-file user-data=node.yaml
|
||||
```
|
||||
|
||||
#### Establish network connectivity
|
||||
|
||||
Next, set up an SSH tunnel to the master so you can run kubectl from your local host.
|
||||
In one terminal, run `gcloud compute ssh master --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second
|
||||
run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`.
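With the tunnel up, kubectl on your workstation can reach the apiserver through the forwarded port (a sketch):

```sh
kubectl -s http://127.0.0.1:8080 get nodes
```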
|
||||
|
||||
### OpenStack
|
||||
|
||||
These instructions are for running on the command line. Most of this you can also do through the Horizon dashboard.
|
||||
These instructions were tested on the Ice House release on a Metacloud distribution of OpenStack but should be similar if not the same across other versions/distributions of OpenStack.
|
||||
|
||||
#### Make sure you can connect with OpenStack
|
||||
|
||||
Make sure the environment variables are set for OpenStack such as:
|
||||
|
||||
```sh
|
||||
OS_TENANT_ID
|
||||
OS_PASSWORD
|
||||
OS_AUTH_URL
|
||||
OS_USERNAME
|
||||
OS_TENANT_NAME
|
||||
```
|
||||
|
||||
Test this works with something like:
|
||||
|
||||
```
|
||||
nova list
|
||||
```
|
||||
|
||||
#### Get a Suitable CoreOS Image
|
||||
|
||||
You'll need a [suitable version of the CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack.html).
|
||||
Once you download that, upload it to glance. An example is shown below:
|
||||
|
||||
```sh
|
||||
glance image-create --name CoreOS723 \
|
||||
--container-format bare --disk-format qcow2 \
|
||||
--file coreos_production_openstack_image.img \
|
||||
--is-public True
|
||||
```
|
||||
|
||||
#### Create security group
|
||||
|
||||
```sh
|
||||
nova secgroup-create kubernetes "Kubernetes Security Group"
|
||||
nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0
|
||||
nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0
|
||||
```
|
||||
|
||||
#### Provision the Master
|
||||
|
||||
```sh
|
||||
nova boot \
|
||||
--image <image_name> \
|
||||
--key-name <my_key> \
|
||||
--flavor <flavor id> \
|
||||
--security-group kubernetes \
|
||||
--user-data files/master.yaml \
|
||||
kube-master
|
||||
```
|
||||
|
||||
```<image_name>``` is the CoreOS image name. In our example we can use the image we created in the previous step and put in 'CoreOS723'.
|
||||
|
||||
```<my_key>``` is the keypair name that you already generated to access the instance.
|
||||
|
||||
```<flavor_id>``` is the flavor ID you use to size the instance. Run ```nova flavor-list``` to get the IDs. On the system this was tested with, flavor ID 3 gives the m1.large size.
|
||||
|
||||
The important part is to ensure you have files/master.yaml, as this is what will do all the post-boot configuration. This path is relative, so we are assuming in this example that you are running the nova command in a directory that has a subdirectory called files containing the master.yaml file. Absolute paths also work.
|
||||
|
||||
Next, assign it a public IP address:
|
||||
|
||||
```
|
||||
nova floating-ip-list
|
||||
```
|
||||
|
||||
Get an IP address that's free and run:
|
||||
|
||||
```
|
||||
nova floating-ip-associate kube-master <ip address>
|
||||
```
|
||||
|
||||
where ```<ip address>``` is the IP address that was available from the ```nova floating-ip-list``` command.
|
||||
|
||||
#### Provision Worker Nodes
|
||||
|
||||
Edit ```node.yaml``` and replace all instances of ```<master-private-ip>``` with the private IP address of the master node. You can get this by running ```nova show kube-master```, assuming you named your instance kube-master. This is not the floating IP address you just assigned to it.
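For example, you can pull the instance's addresses out of the details like so (output formatting varies between OpenStack releases):

```sh
nova show kube-master | grep network
```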
|
||||
|
||||
```sh
|
||||
nova boot \
|
||||
--image <image_name> \
|
||||
--key-name <my_key> \
|
||||
--flavor <flavor id> \
|
||||
--security-group kubernetes \
|
||||
--user-data files/node.yaml \
|
||||
minion01
|
||||
```
|
||||
|
||||
This is basically the same as for the master node, but with the node.yaml cloud-config instead of master.yaml.
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
||||
|