Cloning docs for 0.20.0

This commit is contained in:
Brendan Burns
2015-06-25 20:07:34 -07:00
parent 712f303350
commit 82f7303a00
538 changed files with 95131 additions and 0 deletions

View File

@@ -0,0 +1,66 @@
If you are not sure which OSes and infrastructure are supported, the table below lists all the combinations that have
been tested recently.
For the easiest "kick the tires" experience, please try the [local docker](docker.md) guide.
If you are considering contributing a new guide, please read the
[guidelines](../../docs/devel/writing-a-getting-started-guide.md).
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conformance | Support Level | Notes
-------------------- | ------------ | ------ | ---------- | ------------------------------------------------------------------------------ | ----------- | ---------------------------- | -----
GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | | Commercial | Uses K8s version 0.15.0
Vagrant | Saltstack | Fedora | OVS | [docs](../../docs/getting-started-guides/vagrant.md) | | Project | Uses latest via https://get.k8s.io/
GCE | Saltstack | Debian | GCE | [docs](../../docs/getting-started-guides/gce.md) | | Project | Tested with 0.15.0 by @robertbailey
Azure | CoreOS | CoreOS | Weave | [docs](../../docs/getting-started-guides/coreos/azure/README.md) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin)) | Uses K8s version 0.17.0
Docker Single Node | custom | N/A | local | [docs](docker.md) | | Project (@brendandburns) | Tested @ 0.14.1
Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | | Project (@brendandburns) | Tested @ 0.14.1
Bare-metal | Ansible | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/fedora_ansible_config.md) | | Project | Uses K8s v0.13.2
Bare-metal | custom | Fedora | _none_ | [docs](../../docs/getting-started-guides/fedora/fedora_manual_config.md) | | Project | Uses K8s v0.13.2
Bare-metal | custom | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0
libvirt | custom | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0
KVM | custom | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0
Mesos/GCE | | | | [docs](../../docs/getting-started-guides/mesos.md) | | [Community](https://github.com/mesosphere/kubernetes-mesos) ([@jdef](https://github.com/jdef)) | Uses K8s v0.11.2
AWS | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | | Community | Uses K8s version 0.17.0
GCE | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | | Community (@kelseyhightower) | Uses K8s version 0.15.0
Vagrant | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | | Community ( [@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles) ) | Uses K8s version 0.15.0
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos/bare_metal_offline.md) | | Community ([@jeffbean](https://github.com/jeffbean)) | Uses K8s version 0.15.0
CloudStack | Ansible | CoreOS | flannel | [docs](../../docs/getting-started-guides/cloudstack.md) | | Community (@runseb) | Uses K8s version 0.9.1
Vmware | | Debian | OVS | [docs](../../docs/getting-started-guides/vsphere.md) | | Community (@pietern) | Uses K8s version 0.9.1
Bare-metal | custom | CentOS | _none_ | [docs](../../docs/getting-started-guides/centos/centos_manual_config.md) | | Community (@coolsvap) | Uses K8s v0.9.1
AWS | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
Joyent | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
AWS | Saltstack | Ubuntu | OVS | [docs](../../docs/getting-started-guides/aws.md) | | Community (@justinsb) | Uses K8s version 0.5.0
Vmware | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | | Community (@kelseyhightower) | Uses K8s version 0.15.0
Azure | Saltstack | Ubuntu | OpenVPN | [docs](../../docs/getting-started-guides/azure.md) | | Community |
Bare-metal | custom | Ubuntu | flannel | [docs](../../docs/getting-started-guides/ubuntu.md) | | Community (@resouer, @WIZARD-CXY) | Uses K8s version 0.18.0
Local | | | _none_ | [docs](../../docs/getting-started-guides/locally.md) | | Community (@preillyme) |
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](../../docs/getting-started-guides/libvirt-coreos.md) | | Community (@lhuard1A) |
oVirt | | | | [docs](../../docs/getting-started-guides/ovirt.md) | | Community (@simon3z) |
Rackspace | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/rackspace.md) | | Community (@doublerr) | Uses K8s version 0.18.0
*Note*: The above table is ordered by the version tested/used in the Notes column, followed by support level.
Definition of columns:
- **IaaS Provider** is who/what provides the virtual or physical machines (nodes) that Kubernetes runs on.
- **OS** is the base operating system of the nodes.
- **Config. Mgmt** is the configuration management system that helps install and maintain Kubernetes software on the
  nodes.
- **Networking** is what implements the [networking model](../../docs/networking.md). Those with networking type
_none_ may not support more than one node, or may support multiple VM nodes only in the same physical node.
- **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
tests.
- Support Levels
- **Project**: Kubernetes Committers regularly use this configuration, so it usually works with the latest release
of Kubernetes.
- **Commercial**: A commercial offering with its own support arrangements.
- **Community**: Actively supported by community contributions. May not work with more recent releases of Kubernetes.
- **Inactive**: No active maintainer. Not recommended for first-time K8s users, and may be deleted soon.
- **Notes** is other relevant information, such as the version of Kubernetes used.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/README.md?pixel)]()

Binary file not shown.


View File

@@ -0,0 +1,220 @@
# Getting started on Amazon EC2 with CoreOS
The example below creates an elastic Kubernetes cluster with a custom number of worker nodes and a master.
**Warning:** contrary to the [supported procedure](aws.md), the examples below provision Kubernetes with an insecure API server (plain HTTP,
no security tokens, no basic auth). They are intended for demonstration purposes only.
## Highlights
* Cluster bootstrapping using [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/)
* Cross container networking with [flannel](https://github.com/coreos/flannel#flannel)
* Auto worker registration with [kube-register](https://github.com/kelseyhightower/kube-register#kube-register)
* Kubernetes v0.17.0 [official binaries](https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.17.0)
## Prerequisites
* [aws CLI](http://aws.amazon.com/cli)
* [CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/)
* [kubectl CLI](aws/kubectl.md)
## Starting a Cluster
### CloudFormation
The [cloudformation-template.json](aws/cloudformation-template.json) can be used to bootstrap a Kubernetes cluster with a single command:
```bash
aws cloudformation create-stack --stack-name kubernetes --region us-west-2 \
  --template-body file://aws/cloudformation-template.json \
  --parameters ParameterKey=KeyPair,ParameterValue=<keypair> \
               ParameterKey=ClusterSize,ParameterValue=<cluster_size> \
               ParameterKey=VpcId,ParameterValue=<vpc_id> \
               ParameterKey=SubnetId,ParameterValue=<subnet_id> \
               ParameterKey=SubnetAZ,ParameterValue=<subnet_az>
```
It will take a few minutes for the entire stack to come up. You can monitor the stack progress with the following command:
```bash
aws cloudformation describe-stack-events --stack-name kubernetes
```
Record the Kubernetes Master IP address:
```bash
aws cloudformation describe-stacks --stack-name kubernetes
```
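The stack outputs include the master's public IP (the template defines a `KubernetesMasterPublicIp` output), so a one-liner sketch using the CLI's `--query` option can pull just that value:
```bash
# Sketch: extract the master public IP from the stack outputs
aws cloudformation describe-stacks --stack-name kubernetes \
  --query 'Stacks[0].Outputs[?OutputKey==`KubernetesMasterPublicIp`].OutputValue' \
  --output text
```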
[Skip to kubectl client configuration](#configure-the-kubectl-ssh-tunnel)
### AWS CLI
The following commands use the latest CoreOS alpha AMI for the `us-west-2` region. For a list of different regions and corresponding AMI IDs, see the [CoreOS EC2 cloud provider documentation](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel).
#### Create the Kubernetes Security Group
```bash
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
```
#### Save the master and node cloud-configs
* [master.yaml](aws/cloud-configs/master.yaml)
* [node.yaml](aws/cloud-configs/node.yaml)
#### Launch the master
*Attention:* Replace `<ami_image_id>` below with a [suitable CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).
```bash
aws ec2 run-instances --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://master.yaml
```
Record the `InstanceId` for the master.
Gather the public and private IPs for the master node:
```bash
aws ec2 describe-instances --instance-id <instance-id>
```
```
{
    "Reservations": [
        {
            "Instances": [
                {
                    "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com",
                    "RootDeviceType": "ebs",
                    "State": {
                        "Code": 16,
                        "Name": "running"
                    },
                    "PublicIpAddress": "54.68.97.117",
                    "PrivateIpAddress": "172.31.9.9",
                    ...
```
#### Update the node.yaml cloud-config
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the **private** IP address of the master node.
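For example, assuming the master's private IP is `172.31.9.9` as in the sample output above, GNU sed can do the substitution in place:
```bash
# GNU sed; on OS X use `sed -i '' ...` instead
sed -i 's/<master-private-ip>/172.31.9.9/g' node.yaml
```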
### Launch 3 worker nodes
*Attention:* Replace `<ami_image_id>` below with a [suitable CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel).
```bash
aws ec2 run-instances --count 3 --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://node.yaml
```
### Add additional worker nodes
*Attention:* Replace `<ami_image_id>` below with a [suitable CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel).
```bash
aws ec2 run-instances --count 1 --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://node.yaml
```
### Configure the kubectl SSH tunnel
This command enables secure communication between the kubectl client and the Kubernetes API.
```bash
ssh -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
```
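To verify the tunnel is working, you can query the API server's version endpoint through it (assuming the master has finished booting):
```bash
curl http://127.0.0.1:8080/version
```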
### Listing worker nodes
Once the worker instances have fully booted, they will be automatically registered with the Kubernetes API server by the kube-register service running on the master node. This may take a few minutes.
```bash
kubectl get nodes
```
## Starting a simple pod
Create a pod manifest: `pod.json`
```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "hello",
    "labels": {
      "name": "hello",
      "environment": "testing"
    }
  },
  "spec": {
    "containers": [{
      "name": "hello",
      "image": "quay.io/kelseyhightower/hello",
      "ports": [{
        "containerPort": 80,
        "hostPort": 80
      }]
    }]
  }
}
```
### Create the pod using the kubectl command line tool
```bash
kubectl create -f pod.json
```
### Testing
```bash
kubectl get pods
```
Record the **Host** of the pod; it should be the private IP address of the node it is running on.
Gather the public IP address for the worker node.
```bash
aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>'
```
```
{
    "Reservations": [
        {
            "Instances": [
                {
                    "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com",
                    "RootDeviceType": "ebs",
                    "State": {
                        "Code": 16,
                        "Name": "running"
                    },
                    "PublicIpAddress": "54.68.97.117",
                    ...
```
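Alternatively, a one-liner sketch that pulls just the public IP with the CLI's `--query` option (assuming exactly one matching instance):
```bash
aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>' \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
```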
Visit the public IP address in your browser to view the running pod.
### Delete the pod
```bash
kubectl delete pods hello
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws-coreos.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/aws-coreos.md?pixel)]()

View File

@@ -0,0 +1,102 @@
Getting started on AWS EC2
--------------------------
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Cluster turnup](#cluster-turnup)
- [Supported procedure: `get-kube`](#supported-procedure-get-kube)
- [Alternatives](#alternatives)
- [Getting started with your cluster](#getting-started-with-your-cluster)
- [Command line administration tool: `kubectl`](#command-line-administration-tool-kubectl)
- [Examples](#examples)
- [Tearing down the cluster](#tearing-down-the-cluster)
- [Further reading](#further-reading)
## Prerequisites
1. You need an AWS account. Visit [http://aws.amazon.com](http://aws.amazon.com) to get started.
2. Install and configure [AWS Command Line Interface](http://aws.amazon.com/cli)
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access.
## Cluster turnup
### Supported procedure: `get-kube`
```bash
# Using wget
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash

# Using cURL
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
```
NOTE: This script calls [cluster/kube-up.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/util.sh)
using [cluster/aws/config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh).
This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed,
as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
tokens are written to `~/.kube/kubeconfig`; they will be necessary to use the CLI or HTTP Basic Auth.
By default, the script will provision a new VPC and a 4-node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu.
You can override the variables defined in [config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh) to change this behavior as follows:
```bash
export KUBE_AWS_ZONE=eu-west-1c
export NUM_MINIONS=2
export MINION_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=mycompany-kubernetes-artifacts
export INSTANCE_PREFIX=k8s
...
```
It will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion".
If these already exist, make sure you want them to be used here.
NOTE: If using an existing keypair named "kubernetes", then you must set the `AWS_SSH_KEY` environment variable to point to your private key.
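For example, assuming your existing private key lives at `~/.ssh/kubernetes.pem` (a hypothetical path):
```bash
export AWS_SSH_KEY=$HOME/.ssh/kubernetes.pem
```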
### Alternatives
A contributed [example](aws-coreos.md) allows you to set up a Kubernetes cluster based on [CoreOS](http://www.coreos.com), either using
AWS CloudFormation or EC2 with user data (cloud-config).
## Getting started with your cluster
### Command line administration tool: `kubectl`
Copy the appropriate `kubectl` binary to any location defined in your `PATH` environment variable, for example:
```bash
# OS X
sudo cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/kubectl
# Linux
sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl
```
An up-to-date documentation page for this tool is available here: [kubectl manual](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
For more information, please read [kubeconfig files](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubeconfig-file.md)
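A quick sanity check that `kubectl` picked up the generated credentials and can reach your new cluster:
```bash
kubectl get nodes
kubectl get services
```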
### Examples
See [a simple nginx example](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/simple-nginx.md) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook)
For more complete applications, please look in the [examples directory](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples)
## Tearing down the cluster
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
`kubernetes` directory:
```bash
cluster/kube-down.sh
```
## Further reading
Please see the [Kubernetes docs](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs) for more details on administering
and using a Kubernetes cluster.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/aws.md?pixel)]()

View File

@@ -0,0 +1,177 @@
#cloud-config

write_files:
  - path: /opt/bin/waiter.sh
    owner: root
    permissions: 0755
    content: |
      #! /usr/bin/bash
      until curl http://127.0.0.1:2379/v2/machines; do sleep 2; done

coreos:
  etcd2:
    name: master
    initial-cluster-token: k8s_etcd
    initial-cluster: master=http://$private_ipv4:2380
    listen-peer-urls: http://$private_ipv4:2380,http://localhost:2380
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://$private_ipv4:2379,http://localhost:2379
    advertise-client-urls: http://$private_ipv4:2379
  fleet:
    etcd_servers: http://localhost:2379
    metadata: k8srole=master
  flannel:
    etcd_endpoints: http://localhost:2379
  locksmithd:
    endpoint: http://localhost:2379
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: etcd2-waiter.service
      command: start
      content: |
        [Unit]
        Description=etcd waiter
        Wants=network-online.target
        Wants=etcd2.service
        After=etcd2.service
        After=network-online.target
        Before=flanneld.service fleet.service locksmithd.service

        [Service]
        ExecStart=/usr/bin/bash /opt/bin/waiter.sh
        RemainAfterExit=true
        Type=oneshot
    - name: flanneld.service
      command: start
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Service]
            ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
    - name: docker-cache.service
      command: start
      content: |
        [Unit]
        Description=Docker cache proxy
        Requires=early-docker.service
        After=early-docker.service
        Before=early-docker.target

        [Service]
        Restart=always
        TimeoutStartSec=0
        RestartSec=5
        Environment=TMPDIR=/var/tmp/
        Environment=DOCKER_HOST=unix:///var/run/early-docker.sock
        ExecStartPre=-/usr/bin/docker kill docker-registry
        ExecStartPre=-/usr/bin/docker rm docker-registry
        ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest
        # GUNICORN_OPTS is a workaround for
        # https://github.com/docker/docker-registry/issues/892
        ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \
            -e STANDALONE=false \
            -e GUNICORN_OPTS=[--preload] \
            -e MIRROR_SOURCE=https://registry-1.docker.io \
            -e MIRROR_SOURCE_INDEX=https://index.docker.io \
            -e MIRROR_TAGS_CACHE_TTL=1800 \
            quay.io/devops/docker-registry:latest
    - name: docker.service
      drop-ins:
        - name: 51-docker-mirror.conf
          content: |
            [Unit]
            # making sure that docker-cache is up and that flanneld finished
            # startup, otherwise containers won't land in flannel's network...
            Requires=docker-cache.service
            After=docker-cache.service

            [Service]
            Environment=DOCKER_OPTS='--registry-mirror=http://$private_ipv4:5000'
    - name: get-kubectl.service
      command: start
      content: |
        [Unit]
        Description=Get kubectl client tool
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=network-online.target
        After=network-online.target

        [Service]
        ExecStart=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl
        ExecStart=/usr/bin/chmod +x /opt/bin/kubectl
        Type=oneshot
        RemainAfterExit=true
    - name: kube-apiserver.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes API Server
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=etcd2-waiter.service
        After=etcd2-waiter.service

        [Service]
        ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-apiserver
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
        ExecStart=/opt/bin/kube-apiserver \
            --insecure-bind-address=0.0.0.0 \
            --service-cluster-ip-range=10.100.0.0/16 \
            --etcd-servers=http://localhost:2379
        Restart=always
        RestartSec=10
    - name: kube-controller-manager.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Controller Manager
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=kube-apiserver.service
        After=kube-apiserver.service

        [Service]
        ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-controller-manager
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
        ExecStart=/opt/bin/kube-controller-manager \
            --master=127.0.0.1:8080
        Restart=always
        RestartSec=10
    - name: kube-scheduler.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Scheduler
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=kube-apiserver.service
        After=kube-apiserver.service

        [Service]
        ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-scheduler
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
        ExecStart=/opt/bin/kube-scheduler \
            --master=127.0.0.1:8080
        Restart=always
        RestartSec=10
    - name: kube-register.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Registration Service
        Documentation=https://github.com/kelseyhightower/kube-register
        Requires=kube-apiserver.service fleet.service
        After=kube-apiserver.service fleet.service

        [Service]
        ExecStartPre=-/usr/bin/wget -nc -O /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register
        ExecStart=/opt/bin/kube-register \
            --metadata=k8srole=node \
            --fleet-endpoint=unix:///var/run/fleet.sock \
            --api-endpoint=http://127.0.0.1:8080
        Restart=always
        RestartSec=10
  update:
    group: alpha
    reboot-strategy: off

View File

@@ -0,0 +1,81 @@
#cloud-config

write_files:
  - path: /opt/bin/wupiao
    owner: root
    permissions: 0755
    content: |
      #!/bin/bash
      # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen
      [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \
        --silent --head --fail \
        http://${1}:${2}; do sleep 1 && echo -n .; done;
      exit $?

coreos:
  etcd2:
    listen-client-urls: http://localhost:2379
    advertise-client-urls: http://0.0.0.0:2379
    initial-cluster: master=http://<master-private-ip>:2380
    proxy: on
  fleet:
    etcd_servers: http://localhost:2379
    metadata: k8srole=node
  flannel:
    etcd_endpoints: http://localhost:2379
  locksmithd:
    endpoint: http://localhost:2379
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: flanneld.service
      command: start
    - name: docker.service
      command: start
      drop-ins:
        - name: 50-docker-mirror.conf
          content: |
            [Service]
            Environment=DOCKER_OPTS='--registry-mirror=http://<master-private-ip>:5000'
    - name: kubelet.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Kubelet
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=network-online.target
        After=network-online.target

        [Service]
        ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubelet
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
        # wait for kubernetes master to be up and ready
        ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
        ExecStart=/opt/bin/kubelet \
            --api-servers=<master-private-ip>:8080 \
            --hostname-override=$private_ipv4
        Restart=always
        RestartSec=10
    - name: kube-proxy.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Proxy
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=network-online.target
        After=network-online.target

        [Service]
        ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-proxy
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
        # wait for kubernetes master to be up and ready
        ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
        ExecStart=/opt/bin/kube-proxy \
            --master=http://<master-private-ip>:8080
        Restart=always
        RestartSec=10
  update:
    group: alpha
    reboot-strategy: off

View File

@@ -0,0 +1,421 @@
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Kubernetes 0.18.2 on EC2 powered by CoreOS 681.0.0 (alpha)",
"Mappings": {
"RegionMap": {
"eu-central-1" : {
"AMI" : "ami-4c4f7151"
},
"ap-northeast-1" : {
"AMI" : "ami-3a35fd3a"
},
"us-gov-west-1" : {
"AMI" : "ami-57117174"
},
"sa-east-1" : {
"AMI" : "ami-fbcc4ae6"
},
"ap-southeast-2" : {
"AMI" : "ami-593c4263"
},
"ap-southeast-1" : {
"AMI" : "ami-3a083668"
},
"us-east-1" : {
"AMI" : "ami-40322028"
},
"us-west-2" : {
"AMI" : "ami-23b58613"
},
"us-west-1" : {
"AMI" : "ami-15618f51"
},
"eu-west-1" : {
"AMI" : "ami-8d1164fa"
}
}
},
"Parameters": {
"InstanceType": {
"Description": "EC2 HVM instance type (m3.medium, etc).",
"Type": "String",
"Default": "m3.medium",
"AllowedValues": [
"m3.medium",
"m3.large",
"m3.xlarge",
"m3.2xlarge",
"c3.large",
"c3.xlarge",
"c3.2xlarge",
"c3.4xlarge",
"c3.8xlarge",
"cc2.8xlarge",
"cr1.8xlarge",
"hi1.4xlarge",
"hs1.8xlarge",
"i2.xlarge",
"i2.2xlarge",
"i2.4xlarge",
"i2.8xlarge",
"r3.large",
"r3.xlarge",
"r3.2xlarge",
"r3.4xlarge",
"r3.8xlarge",
"t2.micro",
"t2.small",
"t2.medium"
],
"ConstraintDescription": "Must be a valid EC2 HVM instance type."
},
"ClusterSize": {
"Description": "Number of nodes in cluster (2-12).",
"Default": "2",
"MinValue": "2",
"MaxValue": "12",
"Type": "Number"
},
"AllowSSHFrom": {
"Description": "The net block (CIDR) that SSH is available to.",
"Default": "0.0.0.0/0",
"Type": "String"
},
"KeyPair": {
"Description": "The name of an EC2 Key Pair to allow SSH access to the instance.",
"Type": "AWS::EC2::KeyPair::KeyName"
},
"VpcId": {
"Description": "The ID of the VPC to launch into.",
"Type": "AWS::EC2::VPC::Id"
},
"SubnetId": {
"Description": "The ID of the subnet to launch into (that must be within the supplied VPC)",
"Type": "AWS::EC2::Subnet::Id"
},
"SubnetAZ": {
"Description": "The availability zone of the subnet supplied (for example eu-west-1a)",
"Type": "String"
}
},
"Conditions": {
"UseEC2Classic": {"Fn::Equals": [{"Ref": "VpcId"}, ""]}
},
"Resources": {
"KubernetesSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"VpcId": {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, {"Ref": "VpcId"}]},
"GroupDescription": "Kubernetes SecurityGroup",
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": "22",
"ToPort": "22",
"CidrIp": {"Ref": "AllowSSHFrom"}
}
]
}
},
"KubernetesIngress": {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties": {
"GroupId": {"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]},
"IpProtocol": "tcp",
"FromPort": "1",
"ToPort": "65535",
"SourceSecurityGroupId": {
"Fn::GetAtt" : [ "KubernetesSecurityGroup", "GroupId" ]
}
}
},
"KubernetesIngressUDP": {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties": {
"GroupId": {"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]},
"IpProtocol": "udp",
"FromPort": "1",
"ToPort": "65535",
"SourceSecurityGroupId": {
"Fn::GetAtt" : [ "KubernetesSecurityGroup", "GroupId" ]
}
}
},
"KubernetesMasterInstance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"NetworkInterfaces" : [{
"GroupSet" : [{"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}],
"AssociatePublicIpAddress" : "true",
"DeviceIndex" : "0",
"DeleteOnTermination" : "true",
"SubnetId" : {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, {"Ref": "SubnetId"}]}
}],
"ImageId": {"Fn::FindInMap" : ["RegionMap", {"Ref": "AWS::Region" }, "AMI"]},
"InstanceType": {"Ref": "InstanceType"},
"KeyName": {"Ref": "KeyPair"},
"Tags" : [
{"Key" : "Name", "Value" : {"Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "k8s-master" ] ]}},
{"Key" : "KubernetesRole", "Value" : "node"}
],
"UserData": { "Fn::Base64": {"Fn::Join" : ["", [
"#cloud-config\n\n",
"write_files:\n",
"- path: /opt/bin/waiter.sh\n",
" owner: root\n",
" content: |\n",
" #! /usr/bin/bash\n",
" until curl http://127.0.0.1:2379/v2/machines; do sleep 2; done\n",
"coreos:\n",
" etcd2:\n",
" name: master\n",
" initial-cluster-token: k8s_etcd\n",
" initial-cluster: master=http://$private_ipv4:2380\n",
" listen-peer-urls: http://$private_ipv4:2380,http://localhost:2380\n",
" initial-advertise-peer-urls: http://$private_ipv4:2380\n",
" listen-client-urls: http://$private_ipv4:2379,http://localhost:2379\n",
" advertise-client-urls: http://$private_ipv4:2379\n",
" fleet:\n",
" etcd_servers: http://localhost:2379\n",
" metadata: k8srole=master\n",
" flannel:\n",
" etcd_endpoints: http://localhost:2379\n",
" locksmithd:\n",
" endpoint: http://localhost:2379\n",
" units:\n",
" - name: etcd2.service\n",
" command: start\n",
" - name: fleet.service\n",
" command: start\n",
" - name: etcd2-waiter.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=etcd waiter\n",
" Wants=network-online.target\n",
" Wants=etcd2.service\n",
" After=etcd2.service\n",
" After=network-online.target\n",
" Before=flanneld.service fleet.service locksmithd.service\n\n",
" [Service]\n",
" ExecStart=/usr/bin/bash /opt/bin/waiter.sh\n",
" RemainAfterExit=true\n",
" Type=oneshot\n",
" - name: flanneld.service\n",
" command: start\n",
" drop-ins:\n",
" - name: 50-network-config.conf\n",
" content: |\n",
" [Service]\n",
" ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{\"Network\": \"10.244.0.0/16\", \"Backend\": {\"Type\": \"vxlan\"}}'\n",
" - name: docker-cache.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=Docker cache proxy\n",
" Requires=early-docker.service\n",
" After=early-docker.service\n",
" Before=early-docker.target\n\n",
" [Service]\n",
" Restart=always\n",
" TimeoutStartSec=0\n",
" RestartSec=5\n",
" Environment=TMPDIR=/var/tmp/\n",
" Environment=DOCKER_HOST=unix:///var/run/early-docker.sock\n",
" ExecStartPre=-/usr/bin/docker kill docker-registry\n",
" ExecStartPre=-/usr/bin/docker rm docker-registry\n",
" ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest\n",
" # GUNICORN_OPTS is an workaround for\n",
" # https://github.com/docker/docker-registry/issues/892\n",
" ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \\\n",
" -e STANDALONE=false \\\n",
" -e GUNICORN_OPTS=[--preload] \\\n",
" -e MIRROR_SOURCE=https://registry-1.docker.io \\\n",
" -e MIRROR_SOURCE_INDEX=https://index.docker.io \\\n",
" -e MIRROR_TAGS_CACHE_TTL=1800 \\\n",
" quay.io/devops/docker-registry:latest\n",
" - name: get-kubectl.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=Get kubectl client tool\n",
" Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n",
" Requires=network-online.target\n",
" After=network-online.target\n\n",
" [Service]\n",
" ExecStart=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl\n",
" ExecStart=/usr/bin/chmod +x /opt/bin/kubectl\n",
" Type=oneshot\n",
" RemainAfterExit=true\n",
" - name: kube-apiserver.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=Kubernetes API Server\n",
" Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n",
" Requires=etcd2-waiter.service\n",
" After=etcd2-waiter.service\n\n",
" [Service]\n",
" ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-apiserver\n",
" ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver\n",
" ExecStart=/opt/bin/kube-apiserver \\\n",
" --insecure-bind-address=0.0.0.0 \\\n",
" --service-cluster-ip-range=10.100.0.0/16 \\\n",
" --etcd-servers=http://localhost:2379\n",
" Restart=always\n",
" RestartSec=10\n",
" - name: kube-controller-manager.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=Kubernetes Controller Manager\n",
" Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n",
" Requires=kube-apiserver.service\n",
" After=kube-apiserver.service\n\n",
" [Service]\n",
" ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-controller-manager\n",
" ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager\n",
" ExecStart=/opt/bin/kube-controller-manager \\\n",
" --master=127.0.0.1:8080\n",
" Restart=always\n",
" RestartSec=10\n",
" - name: kube-scheduler.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=Kubernetes Scheduler\n",
" Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n",
" Requires=kube-apiserver.service\n",
" After=kube-apiserver.service\n\n",
" [Service]\n",
" ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-scheduler\n",
" ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler\n",
" ExecStart=/opt/bin/kube-scheduler \\\n",
" --master=127.0.0.1:8080\n",
" Restart=always\n",
" RestartSec=10\n",
" - name: kube-register.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=Kubernetes Registration Service\n",
" Documentation=https://github.com/kelseyhightower/kube-register\n",
" Requires=kube-apiserver.service fleet.service\n",
" After=kube-apiserver.service fleet.service\n\n",
" [Service]\n",
" ExecStartPre=-/usr/bin/wget -nc -O /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64\n",
" ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register\n",
" ExecStart=/opt/bin/kube-register \\\n",
" --metadata=k8srole=node \\\n",
" --fleet-endpoint=unix:///var/run/fleet.sock \\\n",
" --api-endpoint=http://127.0.0.1:8080\n",
" Restart=always\n",
" RestartSec=10\n",
" update:\n",
" group: alpha\n",
" reboot-strategy: off\n"
]]}
}
}
},
"KubernetesNodeLaunchConfig": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"ImageId": {"Fn::FindInMap" : ["RegionMap", {"Ref": "AWS::Region" }, "AMI" ]},
"InstanceType": {"Ref": "InstanceType"},
"KeyName": {"Ref": "KeyPair"},
"AssociatePublicIpAddress" : "true",
"SecurityGroups": [{"Fn::If": [
"UseEC2Classic",
{"Ref": "KubernetesSecurityGroup"},
{"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}]
}],
"UserData": { "Fn::Base64": {"Fn::Join" : ["", [
"#cloud-config\n\n",
"coreos:\n",
" etcd2:\n",
" listen-client-urls: http://localhost:2379\n",
" initial-cluster: master=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":2380\n",
" proxy: on\n",
" fleet:\n",
" etcd_servers: http://localhost:2379\n",
" metadata: k8srole=node\n",
" flannel:\n",
" etcd_endpoints: http://localhost:2379\n",
" locksmithd:\n",
" endpoint: http://localhost:2379\n",
" units:\n",
" - name: etcd2.service\n",
" command: start\n",
" - name: fleet.service\n",
" command: start\n",
" - name: flanneld.service\n",
" command: start\n",
" - name: docker.service\n",
" command: start\n",
" drop-ins:\n",
" - name: 50-docker-mirror.conf\n",
" content: |\n",
" [Service]\n",
" Environment=DOCKER_OPTS='--registry-mirror=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":5000'\n",
" - name: kubelet.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=Kubernetes Kubelet\n",
" Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n",
" Requires=network-online.target\n",
" After=network-online.target\n\n",
" [Service]\n",
" ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubelet\n",
" ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet\n",
" ExecStart=/opt/bin/kubelet \\\n",
" --api-servers=", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":8080 \\\n",
" --hostname-override=$private_ipv4\n",
" Restart=always\n",
" RestartSec=10\n",
" - name: kube-proxy.service\n",
" command: start\n",
" content: |\n",
" [Unit]\n",
" Description=Kubernetes Proxy\n",
" Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n",
" Requires=network-online.target\n",
" After=network-online.target\n\n",
" [Service]\n",
" ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-proxy\n",
" ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy\n",
" ExecStart=/opt/bin/kube-proxy \\\n",
" --master=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":8080\n",
" Restart=always\n",
" RestartSec=10\n",
" update:\n",
" group: alpha\n",
" reboot-strategy: off\n"
]]}
}
}
},
"KubernetesAutoScalingGroup": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"AvailabilityZones": {"Fn::If": ["UseEC2Classic", {"Fn::GetAZs": ""}, [{"Ref": "SubnetAZ"}]]},
"VPCZoneIdentifier": {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, [{"Ref": "SubnetId"}]]},
"LaunchConfigurationName": {"Ref": "KubernetesNodeLaunchConfig"},
"MinSize": "2",
"MaxSize": "12",
"DesiredCapacity": {"Ref": "ClusterSize"},
"Tags" : [
{"Key" : "Name", "Value" : {"Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "k8s-node" ] ]}, "PropagateAtLaunch" : true},
{"Key" : "KubernetesRole", "Value" : "node", "PropagateAtLaunch" : true}
]
}
}
},
"Outputs": {
"KubernetesMasterPublicIp": {
"Description": "Public Ip of the newly created Kubernetes Master instance",
"Value": {"Fn::GetAtt": ["KubernetesMasterInstance" , "PublicIp"]}
}
}
}

View File

@@ -0,0 +1,27 @@
# Install and configure kubectl
## Download the kubectl CLI tool
```bash
### Darwin
wget https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/darwin/amd64/kubectl
### Linux
wget https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl
```
### Copy kubectl to your path
```bash
chmod +x kubectl
mv kubectl /usr/local/bin/
```
### Create a secure tunnel for API communication
```bash
ssh -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
```
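With the tunnel up, `kubectl` can use its default server address (`http://localhost:8080`), so a simple check should confirm that everything is wired together:
```bash
kubectl get nodes
```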
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws/kubectl.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/aws/kubectl.md?pixel)]()

View File

@@ -0,0 +1,65 @@
Getting started on Microsoft Azure
----------------------------------
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Getting started with your cluster](#getting-started-with-your-cluster)
- [Tearing down the cluster](#tearing-down-the-cluster)
## Prerequisites
**Azure Prerequisites**
1. You need an Azure account. Visit http://azure.microsoft.com/ to get started.
2. Install and configure the [Azure cross-platform command-line interface](http://azure.microsoft.com/en-us/documentation/articles/xplat-cli/).
3. Make sure you have a default account set in the Azure cli, using `azure account set`
**Prerequisites for your workstation**
1. You must be running Linux or Mac OS X.
2. Get or build a [binary release](binary_release.md).
3. If you want to build your own release, you need to have [Docker
installed](https://docs.docker.com/installation/). On Mac OS X you can use
[boot2docker](http://boot2docker.io/).
## Setup
The cluster setup scripts can set up Kubernetes for multiple targets. First modify `cluster/kube-env.sh` to specify azure:

```
KUBERNETES_PROVIDER="azure"
```

Next, specify an existing virtual network and subnet in `cluster/azure/config-default.sh`:

```
AZ_VNET=<vnet name>
AZ_SUBNET=<subnet name>
```

You can create a virtual network:

```
azure network vnet create <vnet name> --subnet=<subnet name> --location "West US" -v
```

Now you're ready. You can then use the `cluster/kube-*.sh` scripts to manage your azure cluster; start with:

```
cluster/kube-up.sh
```
The script above will start (by default) a single master VM along with 4 worker VMs. You
can tweak some of these parameters by editing `cluster/azure/config-default.sh`.
## Getting started with your cluster
See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../examples).
## Tearing down the cluster
```
cluster/kube-down.sh
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/azure.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/azure.md?pixel)]()

Binary file not shown.


View File

@@ -0,0 +1,29 @@
## Getting a Binary Release
You can either build a release from sources or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest a pre-built release.
### Prebuilt Binary Release
The list of binary releases is available for download from the [GitHub Kubernetes repo release page](https://github.com/GoogleCloudPlatform/kubernetes/releases).
Download the latest release and unpack this tar file on Linux or OS X, cd to the created `kubernetes/` directory, and then follow the getting started guide for your cloud.
### Building from source
Get the Kubernetes source. If you are simply building a release from source, there is no need to set up a full golang environment, as all building happens in a Docker container.
Building a release is simple.
```bash
git clone https://github.com/GoogleCloudPlatform/kubernetes.git
cd kubernetes
make release
```
For more details on the release process see the [`build/` directory](../../build)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/binary_release.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/binary_release.md?pixel)]()

View File

@@ -0,0 +1,178 @@
Getting started on [CentOS](http://centos.org)
----------------------------------------------
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Starting a cluster](#starting-a-cluster)
## Prerequisites
You need two machines with CentOS installed on them.
## Starting a cluster
This is a getting started guide for CentOS. It is a manual configuration, so you understand all the underlying packages / services / ports, etc.
This guide will only get ONE minion working. Multiple minions require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion, will be the minion and run kubelet, proxy, cadvisor and docker.
**System Information:**
Hosts:
```
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
```
**Prepare the hosts:**
* Create a virt7-testing repo on all hosts - centos-{master,minion} - with the following information:
```
[virt7-testing]
name=virt7-testing
baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
gpgcheck=0
```
* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
```
yum -y install --enablerepo=virt7-testing kubernetes
```
* *Note:* Use etcd-0.4.6-7 (this is a temporary workaround in this documentation). The etcd package currently in the virt7-testing repo is newer and causes a service failure. If you did not get etcd-0.4.6-7 installed from the virt7-testing repo, first uninstall the etcd package that is currently available:
```
yum erase etcd
```
Then install etcd-0.4.6-7 explicitly and reinstall kubernetes:
```
yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86_64/etcd-0.4.6-7.el7.centos.x86_64.rpm
yum -y install --enablerepo=virt7-testing kubernetes
```
* Add master and minion to /etc/hosts on all machines (not needed if hostnames already in DNS)
```
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
```
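A quick way to confirm that name resolution works from each host:
```
ping -c 1 centos-master
ping -c 1 centos-minion
```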
* Edit /etc/kubernetes/config (which should be identical on all hosts) to contain:
```
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://centos-master:4001"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
```
* Disable the firewall on both the master and minion, as docker does not play well with other firewall rule managers:
```
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such:
```
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"
# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Edit /etc/kubernetes/controller-manager to appear as such:
```
# Comma separated list of minions
KUBELET_ADDRESSES="--machines=centos-minion"
```
* Start the appropriate services on master:
```
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
**Configure the kubernetes services on the minion.**
***We need to configure the kubelet, then start the kubelet and proxy.***
* Edit /etc/kubernetes/kubelet to appear as such:
```
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=centos-minion"
# Add your own!
KUBELET_ARGS=""
```
* Start the appropriate services on the minion (centos-minion):
```
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
*You should be finished!*
* Check to make sure the cluster can see the minion (on centos-master):
```
kubectl get minions
NAME            LABELS    STATUS
centos-minion   <none>    Ready
```
**The cluster should be running! Launch a test pod.**
You should have a functional cluster; check out [101](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md)!
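As a quick smoke test, you can launch a simple pod from centos-master (this pod spec is a hypothetical example, modeled on the CoreOS guide above):
```
cat <<EOF > /tmp/nginx-pod.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "nginx" },
  "spec": {
    "containers": [{
      "name": "nginx",
      "image": "nginx",
      "ports": [{ "containerPort": 80 }]
    }]
  }
}
EOF
kubectl create -f /tmp/nginx-pod.json
kubectl get pods
```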
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/centos/centos_manual_config.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/centos/centos_manual_config.md?pixel)]()

Binary file not shown.


View File

@@ -0,0 +1,97 @@
Getting started on [CloudStack](http://cloudstack.apache.org)
------------------------------------------------------------
**Table of Contents**
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Clone the playbook](#clone-the-playbook)
- [Create a Kubernetes cluster](#create-a-kubernetes-cluster)
### Introduction
CloudStack is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. [Exoscale](http://exoscale.ch), for instance, makes a [CoreOS](http://coreos.com) template available, therefore the instructions for deploying Kubernetes on CoreOS can be used. CloudStack also has a Vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt-based recipes.
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
The deployment is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](http://docs.k8s.io/getting-started-guides/coreos/coreos_multinode_cluster.md).
This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack-based Cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
### Prerequisites

```bash
$ sudo apt-get install -y python-pip
$ sudo pip install ansible
$ sudo pip install cs
```
[_cs_](http://github.com/exoscale/cs) is a python module for the CloudStack API.
Set your CloudStack endpoint, API keys and HTTP method used.
You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`.
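For example, the environment-variable route might look like this (all values are placeholders):
```bash
export CLOUDSTACK_ENDPOINT=<your cloudstack api endpoint>
export CLOUDSTACK_KEY=<your api access key>
export CLOUDSTACK_SECRET=<your api secret key>
export CLOUDSTACK_METHOD=post
```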
Or create a `~/.cloudstack.ini` file:

```
[cloudstack]
endpoint = <your cloudstack api endpoint>
key = <your api access key>
secret = <your api secret key>
method = post
```
We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
### Clone the playbook

```bash
$ git clone --recursive https://github.com/runseb/ansible-kubernetes.git
$ cd ansible-kubernetes
```
The [ansible-cloudstack](https://github.com/resmo/ansible-cloudstack) module is set up in this repository as a submodule, hence the `--recursive`.
### Create a Kubernetes cluster

You simply need to run the playbook:

```bash
$ ansible-playbook k8s.yml
```
Some variables can be edited in the `k8s.yml` file:

```yaml
vars:
  ssh_key: k8s
  k8s_num_nodes: 2
  k8s_security_group_name: k8s
  k8s_node_prefix: k8s2
  k8s_template: Linux CoreOS alpha 435 64-bit 10GB Disk
  k8s_instance_type: Tiny
```
This will start a Kubernetes master node and a number of compute nodes (by default 2).
The `instance_type` and `template` defaults are specific to [exoscale](http://exoscale.ch); edit them to specify your CloudStack cloud's template and instance type (i.e. service offering).
Check the tasks and templates in `roles/k8s` if you want to modify anything.
Once the playbook has finished, it will print out the IP of the Kubernetes master:

```
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
```
SSH to it using the key that was created, using the _core_ user, and you can list the machines in your cluster:

```bash
$ ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
$ fleetctl list-machines
MACHINE         IP              METADATA
a017c422...     <node #1 IP>    role=node
ad13bf84...     <master IP>     role=master
e9af8293...     <node #2 IP>    role=node
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/cloudstack.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/cloudstack.md?pixel)]()

View File

@@ -0,0 +1,18 @@
## Getting started on [CoreOS](http://coreos.com)
There are multiple guides on running Kubernetes with [CoreOS](http://coreos.com):
* [Single Node Cluster](coreos/coreos_single_node_cluster.md)
* [Multi-node Cluster](coreos/coreos_multinode_cluster.md)
* [Setup Multi-node Cluster on GCE in an easy way](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
* [Multi-node cluster using cloud-config and Weave on Vagrant](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)
* [Multi-node cluster using cloud-config and Vagrant](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)
* [Yet another multi-node cluster using cloud-config and Vagrant](https://github.com/AntonioMeireles/kubernetes-vagrant-coreos-cluster/blob/master/README.md) (similar to the one above but with an increased, more *aggressive* focus on features and flexibility)
* [Multi-node cluster with Vagrant and fleet units using a small OS X App](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)
* [Resizable multi-node cluster on Azure with Weave](coreos/azure/README.md)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos.md?pixel)]()

View File

@@ -0,0 +1 @@
node_modules/

View File

@@ -0,0 +1,210 @@
Kubernetes on Azure with CoreOS and [Weave](http://weave.works)
---------------------------------------------------------------
**Table of Contents**
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Let's go!](#lets-go)
- [Deploying the workload](#deploying-the-workload)
- [Scaling](#scaling)
- [Exposing the app to the outside world](#exposing-the-app-to-the-outside-world)
- [Next steps](#next-steps)
- [Tear down...](#tear-down)
## Introduction
In this guide I will demonstrate how to deploy a Kubernetes cluster to the Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
### Prerequisites
1. You need an Azure account.
## Let's go!
To get started, you need to check out the code:
```
git clone https://github.com/GoogleCloudPlatform/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used Azure CLI, you should have it already.
First, you need to install some of the dependencies with:
```
npm install
```
Now, all you need to do is:
```
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, a Kubernetes master, and 2 nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the minion nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add larger VMs later.
![VMs in Azure](initial_cluster.png)
Once the creation of Azure VMs has finished, you should see the following:
```
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```
Let's log in to the master node like so:
```
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
> Note: the config file name will be different; make sure to use the one you see.
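If you have run the scripts more than once, there will be several config files under `./output`; listing them newest-first is a simple way to find the right one (just a convenience, not part of the tooling):
```
ls -t ./output/*_ssh_conf
```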
Check there are 2 nodes in the cluster:
```
core@kube-00 ~ $ kubectl get nodes
NAME      LABELS                   STATUS
kube-01   environment=production   Ready
kube-02   environment=production   Ready
```
## Deploying the workload
Let's follow the Guestbook example now:
```
cd guestbook-example
kubectl create -f redis-master-controller.json
kubectl create -f redis-master-service.json
kubectl create -f redis-slave-controller.json
kubectl create -f redis-slave-service.json
kubectl create -f frontend-controller.json
kubectl create -f frontend-service.json
```
You need to wait for the pods to get deployed; run the following and wait for `STATUS` to change from `Unknown`, through `Pending`, to `Running`.
```
kubectl get pods --watch
```
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
```
POD                            IP          CONTAINER(S)   IMAGE(S)                                  HOST                  LABELS                                        STATUS
frontend-controller-0133o      10.2.1.14   php-redis      kubernetes/example-guestbook-php-redis    kube-01/172.18.0.13   name=frontend,uses=redisslave,redis-master    Running
frontend-controller-ls6k1      10.2.3.10   php-redis      kubernetes/example-guestbook-php-redis    <unassigned>          name=frontend,uses=redisslave,redis-master    Running
frontend-controller-oh43e      10.2.2.15   php-redis      kubernetes/example-guestbook-php-redis    kube-02/172.18.0.14   name=frontend,uses=redisslave,redis-master    Running
redis-master                   10.2.1.3    master         redis                                     kube-01/172.18.0.13   name=redis-master                             Running
redis-slave-controller-fplln   10.2.2.3    slave          brendanburns/redis-slave                  kube-02/172.18.0.14   name=redisslave,uses=redis-master             Running
redis-slave-controller-gziey   10.2.1.4    slave          brendanburns/redis-slave                  kube-01/172.18.0.13   name=redisslave,uses=redis-master             Running
```
## Scaling
Two single-core nodes are certainly not enough for a production system of today, and, as you can see, one of the pods is _unassigned_. Let's scale the cluster by adding a couple of bigger nodes.
You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/weave-demos/coreos-azure`).
First, let's set the size of the new VMs:
```
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```
./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00',
'etcd-01',
'etcd-02',
'kube-00',
'kube-01',
'kube-02',
'kube-03',
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
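Since each run saves a fresh state file, a quick way to pick up the most recent one for the commands that follow (again, just a convenience) is:
```
ls -t ./output/*_deployment.yml | head -1
```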
Back on `kube-00`:
```
core@kube-00 ~ $ kubectl get nodes
NAME      LABELS                   STATUS
kube-01   environment=production   Ready
kube-02   environment=production   Ready
kube-03   environment=production   Ready
kube-04   environment=production   Ready
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
```
core@kube-00 ~ $ kubectl get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)                                    SELECTOR            REPLICAS
frontend       php-redis      kubernetes/example-guestbook-php-redis:v2   name=frontend       3
redis-master   master         redis                                       name=redis-master   1
redis-slave    slave          kubernetes/redis-slave:v2                   name=redis-slave    2
```
As there are 4 nodes, let's scale proportionally:
```
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```
core@kube-00 ~ $ kubectl get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)                                    SELECTOR            REPLICAS
frontend       php-redis      kubernetes/example-guestbook-php-redis:v2   name=frontend       4
redis-master   master         redis                                       name=redis-master   1
redis-slave    slave          kubernetes/redis-slave:v2                   name=redis-slave    4
```
You will now have more instances of the front-end Guestbook app and of the Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
```
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
POD                         IP          CONTAINER(S)   IMAGE(S)                                  HOST                  LABELS                                        STATUS
frontend-controller-0133o   10.2.1.19   php-redis      kubernetes/example-guestbook-php-redis    kube-01/172.18.0.13   name=frontend,uses=redisslave,redis-master    Running
frontend-controller-i7hvs   10.2.4.5    php-redis      kubernetes/example-guestbook-php-redis    kube-04/172.18.0.21   name=frontend,uses=redisslave,redis-master    Running
frontend-controller-ls6k1   10.2.3.18   php-redis      kubernetes/example-guestbook-php-redis    kube-03/172.18.0.20   name=frontend,uses=redisslave,redis-master    Running
frontend-controller-oh43e   10.2.2.22   php-redis      kubernetes/example-guestbook-php-redis    kube-02/172.18.0.14   name=frontend,uses=redisslave,redis-master    Running
```
## Exposing the app to the outside world
To make sure the app is working, you probably want to load it in the browser. To access the Guestbook service from the outside world, an Azure endpoint needs to be created, as shown in the picture below.
![Creating an endpoint](external_access.png)
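If you prefer a terminal over the portal, the classic Azure CLI bundled with the deployment scripts can create the same endpoint; this is only a sketch, assuming the VM and port shown above:
```
node node_modules/azure-cli/bin/azure vm endpoint create kube-01 8000 8000
```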
You should then be able to access it from anywhere via the Azure virtual IP for `kube-01`, e.g. `http://104.40.211.194:8000/` as in the screenshot.
## Next steps
You now have a full-blown cluster running in Azure, congrats!
You should probably try deploying other [example apps](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples) or writing your own ;)
## Tear down...
If you don't want to keep paying the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you have seen.
```
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/azure/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos/azure/README.md?pixel)]()

View File

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
labels:
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Grafana"
name: monitoring-grafana
spec:
ports:
- port: 80
targetPort: 8080
selector:
name: influxGrafana

View File

@@ -0,0 +1,24 @@
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: heapster
kubernetes.io/cluster-service: "true"
name: monitoring-heapster-controller
spec:
replicas: 1
selector:
name: heapster
template:
metadata:
labels:
name: heapster
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: gcr.io/google_containers/heapster:v0.12.1
name: heapster
command:
- /heapster
- --source=kubernetes:http://kubernetes?auth=
- --sink=influxdb:http://monitoring-influxdb:8086

View File

@@ -0,0 +1,35 @@
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: influxGrafana
kubernetes.io/cluster-service: "true"
name: monitoring-influx-grafana-controller
spec:
replicas: 1
selector:
name: influxGrafana
template:
metadata:
labels:
name: influxGrafana
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: gcr.io/google_containers/heapster_influxdb:v0.3
name: influxdb
ports:
- containerPort: 8083
hostPort: 8083
- containerPort: 8086
hostPort: 8086
- image: gcr.io/google_containers/heapster_grafana:v0.7
name: grafana
env:
- name: INFLUXDB_EXTERNAL_URL
value: /api/v1/proxy/namespaces/default/services/monitoring-grafana/db/
- name: INFLUXDB_HOST
value: monitoring-influxdb
- name: INFLUXDB_PORT
value: "8086"

View File

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
labels:
name: influxGrafana
name: monitoring-influxdb
spec:
ports:
- name: http
port: 8083
targetPort: 8083
- name: api
port: 8086
targetPort: 8086
selector:
name: influxGrafana

View File

@@ -0,0 +1,37 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: elasticsearch-logging-v1
namespace: default
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
replicas: 2
selector:
k8s-app: elasticsearch-logging
version: v1
template:
metadata:
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: gcr.io/google_containers/elasticsearch:1.3
name: elasticsearch-logging
ports:
- containerPort: 9200
name: es-port
protocol: TCP
- containerPort: 9300
name: es-transport-port
protocol: TCP
volumeMounts:
- name: es-persistent-storage
mountPath: /data
volumes:
- name: es-persistent-storage
emptyDir: {}

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: default
labels:
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Elasticsearch"
spec:
ports:
- port: 9200
protocol: TCP
targetPort: es-port
selector:
k8s-app: elasticsearch-logging

View File

@@ -0,0 +1,31 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: kibana-logging-v1
namespace: default
labels:
k8s-app: kibana-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kibana-logging
version: v1
template:
metadata:
labels:
k8s-app: kibana-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: kibana-logging
image: gcr.io/google_containers/kibana:1.3
env:
- name: "ELASTICSEARCH_URL"
value: "http://elasticsearch-logging:9200"
ports:
- containerPort: 5601
name: kibana-port
protocol: TCP

View File

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: default
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Kibana"
spec:
ports:
- port: 5601
protocol: TCP
targetPort: kibana-port
selector:
k8s-app: kibana-logging

View File

@@ -0,0 +1,3 @@
#!/usr/bin/env node
require('child_process').fork('node_modules/azure-cli/bin/azure', ['login'].concat(process.argv));

View File

@@ -0,0 +1,60 @@
## This file is used as input to the deployment script, which amends it as needed.
## More specifically, we need to add peer hosts for each but the elected peer.
write_files:
- path: /opt/bin/curl-retry.sh
permissions: '0755'
owner: root
content: |
#!/bin/sh -x
until curl $@
do sleep 1
done
coreos:
units:
- name: download-etcd2.service
enable: true
command: start
content: |
[Unit]
After=network-online.target
Before=etcd2.service
Description=Download etcd2 Binaries
Documentation=https://github.com/coreos/etcd/
Requires=network-online.target
[Service]
Environment=ETCD2_RELEASE_TARBALL=https://github.com/coreos/etcd/releases/download/v2.0.11/etcd-v2.0.11-linux-amd64.tar.gz
ExecStartPre=/bin/mkdir -p /opt/bin
ExecStart=/opt/bin/curl-retry.sh --silent --location $ETCD2_RELEASE_TARBALL --output /tmp/etcd2.tgz
ExecStart=/bin/tar xzvf /tmp/etcd2.tgz -C /opt
ExecStartPost=/bin/ln -s /opt/etcd-v2.0.11-linux-amd64/etcd /opt/bin/etcd2
ExecStartPost=/bin/ln -s /opt/etcd-v2.0.11-linux-amd64/etcdctl /opt/bin/etcdctl2
RemainAfterExit=yes
Type=oneshot
[Install]
WantedBy=multi-user.target
- name: etcd2.service
enable: true
command: start
content: |
[Unit]
After=download-etcd2.service
Description=etcd 2
Documentation=https://github.com/coreos/etcd/
[Service]
Environment=ETCD_NAME=%H
Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=http://%H:2380
Environment=ETCD_LISTEN_PEER_URLS=http://%H:2380
Environment=ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001
Environment=ETCD_ADVERTISE_CLIENT_URLS=http://%H:2379,http://%H:4001
Environment=ETCD_INITIAL_CLUSTER_STATE=new
ExecStart=/opt/bin/etcd2
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
update:
group: stable
reboot-strategy: off

View File

@@ -0,0 +1,388 @@
## This file is used as input to the deployment script, which amends it as needed.
## More specifically, we need to add environment files for as many nodes as we
## are going to deploy.
write_files:
- path: /opt/bin/curl-retry.sh
permissions: '0755'
owner: root
content: |
#!/bin/sh -x
until curl $@
do sleep 1
done
- path: /opt/bin/register_minion.sh
permissions: '0755'
owner: root
content: |
#!/bin/sh -xe
minion_id="${1}"
master_url="${2}"
env_label="${3}"
until healthcheck=$(curl --fail --silent "${master_url}/healthz")
do sleep 2
done
test -n "${healthcheck}"
test "${healthcheck}" = "ok"
printf '{
"id": "%s",
"kind": "Minion",
"apiVersion": "v1beta1",
"labels": { "environment": "%s" }
}' "${minion_id}" "${env_label}" \
| /opt/bin/kubectl create -s "${master_url}" -f -
- path: /etc/kubernetes/manifests/fluentd.manifest
permissions: '0755'
owner: root
content: |
apiVersion: v1
kind: Pod
metadata:
name: fluentd-elasticsearch
spec:
containers:
- name: fluentd-elasticsearch
image: gcr.io/google_containers/fluentd-elasticsearch:1.5
env:
- name: "FLUENTD_ARGS"
value: "-qq"
volumeMounts:
- name: varlog
mountPath: /varlog
- name: containers
mountPath: /var/lib/docker/containers
volumes:
- name: varlog
hostPath:
path: /var/log
- name: containers
hostPath:
path: /var/lib/docker/containers
coreos:
update:
group: stable
reboot-strategy: off
units:
- name: systemd-networkd-wait-online.service
drop-ins:
- name: 50-check-github-is-reachable.conf
content: |
[Service]
ExecStart=/bin/sh -x -c \
'until curl --silent --fail https://status.github.com/api/status.json | grep -q \"good\"; do sleep 2; done'
- name: docker.service
drop-ins:
- name: 50-weave-kubernetes.conf
content: |
[Service]
Environment=DOCKER_OPTS='--bridge="weave" -r="false"'
- name: weave-network.target
enable: true
content: |
[Unit]
Description=Weave Network Setup Complete
Documentation=man:systemd.special(7)
RefuseManualStart=no
After=network-online.target
[Install]
WantedBy=multi-user.target
WantedBy=kubernetes-master.target
WantedBy=kubernetes-minion.target
- name: kubernetes-master.target
enable: true
command: start
content: |
[Unit]
Description=Kubernetes Cluster Master
Documentation=http://kubernetes.io/
RefuseManualStart=no
After=weave-network.target
Requires=weave-network.target
ConditionHost=kube-00
Wants=apiserver.service
Wants=scheduler.service
Wants=controller-manager.service
[Install]
WantedBy=multi-user.target
- name: kubernetes-minion.target
enable: true
command: start
content: |
[Unit]
Description=Kubernetes Cluster Minion
Documentation=http://kubernetes.io/
RefuseManualStart=no
After=weave-network.target
Requires=weave-network.target
ConditionHost=!kube-00
Wants=proxy.service
Wants=kubelet.service
[Install]
WantedBy=multi-user.target
- name: 10-weave.network
runtime: false
content: |
[Match]
Type=bridge
Name=weave*
[Network]
- name: install-weave.service
enable: true
content: |
[Unit]
After=network-online.target
Before=weave.service
Before=weave-helper.service
Before=docker.service
Description=Install Weave
Documentation=http://docs.weave.works/
Requires=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/bin/mkdir -p /opt/bin/
ExecStartPre=/opt/bin/curl-retry.sh \
--silent \
--location \
https://github.com/weaveworks/weave/releases/download/latest_release/weave \
--output /opt/bin/weave
ExecStartPre=/opt/bin/curl-retry.sh \
--silent \
--location \
https://raw.github.com/errordeveloper/weave-demos/master/poseidon/weave-helper \
--output /opt/bin/weave-helper
ExecStartPre=/usr/bin/chmod +x /opt/bin/weave
ExecStartPre=/usr/bin/chmod +x /opt/bin/weave-helper
ExecStart=/bin/echo Weave Installed
[Install]
WantedBy=weave-network.target
WantedBy=weave.service
- name: weave-helper.service
enable: true
content: |
[Unit]
After=install-weave.service
After=docker.service
Description=Weave Network Router
Documentation=http://docs.weave.works/
Requires=docker.service
Requires=install-weave.service
[Service]
ExecStart=/opt/bin/weave-helper
Restart=always
[Install]
WantedBy=weave-network.target
- name: weave.service
enable: true
content: |
[Unit]
After=install-weave.service
After=docker.service
Description=Weave Network Router
Documentation=http://docs.weave.works/
Requires=docker.service
Requires=install-weave.service
[Service]
TimeoutStartSec=0
EnvironmentFile=/etc/weave.%H.env
ExecStartPre=/opt/bin/weave setup
ExecStartPre=/opt/bin/weave launch $WEAVE_PEERS
ExecStart=/usr/bin/docker attach weave
Restart=on-failure
Restart=always
ExecStop=/opt/bin/weave stop
[Install]
WantedBy=weave-network.target
- name: weave-create-bridge.service
enable: true
content: |
[Unit]
After=network.target
After=install-weave.service
Before=weave.service
Before=docker.service
Requires=network.target
Requires=install-weave.service
[Service]
Type=oneshot
EnvironmentFile=/etc/weave.%H.env
ExecStart=/opt/bin/weave --local create-bridge
ExecStart=/usr/bin/ip addr add dev weave $BRIDGE_ADDRESS_CIDR
ExecStart=/usr/bin/ip route add $BREAKOUT_ROUTE dev weave scope link
ExecStart=/usr/bin/ip route add 224.0.0.0/4 dev weave
[Install]
WantedBy=multi-user.target
WantedBy=weave-network.target
- name: download-kubernetes.service
enable: true
content: |
[Unit]
After=network-online.target
Before=apiserver.service
Before=controller-manager.service
Before=kubelet.service
Before=proxy.service
Description=Download Kubernetes Binaries
Documentation=http://kubernetes.io/
Requires=network-online.target
[Service]
Environment=KUBE_RELEASE_TARBALL=https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.18.0/kubernetes.tar.gz
ExecStartPre=/bin/mkdir -p /opt/
ExecStart=/opt/bin/curl-retry.sh --silent --location $KUBE_RELEASE_TARBALL --output /tmp/kubernetes.tgz
ExecStart=/bin/tar xzvf /tmp/kubernetes.tgz -C /tmp/
ExecStart=/bin/tar xzvf /tmp/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /opt
ExecStartPost=/bin/chmod o+rx -R /opt/kubernetes
ExecStartPost=/bin/ln -s /opt/kubernetes/server/bin/kubectl /opt/bin/
ExecStartPost=/bin/mv /tmp/kubernetes/examples/guestbook /home/core/guestbook-example
ExecStartPost=/bin/chown core. -R /home/core/guestbook-example
ExecStartPost=/bin/rm -rf /tmp/kubernetes
ExecStartPost=/bin/sed 's/\("createExternalLoadBalancer":\) true/\1 false/' -i /home/core/guestbook-example/frontend-service.json
RemainAfterExit=yes
Type=oneshot
[Install]
WantedBy=kubernetes-master.target
WantedBy=kubernetes-minion.target
- name: apiserver.service
enable: true
content: |
[Unit]
After=download-kubernetes.service
Before=controller-manager.service
Before=scheduler.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-apiserver
Description=Kubernetes API Server
Documentation=http://kubernetes.io/
Wants=download-kubernetes.service
ConditionHost=kube-00
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-apiserver \
--address=0.0.0.0 \
--port=8080 \
$ETCD_SERVERS \
--service-cluster-ip-range=10.1.0.0/16 \
--cloud_provider=vagrant \
--logtostderr=true --v=3
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-master.target
- name: scheduler.service
enable: true
content: |
[Unit]
After=apiserver.service
After=download-kubernetes.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-scheduler
Description=Kubernetes Scheduler
Documentation=http://kubernetes.io/
Wants=apiserver.service
ConditionHost=kube-00
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-scheduler \
--logtostderr=true \
--master=127.0.0.1:8080
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-master.target
- name: controller-manager.service
enable: true
content: |
[Unit]
After=download-kubernetes.service
After=apiserver.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-controller-manager
Description=Kubernetes Controller Manager
Documentation=http://kubernetes.io/
Wants=apiserver.service
Wants=download-kubernetes.service
ConditionHost=kube-00
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-controller-manager \
--cloud_provider=vagrant \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-master.target
- name: kubelet.service
enable: true
content: |
[Unit]
After=download-kubernetes.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubelet
Description=Kubernetes Kubelet
Documentation=http://kubernetes.io/
Wants=download-kubernetes.service
ConditionHost=!kube-00
[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/
ExecStart=/opt/kubernetes/server/bin/kubelet \
--address=0.0.0.0 \
--port=10250 \
--hostname_override=%H \
--api_servers=http://kube-00:8080 \
--logtostderr=true \
--cluster_dns=10.1.0.3 \
--cluster_domain=kube.local \
--config=/etc/kubernetes/manifests/
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-minion.target
- name: proxy.service
enable: true
content: |
[Unit]
After=download-kubernetes.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-proxy
Description=Kubernetes Proxy
Documentation=http://kubernetes.io/
Wants=download-kubernetes.service
ConditionHost=!kube-00
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-proxy \
--master=http://kube-00:8080 \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-minion.target
- name: kubectl-create-minion.service
enable: true
content: |
[Unit]
After=download-kubernetes.service
Before=proxy.service
Before=kubelet.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubectl
ConditionFileIsExecutable=/opt/bin/register_minion.sh
Description=Kubernetes Create Minion
Documentation=http://kubernetes.io/
Wants=download-kubernetes.service
ConditionHost=!kube-00
[Service]
ExecStart=/opt/bin/register_minion.sh %H http://kube-00:8080 production
Type=oneshot
[Install]
WantedBy=kubernetes-minion.target

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env node
var azure = require('./lib/azure_wrapper.js');
var kube = require('./lib/deployment_logic/kubernetes.js');
azure.create_config('kube', { 'etcd': 3, 'kube': 3 });
azure.run_task_queue([
azure.queue_default_network(),
azure.queue_storage_if_needed(),
azure.queue_machines('etcd', 'stable',
kube.create_etcd_cloud_config),
azure.queue_machines('kube', 'stable',
kube.create_node_cloud_config),
]);

View File

@@ -0,0 +1,7 @@
#!/usr/bin/env node
var azure = require('./lib/azure_wrapper.js');
azure.destroy_cluster(process.argv[2]);
console.log('The cluster has been destroyed; you can delete the state file now.');

Binary file not shown.

After

Width: | Height: | Size: 286 KiB

Binary file not shown.

After

Width: | Height: | Size: 169 KiB

View File

@@ -0,0 +1,271 @@
var _ = require('underscore');
var fs = require('fs');
var cp = require('child_process');
var yaml = require('js-yaml');
var openssl = require('openssl-wrapper');
var clr = require('colors');
var inspect = require('util').inspect;
var util = require('./util.js');
var coreos_image_ids = {
'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-647.2.0',
'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-681.0.0', // untested
'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-695.0.0' // untested
};
var conf = {};
var hosts = {
collection: [],
ssh_port_counter: 2200,
};
var task_queue = [];
exports.run_task_queue = function (dummy) {
var tasks = {
todo: task_queue,
done: [],
};
var pop_task = function() {
console.log(clr.yellow('azure_wrapper/task:'), clr.grey(inspect(tasks)));
var ret = {};
ret.current = tasks.todo.shift();
ret.remaining = tasks.todo.length;
return ret;
};
(function iter (task) {
if (task.current === undefined) {
if (conf.destroying === undefined) {
create_ssh_conf();
save_state();
}
return;
} else {
if (task.current.length !== 0) {
console.log(clr.yellow('azure_wrapper/exec:'), clr.blue(inspect(task.current)));
cp.fork('node_modules/azure-cli/bin/azure', task.current)
.on('exit', function (code, signal) {
tasks.done.push({
code: code,
signal: signal,
what: task.current.join(' '),
remaining: task.remaining,
});
if (code !== 0 && conf.destroying === undefined) {
console.log(clr.red('azure_wrapper/fail: Exiting due to an error.'));
save_state();
console.log(clr.cyan('azure_wrapper/info: You probably want to destroy and re-run.'));
process.abort();
} else {
iter(pop_task());
}
});
} else {
iter(pop_task());
}
}
})(pop_task());
};
var save_state = function () {
var file_name = util.join_output_file_path(conf.name, 'deployment.yml');
try {
conf.hosts = hosts.collection;
fs.writeFileSync(file_name, yaml.safeDump(conf));
console.log(clr.yellow('azure_wrapper/info: Saved state into `%s`'), file_name);
} catch (e) {
console.log(clr.red(e));
}
};
var load_state = function (file_name) {
try {
conf = yaml.safeLoad(fs.readFileSync(file_name, 'utf8'));
console.log(clr.yellow('azure_wrapper/info: Loaded state from `%s`'), file_name);
return conf;
} catch (e) {
console.log(clr.red(e));
}
};
var create_ssh_key = function (prefix) {
var opts = {
x509: true,
nodes: true,
newkey: 'rsa:2048',
subj: '/O=Weaveworks, Inc./L=London/C=GB/CN=weave.works',
keyout: util.join_output_file_path(prefix, 'ssh.key'),
out: util.join_output_file_path(prefix, 'ssh.pem'),
};
openssl.exec('req', opts, function (err, buffer) {
if (err) console.log(clr.red(err));
fs.chmod(opts.keyout, '0600', function (err) {
if (err) console.log(clr.red(err));
});
});
return {
key: opts.keyout,
pem: opts.out,
}
}
var create_ssh_conf = function () {
var file_name = util.join_output_file_path(conf.name, 'ssh_conf');
var ssh_conf_head = [
"Host *",
"\tHostname " + conf.resources['service'] + ".cloudapp.net",
"\tUser core",
"\tCompression yes",
"\tLogLevel FATAL",
"\tStrictHostKeyChecking no",
"\tUserKnownHostsFile /dev/null",
"\tIdentitiesOnly yes",
"\tIdentityFile " + conf.resources['ssh_key']['key'],
"\n",
];
fs.writeFileSync(file_name, ssh_conf_head.concat(_.map(hosts.collection, function (host) {
return _.template("Host <%= name %>\n\tPort <%= port %>\n")(host);
})).join('\n'));
console.log(clr.yellow('azure_wrapper/info:'), clr.green('Saved SSH config, you can use it like so: `ssh -F ', file_name, '<hostname>`'));
console.log(clr.yellow('azure_wrapper/info:'), clr.green('The hosts in this deployment are:\n'), _.map(hosts.collection, function (host) { return host.name; }));
};
var get_location = function () {
if (process.env['AZ_AFFINITY']) {
return '--affinity-group=' + process.env['AZ_AFFINITY'];
} else if (process.env['AZ_LOCATION']) {
return '--location=' + process.env['AZ_LOCATION'];
} else {
return '--location=West Europe';
}
}
var get_vm_size = function () {
if (process.env['AZ_VM_SIZE']) {
return '--vm-size=' + process.env['AZ_VM_SIZE'];
} else {
return '--vm-size=Small';
}
}
exports.queue_default_network = function () {
task_queue.push([
'network', 'vnet', 'create',
get_location(),
'--address-space=172.16.0.0',
conf.resources['vnet'],
]);
}
exports.queue_storage_if_needed = function() {
if (!process.env['AZURE_STORAGE_ACCOUNT']) {
conf.resources['storage_account'] = util.rand_suffix;
task_queue.push([
'storage', 'account', 'create',
'--type=LRS',
get_location(),
conf.resources['storage_account'],
]);
process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account'];
} else {
// Preserve it for resizing, so we don't create a new one by accident
// when the environment variable is unset.
conf.resources['storage_account'] = process.env['AZURE_STORAGE_ACCOUNT'];
}
};
exports.queue_machines = function (name_prefix, coreos_update_channel, cloud_config_creator) {
var x = conf.nodes[name_prefix];
var vm_create_base_args = [
'vm', 'create',
get_location(),
get_vm_size(),
'--connect=' + conf.resources['service'],
'--virtual-network-name=' + conf.resources['vnet'],
'--no-ssh-password',
'--ssh-cert=' + conf.resources['ssh_key']['pem'],
];
var cloud_config = cloud_config_creator(x, conf);
var next_host = function (n) {
hosts.ssh_port_counter += 1;
var host = { name: util.hostname(n, name_prefix), port: hosts.ssh_port_counter };
if (cloud_config instanceof Array) {
host.cloud_config_file = cloud_config[n];
} else {
host.cloud_config_file = cloud_config;
}
hosts.collection.push(host);
return _.map([
"--vm-name=<%= name %>",
"--ssh=<%= port %>",
"--custom-data=<%= cloud_config_file %>",
], function (arg) { return _.template(arg)(host); });
};
task_queue = task_queue.concat(_(x).times(function (n) {
if (conf.resizing && n < conf.old_size) {
return [];
} else {
return vm_create_base_args.concat(next_host(n), [
coreos_image_ids[coreos_update_channel], 'core',
]);
}
}));
};
exports.create_config = function (name, nodes) {
conf = {
name: name,
nodes: nodes,
weave_salt: util.rand_string(),
resources: {
vnet: [name, 'internal-vnet', util.rand_suffix].join('-'),
service: [name, util.rand_suffix].join('-'),
ssh_key: create_ssh_key(name),
}
};
};
exports.destroy_cluster = function (state_file) {
load_state(state_file);
if (conf.hosts === undefined) {
console.log(clr.red('azure_wrapper/fail: Nothing to delete.'));
process.abort();
}
conf.destroying = true;
task_queue = _.map(conf.hosts, function (host) {
return ['vm', 'delete', '--quiet', '--blob-delete', host.name];
});
task_queue.push(['network', 'vnet', 'delete', '--quiet', conf.resources['vnet']]);
task_queue.push(['storage', 'account', 'delete', '--quiet', conf.resources['storage_account']]);
exports.run_task_queue();
};
exports.load_state_for_resizing = function (state_file, node_type, new_nodes) {
load_state(state_file);
if (conf.hosts === undefined) {
console.log(clr.red('azure_wrapper/fail: Nothing to look at.'));
process.abort();
}
conf.resizing = true;
conf.old_size = conf.nodes[node_type];
conf.old_state_file = state_file;
conf.nodes[node_type] += new_nodes;
hosts.collection = conf.hosts;
hosts.ssh_port_counter += conf.hosts.length;
process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account'];
}

View File

@@ -0,0 +1,43 @@
var _ = require('underscore');
var fs = require('fs');
var yaml = require('js-yaml');
var colors = require('colors/safe');
var write_cloud_config_from_object = function (data, output_file) {
try {
fs.writeFileSync(output_file, [
'#cloud-config',
yaml.safeDump(data),
].join("\n"));
return output_file;
} catch (e) {
console.log(colors.red(e));
}
};
exports.generate_environment_file_entry_from_object = function (hostname, environ) {
var data = {
hostname: hostname,
environ_array: _.map(environ, function (value, key) {
return [key.toUpperCase(), JSON.stringify(value.toString())].join('=');
}),
};
return {
permissions: '0600',
owner: 'root',
content: _.template("<%= environ_array.join('\\n') %>\n")(data),
path: _.template("/etc/weave.<%= hostname %>.env")(data),
};
};
exports.process_template = function (input_file, output_file, processor) {
var data = {};
try {
data = yaml.safeLoad(fs.readFileSync(input_file, 'utf8'));
} catch (e) {
console.log(colors.red(e));
}
return write_cloud_config_from_object(processor(_.clone(data)), output_file);
};

View File

@@ -0,0 +1,76 @@
var _ = require('underscore');
_.mixin(require('underscore.string').exports());
var util = require('../util.js');
var cloud_config = require('../cloud_config.js');
var etcd_initial_cluster_conf_self = function (conf) {
var port = '2380';
var data = {
nodes: _(conf.nodes.etcd).times(function (n) {
var host = util.hostname(n, 'etcd');
return [host, [host, port].join(':')].join('=http://');
}),
};
return {
'name': 'etcd2.service',
'drop-ins': [{
'name': '50-etcd-initial-cluster.conf',
'content': _.template("[Service]\nEnvironment=ETCD_INITIAL_CLUSTER=<%= nodes.join(',') %>\n")(data),
}],
};
};
var etcd_initial_cluster_conf_kube = function (conf) {
var port = '4001';
var data = {
nodes: _(conf.nodes.etcd).times(function (n) {
var host = util.hostname(n, 'etcd');
return 'http://' + [host, port].join(':');
}),
};
return {
'name': 'apiserver.service',
'drop-ins': [{
'name': '50-etcd-initial-cluster.conf',
'content': _.template("[Service]\nEnvironment=ETCD_SERVERS=--etcd_servers=<%= nodes.join(',') %>\n")(data),
}],
};
};
exports.create_etcd_cloud_config = function (node_count, conf) {
var input_file = './cloud_config_templates/kubernetes-cluster-etcd-node-template.yml';
var output_file = util.join_output_file_path('kubernetes-cluster-etcd-nodes', 'generated.yml');
return cloud_config.process_template(input_file, output_file, function(data) {
data.coreos.units.push(etcd_initial_cluster_conf_self(conf));
return data;
});
};
exports.create_node_cloud_config = function (node_count, conf) {
var elected_node = 0;
var input_file = './cloud_config_templates/kubernetes-cluster-main-nodes-template.yml';
var output_file = util.join_output_file_path('kubernetes-cluster-main-nodes', 'generated.yml');
var make_node_config = function (n) {
return cloud_config.generate_environment_file_entry_from_object(util.hostname(n, 'kube'), {
weave_password: conf.weave_salt,
weave_peers: n === elected_node ? "" : util.hostname(elected_node, 'kube'),
breakout_route: util.ipv4([10, 2, 0, 0], 16),
bridge_address_cidr: util.ipv4([10, 2, n, 1], 24),
});
};
return cloud_config.process_template(input_file, output_file, function(data) {
data.write_files = data.write_files.concat(_(node_count).times(make_node_config));
data.coreos.units.push(etcd_initial_cluster_conf_kube(conf));
return data;
});
};

View File

@@ -0,0 +1,33 @@
var _ = require('underscore');
_.mixin(require('underscore.string').exports());
exports.ipv4 = function (ocets, prefix) {
return {
ocets: ocets,
prefix: prefix,
toString: function () {
return [ocets.join('.'), prefix].join('/');
}
}
};
exports.hostname = function hostname (n, prefix) {
return _.template("<%= pre %>-<%= seq %>")({
pre: prefix || 'core',
seq: _.pad(n, 2, '0'),
});
};
exports.rand_string = function () {
var crypto = require('crypto');
var shasum = crypto.createHash('sha256');
shasum.update(crypto.randomBytes(256));
return shasum.digest('hex');
};
exports.rand_suffix = exports.rand_string().substring(50);
exports.join_output_file_path = function(prefix, suffix) {
return './output/' + [prefix, exports.rand_suffix, suffix].join('_');
};

View File

@@ -0,0 +1,19 @@
{
"name": "coreos-azure-weave",
"version": "1.0.0",
"description": "Small utility to bring up a woven CoreOS cluster",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "Ilya Dmitrichenko <errordeveloper@gmail.com>",
"license": "Apache 2.0",
"dependencies": {
"azure-cli": "^0.9.2",
"colors": "^1.0.3",
"js-yaml": "^3.2.5",
"openssl-wrapper": "^0.2.1",
"underscore": "^1.7.0",
"underscore.string": "^3.0.2"
}
}

View File

@@ -0,0 +1,10 @@
#!/usr/bin/env node
var azure = require('./lib/azure_wrapper.js');
var kube = require('./lib/deployment_logic/kubernetes.js');
azure.load_state_for_resizing(process.argv[2], 'kube', parseInt(process.argv[3] || 1));
azure.run_task_queue([
azure.queue_machines('kube', 'stable', kube.create_node_cloud_config),
]);

View File

@@ -0,0 +1,663 @@
Bare Metal CoreOS with Kubernetes (OFFLINE)
------------------------------------------
Deploy a CoreOS environment running Kubernetes. This particular guide is made to help those on an OFFLINE system, whether for testing a POC before the real deal, or because your applications are restricted to being totally offline.
**Table of Contents**
- [Prerequisites](#prerequisites)
- [High Level Design](#high-level-design)
- [This Guides variables](#this-guides-variables)
- [Setup PXELINUX CentOS](#setup-pxelinux-centos)
- [Adding CoreOS to PXE](#adding-coreos-to-pxe)
- [DHCP configuration](#dhcp-configuration)
- [Kubernetes](#kubernetes)
- [Cloud Configs](#cloud-configs)
- [master.yml](#masteryml)
- [node.yml](#nodeyml)
- [New pxelinux.cfg file](#new-pxelinuxcfg-file)
- [Specify the pxelinux targets](#specify-the-pxelinux-targets)
- [Creating test pod](#creating-test-pod)
- [Helping commands for debugging](#helping-commands-for-debugging)
## Prerequisites
1. A *CentOS 6* installation for the PXE server
2. At least two bare metal nodes to work with
## High Level Design
1. Manage the tftp directory
* /tftpboot/(coreos)(centos)(RHEL)
* /tftpboot/pxelinux.cfg/(MAC) -> linked to the Linux image config file
2. Update the pxelinux link for each install
3. Update the DHCP config to reflect the host needing deployment
4. Set up nodes to deploy CoreOS, creating an etcd cluster
5. Work without access to the public [etcd discovery tool](https://discovery.etcd.io/)
6. Install the CoreOS slaves to become Kubernetes minions
## This Guide's variables
| Node Description | MAC | IP |
| :---------------------------- | :---------------: | :---------: |
| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 |
| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 |
## Setup PXELINUX CentOS
To set up a CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version.
1. Install packages needed on CentOS
sudo yum install tftp-server dhcp syslinux
2. Enable the tftp service: edit ```vi /etc/xinetd.d/tftp``` and change disable to 'no'
disable = no
3. Copy over the syslinux images we will need.
su -
mkdir -p /tftpboot
cd /tftpboot
cp /usr/share/syslinux/pxelinux.0 /tftpboot
cp /usr/share/syslinux/menu.c32 /tftpboot
cp /usr/share/syslinux/memdisk /tftpboot
cp /usr/share/syslinux/mboot.c32 /tftpboot
cp /usr/share/syslinux/chain.c32 /tftpboot
/sbin/service dhcpd start
/sbin/service xinetd start
/sbin/chkconfig tftp on
4. Set up the default boot menu
mkdir /tftpboot/pxelinux.cfg
touch /tftpboot/pxelinux.cfg/default
5. Edit the menu ```vi /tftpboot/pxelinux.cfg/default```
default menu.c32
prompt 0
timeout 15
ONTIMEOUT local
display boot.msg
MENU TITLE Main Menu
LABEL local
MENU LABEL Boot local hard drive
LOCALBOOT 0
Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers.
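As a quick sanity check, you can try fetching the bootloader over TFTP from another host on the network (a sketch assuming the tftp-hpa client is installed there and the PXE server IP from the variables table):
tftp 10.20.30.242 -c get pxelinux.0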
## Adding CoreOS to PXE
This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment.
1. Find or create the TFTP root directory that everything will be based on.
* For this document we will assume ```/tftpboot/``` is our root directory.
2. Once we have our tftp root directory, we will create a new directory structure for our CoreOS images.
3. Download the CoreOS PXE files provided by the CoreOS team.
MY_TFTPROOT_DIR=/tftpboot
mkdir -p $MY_TFTPROOT_DIR/images/coreos/
cd $MY_TFTPROOT_DIR/images/coreos/
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
gpg --verify coreos_production_pxe.vmlinuz.sig
gpg --verify coreos_production_pxe_image.cpio.gz.sig
4. Edit the menu ```vi /tftpboot/pxelinux.cfg/default``` again
default menu.c32
prompt 0
timeout 300
ONTIMEOUT local
display boot.msg
MENU TITLE Main Menu
LABEL local
MENU LABEL Boot local hard drive
LOCALBOOT 0
MENU BEGIN CoreOS Menu
LABEL coreos-master
MENU LABEL CoreOS Master
KERNEL images/coreos/coreos_production_pxe.vmlinuz
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-single-master.yml
LABEL coreos-slave
MENU LABEL CoreOS Slave
KERNEL images/coreos/coreos_production_pxe.vmlinuz
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-slave.yml
MENU END
With this configuration the machine will boot from the local drive by default but have the option to PXE-image CoreOS.
## DHCP configuration
This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will continue to boot other images.
1. Add the ```filename``` directive to the _host_ or _subnet_ sections.
filename "/tftpboot/pxelinux.0";
2. At this point we want to make pxelinux configuration files that will be the templates for the different CoreOS deployments.
subnet 10.20.30.0 netmask 255.255.255.0 {
next-server 10.20.30.242;
option broadcast-address 10.20.30.255;
filename "<other default image>";
...
# http://www.syslinux.org/wiki/index.php/PXELINUX
host core_os_master {
hardware ethernet d0:00:67:13:0d:00;
option routers 10.20.30.1;
fixed-address 10.20.30.40;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
host core_os_slave {
hardware ethernet d0:00:67:13:0d:01;
option routers 10.20.30.1;
fixed-address 10.20.30.41;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
host core_os_slave2 {
hardware ethernet d0:00:67:13:0d:02;
option routers 10.20.30.1;
fixed-address 10.20.30.42;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
...
}
We will be specifying the node configuration later in the guide.
## Kubernetes
To deploy our configuration we need to create an ```etcd``` master. To do so we want to PXE-boot CoreOS with a specific cloud-config.yml. We have two options here:
1. Template the cloud-config file and programmatically create new static configs for different cluster setups.
2. Run a service discovery protocol in our stack to do auto discovery.
For this demo we just make a single static ```etcd``` server to host our Kubernetes and ```etcd``` master services.
Since we are OFFLINE, most of the helper processes in CoreOS and Kubernetes are unavailable. For our setup we will therefore have to download the Kubernetes binaries and serve them up in our local environment.
An easy solution is to host a small web server on the DHCP/TFTP host to make all our binaries available to the local CoreOS PXE machines.
To get this up and running we are going to set up a simple ```apache``` server to serve the binaries needed to bootstrap Kubernetes.
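If the ```httpd``` package is not already installed, a minimal Apache setup on CentOS 6 might look like this (a sketch assuming the stock packages; adjust if you already run a web server):
yum install -y httpd
service httpd start
chkconfig httpd on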
This is on the PXE server from the previous section:
rm /etc/httpd/conf.d/welcome.conf
cd /var/www/html/
wget -O kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64
wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate
wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate
This sets up the binaries we need to run Kubernetes. In the future this would need to be enhanced to pull updates from the Internet.
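Before PXE-booting any nodes, it is worth confirming that the web server actually serves the binaries. A quick check from any host on the network (assuming the PXE server IP from the variables table):
curl -I http://10.20.30.242/kubectl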
Now for the good stuff!
## Cloud Configs
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.
These are based on the work found here: [master.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/node.yaml)
To make the setup work, you need to replace a few placeholders:
- Replace `<PXE_SERVER_IP>` with your PXE server IP address (e.g. 10.20.30.242)
- Replace `<MASTER_SERVER_IP>` with the Kubernetes master IP address (e.g. 10.20.30.40)
- If you run a private Docker registry, replace `rdocker.example.com` with your registry's DNS name.
- If you use a proxy, replace `rproxy.example.com` with your proxy server (and port)
- Add your own SSH public key(s) to the cloud config at the end
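Rather than editing every placeholder by hand, you can substitute them with ```sed``` once the two cloud-config files below have been created. This is only a sketch, assuming the example addresses from this guide's variables table:
sed -i 's/<PXE_SERVER_IP>/10.20.30.242/g; s/<MASTER_SERVER_IP>/10.20.30.40/g' /var/www/html/coreos/pxe-cloud-config-*.yml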
### master.yml
On the PXE server, make the file and fill in the variables: ```vi /var/www/html/coreos/pxe-cloud-config-master.yml```.
#cloud-config
---
write_files:
- path: /opt/bin/waiter.sh
owner: root
content: |
#! /usr/bin/bash
until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done
- path: /opt/bin/kubernetes-download.sh
owner: root
permissions: 0755
content: |
#! /usr/bin/bash
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubectl"
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubernetes"
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubecfg"
chmod +x /opt/bin/*
- path: /etc/profile.d/opt-path.sh
owner: root
permissions: 0755
content: |
#! /usr/bin/bash
PATH=$PATH:/opt/bin
coreos:
units:
- name: 10-eno1.network
runtime: true
content: |
[Match]
Name=eno1
[Network]
DHCP=yes
- name: 20-nodhcp.network
runtime: true
content: |
[Match]
Name=en*
[Network]
DHCP=none
- name: get-kube-tools.service
runtime: true
command: start
content: |
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStart=/opt/bin/kubernetes-download.sh
RemainAfterExit=yes
Type=oneshot
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: etcd.service
command: start
content: |
[Unit]
Description=etcd
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
User=etcd
PermissionsStartOnly=true
ExecStart=/usr/bin/etcd \
--name ${DEFAULT_IPV4} \
--addr ${DEFAULT_IPV4}:4001 \
--bind-addr 0.0.0.0 \
--cluster-active-size 1 \
--data-dir /var/lib/etcd \
--http-read-timeout 86400 \
--peer-addr ${DEFAULT_IPV4}:7001 \
--snapshot true
Restart=always
RestartSec=10s
- name: fleet.socket
command: start
content: |
[Socket]
ListenStream=/var/run/fleet.sock
- name: fleet.service
command: start
content: |
[Unit]
Description=fleet daemon
Wants=etcd.service
After=etcd.service
Wants=fleet.socket
After=fleet.socket
[Service]
Environment="FLEET_ETCD_SERVERS=http://127.0.0.1:4001"
Environment="FLEET_METADATA=role=master"
ExecStart=/usr/bin/fleetd
Restart=always
RestartSec=10s
- name: etcd-waiter.service
command: start
content: |
[Unit]
Description=etcd waiter
Wants=network-online.target
Wants=etcd.service
After=etcd.service
After=network-online.target
Before=flannel.service
Before=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/chmod +x /opt/bin/waiter.sh
ExecStart=/usr/bin/bash /opt/bin/waiter.sh
RemainAfterExit=true
Type=oneshot
- name: flannel.service
command: start
content: |
[Unit]
Wants=etcd-waiter.service
After=etcd-waiter.service
Requires=etcd.service
After=etcd.service
After=network-online.target
Wants=network-online.target
Description=flannel is an etcd backed overlay network for containers
[Service]
Type=notify
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16", "Backend": {"Type": "vxlan"}}'
ExecStart=/opt/bin/flanneld
- name: kube-apiserver.service
command: start
content: |
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=etcd.service
After=etcd.service
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-apiserver
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
ExecStart=/opt/bin/kube-apiserver \
--address=0.0.0.0 \
--port=8080 \
--service-cluster-ip-range=10.100.0.0/16 \
--etcd_servers=http://127.0.0.1:4001 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-controller-manager.service
command: start
content: |
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-controller-manager
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
ExecStart=/opt/bin/kube-controller-manager \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-scheduler.service
command: start
content: |
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-scheduler
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
Restart=always
RestartSec=10
- name: kube-register.service
command: start
content: |
[Unit]
Description=Kubernetes Registration Service
Documentation=https://github.com/kelseyhightower/kube-register
Requires=kube-apiserver.service
After=kube-apiserver.service
Requires=fleet.service
After=fleet.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-register
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register
ExecStart=/opt/bin/kube-register \
--metadata=role=node \
--fleet-endpoint=unix:///var/run/fleet.sock \
--healthz-port=10248 \
--api-endpoint=http://127.0.0.1:8080
Restart=always
RestartSec=10
update:
group: stable
reboot-strategy: off
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
### node.yml
On the PXE server, make the file and fill in the variables: ```vi /var/www/html/coreos/pxe-cloud-config-slave.yml```.
#cloud-config
---
write_files:
- path: /etc/default/docker
content: |
DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"'
coreos:
units:
- name: 10-eno1.network
runtime: true
content: |
[Match]
Name=eno1
[Network]
DHCP=yes
- name: 20-nodhcp.network
runtime: true
content: |
[Match]
Name=en*
[Network]
DHCP=none
- name: etcd.service
mask: true
- name: docker.service
drop-ins:
- name: 50-insecure-registry.conf
content: |
[Service]
Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com"
- name: fleet.service
command: start
content: |
[Unit]
Description=fleet daemon
Wants=fleet.socket
After=fleet.socket
[Service]
Environment="FLEET_ETCD_SERVERS=http://<MASTER_SERVER_IP>:4001"
Environment="FLEET_METADATA=role=node"
ExecStart=/usr/bin/fleetd
Restart=always
RestartSec=10s
- name: flannel.service
command: start
content: |
[Unit]
After=network-online.target
Wants=network-online.target
Description=flannel is an etcd backed overlay network for containers
[Service]
Type=notify
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
ExecStart=/opt/bin/flanneld -etcd-endpoints http://<MASTER_SERVER_IP>:4001
- name: docker.service
command: start
content: |
[Unit]
After=flannel.service
Wants=flannel.service
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
EnvironmentFile=-/etc/default/docker
EnvironmentFile=/run/flannel/subnet.env
ExecStartPre=/bin/mount --make-rprivate /
ExecStart=/usr/bin/docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS}
[Install]
WantedBy=multi-user.target
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: kube-proxy.service
command: start
content: |
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-proxy
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
ExecStart=/opt/bin/kube-proxy \
--etcd_servers=http://<MASTER_SERVER_IP>:4001 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-kubelet.service
command: start
content: |
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kubelet
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
ExecStart=/opt/bin/kubelet \
--address=0.0.0.0 \
--port=10250 \
--hostname_override=${DEFAULT_IPV4} \
--api_servers=<MASTER_SERVER_IP>:8080 \
--healthz_bind_address=0.0.0.0 \
--healthz_port=10248 \
--logtostderr=true
Restart=always
RestartSec=10
update:
group: stable
reboot-strategy: off
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
## New pxelinux.cfg file
Create a pxelinux target file for a _slave_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-slave```
default coreos
prompt 1
timeout 15
display boot.msg
label coreos
menu default
kernel images/coreos/coreos_production_pxe.vmlinuz
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
And one for the _master_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-master```
default coreos
prompt 1
timeout 15
display boot.msg
label coreos
menu default
kernel images/coreos/coreos_production_pxe.vmlinuz
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
## Specify the pxelinux targets
Now that we have our new targets set up for master and slave, we want to map the specific hosts to those targets. We will do this by using the pxelinux mechanism of pointing a specific MAC address at a specific pxelinux.cfg file.
Refer to the MAC address table at the beginning of this guide. More detailed documentation can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX).
cd /tftpboot/pxelinux.cfg
ln -s coreos-node-master 01-d0-00-67-13-0d-00
ln -s coreos-node-slave 01-d0-00-67-13-0d-01
ln -s coreos-node-slave 01-d0-00-67-13-0d-02
Reboot these servers to get the images PXEd and ready for running containers!
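Once the machines come back up, you can confirm from the master that the slaves have registered; the cloud-config downloaded ```kubectl``` into /opt/bin:
/opt/bin/kubectl get minions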
## Creating test pod
Now that CoreOS with Kubernetes is up and running, let's spin up some Kubernetes pods to demonstrate the system.
See [a simple nginx example](../../../examples/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../../examples).
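As a quick smoke test before the larger examples, you can start a throwaway nginx pod from the master. This is only a sketch: it assumes your private registry at ```rdocker.example.com``` mirrors the nginx image, since the cluster has no Internet access:
/opt/bin/kubectl run-container my-nginx --image=rdocker.example.com:5000/nginx --replicas=2 --port=80
/opt/bin/kubectl get pods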
## Helping commands for debugging
List all keys in etcd:
etcdctl ls --recursive
List fleet machines:
fleetctl list-machines
Check system status of services on master node:
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
systemctl status kube-register
Check system status of services on a minion node:
systemctl status kube-kubelet
systemctl status docker.service
List Kubernetes pods and minions:
kubectl get pods
kubectl get minions
Kill all pods:
for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/bare_metal_offline.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos/bare_metal_offline.md?pixel)]()

View File

@@ -0,0 +1,180 @@
#cloud-config
---
hostname: master
coreos:
etcd2:
name: master
listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
initial-cluster-token: k8s_etcd
listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001
initial-advertise-peer-urls: http://$private_ipv4:2380
initial-cluster: master=http://$private_ipv4:2380
initial-cluster-state: new
fleet:
metadata: "role=master"
units:
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: fleet.service
command: start
- name: flanneld.service
command: start
drop-ins:
- name: 50-network-config.conf
content: |
[Unit]
Requires=etcd2.service
[Service]
ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
- name: docker-cache.service
command: start
content: |
[Unit]
Description=Docker cache proxy
Requires=early-docker.service
After=early-docker.service
Before=early-docker.target
[Service]
Restart=always
TimeoutStartSec=0
RestartSec=5
Environment="TMPDIR=/var/tmp/"
Environment="DOCKER_HOST=unix:///var/run/early-docker.sock"
ExecStartPre=-/usr/bin/docker kill docker-registry
ExecStartPre=-/usr/bin/docker rm docker-registry
ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest
# GUNICORN_OPTS is an workaround for
# https://github.com/docker/docker-registry/issues/892
ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \
-e STANDALONE=false \
-e GUNICORN_OPTS=[--preload] \
-e MIRROR_SOURCE=https://registry-1.docker.io \
-e MIRROR_SOURCE_INDEX=https://index.docker.io \
-e MIRROR_TAGS_CACHE_TTL=1800 \
quay.io/devops/docker-registry:latest
- name: docker.service
content: |
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=docker.socket early-docker.target network.target
Requires=docker.socket early-docker.target
[Service]
Environment=TMPDIR=/var/tmp
EnvironmentFile=-/run/flannel_docker_opts.env
EnvironmentFile=/etc/network-environment
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
ExecStart=/usr/lib/coreos/dockerd --daemon --host=fd:// --registry-mirror=http://${DEFAULT_IPV4}:5000 $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
[Install]
WantedBy=multi-user.target
drop-ins:
- name: 51-docker-mirror.conf
content: |
[Unit]
# making sure that docker-cache is up and that flanneld finished
# startup, otherwise containers won't land in flannel's network...
Requires=docker-cache.service flanneld.service
After=docker-cache.service flanneld.service
- name: kube-apiserver.service
command: start
content: |
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=etcd2.service setup-network-environment.service
After=etcd2.service setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-apiserver
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
ExecStart=/opt/bin/kube-apiserver \
--allow_privileged=true \
--insecure_bind_address=0.0.0.0 \
--insecure_port=8080 \
--kubelet_https=true \
--secure_port=6443 \
--service-cluster-ip-range=10.100.0.0/16 \
--etcd_servers=http://127.0.0.1:4001 \
--public_address_override=${DEFAULT_IPV4} \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-controller-manager.service
command: start
content: |
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-controller-manager
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
ExecStart=/opt/bin/kube-controller-manager \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-scheduler.service
command: start
content: |
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-scheduler
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
Restart=always
RestartSec=10
- name: kube-register.service
command: start
content: |
[Unit]
Description=Kubernetes Registration Service
Documentation=https://github.com/kelseyhightower/kube-register
Requires=kube-apiserver.service
After=kube-apiserver.service
Requires=fleet.service
After=fleet.service
[Service]
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-register -z /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register
ExecStart=/opt/bin/kube-register \
--metadata=role=node \
--fleet-endpoint=unix:///var/run/fleet.sock \
--api-endpoint=http://127.0.0.1:8080 \
--healthz-port=10248
Restart=always
RestartSec=10
update:
group: alpha
reboot-strategy: off


@@ -0,0 +1,105 @@
#cloud-config
write-files:
- path: /opt/bin/wupiao
permissions: '0755'
content: |
#!/bin/bash
# [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen
[ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \
--silent --head --fail \
http://${1}:${2}; do sleep 1 && echo -n .; done;
exit $?
coreos:
etcd2:
listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
initial-cluster: master=http://<master-private-ip>:2380
proxy: on
fleet:
metadata: "role=node"
units:
- name: fleet.service
command: start
- name: flanneld.service
command: start
drop-ins:
- name: 50-network-config.conf
content: |
[Unit]
Requires=etcd2.service
[Service]
ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
- name: docker.service
command: start
drop-ins:
- name: 51-docker-mirror.conf
content: |
[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
Environment=DOCKER_OPTS='--registry-mirror=http://<master-private-ip>:5000'
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: kube-proxy.service
command: start
content: |
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-proxy
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
# wait for kubernetes master to be up and ready
ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
ExecStart=/opt/bin/kube-proxy \
--master=<master-private-ip>:8080 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-kubelet.service
command: start
content: |
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubelet
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
# wait for kubernetes master to be up and ready
ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
ExecStart=/opt/bin/kubelet \
--address=0.0.0.0 \
--port=10250 \
--hostname_override=${DEFAULT_IPV4} \
--api_servers=<master-private-ip>:8080 \
--allow_privileged=true \
--logtostderr=true \
--healthz_bind_address=0.0.0.0 \
--healthz_port=10248
Restart=always
RestartSec=10
update:
group: alpha
reboot-strategy: off


@@ -0,0 +1,168 @@
#cloud-config
---
hostname: master
coreos:
etcd2:
name: master
listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
initial-cluster-token: k8s_etcd
listen-peer-urls: http://0.0.0.0:2380,http://0.0.0.0:7001
initial-advertise-peer-urls: http://0.0.0.0:2380
initial-cluster: master=http://0.0.0.0:2380
initial-cluster-state: new
units:
- name: etcd2.service
command: start
- name: fleet.service
command: start
- name: flanneld.service
command: start
drop-ins:
- name: 50-network-config.conf
content: |
[Unit]
Requires=etcd2.service
[Service]
ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
- name: docker-cache.service
command: start
content: |
[Unit]
Description=Docker cache proxy
Requires=early-docker.service
After=early-docker.service
Before=early-docker.target
[Service]
Restart=always
TimeoutStartSec=0
RestartSec=5
Environment="TMPDIR=/var/tmp/"
Environment="DOCKER_HOST=unix:///var/run/early-docker.sock"
ExecStartPre=-/usr/bin/docker kill docker-registry
ExecStartPre=-/usr/bin/docker rm docker-registry
ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest
# GUNICORN_OPTS is an workaround for
# https://github.com/docker/docker-registry/issues/892
ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \
-e STANDALONE=false \
-e GUNICORN_OPTS=[--preload] \
-e MIRROR_SOURCE=https://registry-1.docker.io \
-e MIRROR_SOURCE_INDEX=https://index.docker.io \
-e MIRROR_TAGS_CACHE_TTL=1800 \
quay.io/devops/docker-registry:latest
- name: docker.service
command: start
drop-ins:
- name: 51-docker-mirror.conf
content: |
[Unit]
# making sure that docker-cache is up and that flanneld finished
# startup, otherwise containers won't land in flannel's network...
Requires=docker-cache.service flanneld.service
After=docker-cache.service flanneld.service
[Service]
Environment=DOCKER_OPTS='--registry-mirror=http://$private_ipv4:5000'
- name: kube-apiserver.service
command: start
content: |
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=etcd2.service
After=etcd2.service
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-apiserver
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
ExecStart=/opt/bin/kube-apiserver \
--allow_privileged=true \
--insecure_bind_address=0.0.0.0 \
--insecure_port=8080 \
--kubelet_https=true \
--secure_port=6443 \
--service-cluster-ip-range=10.100.0.0/16 \
--etcd_servers=http://127.0.0.1:4001 \
--public_address_override=127.0.0.1 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-controller-manager.service
command: start
content: |
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-controller-manager
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
ExecStart=/opt/bin/kube-controller-manager \
--machines=127.0.0.1 \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-scheduler.service
command: start
content: |
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-scheduler
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
Restart=always
RestartSec=10
- name: kube-proxy.service
command: start
content: |
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=etcd2.service
After=etcd2.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-proxy
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
ExecStart=/opt/bin/kube-proxy \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-kubelet.service
command: start
content: |
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=etcd2.service
After=etcd2.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubelet
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
ExecStart=/opt/bin/kubelet \
--address=0.0.0.0 \
--port=10250 \
--hostname_override=127.0.0.1 \
--api_servers=127.0.0.1:8080 \
--allow_privileged=true \
--logtostderr=true \
--healthz_bind_address=0.0.0.0 \
--healthz_port=10248
Restart=always
RestartSec=10
update:
group: alpha
reboot-strategy: off


@@ -0,0 +1,142 @@
# CoreOS Multinode Cluster
Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster.
> **Attention**: This requires at least CoreOS version **[653.0.0][coreos653]**, as this was the first release to include etcd2.
[coreos653]: https://coreos.com/releases/#653.0.0
## Overview
* Provision the master node
* Capture the master node private IP address
* Edit node.yaml
* Provision one or more worker nodes
### AWS
*Attention:* Replace ```<ami_image_id>``` below with a [suitable version of the CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).
#### Provision the Master
```
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
```
```
aws ec2 run-instances \
--image-id <ami_image_id> \
--key-name <keypair> \
--region us-west-2 \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://master.yaml
```
#### Capture the private IP address
```
aws ec2 describe-instances --instance-ids <master-instance-id>
```
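You can also extract the private IP in one step with the AWS CLI's `--query` flag (a convenience sketch; substitute your real instance ID):
```sh
# JMESPath projection: print just the private IP of the first instance
aws ec2 describe-instances \
  --instance-ids <master-instance-id> \
  --query 'Reservations[0].Instances[0].PrivateIpAddress' \
  --output text
```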
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
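If you prefer not to edit by hand, a one-line substitution also works (a sketch; `10.10.0.5` is an illustrative IP, and GNU sed is assumed):
```sh
# On BSD/macOS sed, use: sed -i '' 's/.../.../g' node.yaml
sed -i 's/<master-private-ip>/10.10.0.5/g' node.yaml
```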
#### Provision worker nodes
```
aws ec2 run-instances \
--count 1 \
--image-id <ami_image_id> \
--key-name <keypair> \
--region us-west-2 \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://node.yaml
```
### GCE
*Attention:* Replace ```<gce_image_id>``` below with a [suitable version of the CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
#### Provision the Master
```
gcloud compute instances create master \
--image-project coreos-cloud \
--image <gce_image_id> \
--boot-disk-size 200GB \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=master.yaml
```
#### Capture the private IP address
```
gcloud compute instances list
```
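Recent gcloud releases can also print just the internal IP via a `--format` projection (a sketch; with older releases, simply read the IP from the `list` output):
```sh
gcloud compute instances describe master \
  --zone us-central1-a \
  --format='get(networkInterfaces[0].networkIP)'
```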
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
#### Provision worker nodes
```
gcloud compute instances create node1 \
--image-project coreos-cloud \
--image <gce_image_id> \
--boot-disk-size 200GB \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=node.yaml
```
#### Establish network connectivity
Next, set up an ssh tunnel to the master so you can run kubectl from your local host.
In one terminal, run `gcloud compute ssh master --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second
run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`.
### VMware Fusion
#### Create the master config-drive
```
mkdir -p /tmp/new-drive/openstack/latest/
cp master.yaml /tmp/new-drive/openstack/latest/user_data
hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o master.iso /tmp/new-drive
```
#### Provision the Master
Boot the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using `master.iso` as a config drive.
#### Capture the master private IP address
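One way, assuming you can open a shell on the master VM (for example via its console), is to read the address directly:
```sh
# Interface names vary between systems (eth0, ens33, ...)
ip -4 addr show
```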
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
#### Create the node config-drive
```
mkdir -p /tmp/new-drive/openstack/latest/
cp node.yaml /tmp/new-drive/openstack/latest/user_data
hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o node.iso /tmp/new-drive
```
#### Provision worker nodes
Boot one or more nodes from the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using `node.iso` as a config drive.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_multinode_cluster.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md?pixel)]()


@@ -0,0 +1,66 @@
# CoreOS - Single Node Kubernetes Cluster
Use the [standalone.yaml](cloud-configs/standalone.yaml) cloud-config to provision a single node Kubernetes cluster.
> **Attention**: This requires at least CoreOS version **[653.0.0][coreos653]**, as this was the first release to include etcd2.
[coreos653]: https://coreos.com/releases/#653.0.0
### CoreOS image versions
### AWS
```
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
```
*Attention:* Replace ```<ami_image_id>``` below with a [suitable version of the CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).
```
aws ec2 run-instances \
--image-id <ami_image_id> \
--key-name <keypair> \
--region us-west-2 \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://standalone.yaml
```
### GCE
*Attention:* Replace ```<gce_image_id>``` below with a [suitable version of the CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
```
gcloud compute instances create standalone \
--image-project coreos-cloud \
--image <gce_image_id> \
--boot-disk-size 200GB \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=standalone.yaml
```
Next, set up an ssh tunnel to the instance so you can run kubectl from your local host.
In one terminal, run `gcloud compute ssh standalone --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second
run `gcloud compute ssh standalone --ssh-flag="-R 8080:127.0.0.1:8080"`.
### VMware Fusion
Create a [config-drive](https://coreos.com/docs/cluster-management/setup/cloudinit-config-drive) ISO.
```
mkdir -p /tmp/new-drive/openstack/latest/
cp standalone.yaml /tmp/new-drive/openstack/latest/user_data
hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o standalone.iso /tmp/new-drive
```
Boot the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using the `standalone.iso` as a config drive.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_single_node_cluster.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md?pixel)]()


@@ -0,0 +1,58 @@
Running Multi-Node Kubernetes Using Docker
------------------------------------------
_Note_:
These instructions are significantly more advanced than the [single node](docker.md) instructions. If you are
interested in just starting to explore Kubernetes, we recommend that you start there.
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Overview](#overview)
- [Bootstrap Docker](#bootstrap-docker)
- [Master Node](#master-node)
- [Adding a worker node](#adding-a-worker-node)
- [Testing your cluster](#testing-your-cluster)
## Prerequisites
1. You need a machine with docker installed.
## Overview
This guide will set up a 2-node kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](k8s-docker.png)
### Bootstrap Docker
This guide also uses a pattern of running two instances of the Docker daemon:
1) A _bootstrap_ Docker instance which is used to start system daemons like ```flanneld``` and ```etcd```
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
This pattern is necessary because the ```flannel``` daemon is responsible for setting up and managing the network that interconnects
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
## Master Node
The first step in the process is to initialize the master node.
See [here](docker-multinode/master.md) for detailed instructions.
## Adding a worker node
Once your master is up and running you can add one or more workers on different machines.
See [here](docker-multinode/worker.md) for detailed instructions.
## Testing your cluster
Once your cluster has been created you can [test it out](docker-multinode/testing.md)
For more complete applications, please look in the [examples directory](../../examples)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker-multinode.md?pixel)]()


@@ -0,0 +1,149 @@
## Installing a Kubernetes Master Node via Docker
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is ```${MASTER_IP}```
There are two main phases to installing the master:
* [Setting up ```flanneld``` and ```etcd```](#setting-up-flanneld-and-etcd)
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)
## Setting up flanneld and etcd
### Setup Docker-Bootstrap
We're going to use ```flannel``` to set up networking between Docker daemons. Flannel itself (and etcd, on which it relies) will run inside
Docker containers. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures, as in the sketch below.
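For example, a minimal systemd unit for the bootstrap daemon might look like this (a sketch, assuming the docker binary is at `/usr/bin/docker`; adapt paths and options to your system):
```
[Unit]
Description=Bootstrap Docker daemon (for flannel and etcd)
After=network.target

[Service]
# Mirrors the ad-hoc command above: a second daemon on its own socket,
# with no iptables/bridge management and a separate graph directory.
ExecStart=/usr/bin/docker -d -H unix:///var/run/docker-bootstrap.sock \
    -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false \
    --bridge=none --graph=/var/lib/docker-bootstrap
Restart=always

[Install]
WantedBy=multi-user.target
```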
### Start etcd for flannel and the API server to use
Run:
```
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/google_containers/etcd:2.0.9 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our pods of containers.
Flannel re-configures the bridge that Docker uses for networking. As a result, we need to stop Docker, reconfigure its networking, and then restart Docker.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.
Stopping Docker is system-dependent; it may be:
```sh
sudo /etc/init.d/docker stop
```
or
```sh
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
Now run flanneld itself:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0
```
The previous command should have printed a really long hash: the container ID. Copy it.
Now get the subnet settings from flannel:
```
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
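The output is a small environment file; it should look something like this (your subnet and MTU will differ):
```
FLANNEL_SUBNET=10.1.42.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
```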
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
Regardless, you need to add the following to the docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
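For example, on a Debian-style system where Docker reads ```/etc/default/docker```, a sketch might be (the subnet and MTU values are illustrative; use the ones from your ```subnet.env``` output above):
```sh
# Append flannel's values to Docker's startup options
echo 'DOCKER_OPTS="--bip=10.1.42.1/24 --mtu=1472"' | sudo tee -a /etc/default/docker
```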
#### Remove the existing Docker bridge
Docker creates a bridge named ```docker0``` by default. You need to remove this:
```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
#### Restart Docker
Again, this is system-dependent; it may be:
```sh
sudo /etc/init.d/docker start
```
or it may be:
```sh
systemctl start docker
```
## Starting the Kubernetes Master
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
```sh
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi
```
### Also run the service proxy
```sh
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!
Download the kubectl binary
([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/darwin/amd64/kubectl))
([linux](http://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl))
List the nodes
```sh
kubectl get nodes
```
This should print:
```
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
If the status of the node is ```NotReady``` or ```Unknown```, please check that all of the containers you created are successfully running.
If all else fails, ask questions on IRC at #google-containers.
### Next steps
Move on to [adding one or more workers](worker.md)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/master.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker-multinode/master.md?pixel)]()


@@ -0,0 +1,63 @@
## Testing your Kubernetes cluster
To validate that your node(s) have been added, run:
```sh
kubectl get nodes
```
That should show something like:
```
NAME LABELS STATUS
10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
If the status of any node is ```Unknown``` or ```NotReady``` your cluster is broken, double check that all containers are running properly, and if all else fails, contact us on IRC at
```#google-containers``` for advice.
### Run an application
```sh
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```
Now run ```docker ps```; you should see nginx running. You may need to wait a few minutes for the image to be pulled.
### Expose it as a service:
```sh
kubectl expose rc nginx --port=80
```
This should print:
```
NAME LABELS SELECTOR IP PORT(S)
nginx <none> run=nginx <ip-addr> 80/TCP
```
Hit the webserver:
```sh
curl <insert-ip-from-above-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
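If the service is up, curl should return nginx's default welcome page, which begins something like:
```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
```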
### Scaling
Now try to scale up the nginx you created before:
```sh
kubectl scale rc nginx --replicas=3
```
And list the pods
```sh
kubectl get pods
```
You should see pods landing on the newly added machine.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/testing.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker-multinode/testing.md?pixel)]()


@@ -0,0 +1,114 @@
## Adding a Kubernetes worker node via Docker.
These instructions are very similar to the master set-up above, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.
We will assume that the IP address of this node is ```${NODE_IP}``` and you have the IP address of the master in ```${MASTER_IP}``` that you created in the [master instructions](master.md).
For each worker node, there are three steps:
* [Set up ```flanneld``` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
### Set up Flanneld on the worker node
As before, the Flannel daemon is going to provide network connectivity.
#### Set up a bootstrap docker:
As before, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.
Stopping Docker is system-dependent; it may be:
```sh
sudo /etc/init.d/docker stop
```
or
```sh
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001
```
The previous command should have printed a really long hash: the container ID. Copy it.
Now get the subnet settings from flannel:
```
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
Regardless, you need to add the following to the docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
#### Remove the existing Docker bridge
Docker creates a bridge named ```docker0``` by default. You need to remove this:
```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
#### Restart Docker
Again, this is system-dependent; it may be:
```sh
sudo /etc/init.d/docker start
```
or it may be:
```sh
systemctl start docker
```
### Start Kubernetes on the worker node
#### Run the kubelet
Again, this is similar to the command above, but ```--api_servers``` now points to the master we set up at the beginning.
```sh
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i)
```
#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services```.
```sh
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
```
### Next steps
Move on to [testing your cluster](testing.md) or [adding another node](#adding-a-kubernetes-worker-node-via-docker).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/worker.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker-multinode/worker.md?pixel)]()


@@ -0,0 +1,105 @@
Running kubernetes locally via Docker
-------------------------------------
**Table of Contents**
- [Overview](#setting-up-a-cluster)
- [Prerequisites](#prerequisites)
- [Step One: Run etcd](#step-one-run-etcd)
- [Step Two: Run the master](#step-two-run-the-master)
- [Step Three: Run the service proxy](#step-three-run-the-service-proxy)
- [Test it out](#test-it-out)
- [Run an application](#run-an-application)
- [Expose it as a service:](#expose-it-as-a-service)
- [A note on turning down your cluster](#a-note-on-turning-down-your-cluster)
### Overview
The following instructions show you how to set up a simple, single node kubernetes cluster using Docker.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](k8s-singlenode-docker.png)
### Prerequisites
1. You need to have docker installed on one machine.
### Step One: Run etcd
```sh
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
### Step Two: Run the master
```sh
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
```
This actually runs the kubelet, which in turn runs a [pod](http://docs.k8s.io/pods.md) that contains the other master components.
### Step Three: Run the service proxy
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*
```sh
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
### Test it out
At this point you should have a running kubernetes cluster. You can test this by downloading the kubectl
binary
([OS X](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/darwin/amd64/kubectl))
([linux](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl))
*Note:*
On OS X you will need to set up port forwarding via ssh:
```sh
boot2docker ssh -L8080:localhost:8080
```
List the nodes in your cluster by running:
```sh
kubectl get nodes
```
This should print:
```
NAME LABELS STATUS
127.0.0.1 <none> Ready
```
If you are running different kubernetes clusters, you may need to specify ```-s http://localhost:8080``` to select the local cluster.
### Run an application
```sh
kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
```
Now run ```docker ps```; you should see nginx running. You may need to wait a few minutes for the image to be pulled.
### Expose it as a service:
```sh
kubectl expose rc nginx --port=80
```
This should print:
```
NAME LABELS SELECTOR IP PORT(S)
nginx <none> run=nginx <ip-addr> 80/TCP
```
Hit the webserver:
```sh
curl <insert-ip-from-above-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
### A note on turning down your cluster
Many of these containers run under the management of the ```kubelet``` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
the cluster, you need to first kill the kubelet container, and then any other containers.
You may use ```docker ps -a -q | xargs docker kill```; note this kills _all_ containers running under Docker, so use with caution.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker.md?pixel)]()


@@ -0,0 +1,249 @@
Configuring kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
-------------------------------------------------------------------------------------------------------
Configuring kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Architecture of the cluster](#architecture-of-the-cluster)
- [Configuring ssh access to the cluster](#configuring-ssh-access-to-the-cluster)
- [Configuring the internal kubernetes network](#configuring-the-internal-kubernetes-network)
- [Setting up the cluster](#setting-up-the-cluster)
- [Testing and using your new cluster](#testing-and-using-your-new-cluster)
## Prerequisites
1. Host able to run ansible and able to clone the following repo: [kubernetes-ansible](https://github.com/eparis/kubernetes-ansible)
2. A Fedora 20+ or RHEL7 host to act as cluster master
3. As many Fedora 20+ or RHEL7 hosts as you would like, that act as cluster minions
The hosts can be virtual or bare metal. The only requirement to make the ansible network setup work is that all of the machines are connected via the same layer 2 network.
Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc... This example will use one master and two minions.
## Architecture of the cluster
A Kubernetes cluster requires etcd, a master, and n minions, so we will create a cluster with three hosts, for example:
```
fed1 (master,etcd) = 192.168.121.205
fed2 (minion) = 192.168.121.84
fed3 (minion) = 192.168.121.116
```
**Make sure your local machine**
- has ansible
- has git
**then we just clone down the kubernetes-ansible repository**
```
yum install -y ansible git
git clone https://github.com/eparis/kubernetes-ansible.git
cd kubernetes-ansible
```
**Tell ansible about each machine and its role in your cluster.**
Get the IP addresses from the master and minions. Add those to the `inventory` file (at the root of the repo) on the host running Ansible.
For now, we will set kube_ip_addr to 10.254.0.[1-3]; the reason we do this is explained later. It might work for you as a default.
```
[masters]
192.168.121.205
[etcd]
192.168.121.205
[minions]
192.168.121.84  kube_ip_addr=10.254.0.1
192.168.121.116 kube_ip_addr=10.254.0.2
```
**Setup ansible access to your nodes**
If you are already running on a machine which has passwordless ssh access to the fed[1-3] nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `group_vars/all.yml` to the username which you use to ssh to the nodes (e.g. `fedora`), and proceed to the next step...
*Otherwise* setup ssh on the machines like so (you will need to know the root password to all machines in the cluster).
edit: group_vars/all.yml
```
ansible_ssh_user: root
```
## Configuring ssh access to the cluster
If you already have ssh access to every machine using ssh public keys you may skip to [configuring the network](#configuring-the-network)
**Create a password file.**
The password file should contain the root password for every machine in the cluster. It will be used in order to lay down your ssh public key. Make sure each machine's sshd config allows password logins from root.
```
echo "password" > ~/rootpassword
```
**Agree to accept each machine's ssh public key**
After this is completed, ansible will be able to ssh into any of the machines you're configuring.
```
ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok
```
**Push your ssh public key to every machine**
Again, you can skip this step if your ansible machine has ssh access to the nodes you are going to use in the kubernetes cluster.
```
ansible-playbook -i inventory keys.yml
```
## Configuring the internal kubernetes network
If you already have configured your network and docker will use it correctly, skip to [setting up the cluster](#setting-up-the-cluster)
The ansible scripts are quite hacky when configuring the network; you can see the [README](https://github.com/eparis/kubernetes-ansible) for details, or you can simply enter variants of the 'kube_service_addresses' (in the all.yml file) as `kube_ip_addr` entries in the minions field, as shown in the next section.
**Configure the ip addresses which should be used to run pods on each machine**
The IP address pool used to assign addresses to pods for each minion is set with the `kube_ip_addr=` option. Choose a /24 to use for each minion and add that to your inventory file.
For this example, as shown earlier, we can do something like this...
```
[minions]
192.168.121.84 kube_ip_addr=10.254.0.1
192.168.121.116 kube_ip_addr=10.254.0.2
```
**Run the network setup playbook**
There are two ways to do this: via flannel, or using NetworkManager.
Flannel is a cleaner mechanism to use, and is the recommended choice.
- If you are using flannel, you should check the kubernetes-ansible repository above.
Currently, you essentially have to (1) update group_vars/all.yml, and then (2) run
```
ansible-playbook -i inventory flannel.yml
```
- On the other hand, if you are using the NetworkManager-based setup (i.e. you do not want to use flannel), make sure NetworkManager is installed and the "NetworkManager" service is running on EACH node, then run the network manager playbook...
```
ansible-playbook -i inventory ./old-network-config/hack-network.yml
```
## Setting up the cluster
**Configure the IP addresses used for services**
Each kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment. This must be done even if you do not use the network setup provided by the ansible scripts.
edit: group_vars/all.yml
```
kube_service_addresses: 10.254.0.0/16
```
**Tell ansible to get to work!**
This will finally setup your whole kubernetes cluster for you.
```
ansible-playbook -i inventory setup.yml
```
## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster.
**Show services running on masters and minions.**
```
systemctl | grep -i kube
```
**Show firewall rules on the masters and minions.**
```
iptables -nvL
```
**Create the following apache.json file and deploy pod to minion.**
```
cat << EOF > apache.json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "fedoraapache",
"labels": {
"name": "fedoraapache"
}
},
"spec": {
"containers": [
{
"name": "fedoraapache",
"image": "fedora/apache",
"ports": [
{
"hostPort": 80,
"containerPort": 80
}
]
}
]
}
}
EOF
/usr/bin/kubectl create -f apache.json
```
**Testing your new kube cluster**
**Check where the pod was created**
```
kubectl get pods
```
Important: Note that the IPs shown in the pods' IP fields are on the network which you configured via the kube_ip_addr entries.
In this example, that was the 10.254 network.
If you see 172 addresses in the IP fields, networking was not set up correctly; you may want to re-run the playbooks, or dive deeper into the way networking is being set up by looking at the details of the networking scripts used above.
**Check Docker status on minion.**
```
docker ps
docker images
```
**After the pod is 'Running' Check web server access on the minion**
```
curl http://localhost
```
That's it!
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/fedora_ansible_config.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/fedora/fedora_ansible_config.md?pixel)]()


@@ -0,0 +1,199 @@
Getting started on [Fedora](http://fedoraproject.org)
-----------------------------------------------------
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Instructions](#instructions)
## Prerequisites
1. You need 2 or more machines with Fedora installed.
## Instructions
This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
**System Information:**
Hosts:
```
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```
**Prepare the hosts:**
* Install kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.15.0 but should work with other versions too.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
```
yum -y install --enablerepo=updates-testing kubernetes
```
* Install etcd and iptables
```
yum -y install etcd iptables
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
```
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install.
```
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service_cluster_ip_range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
```
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of only 127.0.0.1; otherwise you will get "connection refused" errors:
```
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
* Start the appropriate services on master:
```
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
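As a quick sanity check (optional; this assumes the apiserver's insecure port is reachable from where you run it), ask the apiserver for its version:
```
curl http://fed-master:8080/version
```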
* Addition of nodes:
* Create the following node.json file on the kubernetes master node:
```json
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"name": "fed-node",
"labels":{ "name": "fed-node-label"}
},
"spec": {
"externalID": "fed-node"
}
}
```
Now create a node object internally in your kubernetes cluster by running:
```
$ kubectl create -f node.json
$ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown
```
Please note that in the above, it only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
reachable from kubernetes master node. This guide will discuss how to provision
a kubernetes node (fed-node) below.
**Configure the kubernetes services on the node.**
***We need to configure the kubelet on the node.***
* Edit /etc/kubernetes/kubelet to appear as such:
```
###
# kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=fed-node"
# location of the api-server
KUBELET_API_SERVER="--api_servers=http://fed-master:8080"
# Add your own!
#KUBELET_ARGS=""
```
* Start the appropriate services on the node (fed-node).
```
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Check to make sure now the cluster can see the fed-node on fed-master, and its status changes to _Ready_.
```
kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
```
* Deletion of nodes:
To delete _fed-node_ from your kubernetes cluster, run the following on fed-master (do not actually run it now; it is shown for reference only):
```
$ kubectl delete -f node.json
```
*You should be finished!*
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md)!
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/fedora_manual_config.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/fedora/fedora_manual_config.md?pixel)]()


@@ -0,0 +1,183 @@
Kubernetes multiple nodes cluster with flannel on Fedora
--------------------------------------------------------
**Table of Contents**
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Master Setup](#master-setup)
- [Node Setup](#node-setup)
- [**Test the cluster and flannel configuration**](#test-the-cluster-and-flannel-configuration)
## Introduction
This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow the Fedora [getting started guide](fedora_manual_config.md) to set up 1 master (fed-master) and 2 or more nodes (minions). Make sure that all nodes (minions) have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running the etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and that the nodes (minions) are running the docker, kube-proxy and kubelet services. Now install flannel on the kubernetes nodes (minions). flannel runs on each node and configures the overlay network that docker uses, setting up a unique class-C container subnet per node.
## Prerequisites
1. You need 2 or more machines with Fedora installed.
## Master Setup
**Perform following commands on the kubernetes master**
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose the kernel-based vxlan backend. The contents of the json are:
```
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}
```
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
* Add the configuration to the etcd server on fed-master.
```
# etcdctl set /coreos.com/network/config < flannel-config.json
```
* Verify the key exists in the etcd server on fed-master.
```
# etcdctl get /coreos.com/network/config
```
## Node Setup
**Perform following commands on all kubernetes nodes**
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
```
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://fed-master:4001"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=""
```
**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on command line.
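For example, to pin flannel to a hypothetical second interface named eth1:
```
FLANNEL_OPTIONS="-iface=eth1"
```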
* Enable the flannel service.
```
# systemctl enable flanneld
```
* If docker is not running, then starting the flannel service is enough; you can skip the next step.
```
# systemctl start flanneld
```
* If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
```
# systemctl stop docker
# ip link delete docker0
# systemctl start flanneld
# systemctl start docker
```
***
## Test the cluster and flannel configuration
* Now check the interfaces on the nodes. Notice that there is now a flannel.1 interface, and that the IP addresses of the docker0 and flannel.1 interfaces are in the same network. You will also notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
```
# ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
inet 18.16.29.0/16 scope global flannel.1
inet 18.16.29.1/24 scope global docker0
```
* From any node in the cluster, check the cluster members by issuing a query to the etcd server via curl (only partial output is shown, filtered with `grep -E "\{|\}|key|value"`). If you set up a cluster with 1 master and 3 nodes, you should see one block for each node showing the subnet it has been assigned. You can associate those subnets with each node via the MAC address (VtepMAC) and IP address (Public IP) listed in the output.
```
# curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
{
    "node": {
        "key": "/coreos.com/network/subnets",
        {
            "key": "/coreos.com/network/subnets/18.16.29.0-24",
            "value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}"
        },
        {
            "key": "/coreos.com/network/subnets/18.16.83.0-24",
            "value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}"
        },
        {
            "key": "/coreos.com/network/subnets/18.16.90.0-24",
            "value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}"
        }
    }
}
```
* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
```
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```
* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on the Kubernetes nodes. The next steps test cross-host container communication, which will confirm that docker and flannel are configured properly.
* Issue the following commands on any 2 nodes:
```
# docker run -it fedora:latest bash
bash-4.3#
```
* This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), you need to modify the capabilities of the ping binary to work around an "Operation not permitted" error.
```
bash-4.3# yum -y install iproute iputils
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
```
* Now note the IP address on the first node:
```
bash-4.3# ip -4 a l eth0 | grep inet
inet 18.16.29.4/24 scope global eth0
```
* And also note the IP address on the other node:
```
bash-4.3# ip a l eth0 | grep inet
inet 18.16.90.4/24 scope global eth0
```
* Now ping from the first node to the other node:
```
bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
```
* The Kubernetes multi-node cluster is now set up, with overlay networking provided by flannel.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md?pixel)]()

View File

@@ -0,0 +1,204 @@
Getting started on Google Compute Engine
----------------------------------------
**Table of Contents**
- [Before you start](#before-you-start)
- [Prerequisites](#prerequisites)
- [Starting a cluster](#starting-a-cluster)
- [Installing the kubernetes command line tools on your workstation](#installing-the-kubernetes-command-line-tools-on-your-workstation)
- [Getting started with your cluster](#getting-started-with-your-cluster)
- [Inspect your cluster](#inspect-your-cluster)
- [Run some examples](#run-some-examples)
- [Tearing down the cluster](#tearing-down-the-cluster)
- [Customizing](#customizing)
- [Troubleshooting](#troubleshooting)
- [Project settings](#project-settings)
- [Cluster initialization hang](#cluster-initialization-hang)
- [SSH](#ssh)
- [Networking](#networking)
The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
### Before you start
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) for hosted cluster installation and management.
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
### Prerequisites
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details.
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
1. Then, make sure you have the `gcloud preview` command line component installed. Run `gcloud preview` at the command line - if it asks to install any components, go ahead and install them. If it simply shows help text, you're good to go. This is required as the cluster setup script uses GCE [Instance Groups](https://cloud.google.com/compute/docs/instance-groups/), which are in the gcloud preview namespace. You will also need to **enable [`Compute Engine Instance Group Manager API`](https://developers.google.com/console/help/new/#activatingapis)** in the developers console.
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
1. Make sure you have credentials for GCloud by running `gcloud auth login`.
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/quickstart#create_an_instance) part of the GCE Quickstart.
1. Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/quickstart#ssh) part of the GCE Quickstart.
### Starting a cluster
You can install a client and start a cluster with this command:
```bash
curl -sS https://get.k8s.io | bash
```
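(The equivalent `wget` form, referenced in the teardown section below, is `wget -q -O - https://get.k8s.io | bash`.)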
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster. By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](../logging.md), while `heapster` provides [monitoring](../../cluster/addons/cluster-monitoring/README.md) services.
Alternately, if you prefer, you can download and install the latest Kubernetes release from [this page](https://github.com/GoogleCloudPlatform/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
```bash
cd kubernetes
cluster/kube-up.sh
```
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at #google-containers on freenode.
The next few steps will show you:
1. how to set up the command line client on your workstation to manage the cluster
1. examples of how to use the cluster
1. how to delete the cluster
1. how to start clusters with non-default options (like larger clusters)
### Installing the kubernetes command line tools on your workstation
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
The next step is to make sure the `kubectl` tool is in your path.
The [kubectl](../kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
You will use it to look at your new cluster and bring up example apps.
Add the appropriate binary folder to your `PATH` to access kubectl:
```bash
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
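You can then sanity-check which kubectl is being picked up and whether it can reach the cluster; for example:
```bash
which kubectl
kubectl version
```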
**Note**: gcloud also ships with `kubectl`, which by default is added to your path.
However, the kubectl version bundled with gcloud may be older than the one downloaded by the
get.k8s.io install script. We recommend you use the downloaded binary to avoid
potential issues with client/server version skew.
### Getting started with your cluster
#### Inspect your cluster
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
```shell
$ kubectl get services
```
should show a set of [services](../services.md) that look something like this:
```shell
NAME LABELS SELECTOR IP(S) PORT(S)
elasticsearch-logging k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch k8s-app=elasticsearch-logging 10.0.198.255 9200/TCP
kibana-logging k8s-app=kibana-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Kibana k8s-app=kibana-logging 10.0.56.44 5601/TCP
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
```
Similarly, you can take a look at the set of [pods](../pods.md) that were created during cluster startup.
You can do this via the
```shell
$ kubectl get pods
```
command.
You'll see a list of pods that looks something like this (the name specifics will be different):
```shell
NAME READY REASON RESTARTS AGE
elasticsearch-logging-v1-ab87r 1/1 Running 0 1m
elasticsearch-logging-v1-v9lqa 1/1 Running 0 1m
fluentd-elasticsearch-kubernetes-minion-419y 1/1 Running 0 12s
fluentd-elasticsearch-kubernetes-minion-k0xh 1/1 Running 0 1m
fluentd-elasticsearch-kubernetes-minion-oa8l 1/1 Running 0 1m
fluentd-elasticsearch-kubernetes-minion-xuj5 1/1 Running 0 1m
kibana-logging-v1-cx2p8 1/1 Running 0 1m
kube-dns-v3-pa3w9 3/3 Running 0 1m
monitoring-heapster-v1-m1xkz 1/1 Running 0 1m
```
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
#### Run some examples
Then, see [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../examples). The [guestbook example](../../examples/guestbook) is a good "getting started" walkthrough.
### Tearing down the cluster
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
```bash
cd kubernetes
cluster/kube-down.sh
```
Likewise, the `kube-up.sh` script in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.
### Customizing
The script above relies on Google Storage to stage the Kubernetes release. It
will then start (by default) a single master VM along with 4 worker VMs. You
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`, as sketched below.
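For example, to start a larger cluster you might override the node count before bringing it up (a sketch; `NUM_MINIONS` is the variable used by the GCE scripts in this release, so verify the name in `config-default.sh`):
```bash
export NUM_MINIONS=6
cluster/kube-up.sh
```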
You can view a transcript of a successful cluster creation
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
### Troubleshooting
#### Project settings
You need to have the Google Cloud Storage API, and the Google Cloud Storage
JSON API enabled. It is activated by default for new projects. Otherwise, it
can be done in the Google Cloud Console. See the [Google Cloud Storage JSON
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
details.
Also ensure that, as listed in the [Prerequisites section](#prerequisites), you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
#### Cluster initialization hang
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and minion VMs and looking at logs such as `/var/log/startupscript.log`.
**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.
#### SSH
If you're having trouble SSHing into your instances, ensure the GCE firewall
isn't blocking port 22 to your VMs. By default, this should work but if you
have edited firewall rules or created a new non-default network, you'll need to
expose it: `gcloud compute firewall-rules create --network=<network-name>
--description "SSH allowed from anywhere" --allow tcp:22 default-ssh`
Additionally, your GCE SSH key must either have no passcode or you need to be
using `ssh-agent`.
#### Networking
The instances must be able to connect to each other using their private IP. The
script uses the "default" network which should have a firewall rule called
"default-allow-internal" which allows traffic on any port on the private IPs.
If this rule is missing from the default network, or if you change the network
being used in `cluster/config-default.sh`, create a new rule with the following
field values (a gcloud equivalent is sketched after the list):
* Source Ranges: `10.0.0.0/8`
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
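A gcloud sketch of the equivalent rule (the rule name `default-allow-internal` is an assumption; any unused name works):
```bash
gcloud compute firewall-rules create default-allow-internal \
  --network=default \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```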
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/gce.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/gce.md?pixel)]()

View File

@@ -0,0 +1,239 @@
Getting started with Juju
-------------------------
Juju handles provisioning machines and deploying complex systems to a
wide range of clouds, supporting service orchestration once the bundle of
services has been deployed.
**Table of Contents**
- [Prerequisites](#prerequisites)
- [On Ubuntu](#on-ubuntu)
- [With Docker](#with-docker)
- [Launch Kubernetes cluster](#launch-kubernetes-cluster)
- [Exploring the cluster](#exploring-the-cluster)
- [Run some containers!](#run-some-containers)
- [Scale out cluster](#scale-out-cluster)
- [Launch the "k8petstore" example app](#launch-the-k8petstore-example-app)
- [Tear down cluster](#tear-down-cluster)
- [More Info](#more-info)
- [Cloud compatibility](#cloud-compatibility)
## Prerequisites
> Note: If you're running kube-up on Ubuntu, all of the dependencies
> will be handled for you. You may safely skip to the section:
> [Launch Kubernetes Cluster](#launch-kubernetes-cluster)
### On Ubuntu
[Install the Juju client](https://juju.ubuntu.com/install) on your
local ubuntu system:
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-core juju-quickstart
### With Docker
If you are not using ubuntu or prefer the isolation of docker, you may
run the following:
mkdir ~/.juju
sudo docker run -v ~/.juju:/home/ubuntu/.juju -ti whitmo/jujubox:latest
At this point, from either path, you will have access to the `juju quickstart`
command.
To set up the credentials for your chosen cloud run:
juju quickstart --constraints="mem=3.75G" -i
Follow the dialogue and choose `save` and `use`. Quickstart will now
bootstrap the juju root node and set up the juju web based user
interface.
## Launch Kubernetes cluster
You will need to have the Kubernetes tools compiled before launching the cluster:
make all WHAT=cmd/kubectl
export KUBERNETES_PROVIDER=juju
cluster/kube-up.sh
If this is your first time running the `kube-up.sh` script, it will install
the required dependencies to get started with Juju; additionally, it will
launch a curses based configuration utility allowing you to select your cloud
provider and enter the proper access credentials.
Next it will deploy the kubernetes master, etcd, and 2 minions with flannel-based
Software Defined Networking.
## Exploring the cluster
Juju status provides information about each unit in the cluster:
juju status --format=oneline
- docker/0: 52.4.92.78 (started)
- flannel-docker/0: 52.4.92.78 (started)
- kubernetes/0: 52.4.92.78 (started)
- docker/1: 52.6.104.142 (started)
- flannel-docker/1: 52.6.104.142 (started)
- kubernetes/1: 52.6.104.142 (started)
- etcd/0: 52.5.216.210 (started) 4001/tcp
- juju-gui/0: 52.5.205.174 (started) 80/tcp, 443/tcp
- kubernetes-master/0: 52.6.19.238 (started) 8080/tcp
You can use `juju ssh` to access any of the units:
juju ssh kubernetes-master/0
## Run some containers!
`kubectl` is available on the kubernetes master node. We'll ssh in to
launch some containers, but one could use kubectl locally setting
KUBERNETES_MASTER to point at the ip of `kubernetes-master/0`.
No pods will be available before starting a container:
kubectl get pods
POD CONTAINER(S) IMAGE(S) HOST LABELS STATUS
kubectl get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
```
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "hello",
    "labels": {
      "name": "hello",
      "environment": "testing"
    }
  },
  "spec": {
    "containers": [{
      "name": "hello",
      "image": "quay.io/kelseyhightower/hello",
      "ports": [{
        "containerPort": 80,
        "hostPort": 80
      }]
    }]
  }
}
```
Create the pod with kubectl:
kubectl create -f pod.json
Get info on the pod:
kubectl get pods
To test the hello app, we need to locate which minion is hosting
the container. Better tooling for using juju to introspect containers
is in the works, but we can use `juju run` and `juju status` to find
our hello app.
Exit out of our ssh session and run:
juju run --unit kubernetes/0 "docker ps -n=1"
...
juju run --unit kubernetes/1 "docker ps -n=1"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02beb61339d8 quay.io/kelseyhightower/hello:latest /hello About an hour ago Up About an hour k8s_hello....
We see that `kubernetes/1` has our container, so we can open port 80:
juju run --unit kubernetes/1 "open-port 80"
juju expose kubernetes
sudo apt-get install curl
curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3)
Finally delete the pod:
juju ssh kubernetes-master/0
kubectl delete pods hello
## Scale out cluster
We can add minion units like so:
    juju add-unit docker # creates units docker/2, flannel-docker/2, kubernetes/2
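Conversely, `juju remove-unit` scales the cluster back down (a sketch; unit names follow the pattern shown by `juju status`):

    juju remove-unit docker/2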
## Launch the "k8petstore" example app
The [k8petstore example](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/k8petstore) is available as a
[juju action](https://jujucharms.com/docs/devel/actions).
juju action do kubernetes-master/0
Note: this example includes curl statements to exercise the app, which automatically generate "petstore" transactions written to redis, and allow you to visualize the throughput in your browser.
## Tear down cluster
./kube-down.sh
or
juju destroy-environment --force `juju env`
## More Info
Kubernetes Bundle on Github
- [Bundle Repository](https://github.com/whitmo/bundle-kubernetes)
* [Kubernetes master charm](https://github.com/whitmo/charm-kubernetes-master)
* [Kubernetes minion charm](https://github.com/whitmo/charm-kubernetes)
- [Bundle Documentation](http://whitmo.github.io/bundle-kubernetes)
- [More about Juju](https://juju.ubuntu.com)
### Cloud compatibility
Juju runs natively against a variety of cloud providers and can be
made to work against many more using a generic manual provider.
Provider | v0.15.0
-------------- | -------
AWS | TBD
HPCloud | TBD
OpenStack | TBD
Joyent | TBD
Azure | TBD
Digital Ocean | TBD
MAAS (bare metal) | TBD
GCE | TBD
Provider | v0.8.1
-------------- | -------
AWS | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136)
HPCloud | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136)
OpenStack | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136)
Joyent | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136)
Azure | TBD
Digital Ocean | TBD
MAAS (bare metal) | TBD
GCE | TBD
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/juju.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/juju.md?pixel)]()

Binary file not shown.

After

Width:  |  Height:  |  Size: 51 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 31 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 180 KiB

View File

@@ -0,0 +1,274 @@
Getting started with libvirt CoreOS
-----------------------------------
**Table of Contents**
- [Highlights](#highlights)
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Interacting with your Kubernetes cluster with the `kube-*` scripts.](#interacting-with-your-kubernetes-cluster-with-the-kube--scripts)
- [Troubleshooting](#troubleshooting)
- [!!! Cannot find kubernetes-server-linux-amd64.tar.gz](#-cannot-find-kubernetes-server-linux-amd64targz)
- [Can't find virsh in PATH, please fix and retry.](#cant-find-virsh-in-path-please-fix-and-retry)
- [error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory](#error-failed-to-connect-socket-to-varrunlibvirtlibvirt-sock-no-such-file-or-directory)
- [error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied](#error-failed-to-connect-socket-to-varrunlibvirtlibvirt-sock-permission-denied)
- [error: Out of memory initializing network (virsh net-create...)](#error-out-of-memory-initializing-network-virsh-net-create)
### Highlights
* Super-fast cluster boot-up (a few seconds instead of several minutes for vagrant)
* Reduced disk usage thanks to [COW](https://en.wikibooks.org/wiki/QEMU/Images#Copy_on_write)
* Reduced memory footprint thanks to [KSM](https://www.kernel.org/doc/Documentation/vm/ksm.txt)
### Prerequisites
1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html)
2. Install [ebtables](http://ebtables.netfilter.org/)
3. Install [qemu](http://wiki.qemu.org/Main_Page)
4. Install [libvirt](http://libvirt.org/)
5. Enable and start the libvirt daemon, e.g:
* ``systemctl enable libvirtd``
* ``systemctl start libvirtd``
6. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html)
7. Check that your $HOME is accessible to the qemu user²
#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.
You can test it with the following command:
```
virsh -c qemu:///system pool-list
```
If you get access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html .
In short, if your libvirt has been compiled with Polkit support (ex: Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`
```
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```
If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
```
ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock
usermod -a -G libvirtd $USER
# $USER needs to logout/login to have the new group be taken into account
```
(Replace `$USER` with your login name)
#### ² Qemu will run with a specific user. It must have access to the VMs' drives
All the disk drive resources needed by the VM (CoreOS disk image, kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.
As we're using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool.
If your `$HOME` is world readable, everything is fine. If your `$HOME` is private, `cluster/kube-up.sh` will fail with an error message like:
```
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
```
In order to fix that issue, you have several possibilities:
* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory:
* backed by a filesystem with a lot of free disk space
* writable by your user;
* accessible by the qemu user.
* Grant the qemu user access to the storage pool.
On Arch:
```
setfacl -m g:kvm:--x ~
```
### Setup
By default, the libvirt-coreos setup will create a single kubernetes master and 3 kubernetes minions. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
To start your local cluster, open a shell and run:
```shell
cd kubernetes
export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
The `NUM_MINIONS` environment variable may be set to specify the number of minions to start. If it is not set, the number of minions defaults to 3.
The `KUBE_PUSH` environment variable may be set to specify which kubernetes binaries must be deployed on the cluster. Its possible values are:
* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`.
* `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.
You can check that your machines are there and running with:
```
virsh -c qemu:///system list
Id Name State
----------------------------------------------------
15 kubernetes_master running
16 kubernetes_minion-01 running
17 kubernetes_minion-02 running
18 kubernetes_minion-03 running
```
You can check that the kubernetes cluster is working with:
```
$ kubectl get nodes
NAME LABELS STATUS
192.168.10.2 <none> Ready
192.168.10.3 <none> Ready
192.168.10.4 <none> Ready
```
The VMs are running [CoreOS](https://coreos.com/).
Your ssh keys have already been pushed to the VMs (the setup looks for `~/.ssh/id_*.pub`).
The user to use to connect to the VM is `core`.
The IP to connect to the master is 192.168.10.1.
The IPs to connect to the minions are 192.168.10.2 and onwards.
Connect to `kubernetes_master`:
```
ssh core@192.168.10.1
```
Connect to `kubernetes_minion-01`:
```
ssh core@192.168.10.2
```
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:
```
export KUBERNETES_PROVIDER=libvirt-coreos
```
Bring up a libvirt-CoreOS cluster of 5 minions:
```
NUM_MINIONS=5 cluster/kube-up.sh
```
Destroy the libvirt-CoreOS cluster:
```
cluster/kube-down.sh
```
Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:
```
cluster/kube-push.sh
```
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
```
KUBE_PUSH=local cluster/kube-push.sh
```
Interact with the cluster:
```
kubectl ...
```
### Troubleshooting
#### !!! Cannot find kubernetes-server-linux-amd64.tar.gz
Build the release tarballs:
```
make release
```
#### Can't find virsh in PATH, please fix and retry.
Install libvirt
On Arch:
```
pacman -S qemu libvirt
```
On Ubuntu 14.04.1:
```
aptitude install qemu-system-x86 libvirt-bin
```
On Fedora 21:
```
yum install qemu libvirt
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
Start the libvirt daemon
On Arch:
```
systemctl start libvirtd
```
On Ubuntu 14.04.1:
```
service libvirt-bin start
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
Fix libvirt access permission (Remember to adapt `$USER`)
On Arch and Fedora 21:
```
cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```
On Ubuntu:
```
usermod -a -G libvirtd $USER
```
#### error: Out of memory initializing network (virsh net-create...)
Ensure libvirtd has been restarted since ebtables was installed.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/libvirt-coreos.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/libvirt-coreos.md?pixel)]()

View File

@@ -0,0 +1,137 @@
Getting started locally
-----------------------
**Table of Contents**
- [Requirements](#requirements)
- [Linux](#linux)
- [Docker](#docker)
- [etcd](#etcd)
- [go](#go)
- [Starting the cluster](#starting-the-cluster)
- [Running a container](#running-a-container)
- [Running a user defined pod](#running-a-user-defined-pod)
- [Troubleshooting](#troubleshooting)
- [I cannot reach service IPs on the network.](#i-cannot-reach-service-ips-on-the-network)
- [I cannot create a replication controller with replica size greater than 1! What gives?](#i-cannot-create-a-replication-controller-with-replica-size-greater-than-1--what-gives)
- [I changed Kubernetes code, how do I run it?](#i-changed-kubernetes-code-how-do-i-run-it)
- [kubectl claims to start a container but `get pods` and `docker ps` don't show it.](#kubectl-claims-to-start-a-container-but-get-pods-and-docker-ps-dont-show-it)
- [The pods fail to connect to the services by host names](#the-pods-fail-to-connect-to-the-services-by-host-names)
### Requirements
#### Linux
Not running Linux? Consider running Linux in a local virtual machine with [Vagrant](vagrant.md), or on a cloud provider like [Google Compute Engine](gce.md).
#### Docker
You need [Docker](https://docs.docker.com/installation/#installation) version
1.3 or later. Ensure the Docker daemon is running and can be contacted (try `docker
ps`). Some of the kubernetes components need to run as root, which normally
works fine with docker.
#### etcd
You need [etcd](https://github.com/coreos/etcd/releases) installed and available in your `$PATH`.
#### go
You need [go](https://golang.org/doc/install) version 1.3 or later installed and available in your `$PATH`.
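A quick way to sanity-check all three requirements (a sketch; older etcd builds may use `-version` instead of `--version`):
```
etcd --version
go version
sudo docker ps
```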
### Starting the cluster
In a separate tab of your terminal, run the following (since one needs sudo access to start/stop kubernetes daemons, it is easier to run the entire script as root):
```
cd kubernetes
hack/local-up-cluster.sh
```
This will build and start a lightweight local cluster, consisting of a master
and a single minion. Type Control-C to shut it down.
You can use the `cluster/kubectl.sh` script to interact with the local cluster. `hack/local-up-cluster.sh` will
print the commands to run to point kubectl at the local cluster.
### Running a container
Your cluster is running, and you want to start running containers!
You can now use any of the cluster/kubectl.sh commands to interact with your local setup.
```
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
## begin wait for provision to complete, you can monitor the docker pull by opening a new terminal
sudo docker images
## you should see it pulling the nginx image, once the above command returns it
sudo docker ps
## you should see your container running!
exit
## end wait
## introspect kubernetes!
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
```
### Running a user defined pod
Note the difference between a [container](http://docs.k8s.io/containers.md)
and a [pod](http://docs.k8s.io/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
However, you cannot view the nginx start page on localhost. To verify that nginx is running, you need to run `curl` within the docker container (try `docker exec`), as sketched below.
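For example (a sketch; the container ID is a placeholder, and curl may need to be installed inside the image first):
```
sudo docker ps | grep nginx
sudo docker exec <container-id> curl -s http://localhost
```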
You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
```
cluster/kubectl.sh create -f examples/pod.yaml
```
Congratulations!
### Troubleshooting
#### I cannot reach service IPs on the network.
Some firewall software that uses iptables may not interact well with
kubernetes. If you have trouble around networking, try disabling any
firewall or other iptables-using systems, first. Also, you can check
if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`.
By default the IP range for service cluster IPs is 10.0.*.*. Depending on your
docker installation, this may conflict with IPs for containers. If you find
containers running with IPs in this range, edit `hack/local-up-cluster.sh` and
change the service-cluster-ip-range flag to something else.
#### I cannot create a replication controller with replica size greater than 1! What gives?
You are running a single minion setup. This has the limitation of only supporting a single replica of a given pod. If you are interested in running with larger replica sizes, we encourage you to try the local vagrant setup or one of the cloud providers.
#### I changed Kubernetes code, how do I run it?
```
cd kubernetes
hack/build-go.sh
hack/local-up-cluster.sh
```
#### kubectl claims to start a container but `get pods` and `docker ps` don't show it.
One or more of the kubernetes daemons might've crashed. Tail the logs of each in /tmp.
#### The pods fail to connect to the services by host names
The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/issues/6667). You can start one manually. Related documents can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns#how-do-i-configure-it).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/locally.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/locally.md?pixel)]()

View File

@@ -0,0 +1,234 @@
# Cluster Level Logging with Elasticsearch and Kibana
On the GCE platform the default cluster level logging support targets
[Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](logging.md) getting
started page. Here we describe how to set up a cluster to ingest logs into Elasticsearch and view them using Kibana as an
alternative to Google Cloud Logging.
To use Elasticsearch and Kibana for cluster logging you should set the following environment variable as shown below:
```
KUBE_LOGGING_DESTINATION=elasticsearch
```
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
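For example, from the root of the Kubernetes tree, a cluster with Elasticsearch logging can be brought up with the two variables named above:
```
KUBE_ENABLE_NODE_LOGGING=true KUBE_LOGGING_DESTINATION=elasticsearch cluster/kube-up.sh
```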
Now when you create a cluster a message will indicate that the Fluentd node-level log collectors
will target Elasticsearch:
```
$ cluster/kube-up.sh
...
Project: kubernetes-satnam
Zone: us-central1-b
... calling kube-up
Project: kubernetes-satnam
Zone: us-central1-b
+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
Looking for already existing resources
Starting master and configuring firewalls
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd].
NAME ZONE SIZE_GB TYPE STATUS
kubernetes-master-pd us-central1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
+++ Logging using Fluentd to elasticsearch
```
The node-level Fluentd collector pods and the Elasticsearch pods used to ingest cluster logs, as well as the pod for the Kibana
viewer, should be running soon after the cluster comes to life.
```
$ kubectl get pods
NAME READY REASON RESTARTS AGE
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-minion-5oq0 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-minion-6896 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-minion-l1ds 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-minion-lz9j 1/1 Running 0 2h
kibana-logging-v1-bhpo8 1/1 Running 0 2h
kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
```
Here we see that for a four node cluster there is a `fluentd-elasticsearch` pod running on each node, gathering
the Docker container logs and sending them to Elasticsearch. The Fluentd collector communicates with
a Kubernetes service that maps requests to specific Elasticsearch pods. Similarly, Kibana can also be
accessed via a Kubernetes service definition.
```
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
elasticsearch-logging k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch k8s-app=elasticsearch-logging 10.0.222.57 9200/TCP
kibana-logging k8s-app=kibana-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Kibana k8s-app=kibana-logging 10.0.193.226 5601/TCP
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
53/TCP
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
monitoring-grafana kubernetes.io/cluster-service=true,kubernetes.io/name=Grafana k8s-app=influxGrafana 10.0.167.139 80/TCP
monitoring-heapster kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster k8s-app=heapster 10.0.208.221 80/TCP
monitoring-influxdb kubernetes.io/cluster-service=true,kubernetes.io/name=InfluxDB k8s-app=influxGrafana 10.0.188.57 8083/TCP
```
By default, two Elasticsearch replicas and one Kibana replica are created.
```
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
elasticsearch-logging-v1 elasticsearch-logging gcr.io/google_containers/elasticsearch:1.4 k8s-app=elasticsearch-logging,version=v1 2
kibana-logging-v1 kibana-logging gcr.io/google_containers/kibana:1.3 k8s-app=kibana-logging,version=v1 1
kube-dns-v3 etcd gcr.io/google_containers/etcd:2.0.9 k8s-app=kube-dns,version=v3 1
kube2sky gcr.io/google_containers/kube2sky:1.9
skydns gcr.io/google_containers/skydns:2015-03-11-001
monitoring-heapster-v4 heapster gcr.io/google_containers/heapster:v0.14.3 k8s-app=heapster,version=v4 1
monitoring-influx-grafana-v1 influxdb gcr.io/google_containers/heapster_influxdb:v0.3 k8s-app=influxGrafana,version=v1 1
grafana gcr.io/google_containers/heapster_grafana:v0.7
```
The Elasticsearch and Kibana services are not directly exposed via a publicly reachable IP address. Instead,
they can be accessed via the service proxy running at the master. The URLs for accessing Elasticsearch
and Kibana via the service proxy can be found using the `kubectl cluster-info` command.
```
$ kubectl cluster-info
Kubernetes master is running at https://146.148.94.154
Elasticsearch is running at https://146.148.94.154/api/v1/proxy/namespaces/default/services/elasticsearch-logging
Kibana is running at https://146.148.94.154/api/v1/proxy/namespaces/default/services/kibana-logging
KubeDNS is running at https://146.148.94.154/api/v1/proxy/namespaces/default/services/kube-dns
Grafana is running at https://146.148.94.154/api/v1/proxy/namespaces/default/services/monitoring-grafana
Heapster is running at https://146.148.94.154/api/v1/proxy/namespaces/default/services/monitoring-heapster
InfluxDB is running at https://146.148.94.154/api/v1/proxy/namespaces/default/services/monitoring-influxdb
```
Before accessing the logs ingested into Elasticsearch using a browser and the service proxy URL we need to find out
the `admin` password for the cluster using `kubectl config view`.
```
$ kubectl config view
...
- name: kubernetes-satnam_kubernetes-basic-auth
  user:
    password: 7GlspJ9Q43OnGIJO
    username: admin
...
```
The first time you try to access the cluster from a browser a dialog box appears asking for the username and password.
Use the username `admin` and provide the basic auth password reported by `kubectl config view` for the
cluster you are trying to connect to. Connecting to the Elasticsearch URL should then give the
status page for Elasticsearch.
![Elasticsearch Status](es-browser.png)
You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch
from your local machine using `curl` but first you need to know what your bearer token is:
```
$ kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://146.148.94.154
  name: kubernetes-satnam_kubernetes
contexts:
- context:
    cluster: kubernetes-satnam_kubernetes
    user: kubernetes-satnam_kubernetes
  name: kubernetes-satnam_kubernetes
current-context: kubernetes-satnam_kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-satnam_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
```
Now you can issue requests to Elasticsearch:
```
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/default/services/elasticsearch-logging/
{
  "status" : 200,
  "name" : "Vance Astrovik",
  "cluster_name" : "kubernetes-logging",
  "version" : {
    "number" : "1.5.2",
    "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
    "build_timestamp" : "2015-04-27T09:21:06Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```
Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
```
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?pretty=true
{
  "took" : 7,
  "timed_out" : false,
  "_shards" : {
    "total" : 6,
    "successful" : 6,
    "failed" : 0
  },
  "hits" : {
    "total" : 123711,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : ".kibana",
      "_type" : "config",
      "_id" : "4.0.2",
      "_score" : 1.0,
      "_source":{"buildNum":6004,"defaultIndex":"logstash-*"}
    }, {
      ...
      "_index" : "logstash-2015.06.22",
      "_type" : "fluentd",
      "_id" : "AU4c_GvFZL5p_gZ8dxtx",
      "_score" : 1.0,
      "_source":{"log":"synthetic-logger-10lps-pod: 31: 2015-06-22 20:35:33.597918073+00:00\n","stream":"stdout","tag":"kubernetes.synthetic-logger-10lps-pod_default_synth-lgr","@timestamp":"2015-06-22T20:35:33+00:00"}
    }, {
      "_index" : "logstash-2015.06.22",
      "_type" : "fluentd",
      "_id" : "AU4c_GvFZL5p_gZ8dxt2",
      "_score" : 1.0,
      "_source":{"log":"synthetic-logger-10lps-pod: 36: 2015-06-22 20:35:34.108780133+00:00\n","stream":"stdout","tag":"kubernetes.synthetic-logger-10lps-pod_default_synth-lgr","@timestamp":"2015-06-22T20:35:34+00:00"}
    } ]
  }
}
```
The Elasticsearch website contains information about [URI search queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html) which can be used to extract the required logs.
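For example, a sketch of a URI search that matches only stdout lines and returns at most five hits, reusing the bearer token and master IP from above:
```
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure 'https://146.148.94.154/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?q=stream:stdout&size=5&pretty=true'
```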
Alternatively you can view the ingested logs using Kibana. The first time you visit the Kibana URL you will be
presented with a page that asks you to configure your view of the ingested logs. Select the option for
time series values and select `@timestamp`. On the following page select the `Discover` tab and you
should be able to see the ingested logs. You can set the refresh interval to 5 seconds to have the logs
regularly refreshed. Here is a typical view of ingested logs from the Kibana viewer.
![Kibana logs](kibana-logs.png)
Another way to access Elasticsearch and Kibana in the cluster is to use `kubectl proxy` which will serve
a local proxy to the remote master:
```
$ kubectl proxy
Starting to serve on localhost:8001
```
Now you can visit the URL [http://localhost:8001/api/v1/proxy/namespaces/default/services/elasticsearch-logging](http://localhost:8001/api/v1/proxy/namespaces/default/services/elasticsearch-logging) to contact Elasticsearch and [http://localhost:8001/api/v1/proxy/namespaces/default/services/kibana-logging](http://localhost:8001/api/v1/proxy/namespaces/default/services/kibana-logging) to access the Kibana viewer.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/logging-elasticsearch.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/logging-elasticsearch.md?pixel)]()

View File

@@ -0,0 +1,199 @@
# Cluster Level Logging to Google Cloud Logging
A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application which is composed of many pods which may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes **cluster level logging** services.
Cluster level logging for Kubernetes allows us to collect logs which persist beyond the lifetime of the pod's container images, the lifetime of the pod, or even the cluster. In this article we assume that a Kubernetes cluster has been created with cluster level logging support for sending logs to Google Cloud Logging. After a cluster has been created, you will have a collection of system pods running that support monitoring, logging and DNS resolution for names of Kubernetes services:
```
$ kubectl get pods
NAME READY REASON RESTARTS AGE
fluentd-cloud-logging-kubernetes-minion-0f64 1/1 Running 0 32m
fluentd-cloud-logging-kubernetes-minion-27gf 1/1 Running 0 32m
fluentd-cloud-logging-kubernetes-minion-pk22 1/1 Running 0 31m
fluentd-cloud-logging-kubernetes-minion-20ej 1/1 Running 0 31m
kube-dns-v3-pk22 3/3 Running 0 32m
monitoring-heapster-v1-20ej 0/1 Running 9 32m
```
Here is the same information in a picture which shows how the pods might be placed on specific nodes.
![Cluster](/examples/blog-logging/diagrams/cloud-logging.png)
This diagram shows four nodes created on a GCE cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
[cluster DNS service](/docs/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
To help explain how cluster level logging works, let's start off with a synthetic log generator pod specification [counter-pod.yaml](/examples/blog-logging/counter-pod.yaml):
```
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    args: [bash, -c,
           'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```
This pod specification has one container which runs a bash script when the container starts. This script simply writes out the value of a counter and the date once per second, and runs indefinitely. Let's create the pod.
```
$ kubectl create -f counter-pod.yaml
pods/counter
```
We can observe the running pod:
```
$ kubectl get pods
NAME READY REASON RESTARTS AGE
counter 1/1 Running 0 5m
fluentd-cloud-logging-kubernetes-minion-0f64 1/1 Running 0 55m
fluentd-cloud-logging-kubernetes-minion-27gf 1/1 Running 0 55m
fluentd-cloud-logging-kubernetes-minion-pk22 1/1 Running 0 55m
fluentd-cloud-logging-kubernetes-minion-20ej 1/1 Running 0 55m
kube-dns-v3-pk22 3/3 Running 0 55m
monitoring-heapster-v1-20ej 0/1 Running 9 56m
```
This step may take a few minutes to download the ubuntu:14.04 image during which the pod status will be shown as `Pending`.
One of the nodes is now running the counter pod:
![Counter Pod](/examples/blog-logging/diagrams/27gf-counter.png)
When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod.
```
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
2: Tue Jun 2 21:37:33 UTC 2015
3: Tue Jun 2 21:37:34 UTC 2015
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
```
This command fetches the log text from the Docker log file for the image that is running in this container. We can connect to the running container and observe the running counter bash script.
```
$ kubectl exec -i counter bash
ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 17976 2888 ? Ss 00:02 0:00 bash -c for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done
root 468 0.0 0.0 17968 2904 ? Ss 00:05 0:00 bash
root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
```
What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's stop the currently running counter.
```
$ kubectl stop pod counter
pods/counter
```
Now let's restart the counter.
```
$ kubectl create -f counter-pod.yaml
pods/counter
```
Let's wait for the container to restart and get the log lines again.
```
$ kubectl logs counter
0: Tue Jun 2 21:51:40 UTC 2015
1: Tue Jun 2 21:51:41 UTC 2015
2: Tue Jun 2 21:51:42 UTC 2015
3: Tue Jun 2 21:51:43 UTC 2015
4: Tue Jun 2 21:51:44 UTC 2015
5: Tue Jun 2 21:51:45 UTC 2015
6: Tue Jun 2 21:51:46 UTC 2015
7: Tue Jun 2 21:51:47 UTC 2015
8: Tue Jun 2 21:51:48 UTC 2015
```
We've lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve all the log lines from each invocation of each container in the pod. Furthermore, even if the pod is restarted we would still like to preserve all the log lines that were ever emitted by the containers in the pod. But don't fear, this is the functionality provided by cluster level logging in Kubernetes. When a cluster is created, the standard output and standard error output of each container can be ingested, using a [Fluentd](http://www.fluentd.org/) agent running on each node, into either [Google Cloud Logging](https://cloud.google.com/logging/docs/) or Elasticsearch, and viewed with Kibana.
When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called `fluentd-cloud-logging` on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.
This log collection pod has a specification which looks something like this [fluentd-gcp.yaml](/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
```
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-cloud-logging
spec:
  containers:
  - name: fluentd-cloud-logging
    image: gcr.io/google_containers/fluentd-gcp:1.6
    env:
    - name: FLUENTD_ARGS
      value: -qq
    volumeMounts:
    - name: containers
      mountPath: /var/lib/docker/containers
  volumes:
  - name: containers
    hostPath:
      path: /var/lib/docker/containers
```
This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:
![Cloud Logging Console](cloud-logging-console.png)
When we view the logs in the Developer Console we observe the logs for both invocations of the container.
![Both Logs](all-lines.png)
Note that the first container counted to 108 and was then terminated. When the next container image restarted, the counting process resumed from 0. Similarly, if we deleted the pod and restarted it we would capture the logs for all instances of the containers in the pod whenever the pod was running.
Logs ingested into Google Cloud Logging may be exported to various other destinations including [Google Cloud Storage](https://cloud.google.com/storage/) buckets and [BigQuery](https://cloud.google.com/bigquery/). Use the Exports tab in the Cloud Logging console to specify where logs should be streamed to. You can also follow this link to the
[settings tab](https://pantheon.corp.google.com/project/_/logs/settings).
We could query the ingested logs from BigQuery using the SQL query which reports the counter log lines showing the newest lines first:
```
SELECT metadata.timestamp, structPayload.log
FROM [mylogs.kubernetes_counter_default_count_20150611]
ORDER BY metadata.timestamp DESC
```
Here is some sample output:
![BigQuery](bigquery-logging.png)
We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a GCE project called `myproject`. Only logs for the date 2015-06-11 are fetched.
```
$ gsutil -m cp -r gs://myproject/kubernetes.counter_default_count/2015/06/11 .
```
Now we can run queries over the ingested logs. The example below uses the [jq](http://stedolan.github.io/jq/) program to extract just the log lines.
```
$ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
"0: Thu Jun 11 21:39:38 UTC 2015\n"
"1: Thu Jun 11 21:39:39 UTC 2015\n"
"2: Thu Jun 11 21:39:40 UTC 2015\n"
"3: Thu Jun 11 21:39:41 UTC 2015\n"
"4: Thu Jun 11 21:39:42 UTC 2015\n"
"5: Thu Jun 11 21:39:43 UTC 2015\n"
"6: Thu Jun 11 21:39:44 UTC 2015\n"
"7: Thu Jun 11 21:39:45 UTC 2015\n"
...
```
This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files, one can use a sidecar container to gather the required files, as described at the page [Collecting log files within containers with Fluentd](/contrib/logging/fluentd-sidecar-gcp/README.md), and send them to the Google Cloud Logging service.
Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/logging.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/logging.md?pixel)]()

View File

@@ -0,0 +1,324 @@
Getting started with Kubernetes on Mesos
----------------------------------------
**Table of Contents**
- [About Kubernetes on Mesos](#about-kubernetes-on-mesos)
- [Prerequisites](#prerequisites)
- [Deploy Kubernetes-Mesos](#deploy-kubernetes-mesos)
- [Deploy etcd](#deploy-etcd)
- [Start Kubernetes-Mesos Services](#start-kubernetes-mesos-services)
- [Validate KM Services](#validate-km-services)
- [Spin up a pod](#spin-up-a-pod)
- [Run the Example Guestbook App](#run-the-example-guestbook-app)
- [Test Guestbook App](#test-guestbook-app)
## About Kubernetes on Mesos
<!-- TODO: Update, clean up. -->
Mesos allows dynamic sharing of cluster resources between Kubernetes and other first-class Mesos frameworks such as [Hadoop][1], [Spark][2], and [Chronos][3].
Mesos also ensures applications from different frameworks running on your cluster are isolated and that resources are allocated fairly.
Running Kubernetes on Mesos allows you to easily move Kubernetes workloads from one cloud provider to another, or to your own physical datacenter.
This tutorial will walk you through setting up Kubernetes on a Mesos cluster.
It provides a step-by-step walkthrough of adding Kubernetes to a Mesos cluster and running the classic GuestBook demo application.
The walkthrough presented here is based on the v0.4.x series of the Kubernetes-Mesos project, which itself is based on Kubernetes v0.11.0.
**NOTE:** There are [known issues with the current implementation][11].
Please [file an issue against the kubernetes-mesos project][12] if you have problems completing the steps below.
### Prerequisites
* Understanding of [Apache Mesos][10]
* Mesos cluster on [Google Compute Engine][5]
* A VPN connection to the cluster.
### Deploy Kubernetes-Mesos
Log into the master node over SSH, replacing the placeholder below with the correct IP address.
```bash
ssh jclouds@${ip_address_of_master_node}
```
Build Kubernetes-Mesos.
```bash
$ git clone https://github.com/mesosphere/kubernetes-mesos k8sm
$ mkdir -p bin && sudo docker run --rm -v $(pwd)/bin:/target \
-v $(pwd)/k8sm:/snapshot -e GIT_BRANCH=release-0.4 \
mesosphere/kubernetes-mesos:build
```
Set some environment variables.
The internal IP address of the master may be obtained via `hostname -i`.
```bash
$ export servicehost=$(hostname -i)
$ export mesos_master=${servicehost}:5050
$ export KUBERNETES_MASTER=http://${servicehost}:8888
```
### Deploy etcd
Start etcd and verify that it is running:
```bash
$ sudo docker run -d --hostname $(uname -n) --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd
```
```bash
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd7bac9e2301 coreos/etcd:latest "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
```
It's also a good idea to ensure your etcd instance is reachable by testing it:
```bash
curl -L http://$servicehost:4001/v2/keys/
```
If connectivity is OK, you will see an output of the available keys in etcd (if any).
### Start Kubernetes-Mesos Services
Start the kubernetes-mesos API server, controller manager, and scheduler on a Mesos master node:
```bash
$ ./bin/km apiserver \
--address=${servicehost} \
--mesos_master=${mesos_master} \
--etcd_servers=http://${servicehost}:4001 \
--service-cluster-ip-range=10.10.10.0/24 \
--port=8888 \
--cloud_provider=mesos \
--v=1 >apiserver.log 2>&1 &
$ ./bin/km controller-manager \
--master=$servicehost:8888 \
--mesos_master=${mesos_master} \
--v=1 >controller.log 2>&1 &
$ ./bin/km scheduler \
--address=${servicehost} \
--mesos_master=${mesos_master} \
--etcd_servers=http://${servicehost}:4001 \
--mesos_user=root \
--api_servers=$servicehost:8888 \
--v=2 >scheduler.log 2>&1 &
```
Also on the master node, we'll start up a proxy instance to act as a
public-facing service router, for testing the web interface a little
later on.
```bash
$ sudo ./bin/km proxy \
--bind_address=${servicehost} \
--etcd_servers=http://${servicehost}:4001 \
--logtostderr=true >proxy.log 2>&1 &
```
Disown your background jobs so that they'll stay running if you log out.
```bash
$ disown -a
```
#### Validate KM Services
Interact with the kubernetes-mesos framework via `kubectl`:
```bash
$ bin/kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
```
```bash
$ bin/kubectl get services # your service IPs will likely differ
NAME LABELS SELECTOR IP PORT
kubernetes component=apiserver,provider=kubernetes <none> 10.10.10.2 443
```
Lastly, use the Mesos CLI tool to validate that the Kubernetes scheduler framework has registered and is running:
```bash
$ mesos state | grep "Kubernetes"
"name": "Kubernetes",
```
Or, look for Kubernetes in the Mesos web GUI by pointing your browser to
`http://${mesos_master}`. Make sure you have an active VPN connection.
Go to the Frameworks tab, and look for an active framework named "Kubernetes".
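If you prefer the command line, you can also inspect the master's state endpoint directly. A sketch, assuming `jq` is installed and that your Mesos version exposes the `/master/state.json` endpoint:
```bash
$ curl -s http://${mesos_master}/master/state.json | jq '.frameworks[].name'
"Kubernetes"
```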
## Spin up a pod
Write a JSON pod description to a local file:
```bash
$ cat <<EOPOD >nginx.json
{ "kind": "Pod",
"apiVersion": "v1beta1",
"id": "nginx-id-01",
"desiredState": {
"manifest": {
"version": "v1beta1",
"containers": [{
"name": "nginx-01",
"image": "nginx",
"ports": [{
"containerPort": 80,
"hostPort": 31000
}],
"livenessProbe": {
"enabled": true,
"type": "http",
"initialDelaySeconds": 30,
"httpGet": {
"path": "/index.html",
"port": "8081"
}
}
}]
}
},
"labels": {
"name": "foo"
} }
EOPOD
```
Send the pod description to Kubernetes using the `kubectl` CLI:
```bash
$ bin/kubectl create -f nginx.json
nginx-id-01
```
Wait a minute or two while `dockerd` downloads the image layers from the internet.
We can use the `kubectl` interface to monitor the status of our pod:
```bash
$ bin/kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
nginx-id-01 172.17.5.27 nginx-01 nginx 10.72.72.178/10.72.72.178 cluster=gce,name=foo Running
```
Verify that the pod task is running in the Mesos web GUI. Click on the
Kubernetes framework. The next screen should show the running Mesos task that
started the Kubernetes pod.
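Since the pod description maps container port 80 to host port 31000, you can also hit nginx directly on the node it landed on, assuming that node is reachable from where you are logged in. A quick check, using the host address reported by `kubectl get pods` above (yours will differ):
```bash
$ curl -s http://10.72.72.178:31000/ | grep title
<title>Welcome to nginx!</title>
```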
## Run the Example Guestbook App
Following the instructions from the kubernetes-mesos [examples/guestbook][6]:
```bash
$ export ex=k8sm/examples/guestbook
$ bin/kubectl create -f $ex/redis-master.json
$ bin/kubectl create -f $ex/redis-master-service.json
$ bin/kubectl create -f $ex/redis-slave-controller.json
$ bin/kubectl create -f $ex/redis-slave-service.json
$ bin/kubectl create -f $ex/frontend-controller.json
$ cat <<EOS >/tmp/frontend-service
{
"id": "frontend",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 9998,
"selector": {
"name": "frontend"
},
"publicIPs": [
"${servicehost}"
]
}
EOS
$ bin/kubectl create -f /tmp/frontend-service
```
Watch your pods transition from `Pending` to `Running`:
```bash
$ watch 'bin/kubectl get pods'
```
Review your Mesos cluster's tasks:
```bash
$ mesos ps
TIME STATE RSS CPU %MEM COMMAND USER ID
0:00:05 R 41.25 MB 0.5 64.45 none root 0597e78b-d826-11e4-9162-42010acb46e2
0:00:08 R 41.58 MB 0.5 64.97 none root 0595b321-d826-11e4-9162-42010acb46e2
0:00:10 R 41.93 MB 0.75 65.51 none root ff8fff87-d825-11e4-9162-42010acb46e2
0:00:10 R 41.93 MB 0.75 65.51 none root 0597fa32-d826-11e4-9162-42010acb46e2
0:00:05 R 41.25 MB 0.5 64.45 none root ff8e01f9-d825-11e4-9162-42010acb46e2
0:00:10 R 41.93 MB 0.75 65.51 none root fa1da063-d825-11e4-9162-42010acb46e2
0:00:08 R 41.58 MB 0.5 64.97 none root b9b2e0b2-d825-11e4-9162-42010acb46e2
```
The number of Kubernetes pods listed earlier (from `bin/kubectl get pods`) should equal the number of active Mesos tasks in the previous listing (`mesos ps`).
Next, determine the internal IP address of the front end [service][7]:
```bash
$ bin/kubectl get services
NAME LABELS SELECTOR IP PORT
kubernetes component=apiserver,provider=kubernetes <none> 10.10.10.2 443
redismaster <none> name=redis-master 10.10.10.49 10000
redisslave name=redisslave name=redisslave 10.10.10.109 10001
frontend <none> name=frontend 10.10.10.149 9998
```
Interact with the frontend application via curl using the front-end service IP address from above:
```bash
$ curl http://${frontend_service_ip_address}:9998/index.php?cmd=get\&key=messages
{"data": ""}
```
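You can also write an entry through the same HTTP interface and read it back. A sketch, assuming the guestbook frontend accepts the usual `cmd=set` form of the request:
```bash
$ curl http://${frontend_service_ip_address}:9998/index.php?cmd=set\&key=messages\&value=hello
```
Re-running the `cmd=get` request above should then return the stored value.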
Or via the Redis CLI:
```bash
$ sudo apt-get install redis-tools
$ redis-cli -h ${redis_master_service_ip_address} -p 10000
10.233.254.108:10000> dump messages
"\x00\x06,world\x06\x00\xc9\x82\x8eHj\xe5\xd1\x12"
```
#### Test Guestbook App
Alternatively, interact with the frontend application via your browser, in two steps:
First, open the firewall on the master machine.
```bash
# determine the internal port for the frontend service
$ sudo iptables-save|grep -e frontend # -- port 36336 in this case
-A KUBE-PORTALS-CONTAINER -d 10.10.10.149/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336
-A KUBE-PORTALS-CONTAINER -d 10.22.183.23/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336
-A KUBE-PORTALS-HOST -d 10.10.10.149/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336
-A KUBE-PORTALS-HOST -d 10.22.183.23/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336
# open up access to the internal port for the frontend service
$ sudo iptables -A INPUT -i eth0 -p tcp -m state --state NEW,ESTABLISHED -m tcp \
--dport ${internal_frontend_service_port} -j ACCEPT
```
Next, add a firewall rule in the Google Cloud Platform Console. Choose Compute >
Compute Engine > Networks, click on the name of your mesosphere-* network, then
click "New firewall rule" and allow access to TCP port 9998.
![Google Cloud Platform firewall configuration][8]
Now, you can visit the guestbook in your browser!
![Kubernetes Guestbook app running on Mesos][9]
[1]: http://mesosphere.com/docs/tutorials/run-hadoop-on-mesos-using-installer
[2]: http://mesosphere.com/docs/tutorials/run-spark-on-mesos
[3]: http://mesosphere.com/docs/tutorials/run-chronos-on-mesos
[4]: http://cloud.google.com
[5]: https://cloud.google.com/compute/
[6]: https://github.com/mesosphere/kubernetes-mesos/tree/v0.4.0/examples/guestbook
[7]: https://github.com/GoogleCloudPlatform/kubernetes/blob/v0.11.0/docs/services.md#ips-and-vips
[8]: mesos/k8s-firewall.png
[9]: mesos/k8s-guestbook.png
[10]: http://mesos.apache.org/
[11]: https://github.com/mesosphere/kubernetes-mesos/blob/master/docs/issues.md
[12]: https://github.com/mesosphere/kubernetes-mesos/issues
@@ -0,0 +1,60 @@
Getting started on oVirt
------------------------
**Table of Contents**
- [What is oVirt](#what-is-ovirt)
- [oVirt Cloud Provider Deployment](#ovirt-cloud-provider-deployment)
- [Using the oVirt Cloud Provider](#using-the-ovirt-cloud-provider)
- [oVirt Cloud Provider Screencast](#ovirt-cloud-provider-screencast)
## What is oVirt
oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.
## oVirt Cloud Provider Deployment
The oVirt cloud provider allows you to easily discover and automatically add new VM instances to your kubernetes cluster as nodes.
At the moment there are no community-supported or pre-loaded VM images including kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes kubernetes may work as well.
It is mandatory to [install the ovirt-guest-agent] in the guests so that the VM IP address and hostname are reported to ovirt-engine and, ultimately, to kubernetes.
Once the kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
[import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
[install]: http://www.ovirt.org/Quick_Start_Guide#Create_Virtual_Machines
[generate a template]: http://www.ovirt.org/Quick_Start_Guide#Using_Templates
[install the ovirt-guest-agent]: http://www.ovirt.org/How_to_install_the_guest_agent_in_Fedora
## Using the oVirt Cloud Provider
The oVirt Cloud Provider requires access to the oVirt REST API to gather the proper information; the required credentials should be specified in the `ovirt-cloud.conf` file:
```
[connection]
uri = https://localhost:8443/ovirt-engine/api
username = admin@internal
password = admin
```
In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to kubernetes:
```
[filters]
# Search query used to find nodes
vms = tag=kubernetes
```
In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to kubernetes.
The `ovirt-cloud.conf` file must then be passed to the kube-controller-manager:
```
kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
```
## oVirt Cloud Provider Screencast
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your kubernetes cluster.
[![Screencast](http://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](http://www.youtube.com/watch?v=JyyST4ZKne8)
@@ -0,0 +1,71 @@
Getting started on Rackspace
----------------------------
**Table of Contents**
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Provider: Rackspace](#provider-rackspace)
- [Build](#build)
- [Cluster](#cluster)
- [Some notes:](#some-notes)
- [Network Design](#network-design)
## Introduction
* Supported Version: v0.18.1
In general, the `dev-build-and-up.sh` workflow for Rackspace is similar to the GCE workflow. The specific implementation differs due to the use of CoreOS, Rackspace Cloud Files and the overall network design.
These scripts should be used to deploy development environments for Kubernetes. If your account leverages RackConnect or non-standard networking, these scripts will most likely not work without modification.
NOTE: The rackspace scripts do NOT rely on `saltstack` and instead rely on cloud-init for configuration.
The current cluster design is inspired by:
- [corekube](https://github.com/metral/corekube/)
- [Angus Lees](https://github.com/anguslees/kube-openstack/)
## Prerequisites
1. Python 2.7
2. You need to have both `nova` and `swiftly` installed. It's recommended to use a python virtualenv to install these packages into.
3. Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova.html) for more details.
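For example, the OpenStack clients typically read credentials from the environment. A hypothetical set of variables is sketched below; the exact names and values depend on your account and client versions, so consult the linked Rackspace documentation:
```sh
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export OS_USERNAME=your_rackspace_username
export OS_PASSWORD=your_api_key_or_password
export OS_TENANT_NAME=your_account_number
export OS_REGION_NAME=DFW
```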
## Provider: Rackspace
- To build your own released version from source, use `export KUBERNETES_PROVIDER=rackspace` and run `bash hack/dev-build-and-up.sh`.
- Note: The get.k8s.io install method is not working yet for our scripts.
  * To install the latest released version of kubernetes, use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash`
## Build
1. The kubernetes binaries will be built via the common build scripts in `build/`.
2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files.
3. A Cloud Files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object.
4. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to the master/minion nodes when booted.
## Cluster
There is a specific `cluster/rackspace` directory with the scripts for the following steps:
1. A cloud network will be created and all instances will be attached to this network.
- flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network.
2. An SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password).
3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems.
4. We then boot as many nodes as defined via `$NUM_MINIONS`.
## Some notes:
- The scripts expect `eth2` to be the cloud network that the containers will communicate across.
- A number of the items in `config-default.sh` are overridable via environment variables.
- For older versions please either:
* Sync back to `v0.9` with `git checkout v0.9`
* Download a [snapshot of `v0.9`](https://github.com/GoogleCloudPlatform/kubernetes/archive/v0.9.tar.gz)
* Sync back to `v0.3` with `git checkout v0.3`
* Download a [snapshot of `v0.3`](https://github.com/GoogleCloudPlatform/kubernetes/archive/v0.3.tar.gz)
## Network Design
- eth0 - Public Interface used for servers/containers to reach the internet
- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services.
- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface.
@@ -0,0 +1,95 @@
# Run Kubernetes with rkt
This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as a container runtime.
We still have [a bunch of work](https://github.com/GoogleCloudPlatform/kubernetes/issues/8262) to do to make the experience with rkt wonderful; please stay tuned!
### Prerequisites
- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed on your machine and should be enabled. The minimum version required at this moment (2015/05/28) is [215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
*(Note that systemd is not required by rkt itself, we are using it here to monitor and manage the pods launched by kubelet.)*
- Install the latest rkt release according to the instructions [here](https://github.com/coreos/rkt).
The minimum version required for now is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).
- Make sure the `rkt metadata service` is running, because it is necessary for running pods in private network mode.
More details about the networking of rkt can be found in the [documentation](https://github.com/coreos/rkt/blob/master/Documentation/networking.md).
To start the `rkt metadata service`, you can simply run:
```shell
$ sudo rkt metadata-service
```
If you want the service to be running as a systemd service, then:
```shell
$ sudo systemd-run rkt metadata-service
```
Alternatively, you can use the [rkt-metadata.service](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.service) and [rkt-metadata.socket](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.socket) to start the service.
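For instance, a minimal sketch of installing those two units, assuming you have downloaded them into the current directory:
```shell
$ sudo cp rkt-metadata.service rkt-metadata.socket /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable rkt-metadata.socket
$ sudo systemctl start rkt-metadata.socket
```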
### Local cluster
To use rkt as the container runtime, you just need to set the environment variable `CONTAINER_RUNTIME`:
```shell
$ export CONTAINER_RUNTIME=rkt
$ hack/local-up-cluster.sh
```
### CoreOS cluster on GCE
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, and image:
```shell
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=<image_id>
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.5.6
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
Note that we are still working on making all of the containerized master components run smoothly in rkt. Until that work is done, the master node cannot be run with rkt yet.
### CoreOS cluster on AWS
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
```shell
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.5.6
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
```shell
$ export COREOS_CHANNEL=stable
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
### Getting started with your cluster
See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../examples).
@@ -0,0 +1,191 @@
Kubernetes Deployment On Bare-metal Ubuntu Nodes
------------------------------------------------
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Starting a Cluster](#starting-a-cluster)
- [Make *kubernetes*, *etcd* and *flanneld* binaries](#make-kubernetes-etcd-and-flanneld-binaries)
- [Configure and start the kubernetes cluster](#configure-and-start-the-kubernetes-cluster)
- [Deploy addons](#deploy-addons)
- [Troubleshooting](#troubleshooting)
## Introduction
This document describes how to deploy kubernetes on ubuntu nodes, with 1 master node and 3 minion nodes; people using this approach can scale to **any number of minion nodes** by changing a few settings. The original idea was heavily inspired by @jainvipin's ubuntu single-node work, which has since been merged into this document.
[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
## Prerequisites
1. The minion nodes have Docker version 1.2+ installed, along with bridge-utils to manipulate the linux bridge.
2. All machines can communicate with each other; no Internet connection is needed (use a private docker registry in that case).
3. This guide has been tested on Ubuntu 14.04 LTS 64bit server, but it should also work on most Ubuntu versions.
4. Dependencies of this guide: etcd-2.0.9, flannel-0.4.0, k8s-0.18.0, but it may work with higher versions.
5. All the remote servers can be logged into via ssh without a password, using key authentication.
### Starting a Cluster
#### Make *kubernetes*, *etcd* and *flanneld* binaries
First clone the kubernetes github repo, `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git`
then `$ cd kubernetes/cluster/ubuntu`.
Then run `$ ./build.sh`. This will download all the needed binaries into `./binaries`.
You can customize the etcd, flannel and k8s versions by changing the variables `ETCD_VERSION`, `FLANNEL_VERSION` and `K8S_VERSION` in build.sh; the defaults are etcd 2.0.9, flannel 0.4.0 and k8s 0.18.0.
Please make sure that there are `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `kubelet`, `kube-proxy`, `etcd`, `etcdctl` and `flannel` in the binaries/master or binaries/minion directory.
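As a quick sanity check, you can, for example, list the two directories produced by the build script above:
```
$ ls binaries/master binaries/minion
```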
> We use flannel here because we want an overlay network, but remember that it is not the only choice, nor is it a required dependency of k8s. You can build up a k8s cluster natively, or use flannel, Open vSwitch or any other SDN tool you like; we just chose flannel here as an example.
#### Configure and start the kubernetes cluster
An example cluster is listed as below:
| IP Address|Role |
|---------|------|
|10.10.103.223| minion |
|10.10.103.162| minion |
|10.10.103.250| both master and minion|
First configure the cluster information in cluster/ubuntu/config-default.sh; a simple sample is shown below.
```
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
export roles=("ai" "i" "i")
export NUM_MINIONS=${NUM_MINIONS:-3}
export SERVICE_CLUSTER_IP_RANGE=11.1.1.0/24
export FLANNEL_NET=172.16.0.0/16
```
The first variable `nodes` defines all your cluster nodes, with the MASTER node first, separated by blank spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3>`.
Then the `roles` variable defines the role of each machine above, in the same order: "ai" stands for a machine that acts as both master and minion, "a" stands for master, and "i" stands for minion. Together these define the k8s cluster exactly as the table above describes.
The `NUM_MINIONS` variable defines the total number of minions.
The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure the private IP range you define here is valid, because some IaaS providers reserve private IPs. You can use one of the three RFC 1918 private network ranges below; also avoid a range that conflicts with your own private network.
```
10.0.0.0    - 10.255.255.255  (10/8 prefix)
172.16.0.0  - 172.31.255.255  (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
```
The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network; it should not conflict with the `SERVICE_CLUSTER_IP_RANGE` above.
After all the above variables are set correctly, we can use the command below in the cluster/ directory to bring up the whole cluster.
`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh`
The script automatically copies (via scp) binaries and config files to all the machines and starts the k8s services on them. The only thing you need to do is type the sudo password when prompted. The name of the machine currently being provisioned is shown, as below, so you will not type the password at the wrong prompt.
```
Deploying minion on machine 10.10.103.223
...
[sudo] password to copy files and start minion:
```
If everything goes right, you will see the message below on the console:
`Cluster validation succeeded`, indicating that k8s is up.
**All done !**
You can also use the `kubectl` command to check whether the newly created k8s cluster is working correctly. The `kubectl` binary is under the `cluster/ubuntu/binaries` directory; you can move it into your PATH and then use the commands below.
For example, use `$ kubectl get nodes` to see whether all your minion nodes are in Ready status. It may take some time for the minions to become ready, as below.
```
NAME LABELS STATUS
10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready
10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
```
You can also run the kubernetes [guestbook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook) to build a redis-backed application on the cluster.
#### Deploy addons
With the previous parts done, you have a working k8s cluster; this part shows how to deploy add-ons such as DNS onto the existing cluster.
DNS is configured in cluster/ubuntu/config-default.sh.
```
ENABLE_CLUSTER_DNS=true
DNS_SERVER_IP="192.168.3.10"
DNS_DOMAIN="kubernetes.local"
DNS_REPLICAS=1
```
`DNS_SERVER_IP` defines the IP of the DNS server, which must lie within the service cluster IP range. `DNS_REPLICAS` describes how many DNS pods run in the cluster.
After the above variables have been set, just type the commands below:
```
$ cd cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
```
After some time, you can use `$ kubectl get pods` to see the DNS pods running in the cluster. Done!
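To spot-check that resolution actually works, you can, for example, query the DNS server directly from any node. This is a sketch assuming the sample settings above; the exact record names depend on the kube-dns version in use:
```
$ nslookup kubernetes.default.kubernetes.local 192.168.3.10
```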
#### Troubleshooting
Generally, what this approach did is quite simple:
1. Download and copy binaries and configuration files to proper directories on every node
2. Configure `etcd` using IPs based on input from user
3. Create and start flannel network
So, if you see a problem, **check etcd configuration first**
Please try:
1. Check `/var/log/upstart/etcd.log` for suspicious etcd log entries
2. Check `/etc/default/etcd`; since we do not do much input validation, a correct config should look like:
```
ETCD_OPTS="-name infra1 -initial-advertise-peer-urls http://ip_of_this_node:2380 -listen-peer-urls http://ip_of_this_node:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=http://ip_of_this_node:2380,infra2=http://ip_of_another_node:2380,infra3=http://ip_of_another_node:2380 -initial-cluster-state new"
```
3. You can use the command
`$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh` to bring down the cluster, and run
`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh` to start it again.
4. You can also customize your own settings in `/etc/default/{component_name}` after a successful configuration.
@@ -0,0 +1,337 @@
## Getting started with Vagrant
Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Interacting with your Kubernetes cluster with Vagrant.](#interacting-with-your-kubernetes-cluster-with-vagrant)
- [Authenticating with your master](#authenticating-with-your-master)
- [Running containers](#running-containers)
- [Troubleshooting](#troubleshooting)
- [I keep downloading the same (large) box all the time!](#i-keep-downloading-the-same-large-box-all-the-time)
- [I just created the cluster, but I am getting authorization errors!](#i-just-created-the-cluster-but-i-am-getting-authorization-errors)
- [I just created the cluster, but I do not see my container running!](#i-just-created-the-cluster-but-i-do-not-see-my-container-running)
- [I want to make changes to Kubernetes code!](#i-want-to-make-changes-to-kubernetes-code)
- [I have brought Vagrant up but the nodes cannot validate!](#i-have-brought-vagrant-up-but-the-nodes-cannot-validate)
- [I want to change the number of nodes!](#i-want-to-change-the-number-of-nodes)
- [I want my VMs to have more memory!](#i-want-my-vms-to-have-more-memory)
- [I ran vagrant suspend and nothing works!](#i-ran-vagrant-suspend-and-nothing-works)
### Prerequisites
1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html
2. Install one of:
1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
5. libvirt with KVM, with hardware virtualization support enabled. See [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). Fedora provides an official RPM, so you can install it with `yum install vagrant-libvirt`.
### Setup
Setting up a cluster is as simple as running:
```sh
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
```sh
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
```sh
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
By default, each VM in the cluster is running Fedora.
To access the master or any minion:
```sh
vagrant ssh master
vagrant ssh minion-1
```
If you are running more than one minion, you can access the others by:
```sh
vagrant ssh minion-2
vagrant ssh minion-3
```
Each node in the cluster installs the docker daemon and the kubelet.
The master node instantiates the Kubernetes master components as pods on the machine.
To view the service status and/or logs on the kubernetes-master:
```sh
vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo su
[root@kubernetes-master ~] $ systemctl status kubelet
[root@kubernetes-master ~] $ journalctl -ru kubelet
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
```
To view the services on any of the kubernetes-minion(s):
```sh
vagrant ssh minion-1
[vagrant@kubernetes-master ~] $ sudo su
[root@kubernetes-master ~] $ systemctl status kubelet
[root@kubernetes-master ~] $ journalctl -ru kubelet
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
```
### Interacting with your Kubernetes cluster with Vagrant.
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
To push updates to new Kubernetes code after making source changes:
```sh
./cluster/kube-push.sh
```
To stop and then restart the cluster:
```sh
vagrant halt
./cluster/kube-up.sh
```
To destroy the cluster:
```sh
vagrant destroy
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
You may need to build the binaries first; you can do this with `make`.
```sh
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.1.4 <none>
10.245.1.5 <none>
10.245.1.3 <none>
```
### Authenticating with your master
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
```sh
cat ~/.kubernetes_vagrant_auth
{ "User": "vagrant",
"Password": "vagrant",
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
```
You should now be set to use the `cluster/kubectl.sh` script. For example, try to list the nodes that you have started:
```sh
./cluster/kubectl.sh get nodes
```
### Running containers
Your cluster is running; you can list the nodes in it:
```sh
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.2.4 <none>
10.245.2.3 <none>
10.245.2.2 <none>
```
Now start running some containers!
You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines.
Before starting a container, there will be no pods, services, or replication controllers.
```sh
$ ./cluster/kubectl.sh get pods
NAME IMAGE(S) HOST LABELS STATUS
$ ./cluster/kubectl.sh get services
NAME LABELS SELECTOR IP PORT
$ ./cluster/kubectl.sh get replicationcontrollers
NAME IMAGE(S SELECTOR REPLICAS
```
Start a container running nginx with a replication controller and three replicas:
```sh
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
```
When listing the pods, you will see that three containers have been started and are in Waiting state:
```sh
$ ./cluster/kubectl.sh get pods
NAME IMAGE(S) HOST LABELS STATUS
781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting
78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Waiting
```
You need to wait for the provisioning to complete; you can monitor the nodes by running:
```sh
$ vagrant ssh minion-1 -c 'sudo docker images'
kubernetes-minion-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
```
Once the docker image for nginx has been downloaded, the container will start and you can list it:
```sh
$ vagrant ssh minion-1 -c 'sudo docker ps'
kubernetes-minion-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor" 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
```
Going back to listing the pods, services and replicationcontrollers, you now have:
```sh
$ ./cluster/kubectl.sh get pods
NAME IMAGE(S) HOST LABELS STATUS
781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Running
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running
$ ./cluster/kubectl.sh get services
NAME LABELS SELECTOR IP PORT
$ ./cluster/kubectl.sh get replicationcontrollers
NAME IMAGE(S SELECTOR REPLICAS
myNginx nginx name=my-nginx 3
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
You can already play with scaling the replicas with:
```sh
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME IMAGE(S) HOST LABELS STATUS
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running
```
Congratulations!
### Troubleshooting
#### I keep downloading the same (large) box all the time!
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`
```sh
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
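For example, after downloading the box once, you could point at the local copy (the box name and path here are hypothetical):
```sh
export KUBERNETES_BOX_NAME=kube-fedora
export KUBERNETES_BOX_URL=file://$HOME/boxes/kube-fedora.box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```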
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect `~/.kubernetes_vagrant_auth` file for the cluster you are attempting to contact. Remove it:
```sh
rm ~/.kubernetes_vagrant_auth
```
After using kubectl.sh, make sure that the correct credentials are set:
```sh
cat ~/.kubernetes_vagrant_auth
{
"User": "vagrant",
"Password": "vagrant"
}
```
#### I just created the cluster, but I do not see my container running!
If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
#### I want to make changes to Kubernetes code!
To set up a vagrant cluster for hacking, follow the [vagrant developer guide](../devel/developer-guides/vagrant.md).
#### I have brought Vagrant up but the nodes cannot validate!
Log on to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
#### I want to change the number of nodes!
You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this by setting `NUM_MINIONS` to 1 like so:
```sh
export NUM_MINIONS=1
```
#### I want my VMs to have more memory!
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:
```sh
export KUBERNETES_MEMORY=2048
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
```sh
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_MINION_MEMORY=2048
```
#### I ran vagrant suspend and nothing works!
`vagrant suspend` seems to mess up the network. This is not supported at this time.
@@ -0,0 +1,94 @@
Getting started with vSphere
-------------------------------
The example below creates a Kubernetes cluster with 4 worker node Virtual
Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This
cluster is set up and controlled from your workstation (or wherever you find
convenient).
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Starting a cluster](#starting-a-cluster)
- [Extra: debugging deployment failure](#extra-debugging-deployment-failure)
### Prerequisites
1. You need administrator credentials to an ESXi machine or vCenter instance.
2. You must have Go (version 1.2 or later) installed: [www.golang.org](http://www.golang.org).
3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.
```sh
export GOPATH=$HOME/src/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
```
4. Install the govc tool to interact with ESXi/vCenter:
```sh
go get github.com/vmware/govmomi/govc
```
5. Get or build a [binary release](binary_release.md)
### Setup
Download a prebuilt Debian 7.7 VMDK that we'll use as a base image:
```sh
curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2014-11-11/kube.vmdk.gz{,.md5}
md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
```
Import this VMDK into your vSphere datastore:
```sh
export GOVC_URL='user:pass@hostname'
export GOVC_INSECURE=1 # If the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
govc import.vmdk kube.vmdk ./kube/
```
Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
```sh
govc datastore.ls ./kube/
```
Take a look at the file `cluster/vsphere/config-common.sh` and fill in the required
parameters. The guest login for the image that you imported is `kube:kube`.
### Starting a cluster
Now, let's continue with deploying Kubernetes.
This process takes about 10 minutes.
```sh
cd kubernetes # Extracted binary release OR repository root
export KUBERNETES_PROVIDER=vsphere
cluster/kube-up.sh
```
Refer to the top level README and the getting started guide for Google Compute
Engine. Once you have successfully reached this point, your vSphere Kubernetes
deployment works just as any other one!
**Enjoy!**
### Extra: debugging deployment failure
The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You
can log into any VM as the `kube` user to poke around and figure out what is
going on (you should find yourself authorized with your SSH key; otherwise, use
the password `kube`).