Merge branch 'master' of github.com:GoogleCloudPlatform/kubernetes into add-charms
@@ -9,8 +9,8 @@ If you are considering contributing a new guide, please read the

IaaS Provider | Config. Mgmt | OS | Networking | Docs | Support Level | Notes
-------------- | ------------ | ------ | ---------- | ---------------------------------------------------- | ---------------------------- | -----
-GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | Commercial | Uses K8s version 0.14.1
-GCE | Saltstack | Debian | GCE | [docs](../../docs/getting-started-guides/gce.md) | Project | Tested with 0.14.1 by @brendandburns
+GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | Commercial | Uses K8s version 0.15.0
+GCE | Saltstack | Debian | GCE | [docs](../../docs/getting-started-guides/gce.md) | Project | Tested with 0.15.0 by @robertbailey
Mesos/GCE | | | | [docs](../../docs/getting-started-guides/mesos.md) | [Community](https://github.com/mesosphere/kubernetes-mesos) ([@jdef](https://github.com/jdef)) | Uses K8s v0.11.0
Vagrant | Saltstack | Fedora | OVS | [docs](../../docs/getting-started-guides/vagrant.md) | Project |
Bare-metal | custom | Fedora | _none_ | [docs](../../docs/getting-started-guides/fedora/fedora_manual_config.md) | Project | Uses K8s v0.13.2
@@ -29,13 +29,12 @@ Docker Single Node | custom | N/A | local | [docs](docker.
Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | Project (@brendandburns) | Tested @ 0.14.1 |
Local | | | _none_ | [docs](../../docs/getting-started-guides/locally.md) | Community (@preillyme) |
Ovirt | | | | [docs](../../docs/getting-started-guides/ovirt.md) | Inactive (@simon3z) |
Rackspace | CoreOS | CoreOS | Rackspace | [docs](../../docs/getting-started-guides/rackspace.md) | Inactive (@doubleerr) |
Bare-metal | custom | CentOS | _none_ | [docs](../../docs/getting-started-guides/centos/centos_manual_config.md) | Community (@coolsvap) | Uses K8s v0.9.1
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](../../docs/getting-started-guides/libvirt-coreos.md) | Community (@lhuard1A) |
AWS | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
Joyent | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
-Azure | CoreOS | CoreOS | Weave | [docs](../../docs/getting-started-guides/coreos/azure/README.md) | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon)) | Uses K8s version 0.11.0
+Azure | CoreOS | CoreOS | Weave | [docs](../../docs/getting-started-guides/coreos/azure/README.md) | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin)) | Uses K8s version 0.15.0
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos/bare_metal_offline.md) | Community ([@jeffbean](https://github.com/jeffbean)) | K8s v0.10.1

Definition of columns:
@@ -19,10 +19,12 @@ or if you prefer ```curl```

export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
```

-NOTE: The script will provision a new VPC and a 4 node k8s cluster in us-west-2 (Oregon). It'll also try to create or
-reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion". If these
-already exist, make sure you want them to be used here.
+NOTE: This script calls [cluster/kube-up.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/kube-up.sh)
+which in turn calls [cluster/aws/util.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/util.sh)
+using [cluster/aws/config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh).
+By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2 (Oregon). It'll also try to create or reuse
+a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion". If these already exist, make
+sure you want them to be used here. You can override the variables defined in config-default.sh to change this behavior.
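To make that concrete, here is a minimal sketch of overriding a couple of defaults before invoking the installer. The variable names (`NUM_MINIONS`, `MINION_SIZE`) are assumptions about what cluster/aws/config-default.sh exposes at this version; verify them against the file before relying on this.

```sh
# Hypothetical overrides: confirm the names in cluster/aws/config-default.sh.
export NUM_MINIONS=2          # assumed knob: number of nodes (default 4)
export MINION_SIZE=t2.micro   # assumed knob: node instance type
export KUBERNETES_PROVIDER=aws
curl -sS https://get.k8s.io | bash
```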
Once the cluster is up, it will print the IP address of your cluster; this process takes about 5 to 10 minutes.
@@ -134,3 +136,6 @@ Take a look at [next steps](https://github.com/GoogleCloudPlatform/kubernetes/tr

+### Cloud Formation [optional]
+There is a contributed [example](aws-coreos.md) from [CoreOS](http://www.coreos.com) using Cloud Formation.
+
### Further reading
Please see the [Kubernetes docs](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs) for more details on administering and using a Kubernetes cluster.
@@ -19,7 +19,7 @@ coreos:
      content: |
        [Service]
        ExecStartPre=/bin/bash -c "until curl http://<master-private-ip>:4001/v2/machines; do sleep 2; done"
-        ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
+        ExecStartPre=/usr/bin/etcdctl -C <master-private-ip>:4001 set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
    - name: docker.service
      command: start
      drop-ins:
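After the unit runs, a quick way to confirm the network config landed in etcd is to read the key back. This is a sketch under the same assumptions as the drop-in above (etcdctl on the host, `<master-private-ip>` substituted with the real address):

```sh
# Should print the JSON written by the ExecStartPre line above.
etcdctl -C <master-private-ip>:4001 get /coreos.com/network/config
```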
@@ -68,7 +68,7 @@ kubectl create -f frontend-controller.json
kubectl create -f frontend-service.json
```

-You need to wait for the pods to get deployed, run the following and wait for `STATUS` to change from `Unknown`, through `Pending` to `Runnig`.
+You need to wait for the pods to get deployed. Run the following and wait for `STATUS` to change from `Unknown`, through `Pending`, to `Running`.
```
kubectl get pods --watch
```
@@ -2,16 +2,48 @@
## More specifically, we need to add peer hosts for each but the elected peer.

coreos:
-  etcd:
-    name: etcd
-    addr: $private_ipv4:4001
-    bind-addr: 0.0.0.0
-    peer-addr: $private_ipv4:7001
-    snapshot: true
-    max-retry-attempts: 50
  units:
-    - name: etcd.service
+    - name: download-etcd2.service
+      enable: true
+      command: start
+      content: |
+        [Unit]
+        After=network-online.target
+        Before=etcd2.service
+        Description=Download etcd2 Binaries
+        Documentation=https://github.com/coreos/etcd/
+        Requires=network-online.target
+        [Service]
+        Environment=ETCD2_RELEASE_TARBALL=https://github.com/coreos/etcd/releases/download/v2.0.9/etcd-v2.0.9-linux-amd64.tar.gz
+        ExecStartPre=/bin/mkdir -p /opt/bin
+        ExecStart=/bin/bash -c "curl --silent --location $ETCD2_RELEASE_TARBALL | tar xzv -C /opt"
+        ExecStartPost=/bin/ln -s /opt/etcd-v2.0.9-linux-amd64/etcd /opt/bin/etcd2
+        ExecStartPost=/bin/ln -s /opt/etcd-v2.0.9-linux-amd64/etcdctl /opt/bin/etcdctl2
+        RemainAfterExit=yes
+        Type=oneshot
+        [Install]
+        WantedBy=multi-user.target
+    - name: etcd2.service
+      enable: true
+      command: start
+      content: |
+        [Unit]
+        After=download-etcd2.service
+        Description=etcd 2
+        Documentation=https://github.com/coreos/etcd/
+        [Service]
+        Environment=ETCD_NAME=%host%
+        Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
+        Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=http://%host%:2380
+        Environment=ETCD_LISTEN_PEER_URLS=http://%host%:2380
+        Environment=ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001
+        Environment=ETCD_INITIAL_CLUSTER=%cluster%
+        Environment=ETCD_INITIAL_CLUSTER_STATE=new
+        ExecStart=/opt/bin/etcd2
+        Restart=always
+        RestartSec=10
+        [Install]
+        WantedBy=multi-user.target
  update:
    group: stable
    reboot-strategy: off
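Once these units are active on every node, a hedged sanity check is to ask etcd itself about cluster membership, using the `etcdctl2` symlink created by download-etcd2.service above:

```sh
/opt/bin/etcdctl2 cluster-health   # expect one "healthy" line per member
/opt/bin/etcdctl2 member list      # peer URLs should match the %cluster% expansion
```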
@@ -18,9 +18,37 @@ write_files:
      printf '{ "id": "%s", "kind": "Minion", "apiVersion": "v1beta1", "labels": { "environment": "production" } }' "${minion_id}" \
        | /opt/bin/kubectl create -s "${master_url}" -f -

+  - path: /etc/kubernetes/manifests/fluentd.manifest
+    permissions: '0755'
+    owner: root
+    content: |
+      version: v1beta2
+      id: fluentd-to-elasticsearch
+      containers:
+        - name: fluentd-es
+          image: gcr.io/google_containers/fluentd-elasticsearch:1.3
+          env:
+            - name: FLUENTD_ARGS
+              value: -qq
+          volumeMounts:
+            - name: containers
+              mountPath: /var/lib/docker/containers
+            - name: varlog
+              mountPath: /varlog
+      volumes:
+        - name: containers
+          source:
+            hostDir:
+              path: /var/lib/docker/containers
+        - name: varlog
+          source:
+            hostDir:
+              path: /var/log

coreos:
  update:
    group: stable
    reboot-strategy: off
  units:
    - name: docker.service
      drop-ins:
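The kubelet should pick this manifest up as a static pod shortly after boot. A minimal sketch of a check, assuming `/opt/bin/kubectl` and the master URL used earlier in this cloud-config:

```sh
# ${master_url} as defined earlier in this cloud-config, e.g. http://kube-00:8080.
/opt/bin/kubectl -s "${master_url}" get pods | grep fluentd
```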
@@ -187,7 +215,7 @@ coreos:
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=network-online.target
        [Service]
-        Environment=KUBE_RELEASE_TARBALL=https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.11.0/kubernetes.tar.gz
+        Environment=KUBE_RELEASE_TARBALL=https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.15.0/kubernetes.tar.gz
        ExecStartPre=/bin/mkdir -p /opt/
        ExecStart=/bin/bash -c "curl --silent --location $KUBE_RELEASE_TARBALL | tar xzv -C /tmp/"
        ExecStart=/bin/tar xzvf /tmp/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /opt
@@ -278,12 +306,16 @@ coreos:
        Wants=download-kubernetes.service
        ConditionHost=!kube-00
        [Service]
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/
        ExecStart=/opt/kubernetes/server/bin/kubelet \
          --address=0.0.0.0 \
          --port=10250 \
          --hostname_override=%H \
          --api_servers=http://kube-00:8080 \
-          --logtostderr=true
+          --logtostderr=true \
+          --cluster_dns=10.1.0.3 \
+          --cluster_domain=kube.local \
+          --config=/etc/kubernetes/manifests/
        Restart=always
        RestartSec=10
        [Install]
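With `--cluster_dns` and `--cluster_domain` set, service names should resolve inside the cluster. A hedged check from any minion, assuming a DNS addon is actually serving at 10.1.0.3 and using the `<service>.<namespace>.<domain>` name form of this era:

```sh
# Resolve the apiserver's service VIP through the cluster DNS configured above.
nslookup kubernetes.default.kube.local 10.1.0.3
```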
@@ -13,9 +13,9 @@ var inspect = require('util').inspect;
var util = require('./util.js');

var coreos_image_ids = {
-  'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-607.0.0',
-  'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-612.1.0', // untested
-  'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-626.0.0', // untested
+  'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-633.1.0',
+  'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-647.0.0', // untested
+  'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-647.0.0' // untested
};

var conf = {};
@@ -140,7 +140,9 @@ var create_ssh_conf = function () {
};

var get_location = function () {
-  if (process.env['AZ_LOCATION']) {
+  if (process.env['AZ_AFFINITY']) {
+    return '--affinity-group=' + process.env['AZ_AFFINITY'];
+  } else if (process.env['AZ_LOCATION']) {
    return '--location=' + process.env['AZ_LOCATION'];
  } else {
    return '--location=West Europe';
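In practice this means the deployment scripts can be steered with either variable, and `AZ_AFFINITY` wins when both are set. A sketch (the entry-point script name is assumed, not confirmed by this diff):

```sh
export AZ_AFFINITY=my-affinity-group   # hypothetical group name; takes precedence
# export AZ_LOCATION="West Europe"     # consulted only when AZ_AFFINITY is unset
./create-kubernetes-cluster.js         # assumed entry point for this guide
```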
@@ -1,22 +1,29 @@
var _ = require('underscore');
_.mixin(require('underscore.string').exports());

var util = require('../util.js');
var cloud_config = require('../cloud_config.js');


exports.create_etcd_cloud_config = function (node_count, conf) {
  var elected_node = 0;

  var input_file = './cloud_config_templates/kubernetes-cluster-etcd-node-template.yml';

+  var peers = [ ];
+  for (var i = 0; i < node_count; i++) {
+    peers.push(util.hostname(i, 'etcd') + '=http://' + util.hostname(i, 'etcd') + ':2380');
+  }
+  var cluster = peers.join(',');

  return _(node_count).times(function (n) {
    var output_file = util.join_output_file_path('kubernetes-cluster-etcd-node-' + n, 'generated.yml');

    return cloud_config.process_template(input_file, output_file, function(data) {
-      if (n !== elected_node) {
-        data.coreos.etcd.peers = [
-          util.hostname(elected_node, 'etcd'), 7001
-        ].join(':');
-      }
+      for (var i = 0; i < data.coreos.units.length; i++) {
+        var unit = data.coreos.units[i];
+        if (unit.name === 'etcd2.service') {
+          unit.content = _.replaceAll(_.replaceAll(unit.content, '%host%', util.hostname(n, 'etcd')), '%cluster%', cluster);
+          break;
+        }
+      }
      return data;
    });
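For `node_count = 3`, the `%cluster%` placeholder in etcd2.service would expand to a static bootstrap list along these lines (hostnames assumed to come out of `util.hostname` as etcd-00, etcd-01, ...):

```sh
# Illustration only: the generated ETCD_INITIAL_CLUSTER value for three nodes.
ETCD_INITIAL_CLUSTER=etcd-00=http://etcd-00:2380,etcd-01=http://etcd-01:2380,etcd-02=http://etcd-02:2380
```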
@@ -108,20 +108,20 @@ systemctl start docker
OK, now that your networking is set up, you can start Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.

```sh
-sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi
+sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi
```

### Also run the service proxy
```sh
-sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
+sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```

### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!

Download the kubectl binary
-([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.14.2/bin/darwin/amd64/kubectl))
-([linux](http://storage.googleapis.com/kubernetes-release/release/v0.14.2/bin/linux/amd64/kubectl))
+([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/darwin/amd64/kubectl))
+([linux](http://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl))

List the nodes
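The hunk ends before the command itself; as a hedged sketch, listing nodes with the kubectl verb used elsewhere in these guides:

```sh
# Assumes the kubectl binary downloaded above is executable and on your PATH.
kubectl get minions
```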
@@ -93,14 +93,14 @@ systemctl start docker
Again this is similar to the above, but the ```--api_servers``` now points to the master we set up in the beginning.

```sh
-sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i)
+sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i)
```

#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services```.

```sh
-sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
+sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
```
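Once the worker's kubelet and proxy are up, a hedged way to confirm the node registered is to query the master's API from anywhere kubectl can reach it (the `-s` flag appears elsewhere in this commit for the same purpose):

```sh
kubectl -s http://${MASTER_IP}:8080 get minions   # the new worker should be listed
```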
@@ -12,7 +12,7 @@ docker run --net=host -d kubernetes/etcd:2.0.5.1 /usr/local/bin/etcd --addr=127.

### Step Two: Run the master
```sh
-docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
+docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
```

This actually runs the kubelet, which in turn runs a [pod](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md) that contains the other master components.
@@ -20,14 +20,14 @@ This actually runs the kubelet, which in turn runs a [pod](https://github.com/Go
### Step Three: Run the service proxy
*Note, this could be combined with the master above, but it requires --privileged for iptables manipulation*
```sh
-docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
+docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```

### Test it out
At this point you should have a running Kubernetes cluster. You can test this by downloading the kubectl
binary
-([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.14.2/bin/darwin/amd64/kubectl))
-([linux](http://storage.googleapis.com/kubernetes-release/release/v0.14.2/bin/linux/amd64/kubectl))
+([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/darwin/amd64/kubectl))
+([linux](http://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl))

*Note:*
On OS X you will need to set up port forwarding via ssh:
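The hunk cuts off at the note; here is a sketch of that forwarding as it was typically done with boot2docker at the time (the exact command is assumed, not shown in this diff):

```sh
# Forward local port 8080 into the Docker VM so kubectl can reach the apiserver.
boot2docker ssh -L8080:localhost:8080
```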
@@ -47,7 +47,7 @@ $ export KUBERNETES_MASTER=http://${servicehost}:8888

Start etcd and verify that it is running:

```bash
-$ sudo docker run -d --hostname $(hostname -f) --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd
+$ sudo docker run -d --hostname $(uname -n) --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd
```

```bash
@@ -1,3 +1,7 @@
+# Status: Out Of Date
+
+** Rackspace support is out of date. Please check back later **
+
# Rackspace
In general, the dev-build-and-up.sh workflow for Rackspace is similar to GCE. The specific implementation differs due to the use of CoreOS, Rackspace Cloud Files, and network design.
@@ -4,13 +4,17 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve

### Prerequisites
1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html
-2. Install latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
+2. Install one of:
+   1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
+   2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
+   3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
+   4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)

### Setup

Setting up a cluster is as simple as running:

-```
+```sh
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```
@@ -19,33 +23,41 @@ The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster

By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:

-```
+```sh
cd kubernetes

export KUBERNETES_PROVIDER=vagrant
-cluster/kube-up.sh
+./cluster/kube-up.sh
```

Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.

+If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
+
+```sh
+export VAGRANT_DEFAULT_PROVIDER=parallels
+export KUBERNETES_PROVIDER=vagrant
+./cluster/kube-up.sh
+```
+
By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd.

To access the master or any minion:

-```
+```sh
vagrant ssh master
vagrant ssh minion-1
```

If you are running more than one minion, you can access the others by:

-```
+```sh
vagrant ssh minion-2
vagrant ssh minion-3
```

To view the service status and/or logs on the kubernetes-master:
-```
+```sh
vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver
[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver
@@ -58,7 +70,7 @@ vagrant ssh master
```

To view the services on any of the kubernetes-minion(s):
-```
+```sh
vagrant ssh minion-1
[vagrant@kubernetes-minion-1] $ sudo systemctl status docker
[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker
@@ -71,18 +83,18 @@ vagrant ssh minion-1
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.

To push updates to new Kubernetes code after making source changes:
-```
-cluster/kube-push.sh
+```sh
+./cluster/kube-push.sh
```

To stop and then restart the cluster:
-```
+```sh
vagrant halt
-cluster/kube-up.sh
+./cluster/kube-up.sh
```

To destroy the cluster:
-```
+```sh
vagrant destroy
```
@@ -90,14 +102,13 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c

You may need to build the binaries first; you can do this with ```make```

-```
+```sh
$ ./cluster/kubectl.sh get minions

NAME LABELS
10.245.1.4 <none>
10.245.1.5 <none>
10.245.1.3 <none>
-
```

### Interacting with your Kubernetes cluster with the `kube-*` scripts.
@@ -106,39 +117,39 @@ Alternatively to using the vagrant commands, you can also use the `cluster/kube-

All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately:

-```
+```sh
export KUBERNETES_PROVIDER=vagrant
```

Bring up a vagrant cluster

-```
-cluster/kube-up.sh
+```sh
+./cluster/kube-up.sh
```

Destroy the vagrant cluster

-```
-cluster/kube-down.sh
+```sh
+./cluster/kube-down.sh
```

Update the vagrant cluster after you make changes (only works when building your own releases locally):

-```
-cluster/kube-push.sh
+```sh
+./cluster/kube-push.sh
```

Interact with the cluster

-```
-cluster/kubectl.sh
+```sh
+./cluster/kubectl.sh
```
### Authenticating with your master

When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.

-```
+```sh
cat ~/.kubernetes_vagrant_auth
{ "User": "vagrant",
  "Password": "vagrant"
@@ -150,50 +161,49 @@ cat ~/.kubernetes_vagrant_auth

You should now be set to use the `cluster/kubectl.sh` script. For example, try to list the minions that you have started with:

-```
-cluster/kubectl.sh get minions
+```sh
+./cluster/kubectl.sh get minions
```

### Running containers

Your cluster is running; you can list the minions in your cluster:

-```
-$ cluster/kubectl.sh get minions
+```sh
+$ ./cluster/kubectl.sh get minions

NAME LABELS
10.245.2.4 <none>
10.245.2.3 <none>
10.245.2.2 <none>
-
```

Now start running some containers!

-You can now use any of the cluster/kube-*.sh commands to interact with your VM machines.
+You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines.
Before starting a container there will be no pods, services, or replication controllers.

-```
-$ cluster/kubectl.sh get pods
+```sh
+$ ./cluster/kubectl.sh get pods
NAME IMAGE(S) HOST LABELS STATUS

-$ cluster/kubectl.sh get services
+$ ./cluster/kubectl.sh get services
NAME LABELS SELECTOR IP PORT

-$ cluster/kubectl.sh get replicationControllers
+$ ./cluster/kubectl.sh get replicationControllers
NAME IMAGE(S) SELECTOR REPLICAS
```

Start a container running nginx with a replication controller and three replicas:

-```
-$ cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=3 --port=80
+```sh
+$ ./cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=3 --port=80
```

When listing the pods, you will see that three containers have been started and are in Waiting state:

-```
-$ cluster/kubectl.sh get pods
+```sh
+$ ./cluster/kubectl.sh get pods
NAME IMAGE(S) HOST LABELS STATUS
781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting
@@ -202,7 +212,7 @@ NAME IMAGE(S) HOST

You need to wait for the provisioning to complete; you can monitor the minions by doing:

-```
+```sh
$ sudo salt '*minion-1' cmd.run 'docker images'
kubernetes-minion-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
@@ -213,7 +223,7 @@ kubernetes-minion-1:

Once the docker image for nginx has been downloaded, the container will start and you can list it:

-```
+```sh
$ sudo salt '*minion-1' cmd.run 'docker ps'
kubernetes-minion-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@@ -225,17 +235,17 @@ kubernetes-minion-1:

Going back to listing the pods, services and replicationControllers, you now have:

-```
-$ cluster/kubectl.sh get pods
+```sh
+$ ./cluster/kubectl.sh get pods
NAME IMAGE(S) HOST LABELS STATUS
781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Running
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running

-$ cluster/kubectl.sh get services
+$ ./cluster/kubectl.sh get services
NAME LABELS SELECTOR IP PORT

-$ cluster/kubectl.sh get replicationControllers
+$ ./cluster/kubectl.sh get replicationControllers
NAME IMAGE(S) SELECTOR REPLICAS
myNginx nginx name=my-nginx 3
```
@@ -244,9 +254,9 @@ We did not start any services, hence there are none listed. But we see three rep
Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
You can already play with resizing the replicas with:

-```
-$ cluster/kubectl.sh resize rc my-nginx --replicas=2
-$ cluster/kubectl.sh get pods
+```sh
+$ ./cluster/kubectl.sh resize rc my-nginx --replicas=2
+$ ./cluster/kubectl.sh get pods
NAME IMAGE(S) HOST LABELS STATUS
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running
@@ -258,26 +268,26 @@ Congratulations!

#### I keep downloading the same (large) box all the time!

-By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing an alternate URL when calling `kube-up.sh`
+By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:

-```bash
+```sh
+export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
-cluster/kube-up.sh
+./cluster/kube-up.sh
```
#### I just created the cluster, but I am getting authorization errors!

You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.

-```
+```sh
rm ~/.kubernetes_vagrant_auth
```

After using kubectl.sh, make sure that the correct credentials are set:

-```
+```sh
cat ~/.kubernetes_vagrant_auth
{
  "User": "vagrant",
@@ -285,34 +295,41 @@ cat ~/.kubernetes_vagrant_auth
}
```

-#### I just created the cluster, but I do not see my container running !
+#### I just created the cluster, but I do not see my container running!

If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and, as a result, may delay your initial pod getting provisioned.

-#### I want to make changes to Kubernetes code !
+#### I want to make changes to Kubernetes code!

To set up a vagrant cluster for hacking, follow the [vagrant developer guide](../devel/developer-guides/vagrant.md).

-#### I have brought Vagrant up but the minions won't validate !
+#### I have brought Vagrant up but the minions won't validate!

Log on to one of the minions (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).

-#### I want to change the number of minions !
+#### I want to change the number of minions!

You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this by setting `NUM_MINIONS` to 1 like so:

-```
+```sh
export NUM_MINIONS=1
```

-#### I want my VMs to have more memory !
+#### I want my VMs to have more memory!

You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:

-```
+```sh
export KUBERNETES_MEMORY=2048
```

+If you need more granular control, you can set the amount of memory for the master and minions independently. For example:
+
+```sh
+export KUBERNETES_MASTER_MEMORY=1536
+export KUBERNETES_MINION_MEMORY=2048
+```

#### I ran vagrant suspend and nothing works!
```vagrant suspend``` seems to mess up the network. It's not supported at this time.