Merge branch 'master' of github.com:GoogleCloudPlatform/kubernetes into add-charms
@@ -73,7 +73,7 @@ Mitigations:
- Action: Multiple independent clusters (and avoid making risky changes to all clusters at once)
- Mitigates: Everything listed above.

## Chosing Multiple Kubernetes Clusters
## Choosing Multiple Kubernetes Clusters

You may want to set up multiple kubernetes clusters, both to
have clusters in different regions to be nearer to your users; and to tolerate failures and/or invasive maintenance.
@@ -120,8 +120,7 @@ then you need `R + U` clusters. If it is not (e.g you want to ensure low latenc
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.

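For example, with `R = 3` regions and `U = 2`, the first case needs `3 + 2 = 5` clusters, while the second needs `3 * 2 = 6` clusters (2 in each region).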
Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
you may need even more clusters. Our roadmap (
https://github.com/GoogleCloudPlatform/kubernetes/blob/24e59de06e4da61f5dafd4cd84c9340a2c0d112f/docs/roadmap.md)
you may need even more clusters. Our [roadmap](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/roadmap.md)
calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in the middle of 2015.

## Working with multiple clusters
@@ -129,4 +128,3 @@ calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in th
When you have multiple clusters, you would typically create services with the same config in each cluster and put each of those
service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer), so that
failures of a single cluster are not visible to end users.

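A minimal sketch of that pattern, assuming two clusters and a service definition in `my-service.json` (the apiserver URLs are placeholders; `-s` selects the apiserver, as used elsewhere in this commit):

```sh
# Create the same service in each cluster; the external load balancer is configured separately.
for master in https://k8s-us.example.com https://k8s-eu.example.com; do
  kubectl -s "$master" create -f my-service.json
done
```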
|
@@ -46,11 +46,12 @@ If you want more control over the upgrading process, you may use the following w
This keeps new pods from landing on the node while you are trying to get them off.
1. Get the pods off the machine, via any of the following strategies:
1. wait for finite-duration pods to complete
1. for pods with a replication controller, delete the pod with `kubectl delete pods $PODNAME`
1. for pods which are not replicated, bring up a new copy of the pod, and redirect clients to it.
1. delete pods with `kubectl delete pods $PODNAME`
1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
1. Work on the node
1. Make the node schedulable again:
`kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": false}'`.
Or, if you deleted the VM instance and created a new one, and are using `--sync_nodes=true` on the apiserver
(the default), then a new schedulable node resource will be created automatically when you create a new
VM instance. See [Node](node.md).
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically when you create a new VM instance (if you're using a cloud provider that supports
node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md).
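A condensed sketch of the workflow above, built from the commands on this page (`$NODENAME` and `$PODNAME` are placeholders; the `"unschedulable": true` patch mirrors the `false` form shown in the last step):

```sh
# 1. Keep new pods from landing on the node (v1beta1 API)
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": true}'

# 2. Get the pods off the machine; pods backed by a replication controller are recreated on other nodes
kubectl delete pods $PODNAME

# 3. ...work on the node...

# 4. Make the node schedulable again
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": false}'
```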
|
@@ -1,48 +0,0 @@
|
||||
# Design: Limit direct access to etcd from within Kubernetes
|
||||
|
||||
All nodes have effective access of "root" on the entire Kubernetes cluster today because they have access to etcd, the central data store. The kubelet, the service proxy, and the nodes themselves have a connection to etcd that can be used to read or write any data in the system. In a cluster with many hosts, any container or user that gains the ability to write to the network device that can reach etcd, on any host, also gains that access.
|
||||
|
||||
* The Kubelet and Kube Proxy currently rely on an efficient "wait for changes over HTTP" interface to get their current state and avoid missing changes
|
||||
* This interface is implemented by etcd as the "watch" operation on a given key containing useful data
|
||||
|
||||
|
||||
## Options:
|
||||
|
||||
1. Do nothing
|
||||
2. Introduce an HTTP proxy that limits the ability of nodes to access etcd
|
||||
1. Prevent writes of data from the kubelet
|
||||
2. Prevent reading data not associated with the client responsibilities
|
||||
3. Introduce a security token granting access
|
||||
3. Introduce an API on the apiserver that returns the data a node Kubelet and Kube Proxy needs
|
||||
1. Remove the ability of nodes to access etcd via network configuration
|
||||
2. Provide an alternate implementation for the event writing code in the Kubelet
|
||||
3. Implement efficient "watch for changes over HTTP" to offer comparable function with etcd
|
||||
4. Ensure that the apiserver can scale at or above the capacity of the etcd system.
|
||||
5. Implement authorization scoping for the nodes that limits the data they can view
|
||||
4. Implement granular access control in etcd
|
||||
1. Authenticate HTTP clients with client certificates, tokens, or BASIC auth and authorize them for read only access
|
||||
2. Allow read access of certain subpaths based on what the requestor's tokens are
|
||||
|
||||
|
||||
## Evaluation:
|
||||
|
||||
Option 1 would be considered unacceptable for deployment in a multi-tenant or security conscious environment. It would be acceptable in a low security deployment where all software is trusted. It would be acceptable in proof of concept environments on a single machine.
|
||||
|
||||
Option 2 would require implementing an http proxy that for 2-1 could block POST/PUT/DELETE requests (and potentially HTTP method tunneling parameters accepted by etcd). 2-2 would be more complicated and would require filtering operations based on deep understanding of the etcd API *and* the underlying schema. It would be possible, but involve extra software.
|
||||
|
||||
Option 3 would involve extending the existing apiserver to return pods associated with a given node over an HTTP "watch for changes" mechanism, which is already implemented. Proper security would involve checking that the caller is authorized to access that data - one imagines a per node token, key, or SSL certificate that could be used to authenticate and then authorize access to only the data belonging to that node. The current event publishing mechanism from the kubelet would also need to be replaced with a secure API endpoint or a change to a polling model. The apiserver would also need to be able to function in a horizontally scalable mode by changing or fixing the "operations" queue to work in a stateless, scalable model. In practice, the amount of traffic even a large Kubernetes deployment would drive towards an apiserver would be tens of requests per second (500 hosts, 1 request per host every minute) which is negligible if well implemented. Implementing this would also decouple the data store schema from the nodes, allowing a different data store technology to be added in the future without affecting existing nodes. This would also expose that data to other consumers for their own purposes (monitoring, implementing service discovery).
|
||||
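For context, the "watch for changes over HTTP" mechanism referred to here is the apiserver's watch endpoint; a rough illustration against the v1beta1 API of that era (the exact path and any per-node filtering are assumptions, not taken from this document):

```sh
# Long-poll the apiserver for pod changes instead of reading etcd directly
curl -s http://<apiserver>:8080/api/v1beta1/watch/pods
```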
|
||||
Option 4 would involve extending etcd to [support access control](https://github.com/coreos/etcd/issues/91). Administrators would need to authorize nodes to connect to etcd, and expose network routability directly to etcd. The mechanism for handling this authentication and authorization would be different than the authorization used by Kubernetes controllers and API clients. It would not be possible to completely replace etcd as a data store without also implementing a new Kubelet config endpoint.
|
||||
|
||||
|
||||
## Preferred solution:
|
||||
|
||||
Implement the first parts of option 3 - an efficient watch API for the pod, service, and endpoints data for the Kubelet and Kube Proxy. Authorization and authentication are planned in the future - when a solution is available, implement a custom authorization scope that allows API access to be restricted to only the data about a single node or the service endpoint data.
|
||||
|
||||
In general, option 4 is desirable in addition to option 3 as a mechanism to further secure the store to infrastructure components that must access it.
|
||||
|
||||
|
||||
## Caveats
|
||||
|
||||
In all four options, compromise of a host will allow an attacker to imitate that host. For attack vectors that are reproducible from inside containers (privilege escalation), an attacker can distribute himself to other hosts by requesting new containers be spun up. In scenario 1, the cluster is totally compromised immediately. In 2-1, the attacker can view all information about the cluster including keys or authorization data defined with pods. In 2-2 and 3, the attacker must still distribute himself in order to get access to a large subset of information, and cannot see other data that is potentially located in etcd like side storage or system configuration. For attack vectors that are not exploits, but instead allow network access to etcd, an attacker in 2ii has no ability to spread his influence, and is instead restricted to the subset of information on the host. For 3-5, they can do nothing they could not do already (request access to the nodes / services endpoint) because the token is not visible to them on the host.
|
||||
|
@@ -4,42 +4,54 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
|
||||
|
||||
### Prerequisites
|
||||
1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html
|
||||
2. Install latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
|
||||
2. Install one of:
|
||||
1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
|
||||
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
|
||||
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
|
||||
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
|
||||
3. Get or build a [binary release](../../getting-started-guides/binary_release.md)
|
||||
|
||||
### Setup
|
||||
|
||||
By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
|
||||
|
||||
```
|
||||
```sh
|
||||
cd kubernetes
|
||||
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
cluster/kube-up.sh
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
|
||||
|
||||
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
|
||||
|
||||
```sh
|
||||
export VAGRANT_DEFAULT_PROVIDER=parallels
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
|
||||
|
||||
By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd.
|
||||
|
||||
To access the master or any minion:
|
||||
|
||||
```
|
||||
```sh
|
||||
vagrant ssh master
|
||||
vagrant ssh minion-1
|
||||
```
|
||||
|
||||
If you are running more than one minion, you can access the others by:
|
||||
|
||||
```
|
||||
```sh
|
||||
vagrant ssh minion-2
|
||||
vagrant ssh minion-3
|
||||
```
|
||||
|
||||
To view the service status and/or logs on the kubernetes-master:
|
||||
```
|
||||
```sh
|
||||
vagrant ssh master
|
||||
[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver
|
||||
[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver
|
||||
@@ -52,7 +64,7 @@ vagrant ssh master
|
||||
```
|
||||
|
||||
To view the services on any of the kubernetes-minion(s):
|
||||
```
|
||||
```sh
|
||||
vagrant ssh minion-1
|
||||
[vagrant@kubernetes-minion-1] $ sudo systemctl status docker
|
||||
[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker
|
||||
@@ -65,18 +77,18 @@ vagrant ssh minion-1
|
||||
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
|
||||
|
||||
To push updates to new Kubernetes code after making source changes:
|
||||
```
|
||||
cluster/kube-push.sh
|
||||
```sh
|
||||
./cluster/kube-push.sh
|
||||
```
|
||||
|
||||
To stop and then restart the cluster:
|
||||
```
|
||||
```sh
|
||||
vagrant halt
|
||||
cluster/kube-up.sh
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
To destroy the cluster:
|
||||
```
|
||||
```sh
|
||||
vagrant destroy
|
||||
```
|
||||
|
||||
@@ -84,14 +96,13 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c
|
||||
|
||||
You may need to build the binaries first; you can do this with ```make```
|
||||
|
||||
```
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh get minions
|
||||
|
||||
NAME LABELS
|
||||
10.245.1.4 <none>
|
||||
10.245.1.5 <none>
|
||||
10.245.1.3 <none>
|
||||
|
||||
```
|
||||
|
||||
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
|
||||
@@ -100,39 +111,39 @@ Alternatively to using the vagrant commands, you can also use the `cluster/kube-
|
||||
|
||||
All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately:
|
||||
|
||||
```
|
||||
```sh
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
```
|
||||
|
||||
Bring up a vagrant cluster
|
||||
|
||||
```
|
||||
cluster/kube-up.sh
|
||||
```sh
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Destroy the vagrant cluster
|
||||
|
||||
```
|
||||
cluster/kube-down.sh
|
||||
```sh
|
||||
./cluster/kube-down.sh
|
||||
```
|
||||
|
||||
Update the vagrant cluster after you make changes (only works when building your own releases locally):
|
||||
|
||||
```
|
||||
cluster/kube-push.sh
|
||||
```sh
|
||||
./cluster/kube-push.sh
|
||||
```
|
||||
|
||||
Interact with the cluster
|
||||
|
||||
```
|
||||
cluster/kubectl.sh
|
||||
```sh
|
||||
./cluster/kubectl.sh
|
||||
```
|
||||
|
||||
### Authenticating with your master
|
||||
|
||||
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
|
||||
|
||||
```
|
||||
```sh
|
||||
cat ~/.kubernetes_vagrant_auth
|
||||
{ "User": "vagrant",
|
||||
"Password": "vagrant"
|
||||
@@ -144,22 +155,21 @@ cat ~/.kubernetes_vagrant_auth
|
||||
|
||||
You should now be set to use the `cluster/kubectl.sh` script. For example try to list the minions that you have started with:
|
||||
|
||||
```
|
||||
cluster/kubectl.sh get minions
|
||||
```sh
|
||||
./cluster/kubectl.sh get minions
|
||||
```
|
||||
|
||||
### Running containers
|
||||
|
||||
Your cluster is running; you can list the minions in your cluster:
|
||||
|
||||
```
|
||||
$ cluster/kubectl.sh get minions
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh get minions
|
||||
|
||||
NAME LABELS
|
||||
10.245.2.4 <none>
|
||||
10.245.2.3 <none>
|
||||
10.245.2.2 <none>
|
||||
|
||||
```
|
||||
|
||||
Now start running some containers!
|
||||
@@ -196,7 +206,7 @@ NAME IMAGE(S) HOST
|
||||
|
||||
You need to wait for the provisioning to complete; you can monitor the minions by doing:
|
||||
|
||||
```
|
||||
```sh
|
||||
$ sudo salt '*minion-1' cmd.run 'docker images'
|
||||
kubernetes-minion-1:
|
||||
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
|
||||
@@ -206,7 +216,7 @@ kubernetes-minion-1:
|
||||
|
||||
Once the docker image for nginx has been downloaded, the container will start and you can list it:
|
||||
|
||||
```
|
||||
```sh
|
||||
$ sudo salt '*minion-1' cmd.run 'docker ps'
|
||||
kubernetes-minion-1:
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
@@ -235,9 +245,9 @@ We did not start any services, hence there are none listed. But we see three rep
|
||||
Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
|
||||
You can already play with resizing the replicas with:
|
||||
|
||||
```
|
||||
$ cluster/kubectl.sh resize rc my-nginx --replicas=2
|
||||
$ cluster/kubectl.sh get pods
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh resize rc my-nginx --replicas=2
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME IMAGE(S) HOST LABELS STATUS
|
||||
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
|
||||
78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running
|
||||
@@ -247,9 +257,9 @@ Congratulations!
|
||||
|
||||
### Testing
|
||||
|
||||
The following will run all of the end-to-end testing scenarios assuming you set your environment in cluster/kube-env.sh
|
||||
The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`:
|
||||
|
||||
```
|
||||
```sh
|
||||
NUM_MINIONS=3 hack/e2e-test.sh
|
||||
```
|
||||
|
||||
@@ -257,26 +267,26 @@ NUM_MINIONS=3 hack/e2e-test.sh
|
||||
|
||||
#### I keep downloading the same (large) box all the time!
|
||||
|
||||
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing an alternate URL when calling `kube-up.sh`
|
||||
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`
|
||||
|
||||
```bash
|
||||
```sh
|
||||
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
|
||||
export KUBERNETES_BOX_URL=path_of_your_kuber_box
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
cluster/kube-up.sh
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
|
||||
#### I just created the cluster, but I am getting authorization errors!
|
||||
|
||||
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
|
||||
|
||||
```
|
||||
```sh
|
||||
rm ~/.kubernetes_vagrant_auth
|
||||
```
|
||||
|
||||
After using kubectl.sh make sure that the correct credentials are set:
|
||||
|
||||
```
|
||||
```sh
|
||||
cat ~/.kubernetes_vagrant_auth
|
||||
{
|
||||
"User": "vagrant",
|
||||
@@ -284,35 +294,42 @@ cat ~/.kubernetes_vagrant_auth
|
||||
}
|
||||
```
|
||||
|
||||
#### I just created the cluster, but I do not see my container running !
|
||||
#### I just created the cluster, but I do not see my container running!
|
||||
|
||||
If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
|
||||
|
||||
#### I changed Kubernetes code, but it's not running !
|
||||
#### I changed Kubernetes code, but it's not running!
|
||||
|
||||
Are you sure there was no build error? After running `$ vagrant provision`, scroll up and ensure that each Salt state was completed successfully on each box in the cluster.
|
||||
It's very likely you'll see a build error due to an error in your source files!
|
||||
|
||||
#### I have brought Vagrant up but the minions won't validate !
|
||||
#### I have brought Vagrant up but the minions won't validate!
|
||||
|
||||
Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the minions (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
|
||||
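The same checks, as commands (taken directly from the sentence above):

```sh
vagrant ssh minion-1
# then, inside the VM:
sudo cat /var/log/salt/minion
```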
|
||||
#### I want to change the number of minions !
|
||||
#### I want to change the number of minions!
|
||||
|
||||
You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this by setting `NUM_MINIONS` to 1, like so:
|
||||
|
||||
```
|
||||
```sh
|
||||
export NUM_MINIONS=1
|
||||
```
|
||||
|
||||
#### I want my VMs to have more memory !
|
||||
#### I want my VMs to have more memory!
|
||||
|
||||
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
|
||||
Just set it to the number of megabytes you would like the machines to have. For example:
|
||||
|
||||
```
|
||||
```sh
|
||||
export KUBERNETES_MEMORY=2048
|
||||
```
|
||||
|
||||
If you need more granular control, you can set the amount of memory for the master and minions independently. For example:
|
||||
|
||||
```sh
|
||||
export KUBERNETES_MASTER_MEMORY=1536
|
||||
export KUBERNETES_MINION_MEMORY=2048
|
||||
```
|
||||
|
||||
#### I ran vagrant suspend and nothing works!
|
||||
```vagrant suspend``` seems to mess up the network. It's not supported at this time.
|
||||
|
@@ -9,8 +9,8 @@ If you are considering contributing a new guide, please read the
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Support Level | Notes
|
||||
-------------- | ------------ | ------ | ---------- | ---------------------------------------------------- | ---------------------------- | -----
|
||||
GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | Commercial | Uses K8s version 0.14.1
|
||||
GCE | Saltstack | Debian | GCE | [docs](../../docs/getting-started-guides/gce.md) | Project | Tested with 0.14.1 by @brendandburns
|
||||
GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | Commercial | Uses K8s version 0.15.0
|
||||
GCE | Saltstack | Debian | GCE | [docs](../../docs/getting-started-guides/gce.md) | Project | Tested with 0.15.0 by @robertbailey
|
||||
Mesos/GCE | | | | [docs](../../docs/getting-started-guides/mesos.md) | [Community](https://github.com/mesosphere/kubernetes-mesos) ([@jdef](https://github.com/jdef)) | Uses K8s v0.11.0
|
||||
Vagrant | Saltstack | Fedora | OVS | [docs](../../docs/getting-started-guides/vagrant.md) | Project |
|
||||
Bare-metal | custom | Fedora | _none_ | [docs](../../docs/getting-started-guides/fedora/fedora_manual_config.md) | Project | Uses K8s v0.13.2
|
||||
@@ -29,13 +29,12 @@ Docker Single Node | custom | N/A | local | [docs](docker.
|
||||
Docker Multi Node | Flannel| N/A | local | [docs](docker-multinode.md) | Project (@brendandburns) | Tested @ 0.14.1 |
|
||||
Local | | | _none_ | [docs](../../docs/getting-started-guides/locally.md) | Community (@preillyme) |
|
||||
Ovirt | | | | [docs](../../docs/getting-started-guides/ovirt.md) | Inactive (@simon3z) |
|
||||
Rackspace | CoreOS | CoreOS | Rackspace | [docs](../../docs/getting-started-guides/rackspace.md) | Inactive (@doubleerr) |
|
||||
Bare-metal | custom | CentOS | _none_ | [docs](../../docs/getting-started-guides/centos/centos_manual_config.md) | Community(@coolsvap) | Uses K8s v0.9.1
|
||||
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](../../docs/getting-started-guides/libvirt-coreos.md) | Community (@lhuard1A) |
|
||||
AWS | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
|
||||
OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
|
||||
Joyent | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1
|
||||
Azure | CoreOS | CoreOS | Weave | [docs](../../docs/getting-started-guides/coreos/azure/README.md) | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon)) | Uses K8s version 0.11.0
|
||||
Azure | CoreOS | CoreOS | Weave | [docs](../../docs/getting-started-guides/coreos/azure/README.md) | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin)) | Uses K8s version 0.15.0
|
||||
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos/bare_metal_offline.md) | Community([@jeffbean](https://github.com/jeffbean)) | K8s v0.10.1
|
||||
|
||||
Definition of columns:
|
||||
|
@@ -19,10 +19,12 @@ or if you prefer ```curl```
|
||||
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
|
||||
```
|
||||
|
||||
|
||||
NOTE: The script will provision a new VPC and a 4 node k8s cluster in us-west-2 (Oregon). It'll also try to create or
|
||||
reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion". If these
|
||||
already exist, make sure you want them to be used here.
|
||||
NOTE: This script calls [cluster/kube-up.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/kube-up.sh)
|
||||
which in turn calls [cluster/aws/util.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/util.sh)
|
||||
using [cluster/aws/config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh).
|
||||
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2 (Oregon). It'll also try to create or reuse
|
||||
a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion". If these already exist, make
|
||||
sure you want them to be used here. You can override the variables defined in config-default.sh to change this behavior.
|
||||
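A sketch of such overrides, with variable names assumed from `cluster/aws/config-default.sh` of that era (verify against the file itself before relying on them):

```sh
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=eu-west-1c   # assumed name: zone to launch in
export NUM_MINIONS=2              # assumed name: number of minions
curl -sS https://get.k8s.io | bash
```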
|
||||
Once the cluster is up, it will print the IP address of your cluster; this process takes about 5 to 10 minutes.
|
||||
|
||||
@@ -134,3 +136,6 @@ Take a look at [next steps](https://github.com/GoogleCloudPlatform/kubernetes/tr
|
||||
|
||||
### Cloud Formation [optional]
|
||||
There is a contributed [example](aws-coreos.md) from [CoreOS](http://www.coreos.com) using Cloud Formation.
|
||||
|
||||
### Further reading
|
||||
Please see the [Kubernetes docs](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs) for more details on administering and using a Kubernetes cluster.
|
||||
|
@@ -19,7 +19,7 @@ coreos:
|
||||
content: |
|
||||
[Service]
|
||||
ExecStartPre=/bin/bash -c "until curl http://<master-private-ip>:4001/v2/machines; do sleep 2; done"
|
||||
ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
|
||||
ExecStartPre=/usr/bin/etcdctl -C <master-private-ip>:4001 set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
|
||||
- name: docker.service
|
||||
command: start
|
||||
drop-ins:
|
||||
|
@@ -68,7 +68,7 @@ kubectl create -f frontend-controller.json
|
||||
kubectl create -f frontend-service.json
|
||||
```
|
||||
|
||||
You need to wait for the pods to get deployed, run the following and wait for `STATUS` to change from `Unknown`, through `Pending` to `Runnig`.
|
||||
You need to wait for the pods to get deployed, run the following and wait for `STATUS` to change from `Unknown`, through `Pending` to `Running`.
|
||||
```
|
||||
kubectl get pods --watch
|
||||
```
|
||||
|
@@ -2,16 +2,48 @@
|
||||
## More specifically, we need to add peer hosts for each node except the elected peer.
|
||||
|
||||
coreos:
|
||||
etcd:
|
||||
name: etcd
|
||||
addr: $private_ipv4:4001
|
||||
bind-addr: 0.0.0.0
|
||||
peer-addr: $private_ipv4:7001
|
||||
snapshot: true
|
||||
max-retry-attempts: 50
|
||||
units:
|
||||
- name: etcd.service
|
||||
- name: download-etcd2.service
|
||||
enable: true
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
After=network-online.target
|
||||
Before=etcd2.service
|
||||
Description=Download etcd2 Binaries
|
||||
Documentation=https://github.com/coreos/etcd/
|
||||
Requires=network-online.target
|
||||
[Service]
|
||||
Environment=ETCD2_RELEASE_TARBALL=https://github.com/coreos/etcd/releases/download/v2.0.9/etcd-v2.0.9-linux-amd64.tar.gz
|
||||
ExecStartPre=/bin/mkdir -p /opt/bin
|
||||
ExecStart=/bin/bash -c "curl --silent --location $ETCD2_RELEASE_TARBALL | tar xzv -C /opt"
|
||||
ExecStartPost=/bin/ln -s /opt/etcd-v2.0.9-linux-amd64/etcd /opt/bin/etcd2
|
||||
ExecStartPost=/bin/ln -s /opt/etcd-v2.0.9-linux-amd64/etcdctl /opt/bin/etcdctl2
|
||||
RemainAfterExit=yes
|
||||
Type=oneshot
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- name: etcd2.service
|
||||
enable: true
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
After=download-etcd2.service
|
||||
Description=etcd 2
|
||||
Documentation=https://github.com/coreos/etcd/
|
||||
[Service]
|
||||
Environment=ETCD_NAME=%host%
|
||||
Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
|
||||
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=http://%host%:2380
|
||||
Environment=ETCD_LISTEN_PEER_URLS=http://%host%:2380
|
||||
Environment=ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001
|
||||
Environment=ETCD_INITIAL_CLUSTER=%cluster%
|
||||
Environment=ETCD_INITIAL_CLUSTER_STATE=new
|
||||
ExecStart=/opt/bin/etcd2
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
update:
|
||||
group: stable
|
||||
reboot-strategy: off
|
||||
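For reference, with three etcd nodes the `create_etcd_cloud_config` change further down fills `%host%` with the node's own name and `%cluster%` with a peer list roughly like the following (hostnames are illustrative):

```sh
ETCD_NAME=etcd-00
ETCD_INITIAL_CLUSTER=etcd-00=http://etcd-00:2380,etcd-01=http://etcd-01:2380,etcd-02=http://etcd-02:2380
```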
|
@@ -18,9 +18,37 @@ write_files:
|
||||
printf '{ "id": "%s", "kind": "Minion", "apiVersion": "v1beta1", "labels": { "environment": "production" } }' "${minion_id}" \
|
||||
| /opt/bin/kubectl create -s "${master_url}" -f -
|
||||
|
||||
- path: /etc/kubernetes/manifests/fluentd.manifest
|
||||
permissions: '0755'
|
||||
owner: root
|
||||
content: |
|
||||
version: v1beta2
|
||||
id: fluentd-to-elasticsearch
|
||||
containers:
|
||||
- name: fluentd-es
|
||||
image: gcr.io/google_containers/fluentd-elasticsearch:1.3
|
||||
env:
|
||||
- name: FLUENTD_ARGS
|
||||
value: -qq
|
||||
volumeMounts:
|
||||
- name: containers
|
||||
mountPath: /var/lib/docker/containers
|
||||
- name: varlog
|
||||
mountPath: /varlog
|
||||
volumes:
|
||||
- name: containers
|
||||
source:
|
||||
hostDir:
|
||||
path: /var/lib/docker/containers
|
||||
- name: varlog
|
||||
source:
|
||||
hostDir:
|
||||
path: /var/log
|
||||
|
||||
coreos:
|
||||
update:
|
||||
group: stable
|
||||
reboot-strategy: off
|
||||
units:
|
||||
- name: docker.service
|
||||
drop-ins:
|
||||
@@ -187,7 +215,7 @@ coreos:
|
||||
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
|
||||
Requires=network-online.target
|
||||
[Service]
|
||||
Environment=KUBE_RELEASE_TARBALL=https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.11.0/kubernetes.tar.gz
|
||||
Environment=KUBE_RELEASE_TARBALL=https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.15.0/kubernetes.tar.gz
|
||||
ExecStartPre=/bin/mkdir -p /opt/
|
||||
ExecStart=/bin/bash -c "curl --silent --location $KUBE_RELEASE_TARBALL | tar xzv -C /tmp/"
|
||||
ExecStart=/bin/tar xzvf /tmp/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /opt
|
||||
@@ -278,12 +306,16 @@ coreos:
|
||||
Wants=download-kubernetes.service
|
||||
ConditionHost=!kube-00
|
||||
[Service]
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/
|
||||
ExecStart=/opt/kubernetes/server/bin/kubelet \
|
||||
--address=0.0.0.0 \
|
||||
--port=10250 \
|
||||
--hostname_override=%H \
|
||||
--api_servers=http://kube-00:8080 \
|
||||
--logtostderr=true
|
||||
--logtostderr=true \
|
||||
--cluster_dns=10.1.0.3 \
|
||||
--cluster_domain=kube.local \
|
||||
--config=/etc/kubernetes/manifests/
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
[Install]
|
||||
|
@@ -13,9 +13,9 @@ var inspect = require('util').inspect;
|
||||
var util = require('./util.js');
|
||||
|
||||
var coreos_image_ids = {
|
||||
'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-607.0.0',
|
||||
'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-612.1.0', // untested
|
||||
'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-626.0.0', // untested
|
||||
'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-633.1.0',
|
||||
'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-647.0.0', // untested
|
||||
'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-647.0.0' // untested
|
||||
};
|
||||
|
||||
var conf = {};
|
||||
@@ -140,7 +140,9 @@ var create_ssh_conf = function () {
|
||||
};
|
||||
|
||||
var get_location = function () {
|
||||
if (process.env['AZ_LOCATION']) {
|
||||
if (process.env['AZ_AFFINITY']) {
|
||||
return '--affinity-group=' + process.env['AZ_AFFINITY'];
|
||||
} else if (process.env['AZ_LOCATION']) {
|
||||
return '--location=' + process.env['AZ_LOCATION'];
|
||||
} else {
|
||||
return '--location=West Europe';
|
||||
|
@@ -1,22 +1,29 @@
|
||||
var _ = require('underscore');
|
||||
_.mixin(require('underscore.string').exports());
|
||||
|
||||
var util = require('../util.js');
|
||||
var cloud_config = require('../cloud_config.js');
|
||||
|
||||
|
||||
exports.create_etcd_cloud_config = function (node_count, conf) {
|
||||
var elected_node = 0;
|
||||
|
||||
var input_file = './cloud_config_templates/kubernetes-cluster-etcd-node-template.yml';
|
||||
|
||||
var peers = [ ];
|
||||
for (var i = 0; i < node_count; i++) {
|
||||
peers.push(util.hostname(i, 'etcd') + '=http://' + util.hostname(i, 'etcd') + ':2380');
|
||||
}
|
||||
var cluster = peers.join(',');
|
||||
|
||||
return _(node_count).times(function (n) {
|
||||
var output_file = util.join_output_file_path('kubernetes-cluster-etcd-node-' + n, 'generated.yml');
|
||||
|
||||
return cloud_config.process_template(input_file, output_file, function(data) {
|
||||
if (n !== elected_node) {
|
||||
data.coreos.etcd.peers = [
|
||||
util.hostname(elected_node, 'etcd'), 7001
|
||||
].join(':');
|
||||
for (var i = 0; i < data.coreos.units.length; i++) {
|
||||
var unit = data.coreos.units[i];
|
||||
if (unit.name === 'etcd2.service') {
|
||||
unit.content = _.replaceAll(_.replaceAll(unit.content, '%host%', util.hostname(n, 'etcd')), '%cluster%', cluster);
|
||||
break;
|
||||
}
|
||||
}
|
||||
return data;
|
||||
});
|
||||
|
@@ -108,20 +108,20 @@ systemctl start docker
|
||||
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
|
||||
|
||||
```sh
|
||||
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi
|
||||
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi
|
||||
```
|
||||
|
||||
### Also run the service proxy
|
||||
```sh
|
||||
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
|
||||
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
|
||||
```
|
||||
|
||||
### Test it out
|
||||
At this point, you should have a functioning 1-node cluster. Let's test it out!
|
||||
|
||||
Download the kubectl binary
|
||||
([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.14.2/bin/darwin/amd64/kubectl))
|
||||
([linux](http://storage.googleapis.com/kubernetes-release/release/v0.14.2/bin/linux/amd64/kubectl))
|
||||
([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/darwin/amd64/kubectl))
|
||||
([linux](http://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl))
|
||||
|
||||
List the nodes
|
||||
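A minimal check, assuming the downloaded `kubectl` binary is in the current directory and that the node resource is still called `minions` in this API version (as elsewhere in this commit):

```sh
./kubectl get minions
```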
|
||||
|
@@ -93,14 +93,14 @@ systemctl start docker
|
||||
Again this is similar to the above, but the ```--api_servers``` now points to the master we set up in the beginning.
|
||||
|
||||
```sh
|
||||
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i)
|
||||
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i)
|
||||
```
|
||||
|
||||
#### Run the service proxy
|
||||
The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services```
|
||||
|
||||
```sh
|
||||
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
|
||||
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
|
||||
```
|
||||
|
||||
|
||||
|
@@ -12,7 +12,7 @@ docker run --net=host -d kubernetes/etcd:2.0.5.1 /usr/local/bin/etcd --addr=127.
|
||||
|
||||
### Step Two: Run the master
|
||||
```sh
|
||||
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
|
||||
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
|
||||
```
|
||||
|
||||
This actually runs the kubelet, which in turn runs a [pod](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md) that contains the other master components.
|
||||
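A quick way to confirm the master components came up, assuming the apiserver is listening on the `localhost:8080` address used above (the `/healthz` path is an assumption about this release):

```sh
curl -s http://localhost:8080/healthz
```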
@@ -20,14 +20,14 @@ This actually runs the kubelet, which in turn runs a [pod](https://github.com/Go
|
||||
### Step Three: Run the service proxy
|
||||
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*
|
||||
```sh
|
||||
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
|
||||
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
|
||||
```
|
||||
|
||||
### Test it out
|
||||
At this point you should have a running kubernetes cluster. You can test this by downloading the kubectl
|
||||
binary
|
||||
([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.14.2/bin/darwin/amd64/kubectl))
|
||||
([linux](http://storage.googleapis.com/kubernetes-release/release/v0.14.2/bin/linux/amd64/kubectl))
|
||||
([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/darwin/amd64/kubectl))
|
||||
([linux](http://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl))
|
||||
|
||||
*Note:*
|
||||
On OS/X you will need to set up port forwarding via ssh:
|
||||
|
@@ -47,7 +47,7 @@ $ export KUBERNETES_MASTER=http://${servicehost}:8888
|
||||
Start etcd and verify that it is running:
|
||||
|
||||
```bash
|
||||
$ sudo docker run -d --hostname $(hostname -f) --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd
|
||||
$ sudo docker run -d --hostname $(uname -n) --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd
|
||||
```
|
||||
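One way to perform the verification step, assuming the `-p 4001:4001` mapping above and that this etcd build serves a `/version` endpoint (an assumption):

```sh
curl -L http://127.0.0.1:4001/version
```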
|
||||
```bash
|
||||
|
@@ -1,3 +1,7 @@
|
||||
# Status: Out Of Date
|
||||
|
||||
**Rackspace support is out of date. Please check back later**
|
||||
|
||||
# Rackspace
|
||||
In general, the dev-build-and-up.sh workflow for Rackspace is similar to that of GCE. The specific implementation is different due to the use of CoreOS, Rackspace Cloud Files, and network design.
|
||||
|
||||
|
@@ -4,13 +4,17 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
|
||||
|
||||
### Prerequisites
|
||||
1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html
|
||||
2. Install latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
|
||||
2. Install one of:
|
||||
1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
|
||||
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
|
||||
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
|
||||
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
|
||||
|
||||
### Setup
|
||||
|
||||
Setting up a cluster is as simple as running:
|
||||
|
||||
```
|
||||
```sh
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
curl -sS https://get.k8s.io | bash
|
||||
```
|
||||
@@ -19,33 +23,41 @@ The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster
|
||||
|
||||
By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
|
||||
|
||||
```
|
||||
```sh
|
||||
cd kubernetes
|
||||
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
cluster/kube-up.sh
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
|
||||
|
||||
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
|
||||
|
||||
```sh
|
||||
export VAGRANT_DEFAULT_PROVIDER=parallels
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd.
|
||||
|
||||
To access the master or any minion:
|
||||
|
||||
```
|
||||
```sh
|
||||
vagrant ssh master
|
||||
vagrant ssh minion-1
|
||||
```
|
||||
|
||||
If you are running more than one minion, you can access the others by:
|
||||
|
||||
```
|
||||
```sh
|
||||
vagrant ssh minion-2
|
||||
vagrant ssh minion-3
|
||||
```
|
||||
|
||||
To view the service status and/or logs on the kubernetes-master:
|
||||
```
|
||||
```sh
|
||||
vagrant ssh master
|
||||
[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver
|
||||
[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver
|
||||
@@ -58,7 +70,7 @@ vagrant ssh master
|
||||
```
|
||||
|
||||
To view the services on any of the kubernetes-minion(s):
|
||||
```
|
||||
```sh
|
||||
vagrant ssh minion-1
|
||||
[vagrant@kubernetes-minion-1] $ sudo systemctl status docker
|
||||
[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker
|
||||
@@ -71,18 +83,18 @@ vagrant ssh minion-1
|
||||
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
|
||||
|
||||
To push updates to new Kubernetes code after making source changes:
|
||||
```
|
||||
cluster/kube-push.sh
|
||||
```sh
|
||||
./cluster/kube-push.sh
|
||||
```
|
||||
|
||||
To stop and then restart the cluster:
|
||||
```
|
||||
```sh
|
||||
vagrant halt
|
||||
cluster/kube-up.sh
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
To destroy the cluster:
|
||||
```
|
||||
```sh
|
||||
vagrant destroy
|
||||
```
|
||||
|
||||
@@ -90,14 +102,13 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c
|
||||
|
||||
You may need to build the binaries first; you can do this with ```make```
|
||||
|
||||
```
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh get minions
|
||||
|
||||
NAME LABELS
|
||||
10.245.1.4 <none>
|
||||
10.245.1.5 <none>
|
||||
10.245.1.3 <none>
|
||||
|
||||
```
|
||||
|
||||
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
|
||||
@@ -106,39 +117,39 @@ Alternatively to using the vagrant commands, you can also use the `cluster/kube-
|
||||
|
||||
All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately:
|
||||
|
||||
```
|
||||
```sh
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
```
|
||||
|
||||
Bring up a vagrant cluster
|
||||
|
||||
```
|
||||
cluster/kube-up.sh
|
||||
```sh
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Destroy the vagrant cluster
|
||||
|
||||
```
|
||||
cluster/kube-down.sh
|
||||
```sh
|
||||
./cluster/kube-down.sh
|
||||
```
|
||||
|
||||
Update the vagrant cluster after you make changes (only works when building your own releases locally):
|
||||
|
||||
```
|
||||
cluster/kube-push.sh
|
||||
```sh
|
||||
./cluster/kube-push.sh
|
||||
```
|
||||
|
||||
Interact with the cluster
|
||||
|
||||
```
|
||||
cluster/kubectl.sh
|
||||
```sh
|
||||
./cluster/kubectl.sh
|
||||
```
|
||||
|
||||
### Authenticating with your master
|
||||
|
||||
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
|
||||
|
||||
```
|
||||
```sh
|
||||
cat ~/.kubernetes_vagrant_auth
|
||||
{ "User": "vagrant",
|
||||
"Password": "vagrant"
|
||||
@@ -150,50 +161,49 @@ cat ~/.kubernetes_vagrant_auth
|
||||
|
||||
You should now be set to use the `cluster/kubectl.sh` script. For example try to list the minions that you have started with:
|
||||
|
||||
```
|
||||
cluster/kubectl.sh get minions
|
||||
```sh
|
||||
./cluster/kubectl.sh get minions
|
||||
```
|
||||
|
||||
### Running containers
|
||||
|
||||
Your cluster is running; you can list the minions in your cluster:
|
||||
|
||||
```
|
||||
$ cluster/kubectl.sh get minions
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh get minions
|
||||
|
||||
NAME LABELS
|
||||
10.245.2.4 <none>
|
||||
10.245.2.3 <none>
|
||||
10.245.2.2 <none>
|
||||
|
||||
```
|
||||
|
||||
Now start running some containers!
|
||||
|
||||
You can now use any of the cluster/kube-*.sh commands to interact with your VM machines.
|
||||
You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines.
|
||||
Before starting a container, there will be no pods, services, or replication controllers.
|
||||
|
||||
```
|
||||
$ cluster/kubectl.sh get pods
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME IMAGE(S) HOST LABELS STATUS
|
||||
|
||||
$ cluster/kubectl.sh get services
|
||||
$ ./cluster/kubectl.sh get services
|
||||
NAME LABELS SELECTOR IP PORT
|
||||
|
||||
$ cluster/kubectl.sh get replicationControllers
|
||||
$ ./cluster/kubectl.sh get replicationControllers
|
||||
NAME IMAGE(S SELECTOR REPLICAS
|
||||
```
|
||||
|
||||
Start a container running nginx with a replication controller and three replicas
|
||||
|
||||
```
|
||||
$ cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=3 --port=80
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=3 --port=80
|
||||
```
|
||||
|
||||
When listing the pods, you will see that three containers have been started and are in Waiting state:
|
||||
|
||||
```
|
||||
$ cluster/kubectl.sh get pods
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME IMAGE(S) HOST LABELS STATUS
|
||||
781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting
|
||||
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting
|
||||
@@ -202,7 +212,7 @@ NAME IMAGE(S) HOST
|
||||
|
||||
You need to wait for the provisioning to complete; you can monitor the minions by doing:
|
||||
|
||||
```
|
||||
```sh
|
||||
$ sudo salt '*minion-1' cmd.run 'docker images'
|
||||
kubernetes-minion-1:
|
||||
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
|
||||
@@ -213,7 +223,7 @@ kubernetes-minion-1:
|
||||
|
||||
Once the docker image for nginx has been downloaded, the container will start and you can list it:
|
||||
|
||||
```
|
||||
```sh
|
||||
$ sudo salt '*minion-1' cmd.run 'docker ps'
|
||||
kubernetes-minion-1:
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
@@ -225,17 +235,17 @@ kubernetes-minion-1:
|
||||
|
||||
Going back to listing the pods, services and replicationControllers, you now have:
|
||||
|
||||
```
|
||||
$ cluster/kubectl.sh get pods
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME IMAGE(S) HOST LABELS STATUS
|
||||
781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Running
|
||||
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
|
||||
78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running
|
||||
|
||||
$ cluster/kubectl.sh get services
|
||||
$ ./cluster/kubectl.sh get services
|
||||
NAME LABELS SELECTOR IP PORT
|
||||
|
||||
$ cluster/kubectl.sh get replicationControllers
|
||||
$ ./cluster/kubectl.sh get replicationControllers
|
||||
NAME IMAGE(S SELECTOR REPLICAS
|
||||
myNginx nginx name=my-nginx 3
|
||||
```
|
||||
@@ -244,9 +254,9 @@ We did not start any services, hence there are none listed. But we see three rep
|
||||
Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
|
||||
You can already play with resizing the replicas with:
|
||||
|
||||
```
|
||||
$ cluster/kubectl.sh resize rc my-nginx --replicas=2
|
||||
$ cluster/kubectl.sh get pods
|
||||
```sh
|
||||
$ ./cluster/kubectl.sh resize rc my-nginx --replicas=2
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME IMAGE(S) HOST LABELS STATUS
|
||||
7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
|
||||
78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running
|
||||
@@ -258,26 +268,26 @@ Congratulations!
|
||||
|
||||
#### I keep downloading the same (large) box all the time!
|
||||
|
||||
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing an alternate URL when calling `kube-up.sh`
|
||||
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`
|
||||
|
||||
```bash
|
||||
```sh
|
||||
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
|
||||
export KUBERNETES_BOX_URL=path_of_your_kuber_box
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
cluster/kube-up.sh
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
|
||||
#### I just created the cluster, but I am getting authorization errors!
|
||||
|
||||
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
|
||||
|
||||
```
|
||||
```sh
|
||||
rm ~/.kubernetes_vagrant_auth
|
||||
```
|
||||
|
||||
After using kubectl.sh make sure that the correct credentials are set:
|
||||
|
||||
```
|
||||
```sh
|
||||
cat ~/.kubernetes_vagrant_auth
|
||||
{
|
||||
"User": "vagrant",
|
||||
@@ -285,34 +295,41 @@ cat ~/.kubernetes_vagrant_auth
|
||||
}
|
||||
```
|
||||
|
||||
#### I just created the cluster, but I do not see my container running !
|
||||
#### I just created the cluster, but I do not see my container running!
|
||||
|
||||
If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
|
||||
|
||||
#### I want to make changes to Kubernetes code !
|
||||
#### I want to make changes to Kubernetes code!
|
||||
|
||||
To set up a vagrant cluster for hacking, follow the [vagrant developer guide](../devel/developer-guides/vagrant.md).
|
||||
|
||||
#### I have brought Vagrant up but the minions won't validate !
|
||||
#### I have brought Vagrant up but the minions won't validate!
|
||||
|
||||
Log on to one of the minions (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
|
||||
|
||||
#### I want to change the number of minions !
|
||||
#### I want to change the number of minions!
|
||||
|
||||
You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this by setting `NUM_MINIONS` to 1, like so:
|
||||
|
||||
```
|
||||
```sh
|
||||
export NUM_MINIONS=1
|
||||
```
|
||||
|
||||
#### I want my VMs to have more memory !
|
||||
#### I want my VMs to have more memory!
|
||||
|
||||
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
|
||||
Just set it to the number of megabytes you would like the machines to have. For example:
|
||||
|
||||
```
|
||||
```sh
|
||||
export KUBERNETES_MEMORY=2048
|
||||
```
|
||||
|
||||
If you need more granular control, you can set the amount of memory for the master and minions independently. For example:
|
||||
|
||||
```sh
|
||||
export KUBERNETES_MASTER_MEMORY=1536
|
||||
export KUBERNETES_MINION_MEMORY=2048
|
||||
```
|
||||
|
||||
#### I ran vagrant suspend and nothing works!
|
||||
```vagrant suspend``` seems to mess up the network. It's not supported at this time.
|
||||
|
@@ -66,4 +66,4 @@ kubectl
* [kubectl update](kubectl_update.md) - Update a resource by filename or stdin.
* [kubectl version](kubectl_version.md) - Print the client and server version information.

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.392549632 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.488963312 +0000 UTC

@@ -50,4 +50,4 @@ kubectl api-versions
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.39227534 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.488505223 +0000 UTC

@@ -50,4 +50,4 @@ kubectl cluster-info
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.392162759 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.48831375 +0000 UTC

@@ -63,4 +63,4 @@ kubectl config SUBCOMMAND
* [kubectl config use-context](kubectl_config_use-context.md) - Sets the current-context in a kubeconfig file
* [kubectl config view](kubectl_config_view.md) - displays Merged kubeconfig settings or a specified kubeconfig file.

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.392043616 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.488116168 +0000 UTC

@@ -65,4 +65,4 @@ $ kubectl config set-cluster e2e --insecure-skip-tls-verify=true
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.39119629 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.486460859 +0000 UTC

@@ -58,4 +58,4 @@ $ kubectl config set-context gce --user=cluster-admin
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.391488399 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.486736724 +0000 UTC

@@ -78,4 +78,4 @@ $ kubectl set-credentials cluster-admin --client-certificate=~/.kube/admin.crt -
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.391323192 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.486604006 +0000 UTC

@@ -52,4 +52,4 @@ kubectl config set PROPERTY_NAME PROPERTY_VALUE
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.391618859 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.486861123 +0000 UTC

@@ -51,4 +51,4 @@ kubectl config unset PROPERTY_NAME
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.391735806 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.487685494 +0000 UTC

@@ -50,4 +50,4 @@ kubectl config use-context CONTEXT_NAME
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.391848246 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.487888021 +0000 UTC

@@ -72,4 +72,4 @@ $ kubectl config view -o template --template='{{range .users}}{{ if eq .name "e2
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.391073075 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.486319118 +0000 UTC

@@ -63,4 +63,4 @@ $ cat pod.json | kubectl create -f -
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.388588064 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.48343431 +0000 UTC

@@ -81,4 +81,4 @@ $ kubectl delete pods --all
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.389412973 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.483731878 +0000 UTC

@@ -53,4 +53,4 @@ kubectl describe RESOURCE ID
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.388410556 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.483293174 +0000 UTC

@@ -64,4 +64,4 @@ $ kubectl exec -p 123456-7890 -c ruby-container -i -t -- bash -il
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.390127525 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.484697863 +0000 UTC

@@ -82,4 +82,4 @@ $ kubectl expose streamer --port=4100 --protocol=udp --service-name=video-stream
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.390792874 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.485803902 +0000 UTC

@@ -8,7 +8,7 @@ Display one or many resources
Display one or many resources.

Possible resources include pods (po), replication controllers (rc), services
(svc), minions (mi), or events (ev).
(svc), minions (mi), events (ev), or component statuses (cs).

By specifying the output as 'template' and providing a Go template as the value
of the --template flag, you can filter the attributes of the fetched resource(s).

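For reference, the resource shortcuts listed above map to invocations like the following; these commands are illustrative and not part of the generated page.

```sh
kubectl get po     # pods
kubectl get svc    # services
kubectl get cs     # component statuses, newly listed in this change
```
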
@@ -85,4 +85,4 @@ $ kubectl get rc/web service/frontend pods/web-pod-13je7
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.387483074 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.482589064 +0000 UTC

@@ -81,4 +81,4 @@ $ kubectl label pods foo bar-
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.390937166 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.486060232 +0000 UTC

@@ -62,4 +62,4 @@ $ kubectl log -f 123456-7890 ruby-container
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.389728881 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.484139739 +0000 UTC

@@ -53,4 +53,4 @@ kubectl namespace [namespace]
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.389609191 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.483937463 +0000 UTC

@@ -68,4 +68,4 @@ $ kubectl port-forward -p mypod 0:5000
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.390241417 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.484899751 +0000 UTC

@@ -65,4 +65,4 @@ $ kubectl proxy --api-prefix=k8s-api
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.390360738 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.485099157 +0000 UTC

@@ -68,4 +68,4 @@ $ kubectl resize --current-replicas=2 --replicas=3 replicationcontrollers foo
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.389989377 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.484493463 +0000 UTC

@@ -68,4 +68,4 @@ $ cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.38985117 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.484316119 +0000 UTC

@@ -78,4 +78,4 @@ $ kubectl run-container nginx --image=nginx --overrides='{ "apiVersion": "v1beta
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.390501802 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.485362986 +0000 UTC

@@ -72,4 +72,4 @@ $ kubectl stop -f path/to/resources
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.390631789 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.48555328 +0000 UTC

@@ -67,4 +67,4 @@ $ kubectl update pods my-pod --patch='{ "apiVersion": "v1beta1", "desiredState":
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.388743178 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.483572524 +0000 UTC

@@ -51,4 +51,4 @@ kubectl version
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra at 2015-04-16 17:04:37.392395408 +0000 UTC
###### Auto generated by spf13/cobra at 2015-04-17 18:59:11.488692518 +0000 UTC

@@ -15,7 +15,7 @@ There are 4 ways that a container manifest can be provided to the Kubelet:

File Path passed as a flag on the command line. This file is rechecked every 20 seconds (configurable with a flag).
HTTP endpoint HTTP endpoint passed as a parameter on the command line. This endpoint is checked every 20 seconds (also configurable with a flag).
etcd server The Kubelet will reach out and do a watch on an etcd server. The etcd path that is watched is /registry/hosts/$(hostname -f). As this is a watch, changes are noticed and acted upon very quickly.
etcd server The Kubelet will reach out and do a watch on an etcd server. The etcd path that is watched is /registry/hosts/$(uname -n). As this is a watch, changes are noticed and acted upon very quickly.
HTTP server The kubelet can also listen for HTTP and respond to a simple API (underspec'd currently) to submit a new manifest.

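A rough illustration of two of these sources follows. The kubelet flag name is an assumption about this era of the code and is not taken from this page; the etcd key is the one the changed line describes.

```sh
# File source: point the kubelet at a manifest file or directory;
# it is rechecked every 20 seconds
kubelet --config=/etc/kubernetes/manifests

# etcd source: the kubelet watches this key, which you can also inspect
etcdctl watch /registry/hosts/$(uname -n)
```
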
@@ -17,7 +17,7 @@ Display one or many resources.

.PP
Possible resources include pods (po), replication controllers (rc), services
(svc), minions (mi), or events (ev).
(svc), minions (mi), events (ev), or component statuses (cs).

.PP
By specifying the output as 'template' and providing a Go template as the value

@@ -21,7 +21,7 @@ There are 4 ways that a container manifest can be provided to the Kubelet:
.nf
File Path passed as a flag on the command line. This file is rechecked every 20 seconds (configurable with a flag).
HTTP endpoint HTTP endpoint passed as a parameter on the command line. This endpoint is checked every 20 seconds (also configurable with a flag).
etcd server The Kubelet will reach out and do a watch on an etcd server. The etcd path that is watched is /registry/hosts/\$(hostname \-f). As this is a watch, changes are noticed and acted upon very quickly.
etcd server The Kubelet will reach out and do a watch on an etcd server. The etcd path that is watched is /registry/hosts/\$(uname \-n). As this is a watch, changes are noticed and acted upon very quickly.
HTTP server The kubelet can also listen for HTTP and respond to a simple API (underspec'd currently) to submit a new manifest.

.fi

@@ -25,6 +25,8 @@ are supported:
| services | Total number of services |
| replicationcontrollers | Total number of replication controllers |
| resourcequotas | Total number of resource quotas |
| secrets | Total number of secrets |
| persistentvolumeclaims | Total number of persistent volume claims |

For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace.

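As an illustration of how such a quota might be declared, here is a minimal sketch; the apiVersion and field names are assumptions for this era of the API and may differ on your cluster version.

```sh
# Create a quota that caps pods, services, and replication controllers
# in the default namespace (field names assumed, adjust to your API version)
cat <<EOF > quota.json
{
  "kind": "ResourceQuota",
  "apiVersion": "v1beta3",
  "metadata": { "name": "quota", "namespace": "default" },
  "spec": {
    "hard": {
      "pods": "10",
      "services": "5",
      "replicationcontrollers": "5"
    }
  }
}
EOF
kubectl create -f quota.json
```
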
@@ -1,6 +1,6 @@
# Kubernetes Roadmap

Updated Feb 9, 2015
Updated April 20, 2015

This document is intended to capture the set of supported use cases, features,
docs, and patterns that we feel are required to call Kubernetes “feature

@@ -18,30 +18,30 @@ clustered database or key-value store. We will target such workloads for our

## APIs and core features
1. Consistent v1 API
- Status: v1beta3 (#1519) is being developed as the release candidate for the v1 API.
2. Multi-port services for apps which need more than one port on the same portal IP (#1802)
- Status: #2585 covers the design.
3. Nominal services for applications which need one stable IP per pod instance (#260)
- Status: DONE. [v1beta3](http://kubernetesio.blogspot.com/2015/04/introducing-kubernetes-v1beta3.html) was developed as the release candidate for the v1 API.
2. Multi-port services for apps which need more than one port on the same portal IP ([#1802](https://github.com/GoogleCloudPlatform/kubernetes/issues/1802))
- Status: DONE. Released in 0.15.0
3. Nominal services for applications which need one stable IP per pod instance ([#260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260))
- Status: #2585 covers some design options.
4. API input is scrubbed of status fields in favor of a new API to set status (#4248)
- Status: in progress
5. Input validation reporting versioned field names (#2518)
4. API input is scrubbed of status fields in favor of a new API to set status ([#4248](https://github.com/GoogleCloudPlatform/kubernetes/issues/4248))
- Status: DONE
5. Input validation reporting versioned field names ([#3084](https://github.com/GoogleCloudPlatform/kubernetes/issues/3084))
- Status: in progress
6. Error reporting: Report common problems in ways that users can discover
- Status:
7. Event management: Make events usable and useful
- Status:
8. Persistent storage support (#4055)
8. Persistent storage support ([#5105](https://github.com/GoogleCloudPlatform/kubernetes/issues/5105))
- Status: in progress
9. Allow nodes to join/leave a cluster (#2303,#2435)
- Status: high level [design doc](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/clustering.md).
9. Allow nodes to join/leave a cluster ([#6087](https://github.com/GoogleCloudPlatform/kubernetes/issues/6087),[#3168](https://github.com/GoogleCloudPlatform/kubernetes/issues/3168))
- Status: in progress ([#6949](https://github.com/GoogleCloudPlatform/kubernetes/pull/6949))
10. Handle node death
- Status: mostly covered by nodes joining/leaving a cluster
11. Allow live cluster upgrades (#2524)
11. Allow live cluster upgrades ([#6075](https://github.com/GoogleCloudPlatform/kubernetes/issues/6075),[#6079](https://github.com/GoogleCloudPlatform/kubernetes/issues/6079))
- Status: design in progress
12. Allow kernel upgrades
- Status: mostly covered by nodes joining/leaving a cluster, need demonstration
13. Allow rolling-updates to fail gracefully (#1353)
13. Allow rolling-updates to fail gracefully ([#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353))
- Status:
14. Easy .dockercfg
- Status:

@@ -63,9 +63,9 @@ Key | Value
`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
`cloud` | (Optional) Which IaaS platform is used to host kubernetes, *gce*, *azure*, *aws*, *vagrant*
`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
`hostnamef` | (Optional) The full host name of the machine, i.e. hostname -f (only used on Azure)
`hostnamef` | (Optional) The full host name of the machine, i.e. uname -n
`node_ip` | (Optional) The IP address to use to address this node
`minion_ip` | (Optional) Mapped to the kubelet hostname_override, K8S TODO - change this name
`hostname_override` | (Optional) Mapped to the kubelet hostname_override
`network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access

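For orientation, a hypothetical pillar override using a few of the keys above. The `/srv/pillar/cluster-params.sls` path is an assumption about the salt layout, and the values are examples only.

```sh
# Write a small pillar file overriding a handful of the documented keys
cat <<EOF > /srv/pillar/cluster-params.sls
cloud: vagrant
etcd_servers: 10.245.1.2
node_ip: 10.245.2.2
network_mode: openvswitch
EOF
```
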