apply changes (commit f7873d2a1f, parent 2a112a0004)
@ -118,11 +118,13 @@ To permit an action Policy with an unset namespace applies regardless of namespa
|
||||
|
||||
Other implementations can be developed fairly easily.
|
||||
The APIserver calls the Authorizer interface:
|
||||
|
||||
```go
|
||||
type Authorizer interface {
|
||||
Authorize(a Attributes) error
|
||||
}
|
||||
```
|
||||
|
||||
to determine whether or not to allow each API action.
|
||||
|
||||
An authorization plugin is a module that implements this interface.
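For illustration, here is a minimal sketch of what such a plugin might look like. It is not one of the built-in implementations, and it assumes, purely for this example, that `Attributes` exposes accessors such as `GetUserName()` and `GetReadOnly()`; the real interface may differ.

```go
// Sketch of a custom authorization plugin (illustrative only).
package myauthz

import "errors"

// Attributes is a stand-in for the attributes the APIserver passes to
// Authorize; the accessor names below are assumptions for this example.
type Attributes interface {
	GetUserName() string
	GetReadOnly() bool
}

// readOnlyOrAdmin permits read-only requests from anyone and write
// requests only from the "admin" user.
type readOnlyOrAdmin struct{}

func (readOnlyOrAdmin) Authorize(a Attributes) error {
	if a.GetReadOnly() || a.GetUserName() == "admin" {
		return nil // request is allowed
	}
	return errors.New("forbidden: writes require the admin user")
}
```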
|
||||
|
@ -62,6 +62,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
|
||||
To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).
|
||||
|
||||
For example:
|
||||
|
||||
```YAML
|
||||
containers:
|
||||
- image: gcr.io/google_containers/heapster:v0.15.0
|
||||
|
@ -38,6 +38,7 @@ problems please see the [application troubleshooting guide](../user-guide/applic
|
||||
The first thing to debug in your cluster is whether your nodes are all registered correctly.
|
||||
|
||||
Run
|
||||
|
||||
```
|
||||
kubectl get nodes
|
||||
```
|
||||
|
@ -131,6 +131,7 @@ for ```${NODE_IP}``` on each machine.
|
||||
|
||||
#### Validating your cluster
|
||||
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
|
||||
|
||||
```
|
||||
etcdctl member list
|
||||
```
|
||||
@ -209,11 +210,12 @@ master election. On each of the three apiserver nodes, we run a small utility a
|
||||
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler); if it
|
||||
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
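As a rough illustration of the idea (not the actual podmaster utility), the loop below acquires a lease through a hypothetical compare-and-swap helper over etcd and starts or stops the managed component accordingly; the helper interface, unit names, and timings are illustrative assumptions only.

```go
// Sketch of the lease-election loop (illustrative only, not the real podmaster).
package podmaster

import (
	"log"
	"os/exec"
	"time"
)

// casStore abstracts the single etcd operation the sketch relies on:
// atomically set key to value only if it currently holds prev
// (prev == "" meaning "only if the key is unowned"), with a TTL.
type casStore interface {
	CompareAndSwap(key, prev, value string, ttl time.Duration) (bool, error)
}

// runElection loops forever, trying to acquire (or renew) the lease and
// keeping the managed component (e.g. the scheduler) running only while
// this node holds it.
func runElection(store casStore, key, self, unit string) {
	for {
		// Try to acquire the lease if it is currently unowned.
		won, err := store.CompareAndSwap(key, "", self, 30*time.Second)
		if err != nil {
			log.Printf("election error: %v", err)
		}
		if !won {
			// If we already hold the lease, renew it.
			renewed, _ := store.CompareAndSwap(key, self, self, 30*time.Second)
			won = renewed
		}
		if won {
			// We hold the lease: ensure the component is running here.
			exec.Command("systemctl", "start", unit).Run()
		} else {
			// Someone else holds the lease: ensure it is stopped here.
			exec.Command("systemctl", "stop", unit).Run()
		}
		time.Sleep(10 * time.Second)
	}
}
```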
|
||||
|
||||
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](proposals/high-availability.md)
|
||||
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](../proposals/high-availability.md)
|
||||
|
||||
### Installing configuration files
|
||||
|
||||
First, create empty log files on each node, so that Docker will mount the files rather than create new directories:
|
||||
|
||||
```
|
||||
touch /var/log/kube-scheduler.log
|
||||
touch /var/log/kube-controller-manager.log
|
||||
@ -244,7 +246,7 @@ set the ```--apiserver``` flag to your replicated endpoint.
|
||||
|
||||
## Vagrant up!
|
||||
|
||||
We indeed have an initial proof of concept tester for this, which is available [here](../examples/high-availability/).
|
||||
We indeed have an initial proof of concept tester for this, which is available [here](../../examples/high-availability/).
|
||||
|
||||
It implements the major concepts of the podmaster HA implementation (with a few minor reductions for simplicity), alongside a quick smoke test using k8petstore.
|
||||
|
||||
|
@ -152,6 +152,7 @@ outbound internet access. A linux bridge (called `cbr0`) is configured to exist
|
||||
on that subnet, and is passed to docker's `--bridge` flag.
|
||||
|
||||
We start Docker with:
|
||||
|
||||
```
|
||||
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
|
||||
```
|
||||
|
@ -97,6 +97,7 @@ Current valid condition is `Ready`. In the future, we plan to add more.
|
||||
condition provides a different level of understanding of node health.
|
||||
Node condition is represented as a json object. For example,
|
||||
the following conditions mean the node is in sane state:
|
||||
|
||||
```json
|
||||
"conditions": [
|
||||
{
|
||||
@ -125,6 +126,7 @@ or from your physical or virtual machines. What this means is that when
|
||||
Kubernetes creates a node, it only creates a representation for the node.
|
||||
After creation, Kubernetes will check whether the node is valid or not.
|
||||
For example, if you try to create a node from the following content:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Node",
|
||||
@ -196,6 +198,7 @@ Making a node unscheduleable will prevent new pods from being scheduled to that
|
||||
node, but will not affect any existing pods on the node. This is useful as a
|
||||
preparatory step before a node reboot, etc. For example, to mark a node
|
||||
unschedulable, run this command:
|
||||
|
||||
```
|
||||
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
|
||||
```
|
||||
@ -214,6 +217,7 @@ processes not in containers.
|
||||
|
||||
If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
|
||||
pod. Use the following template:
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
@ -228,6 +232,7 @@ spec:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
```
|
||||
|
||||
Set the `cpu` and `memory` values to the amount of resources you want to reserve.
|
||||
Place the file in the manifest directory (`--config=DIR` flag of kubelet). Do this
|
||||
on each kubelet where you want to reserve resources.
|
||||
|
@ -84,6 +84,7 @@ This means the resource must have a fully-qualified name (i.e. mycompany.org/shi
|
||||
|
||||
## Viewing and Setting Quotas
|
||||
Kubectl supports creating, updating, and viewing quotas:
|
||||
|
||||
```
|
||||
$ kubectl namespace myspace
|
||||
$ cat <<EOF > quota.json
|
||||
|
@ -48,6 +48,7 @@ Each salt-minion service is configured to interact with the **salt-master** serv
|
||||
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
|
||||
master: kubernetes-master
|
||||
```
|
||||
|
||||
The salt-master is contacted by each salt-minion and depending upon the machine information presented, the salt-master will provision the machine as either a kubernetes-master or kubernetes-minion with all the required capabilities needed to run Kubernetes.
|
||||
|
||||
If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
|
||||
|
@ -109,6 +109,7 @@ $ kubectl describe secret mysecretname
|
||||
```
|
||||
|
||||
#### To delete/invalidate a service account token
|
||||
|
||||
```
|
||||
kubectl delete secret mysecretname
|
||||
```
|
||||
|
@ -164,7 +164,7 @@ It is expected we will want to define limits for particular pods or containers b
|
||||
To make a **LimitRangeItem** more restrictive, we intend to add these additional restrictions at a future point in time.
|
||||
|
||||
## Example
|
||||
See the [example of Limit Range](../user-guide/limitrange) for more information.
|
||||
See the [example of Limit Range](../user-guide/limitrange/) for more information.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@ -185,7 +185,7 @@ services 3 5
|
||||
```
|
||||
|
||||
## More information
|
||||
See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota) for more information.
|
||||
See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more information.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@ -84,6 +84,7 @@ Each binary that generates events:
|
||||
|
||||
## Example
|
||||
Sample kubectl output
|
||||
|
||||
```
|
||||
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE
|
||||
Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-minion-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Starting kubelet.
|
||||
|
@ -87,23 +87,27 @@ Internally (i.e., everywhere else), Kubernetes will represent resource quantitie
|
||||
Both users and a number of system components, such as schedulers, (horizontal) auto-scalers, (vertical) auto-sizers, load balancers, and worker-pool managers need to reason about resource requirements of workloads, resource capacities of nodes, and resource usage. Kubernetes divides specifications of *desired state*, aka the Spec, and representations of *current state*, aka the Status. Resource requirements and total node capacity fall into the specification category, while resource usage, characterizations derived from usage (e.g., maximum usage, histograms), and other resource demand signals (e.g., CPU load) clearly fall into the status category and are discussed in the Appendix for now.
|
||||
|
||||
Resource requirements for a container or pod should have the following form:
|
||||
|
||||
```
|
||||
resourceRequirementSpec: [
|
||||
request: [ cpu: 2.5, memory: "40Mi" ],
|
||||
limit: [ cpu: 4.0, memory: "99Mi" ],
|
||||
]
|
||||
```
|
||||
|
||||
Where:
|
||||
* _request_ [optional]: the amount of resources being requested, or that were requested and have been allocated. Scheduler algorithms will use these quantities to test feasibility (whether a pod will fit onto a node). If a container (or pod) tries to use more resources than its _request_, any associated SLOs are voided — e.g., the program it is running may be throttled (compressible resource types), or the attempt may be denied. If _request_ is omitted for a container, it defaults to _limit_ if that is explicitly specified, otherwise to an implementation-defined value; this will always be 0 for a user-defined resource type. If _request_ is omitted for a pod, it defaults to the sum of the (explicit or implicit) _request_ values for the containers it encloses.
|
||||
|
||||
* _limit_ [optional]: an upper bound or cap on the maximum amount of resources that will be made available to a container or pod; if a container or pod uses more resources than its _limit_, it may be terminated. The _limit_ defaults to "unbounded"; in practice, this probably means the capacity of an enclosing container, pod, or node, but may result in non-deterministic behavior, especially for memory.
|
||||
|
||||
Total capacity for a node should have a similar structure:
|
||||
|
||||
```
|
||||
resourceCapacitySpec: [
|
||||
total: [ cpu: 12, memory: "128Gi" ]
|
||||
]
|
||||
```
|
||||
|
||||
Where:
|
||||
* _total_: the total allocatable resources of a node. Initially, the resources at a given scope will bound the resources of the sum of inner scopes.
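Taken together, _request_, _limit_, and _total_ are what a scheduler's feasibility test works with. The following is a rough Go sketch of that test under the defaulting rules described above; the type and field names are illustrative, not the actual Kubernetes API types.

```go
// Sketch of the request/limit/total feasibility check (illustrative types).
package resources

// Quantity values here are plain integers: milli-CPUs and bytes of memory.
type Resources struct {
	MilliCPU int64
	Memory   int64
}

type Container struct {
	Request Resources // may be zero (omitted)
	Limit   Resources // may be zero (unbounded)
}

// effectiveRequest applies the defaulting rule described above: an omitted
// request falls back to the limit when the limit is explicitly specified.
func effectiveRequest(c Container) Resources {
	r := c.Request
	if r.MilliCPU == 0 {
		r.MilliCPU = c.Limit.MilliCPU
	}
	if r.Memory == 0 {
		r.Memory = c.Limit.Memory
	}
	return r
}

// fits reports whether a pod's containers fit on a node, given what is
// already requested there and the node's total capacity.
func fits(containers []Container, alreadyRequested, total Resources) bool {
	sum := alreadyRequested
	for _, c := range containers {
		req := effectiveRequest(c)
		sum.MilliCPU += req.MilliCPU
		sum.Memory += req.Memory
	}
	return sum.MilliCPU <= total.MilliCPU && sum.Memory <= total.Memory
}
```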
|
||||
|
||||
@ -149,6 +153,7 @@ rather than decimal ones: "64MiB" rather than "64MB".
|
||||
|
||||
## Resource metadata
|
||||
A resource type may have an associated read-only ResourceType structure that contains metadata about the type. For example:
|
||||
|
||||
```
|
||||
resourceTypes: [
|
||||
"kubernetes.io/memory": [
|
||||
@ -194,6 +199,7 @@ resourceStatus: [
|
||||
```
|
||||
|
||||
where a `<CPU-info>` or `<memory-info>` structure looks like this:
|
||||
|
||||
```
|
||||
{
|
||||
mean: <value> # arithmetic mean
|
||||
@ -209,6 +215,7 @@ where a `<CPU-info>` or `<memory-info>` structure looks like this:
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_
|
||||
and predicted
|
||||
|
||||
|
@ -179,6 +179,7 @@ type SELinuxOptions struct {
|
||||
Level string
|
||||
}
|
||||
```
|
||||
|
||||
### Admission
|
||||
|
||||
It is up to an admission plugin to determine if the security context is acceptable or not. At the
|
||||
|
@ -61,6 +61,7 @@ A service account binds together several things:
|
||||
## Design Discussion
|
||||
|
||||
A new object Kind is added:
|
||||
|
||||
```go
|
||||
type ServiceAccount struct {
|
||||
TypeMeta `json:",inline" yaml:",inline"`
|
||||
|
@ -196,12 +196,15 @@ References in the status of the referee to the referrer may be permitted, when t
|
||||
Discussed in [#2004](https://github.com/GoogleCloudPlatform/kubernetes/issues/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields.
|
||||
|
||||
For example:
|
||||
|
||||
```yaml
|
||||
ports:
|
||||
- name: www
|
||||
containerPort: 80
|
||||
```
|
||||
|
||||
vs.
|
||||
|
||||
```yaml
|
||||
ports:
|
||||
www:
|
||||
@ -518,6 +521,7 @@ A ```Status``` kind will be returned by the API in two cases:
|
||||
The status object is encoded as JSON and provided as the body of the response. The status object contains fields for humans and machine consumers of the API to get more detailed information for the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority.
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana
|
||||
|
||||
|
@ -282,6 +282,7 @@ conversion functions when writing your conversion functions.
|
||||
Once all the necessary manually written conversions are added, you need to
|
||||
regenerate auto-generated ones. To regenerate them:
|
||||
- run
|
||||
|
||||
```
|
||||
$ hack/update-generated-conversions.sh
|
||||
```
|
||||
|
@ -83,6 +83,7 @@ vagrant ssh minion-3
|
||||
```
|
||||
|
||||
To view the service status and/or logs on the kubernetes-master:
|
||||
|
||||
```sh
|
||||
vagrant ssh master
|
||||
[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver
|
||||
@ -96,6 +97,7 @@ vagrant ssh master
|
||||
```
|
||||
|
||||
To view the services on any of the nodes:
|
||||
|
||||
```sh
|
||||
vagrant ssh minion-1
|
||||
[vagrant@kubernetes-minion-1] $ sudo systemctl status docker
|
||||
@ -109,17 +111,20 @@ vagrant ssh minion-1
|
||||
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
|
||||
|
||||
To push updates to new Kubernetes code after making source changes:
|
||||
|
||||
```sh
|
||||
./cluster/kube-push.sh
|
||||
```
|
||||
|
||||
To stop and then restart the cluster:
|
||||
|
||||
```sh
|
||||
vagrant halt
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
To destroy the cluster:
|
||||
|
||||
```sh
|
||||
vagrant destroy
|
||||
```
|
||||
|
@ -109,6 +109,7 @@ source control system). Use ```apt-get install mercurial``` or ```yum install m
|
||||
directly from mercurial.
|
||||
|
||||
2) Create a new GOPATH for your tools and install godep:
|
||||
|
||||
```
|
||||
export GOPATH=$HOME/go-tools
|
||||
mkdir -p $GOPATH
|
||||
@ -116,6 +117,7 @@ go get github.com/tools/godep
|
||||
```
|
||||
|
||||
3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile:
|
||||
|
||||
```
|
||||
export GOPATH=$HOME/go-tools
|
||||
export PATH=$PATH:$GOPATH/bin
|
||||
@ -125,6 +127,7 @@ export PATH=$PATH:$GOPATH/bin
|
||||
Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep).
|
||||
|
||||
1) Devote a directory to this endeavor:
|
||||
|
||||
```
|
||||
export KPATH=$HOME/code/kubernetes
|
||||
mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes
|
||||
@ -134,6 +137,7 @@ git clone https://path/to/your/fork .
|
||||
```
|
||||
|
||||
2) Set up your GOPATH.
|
||||
|
||||
```
|
||||
# Option A: this will let your builds see packages that exist elsewhere on your system.
|
||||
export GOPATH=$KPATH:$GOPATH
|
||||
@ -143,12 +147,14 @@ export GOPATH=$KPATH
|
||||
```
|
||||
|
||||
3) Populate your new GOPATH.
|
||||
|
||||
```
|
||||
cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes
|
||||
godep restore
|
||||
```
|
||||
|
||||
4) Next, you can either add a new dependency or update an existing one.
|
||||
|
||||
```
|
||||
# To add a new dependency, do:
|
||||
cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes
|
||||
@ -218,6 +224,7 @@ KUBE_COVER=y hack/test-go.sh
|
||||
At the end of the run, an HTML report will be generated, with the path printed to stdout.
|
||||
|
||||
To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example:
|
||||
|
||||
```
|
||||
cd kubernetes
|
||||
KUBE_COVER=y hack/test-go.sh pkg/kubectl
|
||||
@ -230,6 +237,7 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover
|
||||
## Integration tests
|
||||
|
||||
You need [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path; please make sure it is installed and in your ``$PATH``.
|
||||
|
||||
```
|
||||
cd kubernetes
|
||||
hack/test-integration.sh
|
||||
@ -238,12 +246,14 @@ hack/test-integration.sh
|
||||
## End-to-End tests
|
||||
|
||||
You can run an end-to-end test which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce").
|
||||
|
||||
```
|
||||
cd kubernetes
|
||||
hack/e2e-test.sh
|
||||
```
|
||||
|
||||
Pressing control-C should result in an orderly shutdown, but if something goes wrong and you still have some VMs running, you can force a cleanup with this command:
|
||||
|
||||
```
|
||||
go run hack/e2e.go --down
|
||||
```
|
||||
@ -281,6 +291,7 @@ hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env
|
||||
```
|
||||
|
||||
### Combining flags
|
||||
|
||||
```sh
|
||||
# Flags can be combined, and their actions will take place in this order:
|
||||
# -build, -push|-up|-pushup, -test|-tests=..., -down
|
||||
|
@ -42,6 +42,7 @@ _Note: these instructions are mildly hacky for now, as we get run once semantics
|
||||
There is a testing image ```brendanburns/flake``` up on the docker hub. We will use this image to test our fix.
|
||||
|
||||
Create a replication controller with the following config:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
@ -63,6 +64,7 @@ spec:
|
||||
- name: REPO_SPEC
|
||||
value: https://github.com/GoogleCloudPlatform/kubernetes
|
||||
```
|
||||
|
||||
Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
|
||||
|
||||
```
|
||||
|
@ -45,6 +45,7 @@ usage:
|
||||
```
|
||||
|
||||
You can also use the gsutil tool to explore the Google Cloud Storage release bucket. Here are some examples:
|
||||
|
||||
```
|
||||
gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number
|
||||
gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e
|
||||
|
@ -40,6 +40,7 @@ _TODO_: Figure out a way to record this somewhere to save the next release engin
|
||||
Find the most-recent PR that was merged with the current .0 release. Remember this as $CURRENTPR.
|
||||
|
||||
### 2) Run the release-notes tool
|
||||
|
||||
```bash
|
||||
${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR
|
||||
```
|
||||
|
@ -41,24 +41,30 @@ Go comes with inbuilt 'net/http/pprof' profiling library and profiling web servi
|
||||
## Adding profiling to the APIserver
|
||||
|
||||
TL;DR: Add lines:
|
||||
|
||||
```
|
||||
m.mux.HandleFunc("/debug/pprof/", pprof.Index)
|
||||
m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
|
||||
m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
|
||||
```
|
||||
|
||||
to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package.
|
||||
|
||||
In most cases it is enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not very useful. In 'pkg/master/server/server.go' more servers are created and started as separate goroutines. The one that usually serves external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding the profiler handler functions to this multiplexer. This is exactly what the lines after TL;DR do.
|
||||
|
||||
## Connecting to the profiler
|
||||
Even with the profiler running, I found it not entirely straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the open unsecured port on kubernetes_master to some external server, and use this server as a proxy. To save everyone a search for the correct ssh flags, it is done by running:
|
||||
|
||||
```
|
||||
ssh kubernetes_master -L<local_port>:localhost:8080
|
||||
```
|
||||
|
||||
or an analogous one for your cloud provider. Afterwards you can, e.g., run
|
||||
|
||||
```
|
||||
go tool pprof http://localhost:<local_port>/debug/pprof/profile
|
||||
```
|
||||
|
||||
to get a 30-second CPU profile.
|
||||
|
||||
## Contention profiling
|
||||
|
@ -78,9 +78,11 @@ and you're trying to cut a release, don't hesitate to contact the GKE
|
||||
oncall.
|
||||
|
||||
Before proceeding to the next step:
|
||||
|
||||
```
|
||||
export BRANCHPOINT=v0.20.2-322-g974377b
|
||||
```
|
||||
|
||||
Where `v0.20.2-322-g974377b` is the git hash you decided on. This will become
|
||||
our (retroactive) branch point.
|
||||
|
||||
|
@ -52,6 +52,8 @@ Getting started on AWS EC2
|
||||
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access.
|
||||
|
||||
## Cluster turnup
|
||||
### Supported procedure: `get-kube`
|
||||
|
||||
```bash
|
||||
#Using wget
|
||||
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
|
||||
|
@ -33,6 +33,7 @@ Documentation for other releases can be found at
|
||||
# Install and configure kubectl
|
||||
|
||||
## Download the kubectl CLI tool
|
||||
|
||||
```bash
|
||||
### Darwin
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/darwin/amd64/kubectl
|
||||
@ -42,12 +43,14 @@ wget https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux
|
||||
```
|
||||
|
||||
### Copy kubectl to your path
|
||||
|
||||
```bash
|
||||
chmod +x kubectl
|
||||
mv kubectl /usr/local/bin/
|
||||
```
|
||||
|
||||
### Create a secure tunnel for API communication
|
||||
|
||||
```bash
|
||||
ssh -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
|
||||
```
|
||||
|
@ -100,6 +100,7 @@ See [a simple nginx example](../user-guide/simple-nginx.md) to try out your new
|
||||
For more complete applications, please look in the [examples directory](../../examples/).
|
||||
|
||||
## Tearing down the cluster
|
||||
|
||||
```
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
@ -50,6 +50,7 @@ The kubernetes package provides a few services: kube-apiserver, kube-scheduler,
|
||||
**System Information:**
|
||||
|
||||
Hosts:
|
||||
|
||||
```
|
||||
centos-master = 192.168.121.9
|
||||
centos-minion = 192.168.121.65
|
||||
|
@ -54,6 +54,7 @@ In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure clo
|
||||
## Let's go!
|
||||
|
||||
To get started, you need to checkout the code:
|
||||
|
||||
```
|
||||
git clone https://github.com/GoogleCloudPlatform/kubernetes
|
||||
cd kubernetes/docs/getting-started-guides/coreos/azure/
|
||||
@ -89,12 +90,15 @@ azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.ym
|
||||
```
|
||||
|
||||
Let's login to the master node like so:
|
||||
|
||||
```
|
||||
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
|
||||
```
|
||||
|
||||
> Note: the config file name will be different; make sure to use the one you see.
|
||||
|
||||
Check there are 2 nodes in the cluster:
|
||||
|
||||
```
|
||||
core@kube-00 ~ $ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
@ -105,6 +109,7 @@ kube-02 environment=production Ready
|
||||
## Deploying the workload
|
||||
|
||||
Let's follow the Guestbook example now:
|
||||
|
||||
```
|
||||
cd guestbook-example
|
||||
kubectl create -f examples/guestbook/redis-master-controller.yaml
|
||||
@ -116,12 +121,15 @@ kubectl create -f examples/guestbook/frontend-service.yaml
|
||||
```
|
||||
|
||||
You need to wait for the pods to get deployed. Run the following and wait for `STATUS` to change from `Unknown`, through `Pending`, to `Running`.
|
||||
|
||||
```
|
||||
kubectl get pods --watch
|
||||
```
|
||||
|
||||
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
|
||||
|
||||
Eventually you should see:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
frontend-8anh8 1/1 Running 0 1m
|
||||
@ -139,10 +147,13 @@ Two single-core nodes are certainly not enough for a production system of today,
|
||||
You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/weave-demos/coreos-azure`).
|
||||
|
||||
First, let's set the size of the new VMs:
|
||||
|
||||
```
|
||||
export AZ_VM_SIZE=Large
|
||||
```
|
||||
|
||||
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
|
||||
|
||||
```
|
||||
./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
|
||||
...
|
||||
@ -158,9 +169,11 @@ azure_wrapper/info: The hosts in this deployment are:
|
||||
'kube-04' ]
|
||||
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
|
||||
```
|
||||
|
||||
> Note: this step has created new files in `./output`.
|
||||
|
||||
Back on `kube-00`:
|
||||
|
||||
```
|
||||
core@kube-00 ~ $ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
@ -181,14 +194,18 @@ frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=f
|
||||
redis-master master redis name=redis-master 1
|
||||
redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 2
|
||||
```
|
||||
|
||||
As there are 4 nodes, let's scale proportionally:
|
||||
|
||||
```
|
||||
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
|
||||
scaled
|
||||
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
|
||||
scaled
|
||||
```
|
||||
|
||||
Check what you have now:
|
||||
|
||||
```
|
||||
core@kube-00 ~ $ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
|
@ -50,6 +50,7 @@ Docker containers themselves. To achieve this, we need a separate "bootstrap" i
|
||||
```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system.
|
||||
|
||||
Run:
|
||||
|
||||
```sh
|
||||
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
|
||||
```
|
||||
@ -61,6 +62,7 @@ across reboots and failures.
|
||||
|
||||
### Startup etcd for flannel and the API server to use
|
||||
Run:
|
||||
|
||||
```
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
|
||||
```
|
||||
@ -97,6 +99,7 @@ or it may be something else.
|
||||
#### Run flannel
|
||||
|
||||
Now run flanneld itself:
|
||||
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0
|
||||
```
|
||||
@ -104,6 +107,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privile
|
||||
The previous command should have printed a really long hash; copy this hash.
|
||||
|
||||
Now get the subnet settings from flannel:
|
||||
|
||||
```
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
|
||||
```
|
||||
@ -114,6 +118,7 @@ You now need to edit the docker configuration to activate new flags. Again, thi
|
||||
This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
|
||||
|
||||
Regardless, you need to add the following to the docker command line:
|
||||
|
||||
```sh
|
||||
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
|
||||
```
|
||||
@ -136,6 +141,7 @@ sudo /etc/init.d/docker start
|
||||
```
|
||||
|
||||
it may be:
|
||||
|
||||
```sh
|
||||
systemctl start docker
|
||||
```
|
||||
@ -148,6 +154,7 @@ sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.
|
||||
```
|
||||
|
||||
### Also run the service proxy
|
||||
|
||||
```sh
|
||||
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
|
||||
```
|
||||
@ -166,6 +173,7 @@ kubectl get nodes
|
||||
```
|
||||
|
||||
This should print:
|
||||
|
||||
```
|
||||
NAME LABELS STATUS
|
||||
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
|
||||
|
@ -39,6 +39,7 @@ kubectl get nodes
|
||||
```
|
||||
|
||||
That should show something like:
|
||||
|
||||
```
|
||||
NAME LABELS STATUS
|
||||
10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready
|
||||
@ -49,6 +50,7 @@ If the status of any node is ```Unknown``` or ```NotReady``` your cluster is bro
|
||||
[```#google-containers```](http://webchat.freenode.net/?channels=google-containers) for advice.
|
||||
|
||||
### Run an application
|
||||
|
||||
```sh
|
||||
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
|
||||
```
|
||||
@ -56,17 +58,20 @@ kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
|
||||
Now run ```docker ps```; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
|
||||
|
||||
### Expose it as a service
|
||||
|
||||
```sh
|
||||
kubectl expose rc nginx --port=80
|
||||
```
|
||||
|
||||
This should print:
|
||||
|
||||
```
|
||||
NAME LABELS SELECTOR IP PORT(S)
|
||||
nginx <none> run=nginx <ip-addr> 80/TCP
|
||||
```
|
||||
|
||||
Hit the webserver:
|
||||
|
||||
```sh
|
||||
curl <insert-ip-from-above-here>
|
||||
```
|
||||
|
@ -55,6 +55,7 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1.
|
||||
As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
|
||||
|
||||
Run:
|
||||
|
||||
```sh
|
||||
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
|
||||
```
|
||||
@ -83,6 +84,7 @@ or it may be something else.
|
||||
#### Run flannel
|
||||
|
||||
Now run flanneld itself, this call is slightly different from the above, since we point it at the etcd instance on the master.
|
||||
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001
|
||||
```
|
||||
@ -90,6 +92,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privile
|
||||
The previous command should have printed a really long hash; copy this hash.
|
||||
|
||||
Now get the subnet settings from flannel:
|
||||
|
||||
```
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
|
||||
```
|
||||
@ -101,6 +104,7 @@ You now need to edit the docker configuration to activate new flags. Again, thi
|
||||
This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
|
||||
|
||||
Regardless, you need to add the following to the docker command line:
|
||||
|
||||
```sh
|
||||
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
|
||||
```
|
||||
@ -123,6 +127,7 @@ sudo /etc/init.d/docker start
|
||||
```
|
||||
|
||||
it may be:
|
||||
|
||||
```sh
|
||||
systemctl start docker
|
||||
```
|
||||
|
@ -56,11 +56,13 @@ Here's a diagram of what the final result will look like:
|
||||
1. You need to have docker installed on one machine.
|
||||
|
||||
### Step One: Run etcd
|
||||
|
||||
```sh
|
||||
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
|
||||
```
|
||||
|
||||
### Step Two: Run the master
|
||||
|
||||
```sh
|
||||
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
|
||||
```
|
||||
@ -69,6 +71,7 @@ This actually runs the kubelet, which in turn runs a [pod](../user-guide/pods.md
|
||||
|
||||
### Step Three: Run the service proxy
|
||||
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*
|
||||
|
||||
```sh
|
||||
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
|
||||
```
|
||||
@ -81,6 +84,7 @@ binary
|
||||
|
||||
*Note:*
|
||||
On OS/X you will need to set up port forwarding via ssh:
|
||||
|
||||
```sh
|
||||
boot2docker ssh -L8080:localhost:8080
|
||||
```
|
||||
@ -92,6 +96,7 @@ kubectl get nodes
|
||||
```
|
||||
|
||||
This should print:
|
||||
|
||||
```
|
||||
NAME LABELS STATUS
|
||||
127.0.0.1 <none> Ready
|
||||
@ -100,6 +105,7 @@ NAME LABELS STATUS
|
||||
If you are running different kubernetes clusters, you may need to specify ```-s http://localhost:8080``` to select the local cluster.
|
||||
|
||||
### Run an application
|
||||
|
||||
```sh
|
||||
kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
|
||||
```
|
||||
@ -107,17 +113,20 @@ kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
|
||||
Now run ```docker ps```; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
|
||||
|
||||
### Expose it as a service
|
||||
|
||||
```sh
|
||||
kubectl expose rc nginx --port=80
|
||||
```
|
||||
|
||||
This should print:
|
||||
|
||||
```
|
||||
NAME LABELS SELECTOR IP PORT(S)
|
||||
nginx <none> run=nginx <ip-addr> 80/TCP
|
||||
```
|
||||
|
||||
Hit the webserver:
|
||||
|
||||
```sh
|
||||
curl <insert-ip-from-above-here>
|
||||
```
|
||||
|
@ -130,6 +130,7 @@ ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok
|
||||
**Push your ssh public key to every machine**
|
||||
|
||||
Again, you can skip this step if your ansible machine has ssh access to the nodes you are going to use in the kubernetes cluster.
|
||||
|
||||
```
|
||||
ansible-playbook -i inventory keys.yml
|
||||
```
|
||||
@ -161,6 +162,7 @@ Flannel is a cleaner mechanism to use, and is the recommended choice.
|
||||
- If you are using flannel, you should check the kubernetes-ansible repository above.
|
||||
|
||||
Currently, you essentially have to (1) update group_vars/all.yml, and then (2) run
|
||||
|
||||
```
|
||||
ansible-playbook -i inventory flannel.yml
|
||||
```
|
||||
|
@ -52,6 +52,7 @@ The kubernetes package provides a few services: kube-apiserver, kube-scheduler,
|
||||
**System Information:**
|
||||
|
||||
Hosts:
|
||||
|
||||
```
|
||||
fed-master = 192.168.121.9
|
||||
fed-node = 192.168.121.65
|
||||
@ -66,6 +67,7 @@ fed-node = 192.168.121.65
|
||||
```
|
||||
yum -y install --enablerepo=updates-testing kubernetes
|
||||
```
|
||||
|
||||
* Install etcd and iptables
|
||||
|
||||
```
|
||||
@ -121,6 +123,7 @@ KUBE_API_ARGS=""
|
||||
```
|
||||
|
||||
* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of 127.0.0.1; otherwise, you will get an error like "connection refused".
|
||||
|
||||
```
|
||||
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
|
||||
```
|
||||
@ -210,6 +213,7 @@ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
fed-node name=fed-node-label Ready
|
||||
```
|
||||
|
||||
* Deletion of nodes:
|
||||
|
||||
To delete _fed-node_ from your kubernetes cluster, run the following on fed-master (please do not actually run it; it is just for information):
|
||||
|
@ -64,6 +64,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
|
||||
|
||||
* Add the configuration to the etcd server on fed-master.
|
||||
|
@ -96,6 +96,7 @@ Alternately, you can download and install the latest Kubernetes release from [th
|
||||
cd kubernetes
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
|
||||
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
|
||||
|
||||
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
|
||||
@ -154,12 +155,14 @@ kube-system monitoring-heapster kubernetes.io/cluster-service=true,kubernete
|
||||
kube-system monitoring-influxdb kubernetes.io/cluster-service=true,kubernetes.io/name=InfluxDB k8s-app=influxGrafana 10.0.210.156 8083/TCP
|
||||
8086/TCP
|
||||
```
|
||||
|
||||
Similarly, you can take a look at the set of [pods](../user-guide/pods.md) that were created during cluster startup.
|
||||
You can do this via the
|
||||
|
||||
```shell
|
||||
$ kubectl get --all-namespaces pods
|
||||
```
|
||||
|
||||
command.
|
||||
|
||||
You'll see a list of pods that looks something like this (the name specifics will be different):
|
||||
|
@ -67,6 +67,7 @@ Getting started with libvirt CoreOS
|
||||
#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.
|
||||
|
||||
You can test it with the following command:
|
||||
|
||||
```
|
||||
virsh -c qemu:///system pool-list
|
||||
```
|
||||
@ -176,11 +177,13 @@ The IP to connect to the master is 192.168.10.1.
|
||||
The IPs to connect to the nodes are 192.168.10.2 and onwards.
|
||||
|
||||
Connect to `kubernetes_master`:
|
||||
|
||||
```
|
||||
ssh core@192.168.10.1
|
||||
```
|
||||
|
||||
Connect to `kubernetes_minion-01`:
|
||||
|
||||
```
|
||||
ssh core@192.168.10.2
|
||||
```
|
||||
@ -212,6 +215,7 @@ cluster/kube-push.sh
|
||||
```
|
||||
|
||||
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
|
||||
|
||||
```
|
||||
KUBE_PUSH=local cluster/kube-push.sh
|
||||
```
|
||||
|
@ -38,6 +38,7 @@ started page. Here we describe how to set up a cluster to ingest logs into Elast
|
||||
alternative to Google Cloud Logging.
|
||||
|
||||
To use Elasticsearch and Kibana for cluster logging you should set the following environment variable as shown below:
|
||||
|
||||
```
|
||||
KUBE_LOGGING_DESTINATION=elasticsearch
|
||||
```
|
||||
@ -160,6 +161,7 @@ status page for Elasticsearch.
|
||||
|
||||
You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch
|
||||
from your local machine using `curl` but first you need to know what your bearer token is:
|
||||
|
||||
```
|
||||
$ kubectl config view --minify
|
||||
apiVersion: v1
|
||||
@ -185,6 +187,7 @@ users:
|
||||
```
|
||||
|
||||
Now you can issue requests to Elasticsearch:
|
||||
|
||||
```
|
||||
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
|
||||
{
|
||||
@ -202,7 +205,9 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
|
||||
|
||||
```
|
||||
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?pretty=true
|
||||
{
|
||||
|
@ -56,6 +56,7 @@ This diagram shows four nodes created on a Google Compute Engine cluster with th
|
||||
[cluster DNS service](../admin/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
|
||||
|
||||
To help explain how cluster level logging works let’s start off with a synthetic log generator pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
@ -69,6 +70,7 @@ To help explain how cluster level logging works let’s start off with a synthet
|
||||
args: [bash, -c,
|
||||
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
|
||||
```
|
||||
|
||||
This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let’s create the pod in the default
|
||||
namespace.
|
||||
|
||||
@ -78,11 +80,13 @@ namespace.
|
||||
```
|
||||
|
||||
We can observe the running pod:
|
||||
|
||||
```
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
counter 1/1 Running 0 5m
|
||||
```
|
||||
|
||||
This step may take a few minutes to download the ubuntu:14.04 image during which the pod status will be shown as `Pending`.
|
||||
|
||||
One of the nodes is now running the counter pod:
|
||||
@ -127,6 +131,7 @@ Now let’s restart the counter.
|
||||
$ kubectl create -f examples/blog-logging/counter-pod.yaml
|
||||
pods/counter
|
||||
```
|
||||
|
||||
Let’s wait for the container to restart and get the log lines again.
|
||||
|
||||
```
|
||||
|
@ -108,23 +108,31 @@ $ sudo docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
fd7bac9e2301 quay.io/coreos/etcd:v2.0.12 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
|
||||
```
|
||||
|
||||
It's also a good idea to ensure your etcd instance is reachable by testing it
|
||||
|
||||
```bash
|
||||
curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/
|
||||
```
|
||||
|
||||
If connectivity is OK, you will see an output of the available keys in etcd (if any).
|
||||
|
||||
### Start Kubernetes-Mesos Services
|
||||
Update your PATH to more easily run the Kubernetes-Mesos binaries:
|
||||
|
||||
```bash
|
||||
$ export PATH="$(pwd)/_output/local/go/bin:$PATH"
|
||||
```
|
||||
|
||||
Identify your Mesos master: depending on your Mesos installation this is either a `host:port` like `mesos_master:5050` or a ZooKeeper URL like `zk://zookeeper:2181/mesos`.
|
||||
In order to let Kubernetes survive Mesos master changes, the ZooKeeper URL is recommended for production environments.
|
||||
|
||||
```bash
|
||||
$ export MESOS_MASTER=<host:port or zk:// url>
|
||||
```
|
||||
|
||||
Create a cloud config file `mesos-cloud.conf` in the current directory with the following contents:
|
||||
|
||||
```bash
|
||||
$ cat <<EOF >mesos-cloud.conf
|
||||
[mesos-cloud]
|
||||
@ -166,6 +174,7 @@ Disown your background jobs so that they'll stay running if you log out.
|
||||
```bash
|
||||
$ disown -a
|
||||
```
|
||||
|
||||
#### Validate KM Services
|
||||
Add the appropriate binary folder to your ```PATH``` to access kubectl:
|
||||
|
||||
@ -312,6 +321,7 @@ kubectl exec busybox -- nslookup kubernetes
|
||||
```
|
||||
|
||||
If everything works fine, you will get this output:
|
||||
|
||||
```
|
||||
Server: 10.10.10.10
|
||||
Address 1: 10.10.10.10
|
||||
|
@ -47,20 +47,24 @@ We still have [a bunch of work](https://github.com/GoogleCloudPlatform/kubernete
|
||||
More details about the networking of rkt can be found in the [documentation](https://github.com/coreos/rkt/blob/master/Documentation/networking.md).
|
||||
|
||||
To start the `rkt metadata service`, you can simply run:
|
||||
|
||||
```shell
|
||||
$ sudo rkt metadata-service
|
||||
```
|
||||
|
||||
If you want the service to be running as a systemd service, then:
|
||||
|
||||
```shell
|
||||
$ sudo systemd-run rkt metadata-service
|
||||
```
|
||||
|
||||
Alternatively, you can use the [rkt-metadata.service](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.service) and [rkt-metadata.socket](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.socket) to start the service.
|
||||
|
||||
|
||||
### Local cluster
|
||||
|
||||
To use rkt as the container runtime, you just need to set the environment variable `CONTAINER_RUNTIME`:
|
||||
|
||||
```shell
|
||||
$ export CONTAINER_RUNTIME=rkt
|
||||
$ hack/local-up-cluster.sh
|
||||
@ -69,6 +73,7 @@ $ hack/local-up-cluster.sh
|
||||
### CoreOS cluster on Google Compute Engine (GCE)
|
||||
|
||||
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, and image:
|
||||
|
||||
```shell
|
||||
$ export KUBE_OS_DISTRIBUTION=coreos
|
||||
$ export KUBE_GCE_MINION_IMAGE=<image_id>
|
||||
@ -77,11 +82,13 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
|
||||
```
|
||||
|
||||
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
|
||||
|
||||
```shell
|
||||
$ export KUBE_RKT_VERSION=0.5.6
|
||||
```
|
||||
|
||||
Then you can launch the cluster by:
|
||||
|
||||
```shell
|
||||
$ kube-up.sh
|
||||
```
|
||||
@ -91,6 +98,7 @@ Note that we are still working on making all containerized the master components
|
||||
### CoreOS cluster on AWS
|
||||
|
||||
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
|
||||
|
||||
```shell
|
||||
$ export KUBERNETES_PROVIDER=aws
|
||||
$ export KUBE_OS_DISTRIBUTION=coreos
|
||||
@ -98,16 +106,19 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
|
||||
```
|
||||
|
||||
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
|
||||
|
||||
```shell
|
||||
$ export KUBE_RKT_VERSION=0.5.6
|
||||
```
|
||||
|
||||
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
|
||||
|
||||
```shell
|
||||
$ export COREOS_CHANNEL=stable
|
||||
```
|
||||
|
||||
Then you can launch the cluster by:
|
||||
|
||||
```shell
|
||||
$ kube-up.sh
|
||||
```
|
||||
|
@ -297,6 +297,7 @@ many distinct files to make:
|
||||
|
||||
You can make the files by copying the `$HOME/.kube/config`, by following the code
|
||||
in `cluster/gce/configure-vm.sh` or by using the following template:
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
@ -315,6 +316,7 @@ contexts:
|
||||
name: service-account-context
|
||||
current-context: service-account-context
|
||||
```
|
||||
|
||||
Put the kubeconfig(s) on every node. The examples later in this
|
||||
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
|
||||
`/var/lib/kubelet/kubeconfig`.
|
||||
@ -341,6 +343,7 @@ The minimum required Docker version will vary as the kubelet version changes. T
|
||||
If you previously had Docker installed on a node without setting Kubernetes-specific
|
||||
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
|
||||
as follows before proceeding to configure Docker for Kubernetes.
|
||||
|
||||
```
|
||||
iptables -t nat -F
|
||||
ifconfig docker0 down
|
||||
@ -606,13 +609,17 @@ Place the completed pod template into the kubelet config dir
|
||||
`/etc/kubernetes/manifests`).
|
||||
|
||||
Next, verify that kubelet has started a container for the apiserver:
|
||||
|
||||
```
|
||||
$ sudo docker ps | grep apiserver:
|
||||
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695 ```
|
||||
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
|
||||
|
||||
```
|
||||
|
||||
Then try to connect to the apiserver:
|
||||
|
||||
```
|
||||
|
||||
$ echo $(curl -s http://localhost:8080/healthz)
|
||||
ok
|
||||
$ curl -s http://localhost:8080/api
|
||||
@ -622,6 +629,7 @@ $ curl -s http://localhost:8080/api
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
|
||||
@ -631,7 +639,9 @@ Otherwise, you will need to manually create node objects.
|
||||
### Scheduler
|
||||
|
||||
Complete this template for the scheduler pod:
|
||||
|
||||
```json
|
||||
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
@ -661,7 +671,9 @@ Complete this template for the scheduler pod:
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Optionally, you may want to mount `/var/log` as well and redirect output there.
|
||||
|
||||
Start as described for apiserver.
|
||||
@ -679,11 +691,13 @@ Flags to consider using with controller manager.
|
||||
- `--allocate-node-cidrs=`
|
||||
- *TODO*: explain when you want the controller to do this and when you want to do it another way.
|
||||
- `--cloud-provider=` and `--cloud-config` as described in apiserver section.
|
||||
- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by [service account](../service-accounts.md) feature.
|
||||
- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by [service account](../user-guide/service-accounts.md) feature.
|
||||
- `--master=127.0.0.1:8080`
|
||||
|
||||
Template for controller manager pod:
|
||||
|
||||
```json
|
||||
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
@ -739,6 +753,7 @@ Template for controller manager pod:
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
@ -172,6 +172,7 @@ DNS_DOMAIN="cluster.local"
|
||||
DNS_REPLICAS=1
|
||||
|
||||
```
|
||||
|
||||
The `DNS_SERVER_IP` defines the IP of the DNS server, which must be within the service_cluster_ip_range.
|
||||
|
||||
The `DNS_REPLICAS` describes how many DNS pods run in the cluster.
|
||||
@ -213,6 +214,7 @@ Please try:
|
||||
1. Check `/var/log/upstart/etcd.log` for suspicious etcd log entries
|
||||
|
||||
2. Check `/etc/default/etcd`; as we do not have much input validation, a correct config should look like:
|
||||
|
||||
```
|
||||
ETCD_OPTS="-name infra1 -initial-advertise-peer-urls <http://ip_of_this_node:2380> -listen-peer-urls <http://ip_of_this_node:2380> -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=<http://ip_of_this_node:2380>,infra2=<http://ip_of_another_node:2380>,infra3=<http://ip_of_another_node:2380> -initial-cluster-state new"
|
||||
```
|
||||
|
@ -131,6 +131,7 @@ vagrant ssh master
|
||||
```
|
||||
|
||||
To view the services on any of the nodes:
|
||||
|
||||
```sh
|
||||
vagrant ssh minion-1
|
||||
[vagrant@kubernetes-master ~] $ sudo su
|
||||
@ -147,17 +148,20 @@ vagrant ssh minion-1
|
||||
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
|
||||
|
||||
To push updates to new Kubernetes code after making source changes:
|
||||
|
||||
```sh
|
||||
./cluster/kube-push.sh
|
||||
```
|
||||
|
||||
To stop and then restart the cluster:
|
||||
|
||||
```sh
|
||||
vagrant halt
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
To destroy the cluster:
|
||||
|
||||
```sh
|
||||
vagrant destroy
|
||||
```
|
||||
|
@ -64,6 +64,7 @@ though a [Getting started guide](../getting-started-guides/README.md),
|
||||
or someone else setup the cluster and provided you with credentials and a location.
|
||||
|
||||
Check the location and credentials that kubectl knows about with this command:
|
||||
|
||||
```
|
||||
kubectl config view
|
||||
```
|
||||
@ -91,12 +92,15 @@ curl or wget, or a browser, there are several ways to locate and authenticate:
|
||||
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
|
||||
locating the apiserver and authenticating.
|
||||
Run it like this:
|
||||
|
||||
```
|
||||
kubectl proxy --port=8080 &
|
||||
```
|
||||
|
||||
See [kubectl proxy](kubectl/kubectl_proxy.md) for more details.
|
||||
|
||||
Then you can explore the API with curl, wget, or a browser, like so:
|
||||
|
||||
```
|
||||
$ curl http://localhost:8080/api/
|
||||
{
|
||||
@ -105,9 +109,11 @@ $ curl http://localhost:8080/api/
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy
|
||||
It is also possible to avoid using kubectl proxy by passing an authentication token
|
||||
directly to the apiserver, like this:
|
||||
|
||||
```
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
|
||||
@ -207,6 +213,7 @@ You have several options for connecting to nodes, pods and services from outside
|
||||
|
||||
Typically, there are several services which are started on a cluster by default. Get a list of these
|
||||
with the `kubectl cluster-info` command:
|
||||
|
||||
```
|
||||
$ kubectl cluster-info
|
||||
|
||||
@ -217,6 +224,7 @@ $ kubectl cluster-info
|
||||
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-grafana
|
||||
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-heapster
|
||||
```
|
||||
|
||||
This shows the proxy-verb URL for accessing each service.
|
||||
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
|
||||
at `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example:
|
||||
@ -232,6 +240,7 @@ about namespaces? 'proxy' verb? -->
|
||||
##### Examples
|
||||
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?q=user:kimchy`
|
||||
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_cluster/health?pretty=true`

```
{
  "cluster_name" : "kubernetes_logging",

@@ -37,6 +37,7 @@ We have [labels](labels.md) for identifying metadata.
It is also useful to be able to attach arbitrary non-identifying metadata, for retrieval by API clients such as tools, libraries, etc. This information may be large, may be structured or unstructured, may include characters not permitted by labels, etc. Such information would not be used for object selection and therefore doesn't belong in labels.

Like labels, annotations are key-value maps.

```
"annotations": {
  "key1" : "value1",

@@ -105,6 +105,7 @@ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
```

If your container has previously crashed, you can access the previous container's crash log with:

```sh
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
```
@@ -118,6 +119,7 @@ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${AR
Note that ```-c ${CONTAINER_NAME}``` is optional and can be omitted for Pods that only contain a single container.

As an example, to look at the logs from a running Cassandra pod, you might run

```sh
kubectl exec cassandra -- cat /var/log/cassandra/system.log
```
@@ -153,6 +155,7 @@ IP addresses in the Service's endpoints.
#### My service is missing endpoints
If you are missing endpoints, try listing pods using the labels that Service uses. Imagine that you have
a Service where the labels are:

```yaml
...
spec:
@@ -162,6 +165,7 @@ spec:
```

You can use:

```
kubectl get pods --selector=name=nginx,type=frontend
```

@@ -152,6 +152,7 @@ then pod resource usage can be retrieved from the monitoring system.
If the scheduler cannot find any node where a pod can fit, then the pod will remain unscheduled
until a place can be found. An event will be produced each time the scheduler fails to find a
place for the pod, like this:

```
$ kubectl describe pods/frontend | grep -A 3 Events
Events:
@@ -217,11 +218,13 @@ The `Restart Count: 5` indicates that the `simmemleak` container in this pod wa
Once [#10861](https://github.com/GoogleCloudPlatform/kubernetes/issues/10861) is resolved the reason for the termination of the last container will also be printed in this output.

Until then you can call `get pod` with the `-o template -t ...` option to fetch the status of previously terminated containers:

```
[13:59:01] $ ./cluster/kubectl.sh get pod -o template -t '{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]][13:59:03] clusterScaleDoc ~/go/src/github.com/GoogleCloudPlatform/kubernetes $
```

We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.

## Planned Improvements

@@ -68,6 +68,7 @@ spec: # specification of the pod’s contents
    image: "ubuntu:14.04"
    command: ["/bin/echo","hello","world"]
```

The value of `metadata.name`, `hello-world`, will be the name of the pod resource created, and must be unique within the cluster, whereas `containers[0].name` is just a nickname for the container within that pod. `image` is the name of the Docker image, which Kubernetes expects to be able to pull from a registry, the [Docker Hub](https://registry.hub.docker.com/) by default.

`restartPolicy: Never` indicates that we just want to run the container once and then terminate the pod.
@@ -80,30 +81,36 @@ The [`command`](containers.md#containers-and-commands) overrides the Docker cont
```

This pod can be created using the `create` command:

```bash
$ kubectl create -f ./hello-world.yaml
pods/hello-world
```

`kubectl` prints the resource type and name of the resource created when successful.

## Validating configuration

If you’re not sure you specified the resource correctly, you can ask `kubectl` to validate it for you:

```bash
$ kubectl create -f ./hello-world.yaml --validate
```

Let’s say you specified `entrypoint` instead of `command`. You’d see output as follows:

```
I0709 06:33:05.600829 14160 schema.go:126] unknown field: entrypoint
I0709 06:33:05.600988 14160 schema.go:129] this may be a false alarm, see https://github.com/GoogleCloudPlatform/kubernetes/issues/6842
pods/hello-world
```

`kubectl create --validate` currently warns about problems it detects, but creates the resource anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. This pod was created, but with no `command`, which is an optional field, since the image may specify an `Entrypoint`.

## Environment variables and variable expansion

Kubernetes [does not automatically run commands in a shell](https://github.com/GoogleCloudPlatform/kubernetes/wiki/User-FAQ#use-of-environment-variables-on-the-command-line) (not all images contain shells). If you would like to run your command in a shell, such as to expand environment variables (specified using `env`), you could do the following:

```yaml
apiVersion: v1
kind: Pod
@@ -122,52 +129,65 @@ spec: # specification of the pod’s contents
```

However, a shell isn’t necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](../../docs/design/expansion.md):

```yaml
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
```
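
To make this concrete, here is a minimal sketch of a complete pod that relies on the expansion. The pod name, the `MESSAGE` variable, and its value are made up for illustration, not taken from the example files above:

```bash
$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-expansion      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE          # variable to expand
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]     # expanded by Kubernetes itself, no shell required
EOF
```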

## Viewing pod status

You can see the pod you created (actually all of your cluster's pods) using the `get` command.

If you’re quick, it will look as follows:

```bash
$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
hello-world   0/1       Pending   0          0s
```

Initially, a newly created pod is unscheduled -- no node has been selected to run it. Scheduling happens after creation, but is fast, so you normally shouldn’t see pods in an unscheduled state unless there’s a problem.

After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn’t been pulled already. After a few seconds, you should see the container running:

```bash
$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
hello-world   1/1       Running   0          5s
```

The `READY` column shows how many containers in the pod are running.

Almost immediately after it starts running, this command will terminate. `kubectl` shows that the container is no longer running and displays the exit status:

```bash
$ kubectl get pods
NAME          READY     STATUS       RESTARTS   AGE
hello-world   0/1       ExitCode:0   0          15s
```

## Viewing pod output

You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/userguide/usingdocker/), `kubectl logs` will show you the output:

```bash
$ kubectl logs hello-world
hello world
```

## Deleting pods
When you’re done looking at the output, you should delete the pod:

```bash
$ kubectl delete pod hello-world
pods/hello-world
```

As with `create`, `kubectl` prints the resource type and name of the resource deleted when successful.

You can also use the resource/name format to specify the pod:

```bash
$ kubectl delete pods/hello-world
pods/hello-world

@@ -60,6 +60,7 @@ This guide uses a simple nginx server to demonstrate proof of concept. The same
## Exposing pods to the cluster

We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:

```yaml
$ cat nginxrc.yaml
apiVersion: v1
@@ -81,6 +82,7 @@ spec:
```

This makes it accessible from any node in your cluster. Check the nodes the pod is running on:

```shell
$ kubectl create -f ./nginxrc.yaml
$ kubectl get pods -l app=nginx -o wide
@@ -89,11 +91,13 @@ my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-minion-
```

Check your pods' IPs:

```shell
$ kubectl get pods -l app=nginx -o json | grep podIP
"podIP": "10.245.0.15",
"podIP": "10.245.0.14",
```

You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster by IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
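
For example, a rough sketch (the node name is a placeholder, the pod IP is taken from the sample output above; use values from your own cluster and whatever ssh access your provider sets up):

```shell
$ ssh <node>                    # e.g. gcloud compute ssh <node> on GCE
$ curl http://10.245.0.15:80    # one of the pod IPs listed above
```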

You can read more about [how we achieve this](../admin/networking.md#how-to-achieve-this) if you’re curious.
@@ -105,6 +109,7 @@ So we have pods running nginx in a flat, cluster wide, address space. In theory,
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.

You can create a Service for your 2 nginx replicas with the following yaml:

```yaml
$ cat nginxsvc.yaml
apiVersion: v1
@@ -120,7 +125,9 @@ spec:
  selector:
    app: nginx
```

This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label, and expose it on an abstracted Service port (`targetPort` is the port the container accepts traffic on; `port` is the abstracted Service port, which can be any port other pods use to access the Service). Check your Service:

```shell
$ kubectl get svc
NAME       LABELS      SELECTOR    IP(S)          PORT(S)
@@ -128,6 +135,7 @@ nginxsvc app=nginx app=nginx 10.0.116.146 80/TCP
```

As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service’s selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:

```shell
$ kubectl describe svc nginxsvc
Name:              nginxsvc
@@ -145,6 +153,7 @@ $ kubectl get ep
NAME       ENDPOINTS
nginxsvc   10.245.0.14:80,10.245.0.15:80
```

You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you’re curious about how this works, you can read more about the [service proxy](services.md#virtual-ips-and-service-proxies).

## Accessing the Service
@@ -153,12 +162,15 @@ Kubernetes supports 2 primary modes of finding a Service - environment variables

### Environment Variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods:

```shell
$ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
```

Note there’s no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 pods and waiting for the replication controller to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:

```shell
$ kubectl scale rc my-nginx --replicas=0; kubectl scale rc my-nginx --replicas=2;
$ kubectl get pods -l app=nginx -o wide
@@ -175,12 +187,14 @@ NGINXSVC_SERVICE_PORT=80

### DNS
Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign DNS names to other Services. You can check if it’s running on your cluster:

```shell
$ kubectl get services kube-dns --namespace=kube-system
NAME       LABELS    SELECTOR           IP(S)       PORT(S)
kube-dns   <none>    k8s-app=kube-dns   10.0.0.10   53/UDP
                                                    53/TCP
```

If it isn’t running, you can [enable it](../../cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let’s create another pod to test this:

```yaml
@@ -199,7 +213,9 @@ spec:
    name: curlcontainer
  restartPolicy: Always
```

And perform a lookup of the nginx Service:

```shell
$ kubectl create -f ./curlpod.yaml
default/curlpod
@@ -222,6 +238,7 @@ Till now we have only accessed the nginx server from within the cluster. Before
* A [secret](secrets.md) that makes the certificates accessible to pods

You can acquire all these from the [nginx https example](../../examples/https-nginx/README.md), in short:

```shell
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
$ kubectl create -f /tmp/secret.json
@@ -233,6 +250,7 @@ nginxsecret Opaque 2
```

Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):

```yaml
$ cat nginx-app.yaml
apiVersion: v1
@@ -279,6 +297,7 @@ spec:
      - mountPath: /etc/nginx/ssl
        name: secret-volume
```

Noteworthy points about the nginx-app manifest:
- It contains both rc and service specification in the same file
- The [nginx server](../../examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
@@ -293,6 +312,7 @@ replicationcontrollers/my-nginx
```

At this point you can reach the nginx server from any node.

```shell
$ kubectl get pods -o json | grep -i podip
"podIP": "10.1.0.80",
@@ -383,6 +403,7 @@ $ curl https://104.197.63.17:30645 -k
```

Let's now recreate the Service to use a cloud load balancer; just change the `Type` of Service in nginx-app.yaml from `NodePort` to `LoadBalancer`:

```shell
$ kubectl delete rc,svc -l app=nginx
$ kubectl create -f ./nginx-app.yaml

@@ -35,11 +35,14 @@ Documentation for other releases can be found at
kubectl port-forward forwards connections to a local port to a port on a pod. Its man page is available [here](kubectl/kubectl_port-forward.md). Compared to [kubectl proxy](accessing-the-cluster.md#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.

## Creating a Redis master

```
$ kubectl create -f examples/redis/redis-master.yaml
pods/redis-master
```

Wait until the Redis master pod is Running and Ready:

```
$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
@@ -49,6 +52,7 @@ redis-master 2/2 Running 0 41s

## Connecting to the Redis master[a]
The Redis master is listening on port 6379. To verify this:

```
$ kubectl get pods redis-master -t='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
6379
@@ -56,17 +60,21 @@ $ kubectl get pods redis-master -t='{{(index (index .spec.containers 0).ports 0)


Then we forward port 6379 on the local workstation to port 6379 of the redis-master pod:

```
$ kubectl port-forward -p redis-master 6379:6379
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
```

To verify the connection is successful, we run redis-cli on the local workstation:

```
$ redis-cli
127.0.0.1:6379> ping
PONG
```

Now one can debug the database from the local workstation.
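
From here, ordinary Redis commands work against the forwarded port. A short sketch of what a debugging session might look like (the key and value are made up):

```
127.0.0.1:6379> set test-key hello
OK
127.0.0.1:6379> get test-key
"hello"
```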


@@ -36,19 +36,23 @@ You have seen the [basics](accessing-the-cluster.md) about `kubectl proxy` and `

##Getting the apiserver proxy URL of kube-ui
kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,

```
$ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
```

If this command does not find the URL, try the steps [here](ui.md#accessing-the-ui).


##Connecting to the kube-ui service from your local workstation
The above proxy URL is an access to the kube-ui service provided by the apiserver. To access it, you still need to authenticate to the apiserver. `kubectl proxy` can handle the authentication.

```
$ kubectl proxy --port=8001
Starting to serve on localhost:8001
```

Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui)


@@ -52,6 +52,7 @@ Kubernetes creates and manages sets of replicated containers (actually, replicat
A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It’s analogous to Google Compute Engine’s [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS’s [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).

The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start.md) could be specified using YAML as follows:

```yaml
apiVersion: v1
kind: ReplicationController
@@ -70,9 +71,11 @@ spec:
        ports:
        - containerPort: 80
```

Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don’t need to be specified explicitly because they are generated from the name of the replication controller.

This replication controller can be created using `create`, just as with pods:

```bash
$ kubectl create -f ./nginx-rc.yaml
replicationcontrollers/my-nginx
@@ -83,23 +86,28 @@ Unlike in the case where you directly create pods, a replication controller repl
## Viewing replication controller status

You can view the replication controller you created using `get`:

```bash
$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
my-nginx     nginx          nginx      app=nginx   2
```

This tells you that your controller will ensure that you have two nginx replicas.

You can see those replicas using `get`, just as with pods you created directly:

```bash
$ kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-065jq   1/1       Running   0          51s
my-nginx-buaiq   1/1       Running   0          51s
```

## Deleting replication controllers

When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start.md):

```bash
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
@@ -112,6 +120,7 @@ If you try to delete the pods before deleting the replication controller, it wil
## Labels

Kubernetes uses user-defined key-value attributes called [*labels*](labels.md) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:

```bash
$ kubectl get pods -L app
NAME             READY     STATUS    RESTARTS   AGE   APP
@@ -120,6 +129,7 @@ my-nginx-lg99z 0/1 Running 0 3s nginx
```

The labels from the pod template are copied to the replication controller’s labels by default, as well -- all resources in Kubernetes support labels:

```bash
$ kubectl get rc my-nginx -L app
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS   APP
@@ -127,6 +137,7 @@ my-nginx nginx nginx app=nginx 2 nginx
```

More importantly, the pod template’s labels are used to create a [`selector`](labels.md#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get.md):

```bash
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]

@@ -53,6 +53,7 @@ In this doc, we introduce the kubernetes command line for interacting with the
How do I run an nginx container and expose it to the world? Checkout [kubectl run](kubectl/kubectl_run.md).

With docker:

```
$ docker run -d --restart=always --name nginx-app -p 80:80 nginx
a9ec34d9878748d2f33dc20cb25c714ff21da8d40558b45bfaec9955859075d0
@@ -62,6 +63,7 @@ a9ec34d98787 nginx "nginx -g 'daemon of 2 seconds ago
```

With kubectl:

```
# start the pod running nginx
$ kubectl run --image=nginx nginx-app
@@ -80,6 +82,7 @@ With kubectl, we create a [replication controller](replication-controller.md) wh
How do I list what is currently running? Checkout [kubectl get](kubectl/kubectl_get.md).

With docker:

```
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
@@ -87,6 +90,7 @@ a9ec34d98787 nginx "nginx -g 'daemon of About an hour ago
```

With kubectl:

```
$ kubectl get po
NAME              READY     STATUS    RESTARTS   AGE
@@ -98,6 +102,7 @@ nginx-app-5jyvm 1/1 Running 0 1h
How do I execute a command in a container? Checkout [kubectl exec](kubectl/kubectl_exec.md).

With docker:

```
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
@@ -107,6 +112,7 @@ a9ec34d98787
```

With kubectl:

```
$ kubectl get po
NAME              READY     STATUS    RESTARTS   AGE
@@ -119,12 +125,14 @@ What about interactive commands?


With docker:

```
$ docker exec -ti a9ec34d98787 /bin/sh
# exit
```

With kubectl:

```
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
@@ -138,6 +146,7 @@ How do I follow stdout/stderr of a running process? Checkout [kubectl logs](kube


With docker:

```
$ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
@@ -145,6 +154,7 @@ $ docker logs -f a9e
```

With kubectl:

```
$ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
@@ -166,6 +176,7 @@ See [Logging](logging.md) for more information.
How do I stop and delete a running process? Checkout [kubectl delete](kubectl/kubectl_delete.md).

With docker:

```
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
@@ -177,6 +188,7 @@ a9ec34d98787
```

With kubectl:

```
$ kubectl get rc nginx-app
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR        REPLICAS
@@ -202,6 +214,7 @@ There is no direct analog of 'docker login' in kubectl. If you are interested in
How do I get the version of my client and server? Checkout [kubectl version](kubectl/kubectl_version.md).

With docker:

```
$ docker version
Client version: 1.7.0
@@ -217,6 +230,7 @@ OS/Arch (server): linux/amd64
```

With kubectl:

```
$ kubectl version
Client Version: version.Info{Major:"0", Minor:"20.1", GitVersion:"v0.20.1", GitCommit:"", GitTreeState:"not a git tree"}
@@ -228,6 +242,7 @@ Server Version: version.Info{Major:"0", Minor:"21+", GitVersion:"v0.21.1-411-g32
How do I get miscellaneous info about my environment and configuration? Checkout [kubectl cluster-info](kubectl/kubectl_cluster-info.md).

With docker:

```
$ docker info
Containers: 40
@@ -249,6 +264,7 @@ WARNING: No swap limit support
```

With kubectl:

```
$ kubectl cluster-info
Kubernetes master is running at https://108.59.85.141

@@ -38,17 +38,22 @@ Kubernetes exposes [services](services.md#environment-variables) through environ


We first create a pod and a service:

```
$ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
```

Wait until the pod is Running and Ready:

```
$ kubectl get pod
NAME                 READY     REASON    RESTARTS   AGE
redis-master-ft9ex   1/1       Running   0          12s
```

Then we can check the environment variables of the pod:

```
$ kubectl exec redis-master-ft9ex env
...
@@ -56,22 +61,28 @@ REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219
...
```

We can use these environment variables in applications to find the service.
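
For instance, a process in any pod in the same namespace could use them to reach the Redis master. A rough sketch, assuming the image happens to have `redis-cli` installed (that is an assumption, not something the example guarantees):

```
$ redis-cli -h "$REDIS_MASTER_SERVICE_HOST" -p "$REDIS_MASTER_SERVICE_PORT" ping
PONG
```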


## Using kubectl exec to check the mounted volumes
It is convenient to use `kubectl exec` to check if the volumes are mounted as expected.
We first create a Pod with a volume mounted at /data/redis:

```
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
```

Wait until the pod is Running and Ready:

```
$ kubectl get pods
NAME      READY     REASON    RESTARTS   AGE
storage   1/1       Running   0          1m
```

We then use `kubectl exec` to verify that the volume is mounted at /data/redis:

```
$ kubectl exec storage ls /data
redis
@@ -79,10 +90,12 @@ redis

## Using kubectl exec to open a bash terminal in a pod
After all, opening a terminal in a pod is the most direct way to introspect the pod. Assuming the pod/storage is still running, run

```
$ kubectl exec -ti storage -- bash
root@storage:/data#
```

This gets you a terminal.
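
From that prompt you can poke around with normal shell commands, for example listing the volume we mounted earlier and then leaving the session (a sketch of what this might look like):

```
root@storage:/data# ls /data
redis
root@storage:/data# exit
```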


@@ -113,6 +113,7 @@ example, run these on your desktop/laptop:
- for example: `for n in $nodes; do scp ~/.dockercfg root@$n:/root/.dockercfg; done`

Verify by creating a pod that uses a private image, e.g.:

```
$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
@@ -130,13 +131,16 @@ $ kubectl create -f /tmp/private-image-test-1.yaml
pods/private-image-test-1
$
```

If everything is working, then, after a few moments, you should see:

```
$ kubectl logs private-image-test-1
SUCCESS
```

If it failed, then you will see:

```
$ kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
@@ -182,6 +186,7 @@ Kubernetes supports specifying registry keys on a pod.

First, create a `.dockercfg`, such as by running `docker login <registry.domain>`.
Then put the resulting `.dockercfg` file into a [secret resource](secrets.md). For example:

```
$ docker login
Username: janedoe
@@ -219,6 +224,7 @@ This process only needs to be done one time (per namespace).

Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.

```
apiVersion: v1
kind: Pod
@@ -231,6 +237,7 @@ spec:
  imagePullSecrets:
  - name: myregistrykey
```

This needs to be done for each pod that is using a private registry.
However, setting of this field can be automated by setting the imagePullSecrets
in a [serviceAccount](service-accounts.md) resource.
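
A hedged sketch of what such a serviceAccount resource might look like (the account name is made up; the secret name matches the one created above). Pods that run under this service account can then have the pull secret filled in for them automatically:

```
$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: puller            # hypothetical service account name
imagePullSecrets:
- name: myregistrykey     # the secret created above
EOF
```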

@@ -39,6 +39,7 @@ Multiple kubeconfig files are allowed. At runtime they are loaded and merged to
https://github.com/GoogleCloudPlatform/kubernetes/issues/1755

## Example kubeconfig file

```
apiVersion: v1
clusters:
@@ -118,6 +119,7 @@ In order to more easily manipulate kubeconfig files, there are a series of subco
See [kubectl/kubectl_config.md](kubectl/kubectl_config.md) for help.

### Example

```
$ kubectl config set-credentials myself --username=admin --password=secret
$ kubectl config set-cluster local-server --server=http://localhost:8080
@@ -126,7 +128,9 @@ $kubectl config use-context default-context
$ kubectl config set contexts.default-context.namespace the-right-prefix
$ kubectl config view
```

produces this output:

```
clusters:
  local-server:
@@ -144,7 +148,9 @@ users:
    password: secret

```

and a kubeconfig file that looks like this:

```
apiVersion: v1
clusters:
@@ -168,6 +174,7 @@ users:
```

#### Commands for the example file

```
$ kubectl config set preferences.colors true
$ kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1

@@ -36,6 +36,7 @@ _Labels_ are key/value pairs that are attached to objects, such as pods.
Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but which do not directly imply semantics to the core system.
Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time.
Each object can have a set of key/value labels defined. Each key must be unique for a given object.

```
"labels": {
  "key1" : "value1",
@@ -85,6 +86,7 @@ An empty label selector (that is, one with zero requirements) selects every obje

_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must have all of the specified labels (both keys and values), though they may have additional labels as well.
Three kinds of operators are admitted: `=`, `==`, and `!=`. The first two represent _equality_ and are simply synonyms, while the latter represents _inequality_. For example:

```
environment = production
tier != frontend
@@ -98,11 +100,13 @@ One could filter for resources in `production` but not `frontend` using the comm
### _Set-based_ requirement

_Set-based_ label requirements allow filtering keys according to a set of values. Matching objects must have all of the specified labels (i.e. all keys and at least one of the values specified for each key). Three kinds of operators are supported: `in`, `notin` and exists (only the key identifier). For example:

```
environment in (production, qa)
tier notin (frontend, backend)
partition
```

The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`.
The second example selects all resources with key equal to `tier` and value other than `frontend` and `backend`.
The third example selects all resources including a label with key `partition`; no values are checked.
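
If your build of kubectl accepts set-based selectors on the command line (an assumption worth checking against your version), the same requirements can be passed through `-l`, quoted so the shell leaves them alone, for example:

```
$ kubectl get pods -l 'environment in (production, qa)'
$ kubectl get pods -l 'tier notin (frontend, backend), partition'
```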

@@ -34,6 +34,7 @@ Documentation for other releases can be found at
This example shows two types of pod [health checks](../production-pods.md#liveness-and-readiness-probes-aka-health-checks): HTTP checks and container execution checks.

The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.

```
livenessProbe:
  exec:
@@ -43,16 +44,20 @@ The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container executio
  initialDelaySeconds: 15
  timeoutSeconds: 1
```

Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.

Note that the container removes the `/tmp/health` file after 10 seconds,

```
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
```

so when Kubelet executes the health check 15 seconds (defined by `initialDelaySeconds`) after the container started, the check will fail.


The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.

```
livenessProbe:
  httpGet:
@@ -61,18 +66,21 @@ The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.
  initialDelaySeconds: 15
  timeoutSeconds: 1
```

The Kubelet sends an HTTP request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see the server starts to respond with an error code 500 after 10 seconds, so the check fails.

This [guide](../walkthrough/k8s201.md#health-checking) has more information on health checks.

## Get your hands dirty
To show the health check is actually working, first create the pods:

```
# kubectl create -f docs/user-guide/liveness/exec-liveness.yaml
# kubectl create -f docs/user-guide/liveness/http-liveness.yaml
```

Check the status of the pods once they are created:

```
# kubectl get pods
NAME            READY     STATUS    RESTARTS   AGE
@@ -80,7 +88,9 @@ NAME READY STATUS RESTARTS
liveness-exec   1/1       Running   0          13s
liveness-http   1/1       Running   0          13s
```

Check the status again half a minute later, and you will see the container restart count being incremented:

```
# kubectl get pods
mwielgus@mwielgusd:~/test/k2/kubernetes/examples/liveness$ kubectl get pods
@@ -89,6 +99,7 @@ NAME READY STATUS RESTARTS
liveness-exec   1/1       Running   1          36s
liveness-http   1/1       Running   1          36s
```

At the bottom of the *kubectl describe* output there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.

```

@@ -39,6 +39,7 @@ Kubernetes components, such as kubelet and apiserver, use the [glog](https://god
The logs of a running container may be fetched using the command `kubectl logs`. For example, given
this pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml), which has a container which writes out some text to standard
output every second. (You can find different pod specifications [here](logging-demo/).)

```
apiVersion: v1
kind: Pod
@@ -51,12 +52,16 @@ output every second. (You can find different pod specifications [here](logging-d
    args: [bash, -c,
            'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```

we can run the pod:

```
$ kubectl create -f ./counter-pod.yaml
pods/counter
```

and then fetch the logs:

```
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
@@ -67,8 +72,10 @@ $ kubectl logs counter
5: Tue Jun 2 21:37:36 UTC 2015
...
```

If a pod has more than one container, then you need to specify which container's log files should
be fetched, e.g.

```
$ kubectl logs kube-dns-v3-7r1l9 etcd
2015/06/23 00:43:10 etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)

@@ -87,6 +87,7 @@ spec:
```

Multiple resources can be created the same way as a single resource:

```bash
$ kubectl create -f ./nginx-app.yaml
services/my-nginx-svc
@@ -96,26 +97,32 @@ replicationcontrollers/my-nginx
The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the replication controller(s).

`kubectl create` also accepts multiple `-f` arguments:

```bash
$ kubectl create -f ./nginx-svc.yaml -f ./nginx-rc.yaml
```

And a directory can be specified rather than or in addition to individual files:

```bash
$ kubectl create -f ./nginx/
```

`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.

It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can simply deploy all of the components of your stack en masse.
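
For instance, a directory laid out roughly like this (file names are only illustrative) is what the `kubectl create -f ./nginx/` invocation above expects:

```bash
$ ls ./nginx/
nginx-rc.yaml    # replication controller for the nginx pods
nginx-svc.yaml   # the my-nginx-svc service
```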

A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:

```bash
$ kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/docs/user-guide/replication.yaml
replicationcontrollers/nginx
```

## Bulk operations in kubectl

Resource creation isn’t the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:

```bash
$ kubectl delete -f ./nginx/
replicationcontrollers/my-nginx
@@ -123,11 +130,13 @@ services/my-nginx-svc
```

In the case of just two resources, it’s also easy to specify both on the command line using the resource/name syntax:

```bash
$ kubectl delete replicationcontrollers/my-nginx services/my-nginx-svc
```

For larger numbers of resources, one can use labels to filter resources. The selector is specified using `-l`:

```bash
$ kubectl delete all -lapp=nginx
replicationcontrollers/my-nginx
@@ -135,6 +144,7 @@ services/my-nginx-svc
```

Because `kubectl` outputs resource names in the same syntax it accepts, it’s easy to chain operations using `$()` or `xargs`:

```bash
$ kubectl get $(kubectl create -f ./nginx/ | grep my-nginx)
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
@@ -142,31 +152,39 @@ my-nginx nginx nginx app=nginx 2
NAME           LABELS      SELECTOR    IP(S)          PORT(S)
my-nginx-svc   app=nginx   app=nginx   10.0.152.174   80/TCP
```

## Using labels effectively

The examples we’ve used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.

For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](../../examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:

```yaml
labels:
  app: guestbook
  tier: frontend
```

while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label:

```yaml
labels:
  app: guestbook
  tier: backend
  role: master
```

and

```yaml
labels:
  app: guestbook
  tier: backend
  role: slave
```

The labels allow us to slice and dice our resources along any dimension specified by a label:

```bash
$ kubectl create -f ./guestbook-fe.yaml -f ./redis-master.yaml -f ./redis-slave.yaml
replicationcontrollers/guestbook-fe
@@ -187,16 +205,20 @@ NAME READY STATUS RESTARTS AGE
guestbook-redis-slave-2q2yf   1/1       Running   0          3m
guestbook-redis-slave-qgazl   1/1       Running   0          3m
```

## Canary deployments

Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. For example, it is common practice to deploy a *canary* of a new application release (specified via image tag) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. For instance, a new release of the guestbook frontend might carry the following labels:

```yaml
labels:
  app: guestbook
  tier: frontend
  track: canary
```

and the primary, stable release would have a different value of the `track` label, so that the sets of pods controlled by the two replication controllers would not overlap:

```yaml
labels:
  app: guestbook
@@ -205,6 +227,7 @@ and the primary, stable release would have a different value of the `track` labe
```

The frontend service would span both sets of replicas by selecting the common subset of their labels, omitting the `track` label:

```yaml
selector:
  app: guestbook
@@ -214,6 +237,7 @@ The frontend service would span both sets of replicas by selecting the common su
## Updating labels

Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. For example:

```bash
kubectl label pods -lapp=nginx tier=fe
NAME             READY     STATUS    RESTARTS   AGE
@@ -238,6 +262,7 @@ my-nginx-v4-wfof4 1/1 Running 0 16m fe
## Scaling your application

When load on your application grows or shrinks, it’s easy to scale with `kubectl`. For instance, to increase the number of nginx replicas from 2 to 3, do:

```bash
$ kubectl scale rc my-nginx --replicas=3
scaled
@@ -247,6 +272,7 @@ my-nginx-1jgkf 1/1 Running 0 3m
my-nginx-divi2   1/1       Running   0          1h
my-nginx-o0ef1   1/1       Running   0          1h
```

## Updating your application without a service outage

At some point, you’ll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
@@ -254,6 +280,7 @@ At some point, you’ll eventually need to update your deployed application, typ
To update a service without an outage, `kubectl` supports what is called [“rolling update”](kubectl/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](../design/simple-rolling-update.md) and the [example of rolling update](update-demo/) for more information.

Let’s say you were running version 1.7.9 of nginx:

```yaml
apiVersion: v1
kind: ReplicationController
@@ -274,12 +301,14 @@ spec:
```

To update to version 1.9.1, you can use [`kubectl rolling-update --image`](../../docs/design/simple-rolling-update.md):

```bash
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
Creating my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
```

In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:

```bash
$ kubectl get pods -lapp=nginx -Ldeployment
NAME             READY     STATUS    RESTARTS   AGE   DEPLOYMENT
@@ -292,6 +321,7 @@ my-nginx-q6all 1/1 Running 0
```

`kubectl rolling-update` reports progress as it progresses:

```bash
Updating my-nginx replicas: 4, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 1
At end of loop: my-nginx replicas: 4, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 1
@@ -313,6 +343,7 @@ my-nginx
```

If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`:

```bash
$ kubectl rolling-update my-nginx --image=nginx:1.9.1 --rollback
Found existing update in progress (my-nginx-ccba8fbd8cc8160970f63f9a2696fc46), resuming.
@@ -321,9 +352,11 @@ Stopping my-nginx-02ca3e87d8685813dbe1f8c164a46f02 replicas: 1 -> 0
Update succeeded. Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
my-nginx
```

This is one example where the immutability of containers is a huge asset.

If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as:

```yaml
apiVersion: v1
kind: ReplicationController
@@ -347,7 +380,9 @@ spec:
        ports:
        - containerPort: 80
```

and roll it out:

```bash
$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml
Creating my-nginx-v4
@@ -375,6 +410,7 @@ You can also run the [update demo](update-demo/) to see a visual representation
## In-place updates of resources

Sometimes it’s necessary to make narrow, non-disruptive updates to resources you’ve created. For instance, you might want to add an [annotation](annotations.md) with a description of your object. That’s easiest to do with `kubectl patch`:

```bash
$ kubectl patch rc my-nginx-v4 -p '{"metadata": {"annotations": {"description": "my frontend running nginx"}}}'
my-nginx-v4
@@ -386,9 +422,11 @@ metadata:
    description: my frontend running nginx
...
```

The patch is specified using JSON.

For more significant changes, you can `get` the resource, edit it, and then `replace` the resource with the updated version:

```bash
$ kubectl get rc my-nginx-v4 -o yaml > /tmp/nginx.yaml
$ vi /tmp/nginx.yaml
@@ -396,10 +434,12 @@ $ kubectl replace -f /tmp/nginx.yaml
replicationcontrollers/my-nginx-v4
$ rm $TMP
```

The system ensures that you don’t clobber changes made by other users or components by confirming that the `resourceVersion` doesn’t differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don’t use your original configuration file as the source since additional fields most likely were set in the live state.
## Disruptive updates

In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a replication controller. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:

```bash
$ kubectl replace -f ./nginx-rc.yaml --force
replicationcontrollers/my-nginx-v4

@@ -75,6 +75,7 @@ Look [here](namespaces/) for an in depth example of namespaces.

### Viewing namespaces
You can list the current namespaces in a cluster using:

```sh
$> kubectl get namespaces
NAME          LABELS    STATUS
@@ -140,6 +141,7 @@ Note that the name of your namespace must be a DNS compatible label.
More information on the ```finalizers``` field can be found in the namespace [design doc](../design/namespaces.md#finalizers).

Then run:

```
kubectl create -f ./my-namespace.yaml
```
@@ -149,6 +151,7 @@ kubectl create -f ./my-namespace.yaml
To temporarily set the namespace for a request, use the ```--namespace``` flag.

For example:

```
kubectl --namespace=<insert-namespace-name-here> run nginx --image=nginx
kubectl --namespace=<insert-namespace-name-here> get pods
@@ -160,11 +163,13 @@ You can permanently save the namespace for all subsequent kubectl commands in th
context.

First get your current context:

```sh
export CONTEXT=$(kubectl config view | grep current-context | awk '{print $2}')
```

Then update the default namespace:

```sh
kubectl config set-context $CONTEXT --namespace=<insert-namespace-name-here>
```
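
To double-check that the change stuck, you can inspect the saved config; something along these lines should show the namespace on your current context (the exact output shape may differ by version):

```sh
kubectl config view | grep namespace
```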
|
||||
|
@ -51,6 +51,7 @@ for ease of development and testing. You'll create a local ```HostPath``` for t
|
||||
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the ```HostPath``` resides.
|
||||
|
||||
|
||||
|
||||
```
|
||||
|
||||
// this will be nginx's webroot
|
||||
|
@ -38,6 +38,7 @@ You can find it in the [release](https://github.com/GoogleCloudPlatform/kubernet
|
||||
or if you build from source, kubectl should be either under _output/local/bin/<os>/<arch> or _output/dockerized/bin/<os>/<arch>.
|
||||
|
||||
Next, make sure the kubectl tool is in your path, assuming you download a release:
|
||||
|
||||
```
|
||||
# OS X
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
|
||||
|
@ -54,6 +54,7 @@ You’ve seen [how to configure and deploy pods and containers](configuring-cont
|
||||
The container file system only lives as long as the container does, so when a container crashes and restarts, changes to the filesystem will be lost and the container will restart from a clean slate. To access more-persistent storage, outside the container file system, you need a [*volume*](volumes.md). This is especially important to stateful applications, such as key-value stores and databases.
|
||||
|
||||
For example, [Redis](http://redis.io/) is a key-value cache and store, which we use in the [guestbook](../../examples/guestbook/) and other examples. We can add a volume to it to store persistent data as follows:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
@ -80,6 +81,7 @@ spec:
|
||||
- mountPath: /redis-master-data
|
||||
name: data # must match the name of the volume, above
|
||||
```
|
||||
|
||||
`emptyDir` volumes live for the lifespan of the [pod](pods.md), which is longer than the lifespan of any one container, so if the container fails and is restarted, our storage will live on.
|
||||
|
||||
In addition to the local disk storage provided by `emptyDir`, Kubernetes supports many different network-attached storage solutions, including PD on GCE and EBS on EC2, which are preferred for critical data, and will handle details such as mounting and unmounting the devices on the nodes. See [the volumes doc](volumes.md) for more details.
|
||||
@ -89,6 +91,7 @@ In addition to the local disk storage provided by `emptyDir`, Kubernetes support
|
||||
Many applications need credentials, such as passwords, OAuth tokens, and TLS keys, to authenticate with other applications, databases, and services. Storing these credentials in container images or environment variables is less than ideal, since the credentials can then be copied by anyone with access to the image, pod/container specification, host file system, or host Docker daemon.
|
||||
|
||||
Kubernetes provides a mechanism, called [*secrets*](secrets.md), that facilitates delivery of sensitive credentials to applications. A `Secret` is a simple resource containing a map of data. For instance, a simple secret with a username and password might look as follows:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
@ -99,7 +102,9 @@ data:
|
||||
password: dmFsdWUtMg0K
|
||||
username: dmFsdWUtMQ0K
|
||||
```
|
||||
|
||||
As with other resources, this secret can be instantiated using `create` and can be viewed with `get`:
|
||||
|
||||
```bash
|
||||
$ kubectl create -f ./secret.yaml
|
||||
secrets/mysecret
|
||||
@ -150,6 +155,7 @@ Secrets can also be used to pass [image registry credentials](images.md#using-a-
|
||||
|
||||
First, create a `.dockercfg` file, such as running `docker login <registry.domain>`.
|
||||
Then put the resulting `.dockercfg` file into a [secret resource](secrets.md). For example:
|
||||
|
||||
```
|
||||
$ docker login
|
||||
Username: janedoe
|
||||
@ -180,6 +186,7 @@ secrets/myregistrykey
|
||||
|
||||
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
|
||||
section to a pod definition.
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
@ -198,6 +205,7 @@ spec:
|
||||
[Pods](pods.md) support running multiple containers co-located together. They can be used to host vertically integrated application stacks, but their primary motivation is to support auxiliary helper programs that assist the primary application. Typical examples are data pullers, data pushers, and proxies.
|
||||
|
||||
Such containers typically need to communicate with one another, often through the file system. This can be achieved by mounting the same volume into both containers. An example of this pattern would be a web server with a [program that polls a git repository](../../contrib/git-sync/) for new updates:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
@ -238,6 +246,7 @@ More examples can be found in our [blog article](http://blog.kubernetes.io/2015/
|
||||
Kubernetes’s scheduler will place applications only where they have adequate CPU and memory, but it can only do so if it knows how much [resources they require](compute-resources.md). The consequence of specifying too little CPU is that the containers could be starved of CPU if too many other containers were scheduled onto the same node. Similarly, containers could die unpredictably due to running out of memory if no memory were requested, which can be especially likely for large-memory applications.
|
||||
|
||||
If no resource requirements are specified, a nominal amount of resources is assumed. (This default is applied via a [LimitRange](limitrange/) for the default [Namespace](namespaces.md). It can be viewed with `kubectl describe limitrange limits`.) You may explicitly specify the amount of resources required as follows:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
@ -262,6 +271,7 @@ spec:
|
||||
# memory units are bytes
|
||||
memory: 64Mi
|
||||
```
|
||||
|
||||
The container will die due to OOM (out of memory) if it exceeds its specified limit, so specifying a value a little higher than expected generally improves reliability.
|
||||
|
||||
If you’re not sure how much resources to request, you can first launch the application without specifying resources, and use [resource usage monitoring](monitoring.md) to determine appropriate values.
|
||||
@ -271,6 +281,7 @@ If you’re not sure how much resources to request, you can first launch the app
|
||||
Many applications running for long periods of time eventually transition to broken states, and cannot recover except by restarting them. Kubernetes provides [*liveness probes*](pod-states.md#container-probes) to detect and remedy such situations.
|
||||
|
||||
A common way to probe an application is using HTTP, which can be specified as follows:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
@ -308,6 +319,7 @@ Kubernetes will send SIGTERM to applications, which can be handled in order to e
|
||||
Kubernetes supports the (optional) specification of a [*pre-stop lifecycle hook*](container-environment.md#container-hooks), which will execute prior to sending SIGTERM.
|
||||
|
||||
The specification of a pre-stop hook is similar to that of probes, but without the timing-related parameters. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
@ -337,6 +349,7 @@ spec:
|
||||
In order to achieve a reasonably high level of availability, especially for actively developed applications, it’s important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using [`kubectl`](kubectl/kubectl.md) or the [UI](ui.md), in addition to general [log collection](logging.md). It is possible to specify a `terminationMessagePath` where a container will write its “death rattle”, such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.
|
||||
|
||||
Here is a toy example:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
@ -351,6 +364,7 @@ spec:
|
||||
```
|
||||
|
||||
The message is recorded along with the other state of the last (i.e., most recent) termination:
|
||||
|
||||
```bash
|
||||
$ kubectl create -f ./pod.yaml
|
||||
pods/pod-w-message
|
||||
|
@ -58,6 +58,7 @@ my-nginx my-nginx nginx run=my-nginx 2
|
||||
```
|
||||
|
||||
You can see that they are running by:
|
||||
|
||||
```bash
|
||||
$ kubectl get po
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
|
@ -90,6 +90,7 @@ information on how Service Accounts work.
|
||||
### Creating a Secret Manually
|
||||
|
||||
This is an example of a simple secret, in yaml format:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
@ -116,6 +117,7 @@ Once the secret is created, you can:
|
||||
### Manually specifying a Secret to be Mounted on a Pod
|
||||
|
||||
This is an example of a pod that mounts a secret in a volume:
|
||||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
@ -424,6 +426,7 @@ The pods:
|
||||
```
|
||||
|
||||
Both containers will have the following files present on their filesystems:
|
||||
|
||||
```
|
||||
/etc/secret-volume/username
|
||||
/etc/secret-volume/password
|
||||
@ -435,6 +438,7 @@ creating pods with different capabilities from a common pod config template.
|
||||
You could further simplify the base pod specification by using two service accounts:
|
||||
one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
|
||||
`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Pod",
|
||||
|
@ -64,6 +64,7 @@ You can access the API using a proxy or with a client library, as described in
|
||||
|
||||
Every namespace has a default service account resource called "default".
|
||||
You can list this and any other serviceAccount resources in the namespace with this command:
|
||||
|
||||
```
|
||||
$ kubectl get serviceAccounts
NAME        SECRETS
|
||||
@ -71,6 +72,7 @@ default 1
|
||||
```
|
||||
|
||||
You can create additional serviceAccounts like this:
|
||||
|
||||
```
|
||||
$ cat > /tmp/serviceaccount.yaml <<EOF
|
||||
apiVersion: v1
|
||||
@ -83,6 +85,7 @@ serviceacccounts/build-robot
|
||||
```
|
||||
|
||||
If you get a complete dump of the service account object, like this:
|
||||
|
||||
```
|
||||
$ kubectl get serviceaccounts/build-robot -o yaml
|
||||
apiVersion: v1
|
||||
@ -97,6 +100,7 @@ metadata:
|
||||
secrets:
|
||||
- name: build-robot-token-bvbk5
|
||||
```
|
||||
|
||||
then you will see that a token has automatically been created and is referenced by the service account.
|
||||
|
||||
In the future, you will be able to configure different access policies for each service account.
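For example, a pod can run as this service account by naming it in its spec. A minimal sketch, assuming the v1 `serviceAccount` pod field and a placeholder image:

```
apiVersion: v1
kind: Pod
metadata:
  name: build-robot-pod        # hypothetical pod name
spec:
  serviceAccount: build-robot  # the account created above; the field name may differ in later API versions
  containers:
  - name: main
    image: nginx               # placeholder image
```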
@ -109,6 +113,7 @@ The service account has to exist at the time the pod is created, or it will be r
|
||||
You cannot update the service account of an already created pod.
|
||||
|
||||
You can clean up the service account from this example like this:
|
||||
|
||||
```
|
||||
$ kubectl delete serviceaccount/build-robot
|
||||
```
|
||||
|
@ -38,10 +38,13 @@ This config bundle lives in `$HOME/.kube/config`, and is generated
|
||||
by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.
|
||||
|
||||
**1. Create a cluster**
|
||||
|
||||
```bash
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
|
||||
**2. Copy `kubeconfig` to new host**
|
||||
|
||||
```bash
|
||||
scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
|
||||
```
|
||||
@ -49,14 +52,19 @@ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
|
||||
**3. On new host, make copied `config` available to `kubectl`**
|
||||
|
||||
* Option A: copy to default location
|
||||
|
||||
```bash
|
||||
mv /path/to/.kube/config $HOME/.kube/config
|
||||
```
|
||||
|
||||
* Option B: copy to working directory (from which kubectl is run)
|
||||
|
||||
```bash
|
||||
mv /path/to/.kube/config $PWD
|
||||
```
|
||||
|
||||
* Option C: manually pass `kubeconfig` location to `.kubectl`
|
||||
|
||||
```bash
|
||||
# via environment variable
|
||||
export KUBECONFIG=/path/to/.kube/config
|
||||
@ -95,15 +103,18 @@ kubectl config set-credentials $USER_NICK
|
||||
# create context entry
|
||||
kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICKNAME --user=$USER_NICK
|
||||
```
|
||||
|
||||
Notes:
|
||||
* The `--embed-certs` flag is needed to generate a standalone `kubeconfig` that will work as-is on another host.
* `--kubeconfig` is both the preferred file to load config from and the file to save config to. In the above commands the `--kubeconfig` file could be omitted if you first run
|
||||
|
||||
```bash
|
||||
export KUBECONFIG=/path/to/standalone/.kube/config
|
||||
```
|
||||
|
||||
* The ca_file, key_file, and cert_file referenced above are generated on the kube master at cluster turnup. They can be found on the master under `/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master (see the sketch after this list for one way to copy them off).
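For instance, copying the CA certificate off the master could look like the following sketch; the exact file name under `/srv/kubernetes` and the master hostname are assumptions about your environment:

```bash
# hypothetical host and file names, adjust for your cluster
scp root@<kube-master>:/srv/kubernetes/ca.crt /path/to/ca_file
```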
@ -134,6 +145,7 @@ scp host2:/path/to/home2/.kube/config path/to/other/.kube/config
|
||||
|
||||
export KUBECONFIG=path/to/other/.kube/config
|
||||
```
|
||||
|
||||
Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](kubeconfig-file.md).
|
||||
|
||||
|
||||
|
@ -47,16 +47,19 @@ kubectl run my-nginx --image=nginx --replicas=2 --port=80
|
||||
```
|
||||
|
||||
Once the pods are created, you can list them to see what is up and running:
|
||||
|
||||
```bash
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
You can also see the replication controller that was created:
|
||||
|
||||
```bash
|
||||
kubectl get rc
|
||||
```
|
||||
|
||||
To stop the two replicated containers, stop the replication controller:
|
||||
|
||||
```bash
|
||||
kubectl stop rc my-nginx
|
||||
```
|
||||
|
@ -108,6 +108,7 @@ spec:
|
||||
```
|
||||
|
||||
To delete the replication controller (and the pods it created):
|
||||
|
||||
```bash
|
||||
kubectl delete rc nginx
|
||||
```
|
||||
|
@ -37,10 +37,12 @@ Kubernetes has a web-based user interface that displays the current cluster stat
|
||||
By default, the Kubernetes UI is deployed as a cluster addon. To access it, visit `https://<kubernetes-master>/ui`, which redirects to `https://<kubernetes-master>/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/`.
|
||||
|
||||
If you find that you're not able to access the UI, it may be because the kube-ui service has not been started on your cluster. In that case, you can start it manually with:
|
||||
|
||||
```sh
|
||||
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
|
||||
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
|
||||
```
|
||||
|
||||
Normally, this should be taken care of automatically by the [`kube-addons.sh`](../../cluster/saltbase/salt/kube-addons/kube-addons.sh) script that runs on the master.
|
||||
|
||||
## Using the UI
|
||||
|
@ -98,6 +98,7 @@ We will now update the docker image to serve a different image by doing a rollin
|
||||
```bash
|
||||
$ kubectl rolling-update update-demo-nautilus --update-period=10s -f docs/user-guide/update-demo/kitten-rc.yaml
|
||||
```
|
||||
|
||||
The rolling-update command in kubectl will do 2 things:
|
||||
|
||||
1. Create a new [replication controller](../../../docs/user-guide/replication-controller.md) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
|
||||
|
@ -43,6 +43,7 @@ the [api document](../api.md).*
|
||||
When you create a resource such as pod, and then retrieve the created
|
||||
resource, a number of the fields of the resource are added.
|
||||
You can see this at work in the following example:
|
||||
|
||||
```
|
||||
$ cat > /tmp/original.yaml <<EOF
|
||||
apiVersion: v1
|
||||
@ -64,6 +65,7 @@ $ wc -l /tmp/original.yaml /tmp/current.yaml
|
||||
9 /tmp/original.yaml
|
||||
60 total
|
||||
```
|
||||
|
||||
The resource we posted had only 9 lines, but the one we got back had 51 lines.
|
||||
If you `diff -u /tmp/original.yaml /tmp/current.yaml`, you can see the fields added to the pod.
|
||||
The system adds fields in several ways:
|
||||
|
@ -36,14 +36,19 @@ volume.
|
||||
Create a volume in the same region as your node, add your volume information to the pod description file `aws-ebs-web.yaml`, and then create the pod:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/aws_ebs/aws-ebs-web.yaml
|
||||
```
|
||||
|
||||
Add some data to the volume if it is empty:
|
||||
|
||||
```shell
|
||||
$ echo "Hello World" >& /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/{Region}/{Volume ID}/index.html
|
||||
```
|
||||
|
||||
You should now be able to query your web server:
|
||||
|
||||
```shell
|
||||
$ curl <Pod IP address>
|
||||
Hello World
|
||||
|
@ -96,6 +96,7 @@ In theory could create a single Cassandra pod right now but since `KubernetesSee
|
||||
In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of Pods in a Cassandra cluster can be a Kubernetes Service, or even just the single Pod we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set of Pods. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. This is how we initially use Services with Cassandra.
|
||||
|
||||
Here is the service description:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
@ -113,6 +114,7 @@ spec:
|
||||
The important thing to note here is the ```selector```. It is a query over labels, that identifies the set of _Pods_ contained by the _Service_. In this case the selector is ```name=cassandra```. If you look back at the Pod specification above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
|
||||
|
||||
Create this service as follows:
|
||||
|
||||
```sh
|
||||
$ kubectl create -f examples/cassandra/cassandra-service.yaml
|
||||
```
|
||||
@ -224,6 +226,7 @@ $ kubectl create -f examples/cassandra/cassandra-controller.yaml
|
||||
Now this is actually not that interesting, since we haven't actually done anything new. Now it will get interesting.
|
||||
|
||||
Let's scale our cluster to 2:
|
||||
|
||||
```sh
|
||||
$ kubectl scale rc cassandra --replicas=2
|
||||
```
|
||||
@ -253,11 +256,13 @@ UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e
|
||||
```
|
||||
|
||||
Now let's scale our cluster to 4 nodes:
|
||||
|
||||
```sh
|
||||
$ kubectl scale rc cassandra --replicas=4
|
||||
```
|
||||
|
||||
In a few moments, you can examine the status again:
|
||||
|
||||
```sh
|
||||
$ kubectl exec -ti cassandra -- nodetool status
|
||||
Datacenter: datacenter1
|
||||
|
@ -228,6 +228,7 @@ On GCE this can be done with:
|
||||
```
|
||||
$ gcloud compute firewall-rules create --allow=tcp:5555 --target-tags=kubernetes-minion kubernetes-minion-5555
|
||||
```
|
||||
|
||||
Please remember to delete the rule after you are done with the example (on GCE: `$ gcloud compute firewall-rules delete kubernetes-minion-5555`)
|
||||
|
||||
To bring up the pods, run this command `$ kubectl create -f examples/celery-rabbitmq/flower-controller.yaml`. This controller is defined as follows:
|
||||
|
@ -47,6 +47,7 @@ with the basic authentication username and password.
|
||||
|
||||
Here is an example replication controller specification that creates 4 instances of Elasticsearch which is in the file
|
||||
[music-rc.yaml](music-rc.yaml).
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
@ -88,6 +89,7 @@ spec:
|
||||
secret:
|
||||
secretName: apiserver-secret
|
||||
```
|
||||
|
||||
The `CLUSTER_NAME` variable gives a name to the cluster and allows multiple separate clusters to
|
||||
exist in the same namespace.
|
||||
The `SELECTOR` variable should be set to a label query that identifies the Elasticsearch
|
||||
@ -99,6 +101,7 @@ for the replication controller (in this case `mytunes`).
|
||||
|
||||
Before creating pods with the replication controller a secret containing the bearer authentication token
|
||||
should be set up. A template is provided in the file [apiserver-secret.yaml](apiserver-secret.yaml):
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
@ -109,8 +112,10 @@ data:
|
||||
token: "TOKEN"
|
||||
|
||||
```
|
||||
|
||||
Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the base64 encoded version of the bearer token reported by `kubectl config view`, e.g.
|
||||
|
||||
```
|
||||
$ kubectl config view
|
||||
...
|
||||
@ -122,7 +127,9 @@ $ echo yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 | base64
|
||||
eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK=
|
||||
|
||||
```
|
||||
|
||||
resulting in the file:
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
@ -133,20 +140,26 @@ data:
|
||||
token: "eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK="
|
||||
|
||||
```
|
||||
|
||||
which can be used to create the secret in your namespace:
|
||||
|
||||
```
|
||||
kubectl create -f examples/elasticsearch/apiserver-secret.yaml --namespace=mytunes
|
||||
secrets/apiserver-secret
|
||||
|
||||
```
|
||||
|
||||
Now you are ready to create the replication controller which will then create the pods:
|
||||
|
||||
```
|
||||
$ kubectl create -f examples/elasticsearch/music-rc.yaml --namespace=mytunes
|
||||
replicationcontrollers/music-db
|
||||
|
||||
```
|
||||
|
||||
It's also useful to have a [service](../../docs/user-guide/services.md) with a load balancer for accessing the Elasticsearch cluster, which can be found in the file [music-service.yaml](music-service.yaml).
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
@ -164,13 +177,17 @@ spec:
|
||||
targetPort: es
|
||||
type: LoadBalancer
|
||||
```
|
||||
|
||||
Let's create the service with an external load balancer:
|
||||
|
||||
```
|
||||
$ kubectl create -f examples/elasticsearch/music-service.yaml --namespace=mytunes
|
||||
services/music-server
|
||||
|
||||
```
|
||||
|
||||
Let's see what we've got:
|
||||
|
||||
```
|
||||
$ kubectl get pods,rc,services,secrets --namespace=mytunes
|
||||
|
||||
@ -187,7 +204,9 @@ music-server name=music-db name=music-db 10.0.45.177 9200/TCP
|
||||
NAME TYPE DATA
|
||||
apiserver-secret Opaque 1
|
||||
```
|
||||
|
||||
This shows 4 instances of Elasticsearch running. After making sure that port 9200 is accessible for this cluster (e.g. using a firewall rule for Google Compute Engine) we can make queries via the service which will be fielded by the matching Elasticsearch pods.
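On GCE, opening that port might look like the firewall rule below; the rule name `mytunes-9200` and the `kubernetes-minion` target tag are assumptions, so adjust them for your cluster:

```
$ gcloud compute firewall-rules create mytunes-9200 --allow=tcp:9200 --target-tags=kubernetes-minion
```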
```
|
||||
$ curl 104.197.12.157:9200
|
||||
{
|
||||
@ -218,7 +237,9 @@ $ curl 104.197.12.157:9200
|
||||
"tagline" : "You Know, for Search"
|
||||
}
|
||||
```
|
||||
|
||||
We can query the nodes to confirm that an Elasticsearch cluster has been formed.
|
||||
|
||||
```
|
||||
$ curl 104.197.12.157:9200/_nodes?pretty=true
|
||||
{
|
||||
@ -261,7 +282,9 @@ $ curl 104.197.12.157:9200/_nodes?pretty=true
|
||||
"hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ]
|
||||
...
|
||||
```
|
||||
|
||||
Let's ramp up the number of Elasticsearch nodes from 4 to 10:
|
||||
|
||||
```
|
||||
$ kubectl scale --replicas=10 replicationcontrollers music-db --namespace=mytunes
|
||||
scaled
|
||||
@ -279,7 +302,9 @@ music-db-x7j2w 1/1 Running 0 1m
|
||||
music-db-zjqyv 1/1 Running 0 1m
|
||||
|
||||
```
|
||||
|
||||
Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster:
|
||||
|
||||
```
|
||||
$ curl 104.197.12.157:9200/_nodes?pretty=true | grep name
|
||||
"cluster_name" : "mytunes-db",
|
||||
|
@ -44,6 +44,7 @@ Currently, you can look at:
|
||||
`pod.json` is supplied as an example. You can control the port it serves on with the -port flag.
|
||||
|
||||
Example from command line (the DNS lookup looks better from a web browser):
|
||||
|
||||
```
|
||||
$ kubectl create -f examples/explorer/pod.json
|
||||
$ kubectl proxy &
|
||||
|
@ -56,14 +56,17 @@ Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),
|
||||
]
|
||||
|
||||
```
|
||||
|
||||
The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.
|
||||
|
||||
Create the endpoints,
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/glusterfs/glusterfs-endpoints.json
|
||||
```
|
||||
|
||||
You can verify that the endpoints are successfully created by running
|
||||
|
||||
```shell
|
||||
$ kubectl get endpoints
|
||||
NAME ENDPOINTS
|
||||
@ -92,9 +95,11 @@ The parameters are explained as the followings.
|
||||
- **readOnly** is the boolean that sets the mountpoint readOnly or readWrite.
|
||||
|
||||
Create a pod that has a container using Glusterfs volume,
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/glusterfs/glusterfs-pod.json
|
||||
```
|
||||
|
||||
You can verify that the pod is running:
|
||||
|
||||
```shell
|
||||
@ -107,6 +112,7 @@ $ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'
|
||||
```
|
||||
|
||||
You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted,
|
||||
|
||||
```shell
|
||||
$ mount | grep kube_vol
|
||||
10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
|
||||
|
@ -58,30 +58,36 @@ This example assumes that you have a working cluster. See the [Getting Started G
|
||||
Use the `examples/guestbook-go/redis-master-controller.json` file to create a [replication controller](../../docs/user-guide/replication-controller.md) and Redis master [pod](../../docs/user-guide/pods.md). The pod runs a Redis key-value server in a container. Using a replication controller is the preferred way to launch long-running pods, even for 1 replica, so that the pod benefits from the self-healing mechanism in Kubernetes (keeps the pods alive).
|
||||
|
||||
1. Use the [redis-master-controller.json](redis-master-controller.json) file to create the Redis master replication controller in your Kubernetes cluster by running the `kubectl create -f` *`filename`* command:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/guestbook-go/redis-master-controller.json
|
||||
replicationcontrollers/redis-master
|
||||
```
|
||||
|
||||
2. To verify that the redis-master-controller is up, list all the replication controllers in the cluster with the `kubectl get rc` command:
|
||||
|
||||
```shell
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
redis-master redis-master gurpartap/redis app=redis,role=master 1
|
||||
...
|
||||
```
|
||||
|
||||
Result: The replication controller then creates the single Redis master pod.
|
||||
|
||||
3. To verify that the redis-master pod is running, list all the pods in cluster with the `kubectl get pods` command:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis-master-xx4uv 1/1 Running 0 1m
|
||||
...
|
||||
```
|
||||
|
||||
Result: You'll see a single Redis master pod and the machine where the pod is running after the pod gets placed (may take up to thirty seconds).
|
||||
|
||||
4. To verify what containers are running in the redis-master pod, you can SSH to that machine with `gcloud compute ssh --zone` *`zone_name`* *`host_name`* and then run `docker ps`:
|
||||
|
||||
```shell
|
||||
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-bz1p
|
||||
|
||||
@ -89,6 +95,7 @@ Use the `examples/guestbook-go/redis-master-controller.json` file to create a [r
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS
|
||||
d5c458dabe50 gurpartap/redis:latest "/usr/local/bin/redi 5 minutes ago Up 5 minutes
|
||||
```
|
||||
|
||||
Note: The initial `docker pull` can take a few minutes, depending on network conditions.
|
||||
|
||||
### Step Two: Create the Redis master service <a id="step-two"></a>
|
||||
@ -97,18 +104,21 @@ A Kubernetes '[service](../../docs/user-guide/services.md)' is a named load bala
|
||||
Services find the containers to load balance based on pod labels. The pod that you created in Step One has the label `app=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service.
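For reference, the selector stanza of such a v1 service looks roughly like this sketch (the actual `redis-master-service.json` may differ in its other fields):

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "redis-master" },
  "spec": {
    "selector": { "app": "redis", "role": "master" },
    "ports": [{ "port": 6379, "targetPort": 6379 }]
  }
}
```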
1. Use the [redis-master-service.json](redis-master-service.json) file to create the service in your Kubernetes cluster by running the `kubectl create -f` *`filename`* command:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/guestbook-go/redis-master-service.json
|
||||
services/redis-master
|
||||
```
|
||||
|
||||
2. To verify that the redis-master service is up, list all the services in the cluster with the `kubectl get services` command:
|
||||
|
||||
```shell
|
||||
$ kubectl get services
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
redis-master app=redis,role=master app=redis,role=master 10.0.136.3 6379/TCP
|
||||
...
|
||||
```
|
||||
|
||||
Result: All new pods will see the `redis-master` service running on the host (`$REDIS_MASTER_SERVICE_HOST` environment variable) at port 6379, or running on `redis-master:6379`. After the service is created, the service proxy on each node is configured to set up a proxy on the specified port (in our example, that's port 6379).
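For instance, once a pod has been created after the service, you could spot-check those variables from inside it; the pod name here is a placeholder:

```shell
$ kubectl exec <pod-name> -- env | grep REDIS_MASTER
```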
@ -116,12 +126,14 @@ Services find the containers to load balance based on pod labels. The pod that y
|
||||
The Redis master we created earlier is a single pod (REPLICAS = 1), while the Redis read slaves we are creating here are 'replicated' pods. In Kubernetes, a replication controller is responsible for managing the multiple instances of a replicated pod.
|
||||
|
||||
1. Use the file [redis-slave-controller.json](redis-slave-controller.json) to create the replication controller by running the `kubectl create -f` *`filename`* command:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/guestbook-go/redis-slave-controller.json
|
||||
replicationcontrollers/redis-slave
|
||||
```
|
||||
|
||||
2. To verify that the redis-slave replication controller is running, run the `kubectl get rc` command:
|
||||
|
||||
```shell
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
@ -129,15 +141,18 @@ The Redis master we created earlier is a single pod (REPLICAS = 1), while the Re
|
||||
redis-slave redis-slave gurpartap/redis app=redis,role=slave 2
|
||||
...
|
||||
```
|
||||
|
||||
Result: The replication controller creates and configures the Redis slave pods through the redis-master service (name:port pair, in our example that's `redis-master:6379`).
|
||||
|
||||
Example:
|
||||
The Redis slaves get started by the replication controller with the following command:
|
||||
|
||||
```shell
|
||||
redis-server --slaveof redis-master 6379
|
||||
```
|
||||
|
||||
2. To verify that the Redis master and slave pods are running, run the `kubectl get pods` command:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
@ -146,6 +161,7 @@ The Redis master we created earlier is a single pod (REPLICAS = 1), while the Re
|
||||
redis-slave-iai40 1/1 Running 0 1m
|
||||
...
|
||||
```
|
||||
|
||||
Result: You see the single Redis master and two Redis slave pods.
|
||||
|
||||
### Step Four: Create the Redis slave service <a id="step-four"></a>
|
||||
@ -153,12 +169,14 @@ The Redis master we created earlier is a single pod (REPLICAS = 1), while the Re
|
||||
Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the Redis slave service provides transparent load balancing to clients.
|
||||
|
||||
1. Use the [redis-slave-service.json](redis-slave-service.json) file to create the Redis slave service by running the `kubectl create -f` *`filename`* command:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/guestbook-go/redis-slave-service.json
|
||||
services/redis-slave
|
||||
```
|
||||
|
||||
2. To verify that the redis-slave service is up, list all the services in the cluster with the `kubectl get services` command:
|
||||
|
||||
```shell
|
||||
$ kubectl get services
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
@ -166,6 +184,7 @@ Just like the master, we want to have a service to proxy connections to the read
|
||||
redis-slave app=redis,role=slave app=redis,role=slave 10.0.21.92 6379/TCP
|
||||
...
|
||||
```
|
||||
|
||||
Result: The service is created with labels `app=redis` and `role=slave` to identify that the pods are running the Redis slaves.
|
||||
|
||||
Tip: It is helpful to set labels on your services themselves--as we've done here--to make it easy to locate them later.
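For example, a label selector query can pull up just the Redis services (the `-l` flag filters by label):

```shell
$ kubectl get services -l app=redis
```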
@ -175,12 +194,14 @@ Tip: It is helpful to set labels on your services themselves--as we've done here
|
||||
This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni) based) server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. The pods we are creating expose a simple JSON interface and serves a jQuery-Ajax based UI. Like the Redis read slaves, these pods are also managed by a replication controller.
|
||||
|
||||
1. Use the [guestbook-controller.json](guestbook-controller.json) file to create the guestbook replication controller by running the `kubectl create -f` *`filename`* command:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/guestbook-go/guestbook-controller.json
|
||||
replicationcontrollers/guestbook
|
||||
```
|
||||
|
||||
2. To verify that the guestbook replication controller is running, run the `kubectl get rc` command:
|
||||
|
||||
```
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
@ -191,6 +212,7 @@ This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni
|
||||
```
|
||||
|
||||
3. To verify that the guestbook pods are running (it might take up to thirty seconds to create the pods), list all the pods in cluster with the `kubectl get pods` command:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
@ -202,6 +224,7 @@ This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni
|
||||
redis-slave-iai40 1/1 Running 0 6m
|
||||
...
|
||||
```
|
||||
|
||||
Result: You see a single Redis master, two Redis slaves, and three guestbook pods.
|
||||
|
||||
### Step Six: Create the guestbook service <a id="step-six"></a>
|
||||
@ -209,12 +232,14 @@ This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni
|
||||
Just like the others, we create a service to group the guestbook pods but this time, to make the guestbook front-end externally visible, we specify `"type": "LoadBalancer"`.
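Conceptually, the externally visible part is just the `type` field; here is a sketch of what such a service might contain (field names follow the v1 API, the actual `guestbook-service.json` may differ, and the port is an assumption):

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "guestbook" },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "guestbook" },
    "ports": [{ "port": 3000 }]
  }
}
```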
1. Use the [guestbook-service.json](guestbook-service.json) file to create the guestbook service by running the `kubectl create -f` *`filename`* command:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/guestbook-go/guestbook-service.json
|
||||
```
|
||||
|
||||
|
||||
2. To verify that the guestbook service is up, list all the services in the cluster with the `kubectl get services` command:
|
||||
|
||||
```
|
||||
$ kubectl get services
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
@ -224,6 +249,7 @@ Just like the others, we create a service to group the guestbook pods but this t
|
||||
redis-slave app=redis,role=slave app=redis,role=slave 10.0.21.92 6379/TCP
|
||||
...
|
||||
```
|
||||
|
||||
Result: The service is created with label `app=guestbook`.
|
||||
|
||||
### Step Seven: View the guestbook <a id="step-seven"></a>
|
||||
@ -253,6 +279,7 @@ You can now play with the guestbook that you just created by opening it in a bro
|
||||
After you're done playing with the guestbook, you can clean up by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kubernetes replication controllers and services.
|
||||
|
||||
Delete all the resources by running the following `kubectl delete -f` *`filename`* command:
|
||||
|
||||
```shell
|
||||
$ kubectl delete -f examples/guestbook-go
|
||||
guestbook-controller
|
||||
|
@ -130,6 +130,7 @@ NAME READY STATUS RESTARTS AG
|
||||
...
|
||||
redis-master-dz33o 1/1 Running 0 2h
|
||||
```
|
||||
|
||||
(Note that an initial `docker pull` to grab a container image may take a few minutes, depending on network conditions. A pod will be reported as `Pending` while its image is being downloaded.)
|
||||
|
||||
#### Optional Interlude
|
||||
@ -221,6 +222,7 @@ Create the service by running:
|
||||
$ kubectl create -f examples/guestbook/redis-master-service.yaml
|
||||
services/redis-master
|
||||
```
|
||||
|
||||
Then check the list of services, which should include the redis-master:
|
||||
|
||||
```shell
|
||||
|
@ -61,6 +61,7 @@ In this case, we shall not run a single Hazelcast pod, because the discovery mec
|
||||
In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.
|
||||
|
||||
Here is the service description:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
@ -78,6 +79,7 @@ spec:
|
||||
The important thing to note here is the `selector`. It is a query over labels, that identifies the set of _Pods_ contained by the _Service_. In this case the selector is `name: hazelcast`. If you look at the Replication Controller specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
|
||||
|
||||
Create this service as follows:
|
||||
|
||||
```sh
|
||||
$ kubectl create -f examples/hazelcast/hazelcast-service.yaml
|
||||
```
|
||||
@ -138,6 +140,7 @@ $ kubectl create -f examples/hazelcast/hazelcast-controller.yaml
|
||||
```
|
||||
|
||||
After the controller successfully provisions the pod, you can query the service endpoints:
|
||||
|
||||
```sh
|
||||
$ kubectl get endpoints hazelcast -o json
|
||||
{
|
||||
@ -184,6 +187,7 @@ You can see that the _Service_ has found the pod created by the replication cont
|
||||
Now it gets even more interesting.
|
||||
|
||||
Let's scale our cluster to 2 pods:
|
||||
|
||||
```sh
|
||||
$ kubectl scale rc hazelcast --replicas=2
|
||||
```
|
||||
@ -229,8 +233,11 @@ Members [2] {
|
||||
2015-07-10 13:26:47.723 INFO 5 --- [ main] com.github.pires.hazelcast.Application : Started Application in 13.792 seconds (JVM running for 14.542)
```
|
||||
|
||||
Now let's scale our cluster to 4 nodes:
|
||||
|
||||
```sh
|
||||
|
||||
$ kubectl scale rc hazelcast --replicas=4
|
||||
|
||||
```
|
||||
|
||||
Examine the status again by checking the logs and you should see the 4 members connected.
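A hedged way to run that check, using a placeholder pod name taken from `kubectl get pods`:

```sh
$ kubectl logs <hazelcast-pod-name>
```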
@ -239,6 +246,7 @@ Examine the status again by checking the logs and you should see the 4 members c
|
||||
For those of you who are impatient, here is the summary of the commands we ran in this tutorial.
|
||||
|
||||
```sh
|
||||
|
||||
# create a service to track all hazelcast nodes
|
||||
kubectl create -f examples/hazelcast/hazelcast-service.yaml
|
||||
|
||||
@ -250,6 +258,7 @@ kubectl scale rc hazelcast --replicas=2
|
||||
|
||||
# scale up to 4 nodes
|
||||
kubectl scale rc hazelcast --replicas=4
|
||||
|
||||
```
|
||||
|
||||
### Hazelcast Discovery Source
|
||||
|
@ -85,6 +85,7 @@ On the Kubernetes node, I got these in mount output
|
||||
```
|
||||
|
||||
If you ssh to that machine, you can run `docker ps` to see the actual pod.
|
||||
|
||||
```console
|
||||
# docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
@ -93,6 +94,7 @@ cc051196e7af kubernetes/pause:latest "/pause
|
||||
```
|
||||
|
||||
Running *docker inspect*, I found the containers mounted the host directory into their */mnt/iscsipd* directory.
|
||||
|
||||
```console
|
||||
# docker inspect --format '{{index .Volumes "/mnt/iscsipd"}}' cc051196e7af
|
||||
/var/lib/kubelet/pods/75e0af2b-f8e8-11e4-9ae7-42010af01964/volumes/kubernetes.io~iscsi/iscsipd-rw
|
||||
|
@ -62,6 +62,7 @@ gcloud config set project <project-name>
|
||||
```
|
||||
|
||||
Next, start up a Kubernetes cluster:
|
||||
|
||||
```shell
|
||||
wget -q -O - https://get.k8s.io | bash
|
||||
```
|
||||
@ -81,6 +82,7 @@ files to your existing Meteor project `Dockerfile` and
|
||||
|
||||
`Dockerfile` should contain the below lines. You should replace the
|
||||
`ROOT_URL` with the actual hostname of your app.
|
||||
|
||||
```
|
||||
FROM chees/meteor-kubernetes
|
||||
ENV ROOT_URL http://myawesomeapp.com
|
||||
@ -89,6 +91,7 @@ ENV ROOT_URL http://myawesomeapp.com
|
||||
The `.dockerignore` file should contain the below lines. This tells Docker to ignore the files in those directories when it's building your container.
|
||||
|
||||
```
|
||||
.meteor/local
|
||||
packages/*/.build*
|
||||
@ -103,6 +106,7 @@ free to use this app for this example.
|
||||
|
||||
Now you can build your container by running this in
|
||||
your Meteor project directory:
|
||||
|
||||
```
|
||||
docker build -t my-meteor .
|
||||
```
|
||||
@ -113,6 +117,7 @@ Pushing to a registry
|
||||
For the [Docker Hub](https://hub.docker.com/), tag your app image with
|
||||
your username and push to the Hub with the below commands. Replace
|
||||
`<username>` with your Hub username.
|
||||
|
||||
```
|
||||
docker tag my-meteor <username>/my-meteor
|
||||
docker push <username>/my-meteor
|
||||
@ -122,6 +127,7 @@ For [Google Container
|
||||
Registry](https://cloud.google.com/tools/container-registry/), tag
|
||||
your app image with your project ID, and push to GCR. Replace
|
||||
`<project>` with your project ID.
|
||||
|
||||
```
|
||||
docker tag my-meteor gcr.io/<project>/my-meteor
|
||||
gcloud docker push gcr.io/<project>/my-meteor
|
||||
@ -139,17 +145,20 @@ We will need to provide MongoDB a persistent Kuberetes volume to
|
||||
store its data. See the [volumes documentation](../../docs/user-guide/volumes.md) for
|
||||
options. We're going to use Google Compute Engine persistent
|
||||
disks. Create the MongoDB disk by running:
|
||||
|
||||
```
|
||||
gcloud compute disks create --size=200GB mongo-disk
|
||||
```
|
||||
|
||||
Now you can start Mongo using that disk:
|
||||
|
||||
```
|
||||
kubectl create -f examples/meteor/mongo-pod.json
|
||||
kubectl create -f examples/meteor/mongo-service.json
|
||||
```
|
||||
|
||||
Wait until Mongo is started completely and then start up your Meteor app:
|
||||
|
||||
```
|
||||
kubectl create -f examples/meteor/meteor-controller.json
|
||||
kubectl create -f examples/meteor/meteor-service.json
|
||||
@ -159,12 +168,14 @@ Note that [`meteor-service.json`](meteor-service.json) creates a load balancer,
|
||||
your app should be available through the IP of that load balancer once
|
||||
the Meteor pods are started. You can find the IP of your load balancer
|
||||
by running:
|
||||
|
||||
```
|
||||
kubectl get services/meteor --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}"
|
||||
```
|
||||
|
||||
You will have to open up port 80 if it's not open yet in your
|
||||
environment. On Google Compute Engine, you may run the below command.
|
||||
|
||||
```
|
||||
gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
|
||||
```
|
||||
@ -179,6 +190,7 @@ to get an insight of what happens during the `docker build` step. The
|
||||
image is based on the Node.js official image. It then installs Meteor and copies in your app's code. The last line specifies what happens when your app container is run.
|
||||
|
||||
```
|
||||
ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js
|
||||
```
|
||||
@ -201,6 +213,7 @@ more information.
|
||||
As mentioned above, the mongo container uses a volume which is mapped
|
||||
to a persistent disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container
|
||||
section specifies the volume:
|
||||
|
||||
```
|
||||
"volumeMounts": [
|
||||
{
|
||||
@ -211,6 +224,7 @@ section specifies the volume:
|
||||
|
||||
The name `mongo-disk` refers to the volume specified outside the
|
||||
container section:
|
||||
|
||||
```
|
||||
"volumes": [
|
||||
{
|
||||
|
@ -58,6 +58,7 @@ gcloud config set project <project-name>
|
||||
```
|
||||
|
||||
Next, start up a Kubernetes cluster:
|
||||
|
||||
```shell
|
||||
wget -q -O - https://get.k8s.io | bash
|
||||
```
|
||||
@ -280,11 +281,13 @@ $ kubectl get services
|
||||
```
|
||||
|
||||
Then, find the external IP for your WordPress service by running:
|
||||
|
||||
```
|
||||
$ kubectl get services/wpfrontend --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}"
|
||||
```
|
||||
|
||||
or by listing the forwarding rules for your project:
|
||||
|
||||
```shell
|
||||
$ gcloud compute forwarding-rules list
|
||||
```
|
||||
|
@ -49,6 +49,7 @@ Here is the config for the initial master and sentinel pod: [redis-master.yaml](
|
||||
|
||||
|
||||
Create this master as follows:
|
||||
|
||||
```sh
|
||||
kubectl create -f examples/redis/redis-master.yaml
|
||||
```
|
||||
@ -61,6 +62,7 @@ In Redis, we will use a Kubernetes Service to provide a discoverable endpoints f
|
||||
Here is the definition of the sentinel service: [redis-sentinel-service.yaml](redis-sentinel-service.yaml)
|
||||
|
||||
Create this service:
|
||||
|
||||
```sh
|
||||
kubectl create -f examples/redis/redis-sentinel-service.yaml
|
||||
```
|
||||
@ -83,6 +85,7 @@ kubectl create -f examples/redis/redis-controller.yaml
|
||||
We'll do the same thing for the sentinel. Here is the controller config: [redis-sentinel-controller.yaml](redis-sentinel-controller.yaml)
|
||||
|
||||
We create it as follows:
|
||||
|
||||
```sh
|
||||
kubectl create -f examples/redis/redis-sentinel-controller.yaml
|
||||
```
|
||||
@ -106,6 +109,7 @@ Unlike our original redis-master pod, these pods exist independently, and they u
|
||||
The final step in the cluster turn up is to delete the original redis-master pod that we created manually. While it was useful for bootstrapping discovery in the cluster, we really don't want the lifespan of our sentinel to be tied to the lifespan of one of our redis servers, and now that we have a successful, replicated redis sentinel service up and running, the binding is unnecessary.
|
||||
|
||||
Delete the master as follows:
|
||||
|
||||
```sh
|
||||
kubectl delete pods redis-master
|
||||
```
|
||||
|
@ -133,6 +133,7 @@ type: LoadBalancer
|
||||
The external load balancer allows us to access the service from outside via an external IP, which is 104.197.19.120 in this case.
|
||||
|
||||
Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:
|
||||
|
||||
```
|
||||
$ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080
|
||||
```
|
||||
@ -154,7 +155,7 @@ since the ui is not stateless when playing with Web Admin UI will cause `Connect
|
||||
* `gen_pod.sh` is used to generate pod templates for my local cluster; the generated pods use `nodeSelector` to force k8s to schedule containers to my designated nodes, since I need to access persistent data on my host dirs. Note that one needs to label the node before `nodeSelector` can work, see this [tutorial](../../docs/user-guide/node-selection/)
|
||||
|
||||
* see [/antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for detail
|
||||
* see [antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for detail
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@ -47,16 +47,19 @@ kubectl run my-nginx --image=nginx --replicas=2 --port=80
|
||||
```
|
||||
|
||||
Once the pods are created, you can list them to see what is up and running:
|
||||
|
||||
```bash
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
You can also see the replication controller that was created:
|
||||
|
||||
```bash
|
||||
kubectl get rc
|
||||
```
|
||||
|
||||
To stop the two replicated containers, stop the replication controller:
|
||||
|
||||
```bash
|
||||
kubectl stop rc my-nginx
|
||||
```
|
||||
|
@ -142,6 +142,7 @@ $ kubectl logs spark-master
|
||||
15/06/26 14:15:55 INFO Master: Registering worker 10.244.1.15:44839 with 1 cores, 2.6 GB RAM
|
||||
15/06/26 14:15:55 INFO Master: Registering worker 10.244.0.19:60970 with 1 cores, 2.6 GB RAM
|
||||
```
|
||||
|
||||
## Step Three: Do something with the cluster
|
||||
|
||||
Get the address and port of the Master service.
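One way to look those up is with kubectl; the service name `spark-master` is an assumption based on this example's naming:

```
$ kubectl get service spark-master
```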
@ -196,6 +197,7 @@ SparkContext available as sc, HiveContext available as sqlContext.
|
||||
>>> sc.parallelize(range(1000)).map(lambda x:socket.gethostname()).distinct().collect()
|
||||
['spark-worker-controller-u40r2', 'spark-worker-controller-hifwi', 'spark-worker-controller-vpgyg']
|
||||
```
|
||||
|
||||
## Result
|
||||
|
||||
You now have services, replication controllers, and pods for the Spark master and Spark workers.
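As a final hedged check, you can list them all in one go:

```
$ kubectl get pods,rc,services
```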